Sample records for advanced object-based software

  1. An advanced object-based software framework for complex ecosystem modeling and simulation

    SciTech Connect

    Sydelko, P. J.; Dolph, J. E.; Majerus, K. A.; Taxon, T. N.

    2000-06-29

    Military land managers and decision makers face an ever-increasing challenge to balance maximum flexibility for the mission with a diverse set of multiple land use, social, political, and economic goals. In addition, these goals encompass environmental requirements for maintaining ecosystem health and sustainability over the long term. Spatiotemporal modeling and simulation in support of adaptive ecosystem management can be best accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques through a geographic information system (GIS)-based backbone. Recently, an object-oriented (OO) architectural framework was developed for IDLAMS (OO-IDLAMS). This OO-IDLAMS prototype was built upon, and leverages, the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS is an object-based architectural framework that affords a more integrated, dynamic, and flexible approach to comprehensive ecosystem modeling than was possible with the GIS-based integration approach of the original IDLAMS. The flexibility, dynamics, and interoperability demonstrated through this case study of an object-oriented approach have the potential to provide key technology solutions for many of the military's multiple-use goals and needs for integrated natural resource planning and ecosystem management.

  2. Advanced Software Development Workstation Project

    NASA Technical Reports Server (NTRS)

    Lee, Daniel

    1989-01-01

    The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, an approach based on a cognitive theory of retrieval from very long-term memory in humans that draws on an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.

  3. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    SciTech Connect

    1998-08-01

    An estimated 85% of the installed base of software is custom applications with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: recovering design costs, improving quality through specialization, and enabling rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors adopting emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed…

  4. A flexible object-based software framework for modeling complex systems with interacting natural and societal processes.

    SciTech Connect

    Christiansen, J. H.

    2000-06-15

    The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.
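
    The three design elements above can be illustrated compactly. The sketch below is not DIAS code; it is a minimal Python illustration, with all names invented, of how a framework in this spirit can keep domain object contents mutable while delegating behaviors to interchangeable process models:

```python
# Illustrative sketch only (not the DIAS implementation): a domain object
# whose behaviors are delegated to whichever process models are currently
# registered, mirroring (1) behavior abstraction, (2) encapsulated model
# functionality, and (3) mutable object contents.

class DomainObject:
    """A simulation entity whose behaviors are bound to external models."""
    def __init__(self, name, **state):
        self.name = name
        self.state = dict(state)      # mutable domain object contents
        self._behaviors = {}          # behavior name -> model callable

    def register_model(self, behavior, model):
        """Encapsulate model functionality behind a named behavior."""
        self._behaviors[behavior] = model

    def perform(self, behavior, dt):
        """Delegate a behavior to whichever model currently implements it."""
        self.state.update(self._behaviors[behavior](self.state, dt))

def simple_evaporation(state, dt):
    # Stand-in process model: deplete soil moisture at a fixed rate.
    return {"moisture": max(0.0, state["moisture"] - 0.01 * dt)}

plot = DomainObject("vegetation_plot", moisture=0.35)
plot.register_model("evapotranspiration", simple_evaporation)
for _ in range(10):
    plot.perform("evapotranspiration", dt=1.0)
print(plot.state)   # moisture depleted to ~0.25 after ten steps
```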

  5. Advanced fingerprint verification software

    NASA Astrophysics Data System (ADS)

    Baradarani, A.; Taylor, J. R. B.; Severin, F.; Maev, R. Gr.

    2016-05-01

    We have developed a fingerprint software package that can be used in a wide range of applications, from law enforcement to public and private security systems, and to personal devices such as laptops, vehicles, and door-locks. The software and processing units are a unique implementation of new and sophisticated algorithms that compete with the current best systems in the world. Development of the software package has been in line with the third generation of our ultrasonic fingerprinting machine [1]. Solid and robust performance is achieved in the presence of misplaced and low-quality fingerprints.

  6. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated in its time-to-market requirements and was in need of performance improvements. To compound problems with the existing system, assuring the quality of the data matching process was manpower-intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.
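
    As an illustration of the fuzzy matching mentioned above (a sketch, not the DSRD prototype), the following Python fragment uses the standard library's difflib to flag probable duplicate customer records; the fields and threshold are hypothetical:

```python
# Hedged sketch of fuzzy record matching using only the standard library.
# Field names and the 0.8 threshold are invented for illustration.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_probable_duplicate(rec1, rec2, threshold=0.8):
    # Average name and address similarity; a production system would also
    # "block" on a key such as ZIP code to avoid comparing all pairs.
    name_score = similarity(rec1["name"], rec2["name"])
    addr_score = similarity(rec1["address"], rec2["address"])
    return (name_score + addr_score) / 2 >= threshold

a = {"name": "John Q. Smith", "address": "12 Elm Street"}
b = {"name": "Jon Q Smith", "address": "12 Elm St."}
print(is_probable_duplicate(a, b))   # True: near-duplicate records
```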

  7. Advanced Software Development Workstation Project, phase 3

    NASA Technical Reports Server (NTRS)

    1991-01-01

    ACCESS provides a generic capability to develop software information system applications which are explicitly intended to facilitate software reuse. In addition, it provides the capability to retrofit existing large applications with a user-friendly front end for preparation of input streams, in a way that will reduce required training time, improve the productivity of even experienced users, and increase accuracy. Current and past work shows that ACCESS will be scalable to much larger object bases.

  8. Student Produced Advanced Mathematical Software.

    ERIC Educational Resources Information Center

    Hogben, Leslie

    The intent of this project was to develop a course for mathematics graduate students at Iowa State University. They would design and write computer programs for use by undergraduate mathematics students, and then offer the course and actually produce the software. Phase plane graphics for ordinary differential equations was selected as the topic.…

  9. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.
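
    A hedged sketch of the second, segment-then-classify approach is given below. scikit-image's quickshift stands in for the mean shift segmenter, the image path is a placeholder, and the training labels are random stand-ins for classes digitized from training polygons:

```python
# OBIA sketch: segment first, then classify whole segments. Quickshift is
# used here as a stand-in for mean shift; file name and labels are fake.
import numpy as np
from skimage import io, segmentation
from sklearn.svm import SVC

image = io.imread("orthomosaic_rgb.png")[:, :, :3]   # placeholder file
segments = segmentation.quickshift(image, ratio=0.8, max_dist=10)

def segment_features(img, segs):
    """One feature vector (mean R, G, B) per segment."""
    return np.array([img[segs == s].mean(axis=0) for s in np.unique(segs)])

X = segment_features(image, segments)
# Real labels would come from training polygons over known land cover;
# random integers merely keep this sketch self-contained.
y = np.random.default_rng(0).integers(0, 7, len(X))
clf = SVC(kernel="rbf").fit(X, y)
predicted = clf.predict(X)            # one class label per segment
```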

  10. Advanced Extravehicular Mobility Unit Informatics Software Design

    NASA Technical Reports Server (NTRS)

    Wright, Theodore

    2014-01-01

    This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.

  11. The advanced software development workstation project

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Pitman, Charles L.

    1991-01-01

    The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.

  12. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
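
    The mode logic described in the abstract can be pictured as a small state machine. The sketch below is illustrative only; beyond the standby, acquisition, and tracking modes named above, the transition table and command handling are assumptions:

```python
# Hypothetical sketch of command-driven mode logic: power-up self-test,
# transition to standby, then commanded operational modes. Not the actual
# AVGS command set or flight software.
from enum import Enum, auto

class Mode(Enum):
    SELF_TEST = auto()
    STANDBY = auto()
    ACQUISITION = auto()
    TRACKING = auto()

VALID = {
    Mode.SELF_TEST: {Mode.STANDBY},
    Mode.STANDBY: {Mode.ACQUISITION, Mode.TRACKING},
    Mode.ACQUISITION: {Mode.TRACKING, Mode.STANDBY},
    Mode.TRACKING: {Mode.STANDBY},
}

class Avgs:
    def __init__(self):
        self.mode = Mode.SELF_TEST
        self._self_test()

    def _self_test(self):
        # Basic functionality tests would run here, then standby.
        self.mode = Mode.STANDBY

    def command(self, requested: Mode):
        """Check the command for validity before acting on it."""
        if requested not in VALID[self.mode]:
            return {"status": "rejected", "mode": self.mode.name}
        self.mode = requested
        return {"status": "ok", "mode": self.mode.name}

avgs = Avgs()
print(avgs.command(Mode.ACQUISITION))   # {'status': 'ok', 'mode': 'ACQUISITION'}
```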

  13. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
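
    The ROM idea can be shown with a toy example: fit a cheap surrogate to a handful of expensive CFD samples, then let the flowsheet call the surrogate. The quantities below are invented; APECS itself supports more sophisticated ROM forms:

```python
# Toy reduced-order model: a low-order polynomial "trained" on a few
# hypothetical CFD results (inlet air/fuel ratio -> outlet temperature).
import numpy as np

air_fuel_ratio = np.array([10.0, 12.5, 15.0, 17.5, 20.0])      # CFD inputs
outlet_temp_K = np.array([2050., 2180., 2120., 1980., 1830.])  # CFD outputs

rom = np.poly1d(np.polyfit(air_fuel_ratio, outlet_temp_K, deg=2))

# The flowsheet now evaluates the ROM in microseconds instead of
# re-running a CFD case that may take hours on a cluster.
print(rom(14.0))   # interpolated outlet temperature, K
```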

  14. The Advanced Software Development and Commercialization Project

    SciTech Connect

    Gallopoulos, E. (Center for Supercomputing Research and Development); Canfield, T.R.; Minkoff, M.; Mueller, C.; Plaskacz, E.; Weber, D.P.; Anderson, D.M.; Therios, I.U.; Aslam, S.; Bramley, R.; Chen, H.-C.; Cybenko, G.; Gallopoulos, E.; Gao, H.; Malony, A.; Sameh, A. (Center for Supercomputing Research and Development)

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Second, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize these two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities: COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used both for nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available only for sequential and vector computers. Our main goal is to port and optimize these two codes on shared-memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  15. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  16. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in the initial development and in the maintenance of deployed systems. The cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software parts composition system.

  17. Object-based media and stream-based computing

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.

    1998-03-01

    Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. This paper outlines examples of object-based media algorithms and applications developed by my group, and presents new hardware architectures and software methods that we have developed to meet the computational requirements of object-based and other advanced media representations. In particular, we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks, even given heterogeneous computational resources.
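
    A rough sketch of the stream-based idea (illustrative, not the group's hardware or software): express the task as a stream of independent chunks so that a runtime scheduler can map them onto however many processors happen to be available:

```python
# Stream-style decomposition sketch: a frame becomes a stream of tiles,
# and the executor parallelizes the per-tile stage at run time across
# whatever workers the host provides. The "pipeline" stage is a trivial
# brightness adjustment, purely for illustration.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def tiles(frame, size=64):
    """Cut one frame into independent tiles -- the stream elements."""
    h, w = frame.shape
    for y in range(0, h, size):
        for x in range(0, w, size):
            yield frame[y:y + size, x:x + size].copy()

def brighten(tile):
    return np.clip(tile * 1.2, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    with ProcessPoolExecutor() as pool:
        processed = list(pool.map(brighten, tiles(frame)))
    print(len(processed), processed[0].shape)   # 16 tiles of 64x64
```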

  18. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System (AIPS). This introduction and overview section briefly outlines the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  19. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  20. Byonic: Advanced Peptide and Protein Identification Software

    PubMed Central

    Bern, Marshall; Kil, Yong J.; Becker, Christopher

    2013-01-01

    Byonic™ is the name of a software package for peptide and protein identification by tandem mass spectrometry. This software, which has only recently become commercially available, facilitates a much wider range of search possibilities than previous search software such as SEQUEST and Mascot. Byonic allows the user to define an essentially unlimited number of variable modification types. Byonic also allows the user to set a separate limit on the number of occurrences of each modification type, so that a search may consider only one or two chance modifications such as oxidations and deamidations per peptide, yet allow three or four biological modifications such as phosphorylations, which tend to cluster together. Hence Byonic can search for 10s or even 100s of modification types simultaneously without a prohibitively large combinatorial explosion. Byonic’s Wildcard Search™ allows the user to search for unanticipated or even unknown modifications alongside known modifications. Finally, Byonic’s Glycopeptide Search allows the user to identify glycopeptides without prior knowledge of glycan masses or glycosylation sites. PMID:23255153
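
    To see why per-type occurrence caps matter, consider a toy enumerator (a sketch, not Byonic's algorithm) that generates candidate modified peptides subject to such caps; without them, the candidate count grows as the product of per-site options:

```python
# Toy illustration of modification enumeration with per-type caps.
# Site indices, modification names, and caps are invented.
from itertools import product

def modified_candidates(sites, mod_limits):
    """Yield {site: mod-or-None} assignments honoring per-type caps."""
    site_ids = list(sites)
    options = [[None] + sites[s] for s in site_ids]   # None = unmodified
    for combo in product(*options):
        counts = {}
        for mod in combo:
            if mod is not None:
                counts[mod] = counts.get(mod, 0) + 1
        if all(n <= mod_limits.get(m, 0) for m, n in counts.items()):
            yield dict(zip(site_ids, combo))

sites = {3: ["oxidation"], 7: ["phospho"], 9: ["phospho", "deamidation"]}
caps = {"oxidation": 1, "phospho": 1, "deamidation": 1}
# 12 unconstrained assignments; the phospho cap removes the 2 that would
# place phospho on both site 7 and site 9.
print(sum(1 for _ in modified_candidates(sites, caps)))   # 10
```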

  1. Intelligent Software Tools for Advanced Computing

    SciTech Connect

    Baumgart, C.W.

    2001-04-03

    Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.

  2. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  3. Software modifications to the Demonstration Advanced Avionics Systems (DAAS)

    NASA Technical Reports Server (NTRS)

    Nedell, B. F.; Hardy, G. H.

    1984-01-01

    Critical information required for the design of integrated avionics suitable for general aviation is applied towards software modifications for the Demonstration Advanced Avionics System (DAAS). The program emphasizes the use of data busing, distributed microprocessors, shared electronic displays and data entry devices, and improved functional capability. A demonstration advanced avionics system (DAAS) is designed, built, and flight tested in a Cessna 402, a twin-engine, general aviation aircraft. Software modifications are made to DAAS at Ames concurrent with the flight test program. The changes are the result of the experience obtained with the system at Ames, and the comments of the pilots who evaluated the system.

  4. Advanced transport operating system software upgrade: Flight management/flight controls software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Debure, Kelly R.; Dickson, Richard W.; Heaphy, William J.; Parks, Mark A.; Slominski, Christopher J.; Wolverton, David A.

    1988-01-01

    The Flight Management/Flight Controls (FM/FC) software for the Norden 2 (PDP-11/70M) computer installed on the NASA 737 aircraft is described. The software computes the navigation position estimates and the guidance commands, that is, the commands issued to the control surfaces to direct the aircraft in flight, based on the modes selected on the Advanced Guidance and Control System (AGCS) mode panel and the flight path selected via the Navigation Control/Display Unit (NCDU).

  5. Advances in Games Technology: Software, Models, and Intelligence

    ERIC Educational Resources Information Center

    Prakash, Edmond; Brindle, Geoff; Jones, Kevin; Zhou, Suiping; Chaudhari, Narendra S.; Wong, Kok-Wai

    2009-01-01

    Games technology has undergone tremendous development. In this article, the authors report the rapid advancement that has been observed in the way games software is being developed, as well as in the development of games content using game engines. One area that has gained special attention is modeling the game environment such as terrain and…

  6. Advanced software development workstation. OOPSLA 1992 Conference. Trip report

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report summarizes the main trends observed at the Object Oriented Programming Systems, Languages, and Applications Conference held in Vancouver, British Columbia. This conference is the main object-oriented event that allows us to assess the dynamism of the technology and to meet the main actors of the field. It is an invaluable source of information for the advanced software development project.

  7. Advanced software design and standards for traffic signal control

    SciTech Connect

    Bullock, D.; Hendrickson, C.

    1992-05-01

    Improved traffic management and control systems are widely reported to be cost-effective investments. Simply retiming signals can provide significant benefits by reducing vehicle stops, travel times, and fuel consumption. The installation of advanced traffic management systems (ATMS) can provide even greater savings. However, many hardware and software obstacles have impeded the actual implementation of advanced traffic management systems. The general hardware and software limitations of current traffic signal control technology are reviewed in this paper. The impact of these deficiencies is discussed in the context of three example applications. Based on this discussion, the paper identifies several computing issues that should be addressed in order to reduce the effort involved in integrating existing traffic control devices. Adoption of standard industrial control computing platforms and development of new communication and software engineering models are recommended.

  8. Attentional spreading in object-based attention.

    PubMed

    Richard, Ashleigh M; Lee, Hyunkyu; Vecera, Shaun P

    2008-08-01

    The authors investigated 2 effects of object-based attention: the spread of attention within an attended object and the prioritization of search across possible target locations within an attended object. Participants performed a flanker task in which the location of the task-relevant target was fixed and known to participants. A spreading attention account predicts that object-based attention will arise from the spread of attention through an attended object. A prioritization account predicts that there will be a small, if any, object-based effect because the location of the target is known in advance and objects are not required to prioritize the deployment of attentional search. The results suggested that object-based attention operates via the spread of attention within an object. PMID:18665730

  9. Reliability, Safety and Error Recovery for Advanced Control Software

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2003-01-01

    For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.

  10. Advanced Transport Operating System (ATOPS) control display unit software description

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Parks, Mark A.; Debure, Kelly R.; Heaphy, William J.

    1992-01-01

    The software created for the Control Display Units (CDUs), used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV), is described. Module descriptions are presented in a standardized format which contains module purpose, calling sequence, a detailed description, and global references. The global reference section includes subroutines, functions, and common variables referenced by a particular module. The CDUs, one for the pilot and one for the copilot, are used for flight management purposes. Operations performed with the CDU affect the aircraft's guidance, navigation, and display software.

  11. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Damevski, Kostadin

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  12. Advanced software integration: The case for ITV facilities

    NASA Technical Reports Server (NTRS)

    Garman, John R.

    1990-01-01

    The array of technologies and methodologies involved in the development and integration of avionics software has moved almost as rapidly as computer technology itself. Future avionics systems involve major advances and risks in the following areas: (1) Complexity; (2) Connectivity; (3) Security; (4) Duration; and (5) Software engineering. From an architectural standpoint, the systems will be much more distributed, involve session-based user interfaces, and have the layered architectures typified in the layers-of-abstraction concepts popular in networking. The NASA Space Station Freedom program typifies the highly distributed nature of software development itself. Systems composed of independent components developed in parallel must be bound by rigid standards and interfaces, and by clean requirements and specifications. Avionics software provides a challenge in that it cannot be flight tested until the first time it literally flies. It is the binding of requirements for such an integration environment into the advances and risks of future avionics systems that forms the basis of the presented concept and the basic Integration, Test, and Verification concept within the development and integration life cycle of Space Station Mission and Avionics systems.

  13. Software control of the Advanced Technology Solar Telescope enclosure PLC hardware using COTS software

    NASA Astrophysics Data System (ADS)

    Borrowman, Alastair J.; de Bilbao, Lander; Ariño, Javier; Murga, Gaizka; Goodrich, Bret; Hubbard, John R.; Greer, Alan; Mayer, Chris; Taylor, Philip

    2012-09-01

    As PLCs evolve from simple logic controllers into more capable Programmable Automation Controllers (PACs), observatories are increasingly using such devices to control complex mechanisms [1, 2]. This paper describes the use of COTS software to control such hardware using the Advanced Technology Solar Telescope (ATST) Common Services Framework (CSF). We present the Enclosure Control System (ECS) under development in Spain and the UK. The paper details the selection of the commercial PLC communication library PLCIO. Implemented in C and delivered with source code, the library separates the programmer from communication details through a simple API. Capable of communicating with many types of PLCs (including Allen-Bradley and Siemens), the API remains the same irrespective of the PLC in use. The ECS is implemented in Java using the observatory's framework that provides common services for software components. We present a design following a connection-based approach, where all components access the PLC through a single connection class. The link between Java and the PLCIO C library is provided by a thin Java Native Interface (JNI) layer. Also presented is a software simulator of the PLC based upon the PLCIO Virtual PLC. This simulator operates below the library's API and thus requires no change to ECS software. It also provides enhanced software testing capabilities before hardware becomes available. Results are presented in the form of communication timing test data, showing that the use of CSF, JNI, and PLCIO provides a control system capable of controlling enclosure tracking mechanisms, an approach that would be equally valid for telescope mount control.
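
    The connection-based design can be sketched independently of the Java/JNI stack. The Python fragment below illustrates only the pattern, a single serialized connection object shared by all components with a swappable transport beneath it; it does not use the PLCIO API:

```python
# Pattern sketch only: one connection class serializes all PLC traffic,
# so a simulator can be swapped in beneath it without touching the
# components. The transport call is a stub, not PLCIO.
import threading

class PlcConnection:
    """Single point of access to the PLC; serializes all requests."""
    def __init__(self, address: str):
        self._address = address
        self._lock = threading.Lock()   # one request on the wire at a time

    def _transact(self, request: str) -> str:
        # Stub standing in for the native read/write call (or a simulator).
        return f"reply({request}@{self._address})"

    def read_tag(self, tag: str) -> str:
        with self._lock:
            return self._transact(f"read {tag}")

    def write_tag(self, tag: str, value) -> str:
        with self._lock:
            return self._transact(f"write {tag}={value}")

# Components (shutter, vents, tracking) share the same instance, so the
# PLC can be replaced by a simulator below this class with no changes.
conn = PlcConnection("192.168.0.10")
print(conn.read_tag("enclosure.azimuth"))
```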

  14. Constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael; Podolak, Ester; Mckay, Christopher

    1990-01-01

    Scientific model building can be an intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot be easily distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. In this paper, we describe a prototype for a scientific modeling software tool that serves as an aid to the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities. Our prototype has been developed in the domain of planetary atmospheric modeling, and is being used to construct models of Titan's atmosphere.

  15. Development of Advanced Multi-Modality Radiation Treatment Planning Software

    SciTech Connect

    Nigg, D W; Hartmann Siantar, C

    2002-02-19

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in the development of advanced Monte Carlo-based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software product system of this type is known as SERA, the Simulation Environment for Radiotherapy Applications. SERA is at a mature level in its life cycle; it has been licensed for research use worldwide, and it has become well established as a computational tool for research. However, along with its strengths, SERA also has some limitations in its structure and computational methodologies. More specifically, it is optimized only for neutron-based applications. Although photon transport can be computed with SERA, the simplified model that is used is designed primarily for photons produced in the neutron transport process. Thus SERA is not appropriate for application to, for example, standard external-beam photon radiotherapy, which is by far more commonly used in the clinic than neutron-based therapy.

  16. Patient Simulation Software to Augment an Advanced Pharmaceutics Course

    PubMed Central

    Schonder, Kristine

    2011-01-01

    Objective To implement and assess the effectiveness of adding a pharmaceutical care simulation program to an advanced therapeutics course. Design PharmaCAL (University of Pittsburgh), a software program that uses a branched-outcome decision making model, was used to create patient simulations to augment lectures given in the course. In each simulation, students were presented with a challenge, given choices, and then provided with consequences specific to their choices. Assessments A survey was administered at the end of the course and students indicated the simulations were enjoyable (92%), easy to use (90%), stimulated interest in critically ill patients (82%), and allowed for application of lecture material (91%). A 5-item presimulation and postsimulation test on the anemia simulation was administered to assess learning. Students answered significantly more questions correctly on the postsimulation test than on the presimulation test (p < 0.001). Seventy-eight percent of students answered the same 5 questions correctly on the final examination. Conclusion Patient simulation software that used a branched-outcome decision model was an effective supplement to class lectures in an advanced pharmaceutics course and was well-received by pharmacy students. PMID:21519411
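
    The branched-outcome model reduces to a simple data structure: each node pairs a challenge with choices, and each choice points to a consequence node. A toy sketch follows; the clinical content is invented and is not taken from PharmaCAL:

```python
# Toy branched-outcome decision structure; content is invented.
simulation = {
    "start": {
        "challenge": "Patient's hemoglobin is 7.9 g/dL. Next step?",
        "choices": {"transfuse": "transfused", "iron": "iron_started"},
    },
    "transfused": {
        "challenge": "Hemoglobin stabilizes; reassess the cause of anemia.",
        "choices": {},
    },
    "iron_started": {
        "challenge": "Labs suggest anemia of chronic disease; iron alone "
                     "is unlikely to help. Reconsider.",
        "choices": {},
    },
}

def run(sim, node="start"):
    """Walk the branches; consequences depend on the student's choices."""
    while True:
        state = sim[node]
        print(state["challenge"])
        if not state["choices"]:
            break
        node = state["choices"].get(
            input(f"Choose {sorted(state['choices'])}: "), node)

# run(simulation)   # interactive session
```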

  17. Advanced Data Format (ADF) Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Smith, Matthew; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) the Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) the Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) the SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) the CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform-independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database, such as those defined by the SIDS. There are currently 34 user-callable functions that comprise the ADF…
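
    ADF itself is a bespoke database manager, but the hierarchical node-tree idea it implements can be demonstrated with HDF5, which later versions of CGNS adopted as an alternative storage layer. The sketch below uses h5py with illustrative, SIDS-flavored names:

```python
# Hierarchical node-tree sketch in HDF5 (not the ADF API itself):
# named groups form the tree and typed arrays hang off its nodes,
# which is the organizational idea behind ADF/CGNS files.
import numpy as np
import h5py

with h5py.File("cgns_sketch.h5", "w") as f:
    zone = f.create_group("Base/Zone1")          # nested node path
    coords = zone.create_group("GridCoordinates")
    coords.create_dataset("CoordinateX", data=np.linspace(0.0, 1.0, 11))
    coords.create_dataset("CoordinateY", data=np.zeros(11))
    zone.attrs["ZoneType"] = "Structured"        # metadata on a node

with h5py.File("cgns_sketch.h5", "r") as f:
    x = f["Base/Zone1/GridCoordinates/CoordinateX"][:]
    print(x[:3])   # first few grid coordinates read back from the tree
```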

  18. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0.

    SciTech Connect

    Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia; Pilch, Martin M.

    2009-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. The plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. The plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

  19. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  20. Object-Based Image Compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirements in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream with exemplars stored in a database (e.g., a codebook in vector quantization, or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral…
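
    For contrast with the pixel-level methods above, here is a minimal sketch of the object-based idea: segment the source into contiguous regions, then store one compact record per region. Real object-based codecs encode region shape and texture far more carefully than the bounding-box-plus-mean used here:

```python
# Minimal object-based compression sketch: label contiguous regions and
# keep only a compact per-region description instead of per-pixel data.
import numpy as np
from skimage import measure

image = np.zeros((64, 64), dtype=np.uint8)
image[10:30, 10:30] = 200                      # a bright "object"
image[40:60, 35:55] = 120                      # a dimmer one

labels = measure.label(image > 0)              # contiguous segments
records = [
    {"bbox": r.bbox, "mean": float(r.mean_intensity)}
    for r in measure.regionprops(labels, intensity_image=image)
]

# 64*64 = 4096 source pixels reduced to two small records.
print(len(records), records)
```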

  1. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Govindaraju, Madhusudhan

    2010-10-31

    Advanced Scientific Computing Research, Computer Science, FY 2010 Report. Center for Technology for Advanced Scientific Component Software: Distributed CCA. State University of New York, Binghamton, NY, 13902. Summary: The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we worked on re-focusing our design and development efforts to develop proof-of-concept implementations that have the potential to significantly impact scientific components. We worked on developing parallel implementations for non-hydrostatic code and on a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules that will be required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors. Research details: We worked on the following research projects that we are applying to CCA-based scientific applications. 1. Non-hydrostatic hydrodynamics: Non-hydrostatic models are significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize a non-hydrostatic code, obtaining a maximum speedup of about 26 times. Although this is significant progress, we hope to improve the performance further, such that it becomes a practical alternative to hydrostatic codes. 2. Model coupling for water-based ecosystems: To answer pressing questions about water resources requires that physical models (hydrodynamics) be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This…

  2. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.

  3. Advanced software system for yield improvement on manufacturing fab

    NASA Astrophysics Data System (ADS)

    Recio, Miguel; Fernandez, Almudena; Martin Santamaria, Victorino; Peman, Maria J.; Gonzalez, Gerardo; Hoyer, J. R.; Whitlock, Steve; James, David; Hausen, Mark

    1996-09-01

    How can we optimize data collection and data processing to provide the most significant information to drive yield enhancement in a fast and efficient way? To address this question, this paper describes a software system which is part of the global yield enhancement strategy at our factory. The software developed correlates data gathered both in-line and at the end of the fabrication process. The description of the system is accompanied by historical cases that illustrate the capabilities of this type of correlation. The paper also discusses the natural expansion of the software system into a more 'knowledge-based' tool.

  4. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

  5. Final Report for "Center for Technology for Advanced Scientific Component Software"

    SciTech Connect

    Svetlana Shasharina

    2010-12-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used, by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in those applications, and modifying the tools to be more usable.

  6. Building Software Development Capacity to Advance the State of Educational Technology

    ERIC Educational Resources Information Center

    Luterbach, Kenneth J.

    2013-01-01

    Educational technologists may advance the state of the field by increasing capacity to develop software tools and instructional applications. Presently, few academic programs in educational technology require even a single computer programming course. Further, the educational technologists who develop software generally work independently or in…

  7. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  8. Musical Composition and Creativity in an Advanced Software Environment

    ERIC Educational Resources Information Center

    Reynolds, Nicholas

    2002-01-01

    This paper serves as a brief description of research into the use of professional level music software as a learning tool for creativity and composition by primary school children. The research formed the basis of a Master of Information Technology in Education degree at the University of Melbourne. The paper examines the physical environment, the…

  9. Advanced characterization of hysteretic materials by object-oriented software

    NASA Astrophysics Data System (ADS)

    Ionita, V.; Gavrila, H.

    2002-04-01

    New object-oriented software dedicated to the characterization of magnetic materials, including the hysteresis effect, is presented. The product allows the development and use of an information base containing experimental and numerical data on magnetic material behaviour. The materials may be modelled with different hysteresis models (Preisach, Jiles-Atherton, etc.).
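
    To illustrate the kind of model such an information base would hold, the sketch below integrates a simplified form of the Jiles-Atherton equations (one of the hysteresis models named above) with forward Euler. The parameter values are arbitrary illustrative numbers, not fitted material data.

        # Simplified Jiles-Atherton hysteresis sweep (forward Euler).
        # Ms, a, alpha, k, c are arbitrary example parameters, not fitted data.
        import numpy as np

        Ms, a, alpha, k, c = 1.6e6, 1000.0, 1e-4, 400.0, 0.1

        def m_an(He):
            # Anhysteretic (Langevin) magnetization; series form avoids 0/0.
            if abs(He) < 1e-3:
                return Ms * He / (3 * a)
            return Ms * (1 / np.tanh(He / a) - a / He)

        H = np.concatenate([np.linspace(0, 5e3, 500),
                            np.linspace(5e3, -5e3, 1000),
                            np.linspace(-5e3, 5e3, 1000)])
        M = np.zeros_like(H)
        Mirr = 0.0
        for i in range(1, len(H)):
            dH = H[i] - H[i - 1]
            delta = 1.0 if dH >= 0 else -1.0              # field sweep direction
            Man = m_an(H[i - 1] + alpha * M[i - 1])       # effective-field anhysteretic M
            Mirr += dH * (Man - Mirr) / (k * delta - alpha * (Man - Mirr))
            M[i] = c * Man + (1 - c) * Mirr               # reversible + irreversible parts
        print(f"M at end of sweep (H = +5 kA/m): {M[-1]:.2e} A/m")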

  10. Evaluating Object-Based Image Analysis on Glacial Micromorphology

    NASA Astrophysics Data System (ADS)

    Chin, K. S.; Sjogren, D. B.

    2007-12-01

    Micromorphology has recently seen wider application in the analysis of glacial sediments at the microscopic level. It provides additional information and detail that may help to explain glacial processes in areas where macro-scale observations cannot yield sufficient information. However, the interpretation of thin sections has been highly subjective, and reaching general consensus about glacial processes is difficult. Remote sensing technology is increasingly helpful in the development and advancement of many sciences; the concepts of object cognition behind that technology, used in fields such as landscape ecology, can be applied to micromorphology. As in landscape ecology, automating the interpretation of objects in glacial sediments may simplify the process and reduce its subjectivity. Definiens Professional 5 is an object-based image analysis program that imitates human cognitive methods; it is used in this study to distinguish objects from background matrices in multiple thin-section images of glacial sediments. Initial results showed that further work is needed, but overall the software produced promising results. The method is repeatable and generates consistent results without bias or ambiguity, so its application to micromorphology and similar areas should prove valuable.

  11. Construction of an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.

    1992-01-01

    Scientific model-building can be a time-intensive and painstaking process, often involving the development of large, complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using, and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a test bed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.

  12. Recent advances in the CRANK software suite for experimental phasing

    SciTech Connect

    Pannu, Navraj S. Waterreus, Willem-Jan; Skubák, Pavol; Sikharulidze, Irakli; Abrahams, Jan Pieter; Graaff, Rudolf A. G. de

    2011-04-01

    Recent developments in the CRANK software suite for experimental phasing have led to many more structures being built automatically. For its first release in 2004, CRANK was shown to effectively detect and phase anomalous scatterers from single-wavelength anomalous diffraction data. Since then, CRANK has been significantly improved and many more structures can be built automatically with single- or multiple-wavelength anomalous diffraction or single isomorphous replacement with anomalous scattering data. Here, the new algorithms behind these substantial improvements are discussed and CRANK’s performance on over 100 real data sets is shown. The latest version of CRANK is freely available for download at http://www.bfsc.leidenuniv.nl/software/crank/ and from CCP4 (http://www.ccp4.ac.uk/).

  13. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, entropy variables, transformations, least-squares mixed methods, and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate references to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. The authors include numerical examples from their recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been realized in this application area, and the authors emphasize the need for further work on analysis, data structures, and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
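
    Of the schemes listed, the Scharfetter-Gummel discretization is compact enough to show directly: the electron flux between two nodes is weighted by Bernoulli functions of the potential drop across the cell. The sketch below is the textbook 1-D form, shown for orientation rather than taken from the article; sign conventions vary across the literature.

        # Textbook 1-D Scharfetter-Gummel electron flux between adjacent nodes.
        import math

        def bernoulli(x):
            # B(x) = x / (exp(x) - 1), with the removable singularity at 0 handled.
            return 1.0 - x / 2.0 if abs(x) < 1e-10 else x / math.expm1(x)

        def sg_electron_flux(n_i, n_ip1, psi_i, psi_ip1, D, h, Vt=0.0259):
            """Flux between nodes i and i+1: densities n_*, potentials psi_* [V],
            diffusivity D, node spacing h, thermal voltage Vt [V]."""
            dpsi = (psi_ip1 - psi_i) / Vt
            return (D / h) * (bernoulli(dpsi) * n_ip1 - bernoulli(-dpsi) * n_i)

        # Flux driven by both a density gradient and a 50 mV potential drop.
        print(sg_electron_flux(1e16, 1e15, 0.0, 0.05, D=36.0, h=1e-6))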

  14. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGESBeta

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), "entropy" variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the Sandia National Laboratories framework SIERRA.

  15. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan part 2 mappings for the ASC software quality engineering practices, version 2.0.

    SciTech Connect

    Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR001.3.2 and CPR001.3.6 and to a Department of Energy document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines". This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  16. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices, Version 2.0.

    SciTech Connect

    Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines". This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  17. Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan. Part 2, Mappings for the ASC software quality engineering practices. Version 1.0.

    SciTech Connect

    Ellis, Molly A.; Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  18. Advanced program development management software system. Software description and user's manual

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The objectives of this project were to apply emerging techniques and tools from the computer science discipline of paperless management to the activities of the Space Transportation and Exploration Office (PT01) in Marshall Space Flight Center (MSFC) Program Development, thereby enhancing the productivity of the workforce, the quality of the data products, and the collection, dissemination, and storage of information. The approach emphasized using off-the-shelf software products to the greatest extent possible without compromising the performance of the end product, pursuing developments when necessary in a rapid-prototyping environment to provide frequent feedback from the users, and providing a full range of user support functions during the development process to promote testing of the software.

  19. Recent advances in the CRANK software suite for experimental phasing

    PubMed Central

    Pannu, Navraj S.; Waterreus, Willem-Jan; Skubák, Pavol; Sikharulidze, Irakli; Abrahams, Jan Pieter; de Graaff, Rudolf A. G.

    2011-01-01

    For its first release in 2004, CRANK was shown to effectively detect and phase anomalous scatterers from single-wavelength anomalous diffraction data. Since then, CRANK has been significantly improved and many more structures can be built automatically with single- or multiple-wavelength anomalous diffraction or single isomorphous replacement with anomalous scattering data. Here, the new algorithms behind these substantial improvements are discussed and CRANK’s performance on over 100 real data sets is shown. The latest version of CRANK is freely available for download at http://www.bfsc.leidenuniv.nl/software/crank/ and from CCP4 (http://www.ccp4.ac.uk/). PMID:21460451

  20. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    SciTech Connect

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in the development of advanced Monte Carlo-based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software system of this type is known as SERA, the Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a new project to collaborate in the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  1. Advanced methods of microscope control using μManager software

    PubMed Central

    Edelstein, Arthur D.; Tsuchida, Mark A.; Amodaj, Nenad; Pinkard, Henry; Vale, Ronald D.; Stuurman, Nico

    2014-01-01

    μManager is an open-source, cross-platform desktop application for controlling a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, μManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced μManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging. PMID:25606571
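
    The scripting interface mentioned above can be reached from Python through μManager's MMCorePy bindings. The minimal snapshot sketch below assumes a Micro-Manager installation with its demo configuration file on disk; API details may differ across μManager versions.

        # Minimal μManager scripting sketch via the MMCorePy bindings
        # (assumes a Micro-Manager installation and its demo config file).
        import MMCorePy

        core = MMCorePy.CMMCore()
        core.loadSystemConfiguration("MMConfig_demo.cfg")  # demo camera/stage devices

        core.setExposure(50)          # exposure in milliseconds
        core.snapImage()              # acquire one frame
        img = core.getImage()         # returns the frame as a NumPy array
        print(img.shape, img.dtype)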

  2. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support system operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
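
    CONFIG itself is not shown here, but the hybrid discrete/continuous pattern it embodies -- continuous flows and pressures integrated between discrete reconfiguration events -- can be illustrated with a toy model. The tank, valve, and rates below are invented stand-ins, not the water recovery model.

        # Toy hybrid discrete/continuous simulation: continuous pressure
        # dynamics plus discrete relief-valve reconfiguration events.
        dt, t_end = 0.1, 20.0
        pressure, inflow, valve_open = 100.0, 3.0, False

        t = 0.0
        while t < t_end:
            outflow = 5.0 if valve_open else 0.0        # discrete state selects the dynamics
            pressure += (inflow - outflow) * dt         # explicit Euler step (continuous part)
            if not valve_open and pressure > 110.0:     # discrete event: valve opens
                valve_open = True
                print(f"t={t:4.1f} s  relief valve opened ({pressure:.1f} kPa)")
            elif valve_open and pressure < 102.0:       # discrete event: valve closes
                valve_open = False
                print(f"t={t:4.1f} s  relief valve closed ({pressure:.1f} kPa)")
            t += dt
        print(f"final pressure: {pressure:.1f} kPa")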

  3. WinSRFR: Current Advances in Software for Surface Irrigation Simulation and Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Significant advances have been made over the last decade in the development of software for surface irrigation analysis. WinSRFR is an integrated tool that combines unsteady flow simulation with tools for system evaluation/parameter estimation, system design, and for operational optimization. Ongoi...

  4. Object-Based Attention and Cognitive Tunneling

    ERIC Educational Resources Information Center

    Jarmasz, Jerzy; Herdman, Chris M.; Johannsdottir, Kamilla Run

    2005-01-01

    Simulator-based research has shown that pilots cognitively tunnel their attention on head-up displays (HUDs). Cognitive tunneling has been linked to object-based visual attention on the assumption that HUD symbology is perceptually grouped into an object that is perceived and attended separately from the external scene. The present research…

  5. Adult Roles & Functions. Objective Based Evaluation System.

    ERIC Educational Resources Information Center

    West Virginia State Vocational Curriculum Lab., Cedar Lakes.

    This book of objective-based test items is designed to be used with the Adult Roles and Functions curriculum for a non-laboratory home economics course for grades eleven and twelve. It contains item banks for each cognitive objective in the curriculum. In addition, there is a form for the table of specifications to be developed for each unit. This…

  6. Advanced Transport Operating System (ATOPS) Flight Management/Flight Controls (FM/FC) software description

    NASA Technical Reports Server (NTRS)

    Wolverton, David A.; Dickson, Richard W.; Clinedinst, Winston C.; Slominski, Christopher J.

    1993-01-01

    The flight software developed for the Flight Management/Flight Controls (FM/FC) MicroVAX computer used on the Transport Systems Research Vehicle for Advanced Transport Operating Systems (ATOPS) research is described. The FM/FC software computes navigation position estimates, guidance commands, and the commands issued to the control surfaces to direct the aircraft in flight. Various modes of flight are provided for, ranging from computer-assisted manual modes to fully automatic modes, including automatic landing. A high-level system overview as well as a description of each software module comprising the system is provided. Digital systems diagrams are included for each major flight control component and selected flight management functions.

  7. Proposal for constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.

    1990-01-01

    Scientific model building can be a time-intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model-building and model-sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high-level, domain-specific modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.

  8. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process, which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.

  9. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    SciTech Connect

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  10. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0.

    SciTech Connect

    Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

  11. NASA's Advanced Multimission Operations System: A Case Study in Formalizing Software Architecture Evolution

    NASA Technical Reports Server (NTRS)

    Barnes, Jeffrey M.

    2011-01-01

    All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.

  12. Earth Observing System (EOS)/ Advanced Microwave Sounding Unit-A (AMSU-A): Special Test Equipment. Software Requirements

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1995-01-01

    This document defines the functional, performance, and interface requirements for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) Special Test Equipment (STE) software used in the test and integration of the instruments.

  13. Roles for software technologies in advancing research and theory in educational psychology.

    PubMed

    Hadwin, Allyson F; Winne, Philip H; Nesbit, John C

    2005-03-01

    While reviews abound on theoretical topics in educational psychology, it is rare that we examine our field's instrumentation development, and what effects this has on educational psychology's evolution. To repair this gap, this paper investigates and reveals the implications of software technologies for researching and theorizing about core issues in educational psychology. From a set of approximately 1,500 articles published between 1999 and 2004, we sampled illustrative studies and organized them into four broad themes: (a) innovative ways to operationalize variables, (b) the changing nature of instructional interventions, (c) new fields of research in educational psychology, and (d) new constructs to be examined. In each area, we identify novel uses of these technologies and suggest how they may advance, and, in some instances, reshape theory and methodology. Overall, we demonstrate that software technologies hold significant potential to elaborate research in the field. PMID:15831178

  14. Advanced Transport Operating System (ATOPS) color displays software description: MicroVAX system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Display MicroVAX computer used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery of February 27, 1991, known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes subroutines, functions, and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight Cathode Ray Tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  15. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system' is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  16. Object-based mapping of drumlins from DTMs

    NASA Astrophysics Data System (ADS)

    Eisank, C.; Dragut, L.; Blaschke, T.

    2012-04-01

    Until recently, landforms such as drumlins have been delineated only manually, owing to the difficulty of integrating contextual and semantic landform information in per-cell classification approaches. Therefore, in most cases the results of per-cell classifications represented basic landform elements or broad-scale physiographic regions that were only thematically defined. In contrast, object-based analysis provides spatially configured landform objects generated by terrain segmentation, the process of merging DTM cells into meaningful terrain objects at multiple scales. Such terrain objects should be favoured for landform modelling for two reasons: firstly, their outlines potentially correspond better to the spatial limits of landforms as conceptualised by geoscientists; secondly, spatially aware objects enable the integration of semantic descriptions in the classification process. We present a multi-scale object-based study on automated delineation and classification of drumlins for a small test area in Bavaria, Germany. The multi-resolution segmentation algorithm is applied to create statistically meaningful object patterns from selected DTMs, which are derived from a 5 m LiDAR DEM. For the subsequent classification of drumlins a semantics-based approach, using the principles of semantic modelling, is employed: initially, a geomorphological concept of the landform type drumlin is developed. The drumlin concept should ideally comprise verbal descriptions of the fundamental morphometric, morphological, hierarchical and contextual properties. Subsequently, the semantic model is built by structuring the conceptualised knowledge facts and associating them with object- and class-related features, which are available in commonly used object-based software products for the development of classification rules. For the accuracy assessment we plan an integrated approach, which combines a statistical comparison to field maps and a qualitative
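
    A semantic model of this kind ultimately reduces to rules over object features. The fragment below shows the flavour with entirely invented feature names and thresholds; the actual drumlin concept and its thresholds are developed in the paper, not here.

        # Flavour of a rule-based landform classification over segmentation
        # objects. Feature names and thresholds are invented for illustration.
        def classify(obj):
            if (5.0 < obj["mean_slope_deg"] < 25.0
                    and 1.5 < obj["length_width_ratio"] < 4.0
                    and obj["relative_relief_m"] > 5.0):
                return "drumlin candidate"
            return "other terrain"

        print(classify({"mean_slope_deg": 12.0, "length_width_ratio": 2.4,
                        "relative_relief_m": 18.0}))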

  17. Recent advances in software for beamline design, accelerator operations and personnel training

    NASA Astrophysics Data System (ADS)

    Gillespie, George H.; Hill, Barrey W.; Martono, Hendy; Moore, John M.

    2000-03-01

    Accelerators are finding new applications in research, industry, medicine, as well as other fields, and there is a growing need for new tools to improve the productivity of scientists and engineers involved with these emerging accelerator applications. Several advances in computer software have been made that focus on meeting those needs. This paper summarizes recent work in the development of a unique software framework designed specifically to support the accelerator community: the Multi-Platform Shell for Particle Accelerator Related Codes (SPARC MP). SPARC MP includes a sophisticated beamline object model, an extensive library of GUI components, and supports a variety of particle optics codes and add-on tools. This framework has been used to create the Particle Beam Optics Laboratory (PBO Lab™) family of software applications. PBO Lab has been used for beamline design, as a computer aid for teaching particle optics, and to support accelerator operations. Several popular charged particle optics programs, including MARYLIE, TRANSPORT, TURTLE and TRACE 3-D, have been integrated with a new version of PBO Lab. The modeling and simulation capabilities of these codes allow PBO Lab to support a wide spectrum of accelerator types. New external data interface tools are available to import beamline parameters from other sources, for example, to utilize magnet strengths generated by a control system. An overview of the new version of PBO Lab is presented.

  18. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chowdhury, J

    2014-02-01

    A Visual Basic simulation software package (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly, menu-driven software is based on a dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate the experimental findings very well, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and displays the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as of the individual units. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater. PMID:23982824

  19. Development of software standards for advanced transportation control systems. Final report. Volume 1. A model for roadway traffic control software

    SciTech Connect

    Bullock, D.; Hendrickson, C.

    1993-06-01

    A systematic approach to traffic engineering software development could provide significant advantages in software capability, flexibility, and maintenance. Improved traffic controllers will likely be essential for many of the proposed intelligent vehicle highway system (IVHS) applications. The report proposes a computable language, called TCBLKS (Traffic Control BLocKS), that could provide the foundation for constructing real-time traffic engineering software. This computable language is designed to be configured through a graphical user interface that does not require extensive software engineering training to use, yet provides much more flexibility and capability than is possible by simply changing program parameters. The model is based upon the function-block metaphor commonly used for constructing robust and efficient real-time industrial control systems.
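
    The function-block metaphor borrowed from industrial control can be sketched as blocks wired output-to-input and executed once per scan cycle. The sketch below is a generic illustration of that metaphor, not the TCBLKS language, and the detector/controller blocks are invented.

        # Generic function-block flavour: blocks are wired together and
        # executed once per scan cycle, as in industrial control systems.
        class Detector:
            def __init__(self):
                self.count = 0
            def scan(self, vehicle_present):
                if vehicle_present:
                    self.count += 1

        class SignalController:
            def __init__(self, detector, threshold=5):
                self.detector, self.threshold = detector, threshold
                self.phase = "green_main"
            def scan(self):
                # Switch phases once enough side-street demand has accumulated.
                if self.detector.count >= self.threshold:
                    self.phase = "green_side"
                    self.detector.count = 0

        det = Detector()
        ctl = SignalController(det)
        for present in [1, 0, 1, 1, 1, 1, 0]:      # one value per scan cycle
            det.scan(bool(present))
            ctl.scan()
        print("phase after final scan cycle:", ctl.phase)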

  20. Award-Winning CARES/Life Ceramics Durability Evaluation Software Is Making Advanced Technology Accessible

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this task by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
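
    Probabilistic reliability analysis for brittle materials rests on Weibull strength statistics. The worked example below uses the standard two-parameter Weibull form as a textbook illustration, not the CARES/Life implementation; the parameter values are invented.

        # Two-parameter Weibull failure probability for a brittle component
        # under uniform uniaxial stress (textbook form, values invented).
        import math

        def weibull_pf(sigma, sigma_0, m, volume_ratio=1.0):
            """P_f = 1 - exp(-(V/V0) * (sigma/sigma_0)^m)."""
            return 1.0 - math.exp(-volume_ratio * (sigma / sigma_0) ** m)

        # Example: characteristic strength 400 MPa, Weibull modulus 10.
        for stress in (200.0, 300.0, 400.0):
            print(f"{stress:5.0f} MPa -> P_f = {weibull_pf(stress, 400.0, 10.0):.4f}")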

  1. Software development for the evaluation of the ergonomic compatibility on the selection of advanced manufacturing technology.

    PubMed

    Maldonado-Macías, A; Reyes, R; Guillen, L; García, J

    2012-01-01

    Advanced Manufacturing Technology (AMT) is one of the most relevant resources that companies have to achieve competitiveness and best performance. The selection of AMT is a complex problem that involves a significant amount of information and uncertainty when multiple aspects must be taken into consideration. Existing models for the selection of AMT largely lack the Human Factors and Ergonomics perspective, which can lead to a more complete and reliable decision. This paper presents the development of software that enhances the application of an Ergonomic Compatibility Evaluation Model, which supports decision-making processes by taking into consideration the ergonomic attributes of designs. Ergonomic Compatibility is a construct used in this model; it is based mainly on the concept of human-artifact compatibility in human-compatible systems. An Axiomatic Design approach using the Information Axiom was also extended under a fuzzy environment to obtain the Ergonomic Incompatibility Content. The extension of this axiom to the evaluation of ergonomic compatibility requirements was the theoretical framework of this research. An incremental methodology of four stages was used to design and develop the software, which enables comparison of AMT alternatives through the evaluation of Ergonomic Compatibility Attributes. PMID:22316972
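
    For orientation, the Information Axiom of Axiomatic Design assigns each alternative an information content I = log2(1/p), where p is the probability of satisfying a requirement (in the fuzzy extension, p derives from the overlap of fuzzy ranges); lower total information is preferred. The computation below is generic Axiomatic Design arithmetic with invented probabilities, not the authors' software.

        # Information content of design alternatives under the Information
        # Axiom: I = log2(1/p); the lowest total information wins.
        import math

        def information_content(p_success):
            return math.log2(1.0 / p_success)

        # Hypothetical success probabilities of two AMT alternatives on three
        # ergonomic requirements (values invented for illustration).
        alternatives = {"machine_A": [0.9, 0.7, 0.8], "machine_B": [0.95, 0.5, 0.9]}
        for name, probs in alternatives.items():
            total = sum(information_content(p) for p in probs)
            print(f"{name}: total information content = {total:.2f} bits")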

  2. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses; thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  3. Advanced communications technology satellite high burst rate link evaluation terminal communication protocol software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters' terminal used to communicate with the ACTS for various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan is command transmission, telemetry reception, error detection, and error recovery procedures.

  4. An Advanced Educational Program for Software Design Engineering at Graduate School of Information Science and Technology of Osaka University

    NASA Astrophysics Data System (ADS)

    Masuzawa, Toshimitsu; Inoue, Katsuro; Murakami, Koso; Fujiwara, Toru; Nishio, Shojiro

    This paper gives an overview of an advanced educational program for software design engineering currently conducted at the Graduate School of Information Science and Technology, Osaka University, under the MEXT grant “Initiatives for Attractive Education in Graduate Schools”. Software design engineering is expected to play a critical role in the successful design of next-generation software systems. The aim of the program is to bring up young researchers, equipped with the latest design methodologies and practical design experience, who can pioneer the frontier of software design engineering. The program is conducted in collaboration with industries that have rich practical experience and are facing the engineering problems to be solved in developing next-generation software.

  5. Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory

    SciTech Connect

    Abreu, P.; Aglietta, M.; Ahn, E.J.; Albuquerque, I.F.M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Ambrosio, M.; /INFN, Naples /Copenhagen Astron. Observ. /Nijmegen U., IMAPP

    2011-01-01

    The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs 'radio-hybrid' measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.

  6. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    PubMed

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and shows the system performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values were observed to corroborate well with extensive experimental investigations, which were found to be consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. A low relative error (RE = 0.09) and a high Willmott d-index (d_will = 0.981) reflect a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater. PMID:26856870
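
    The two agreement measures quoted, relative error and the Willmott index of agreement, have standard definitions. The sketch below computes them for invented predicted/observed vectors as an illustration; it does not reproduce the paper's data or its exact formulas.

        # Standard definitions of the agreement measures quoted in the
        # abstract, applied to invented predicted/observed values.
        import numpy as np

        def willmott_d(pred, obs):
            obs_mean = obs.mean()
            num = np.sum((pred - obs) ** 2)
            den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
            return 1.0 - num / den

        def relative_error(pred, obs):
            return np.mean(np.abs(pred - obs) / np.abs(obs))

        pred = np.array([0.91, 0.84, 0.78, 0.70])   # model-predicted removal fractions
        obs = np.array([0.93, 0.80, 0.79, 0.66])    # observed values (invented)
        print(f"d = {willmott_d(pred, obs):.3f}, RE = {relative_error(pred, obs):.3f}")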

  7. Detection of windthrow areas by object based image segmentation

    NASA Astrophysics Data System (ADS)

    Schmoeckel, J.; Kauffmann, M.

    2003-04-01

    In high-resolution aerial images, areas that are uniform from the point of view of the application are not represented by an average spectral pattern but are resolved into their components. While this enhanced information content offers the possibility of a more differentiated and correct classification, the classical spectral classification of single pixels comes up against its limits. Image analysis methods that take into account local neighborhood characteristics (edges, textures) can help to some extent, but deliver fragmented information that needs additional treatment. The new method of object-based multispectral image segmentation (software "eCognition") promises a solution. In a first step, the image is segmented into areas that 'look' uniform with respect to spectral, textural and shape properties. For each area, characteristic values are calculated. In the second step, the segments are classified according to these attributes. The classification can be refined by providing training areas and prior knowledge (fuzzy class membership functions). In a third step, the classification can be improved by iterative application of neighborhood criteria. In this work, the object-based segmentation approach is applied to the detection of windthrow areas in multispectral images gained from an airborne survey with a digital line scanner. The characteristic pattern of lying trees, obvious to the human observer, can be detected in this way. Additionally, foreground objects (clouds) and settlement areas, which must be excluded, can be found. The derived damage pattern can be used for an analysis of orographic influence on storm damage to forests in mountain areas (contribution of J. Schmoeckel and Ch. Kottmeier).

  8. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  9. Hybrid Modeling for Scenario-Based Evaluation of Failure Effects in Advanced Hardware-Software Designs

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David

    2001-01-01

    This paper describes an incremental scenario-based simulation approach to the evaluation of intelligent software for control and management of hardware systems. A hybrid continuous/discrete-event simulation of the hardware dynamically interacts with the intelligent software in operations scenarios. Embedded anomalous conditions and failures in simulated hardware can lead to emergent software behavior and identification of missing or faulty software or hardware requirements. An approach is described for extending simulation-based automated incremental failure modes and effects analysis to support concurrent evaluation of intelligent software and the hardware controlled by that software.

  10. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download.
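
    The two classification parameters named, elevation and standard deviation of elevation, make the core idea easy to sketch: derive a local-relief surface and split about the two means. The raster sketch below is a simplification of the object-based workflow (it thresholds cells rather than segmentation objects, and uses a random surface as a stand-in for SRTM).

        # Simplified raster version of the elevation / stddev-of-elevation
        # split (the paper operates on segmentation objects, not cells).
        import numpy as np
        from scipy.ndimage import generic_filter

        dem = np.random.default_rng(0).normal(500, 120, (200, 200))  # stand-in DEM

        local_sd = generic_filter(dem, np.std, size=5)   # 5x5 local relief proxy
        hi_elev = dem > dem.mean()
        hi_relief = local_sd > local_sd.mean()

        # Four sub-domains from the two mean-value thresholds.
        classes = hi_elev.astype(int) * 2 + hi_relief.astype(int)
        for c, label in enumerate(["low/smooth", "low/rough", "high/smooth", "high/rough"]):
            print(label, int((classes == c).sum()), "cells")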

  11. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and standard deviation of elevation respectively. Results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060

  12. Object-Based Analysis of Aerial Photogrammetric Point Cloud and Spectral Data for Land Cover Mapping

    NASA Astrophysics Data System (ADS)

    Debella-Gilo, M.; Bjørkelo, K.; Breidenbach, J.; Rahlf, J.

    2013-04-01

    The acquisition of 3D point data through both aerial laser scanning (ALS) and the matching of aerial stereo images, coupled with advances in image processing algorithms in recent years, provides opportunities to map land cover types with better precision than before. The present study applies Object-Based Image Analysis (OBIA) to 3D point cloud data obtained from matching of stereo aerial images, together with spectral data, to map land cover types of the Nord-Trøndelag county of Norway. The multi-resolution segmentation algorithm of the Definiens eCognition™ software is used to segment the scenes into homogeneous objects. The objects are then classified into different land cover types using rules created from the definitions given for each land cover type by the Norwegian Forest and Landscape Institute. The quality of the land cover map was evaluated using data collected in the field as part of the Norwegian National Forest Inventory. The results show that the classification has an overall accuracy of about 80% and a kappa index of about 0.65. OBIA is found to be a suitable method for utilizing 3D remote sensing data for land cover mapping in an effort to replace manual delineation methods.
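
    The overall accuracy and kappa index quoted above follow from a confusion matrix in the standard way. The sketch below shows the computation on an invented matrix, not the study's data.

        # Overall accuracy and Cohen's kappa from a confusion matrix
        # (rows = reference classes, columns = mapped classes; counts invented).
        import numpy as np

        cm = np.array([[120, 10, 5],
                       [15, 90, 10],
                       [5, 10, 60]])

        n = cm.sum()
        po = np.trace(cm) / n                                  # observed agreement
        pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2    # chance agreement
        kappa = (po - pe) / (1 - pe)
        print(f"overall accuracy = {po:.2f}, kappa = {kappa:.2f}")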

  13. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  14. Recent advances in hardware and software are to improve spent fuel measurements

    SciTech Connect

    Staples, P.; Beddingfield, D. H.; Lestone, J. P.; Pelowitz, D. G.; Bytchkov, M.; Starovich, Z.; Harizanov, I.; Luna-Vellejo, J.; Lavender, C.

    2001-01-01

    Vast quantities of spent fuel are available for safeguards measurements, primarily in the Commonwealth of Independent States (CIS) of the former Soviet Union. This spent fuel, much of which consists of long-cooling-time material, is going to become less unique in the world safeguards arena as reprocessing projects or permanent repositories continue to be delayed or postponed. The long cooling time of many of the spent fuel assemblies being prepared for intermediate-term storage in the CIS countries raises the possibility of increased accuracy in spent fuel assays. This improvement is made possible by the decay of the curium isotopes and of fission products. An important point to consider for the future, one that could advance safeguards measurements for reverification and inspection, is to determine what safeguards requirements should be imposed upon this 'new' class of spent fuel; improvements in measurement capability will obviously affect the safeguards requirements. What most significantly enables this progress in spent fuel measurements is the improvement in computer processing power and the software enhancements leading to user-friendly graphical user interfaces (GUIs). The software used for these projects significantly reduces the IAEA inspector's time expenditure for both learning and operating computer and data acquisition systems. At the same time, by standardizing the spent fuel measurements, it is possible to increase the reproducibility and reliability of the measurement data. Hardware systems will be described that take advantage of the increased computer control available to enable more complex measurement scenarios. A specific example of this is the active regulation of the high voltage of a spent fuel neutron coincidence counter's ³He tubes, and the subsequent scaling of measurement results to maintain a calibration for direct assay of the plutonium content of Fast Breeder Reactor spent fuel. The plutonium content has been successfully determined

  15. Object-based landslide detection in different geographic regions

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Hölbling, Daniel; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

    SPOT-5 images are combined with digital elevation models (DEM) for developing a consistent semi-automated landslide detection approach using eCognition (Trimble) software. Suitable image objects are generated by means of multiresolution segmentation. Expert knowledge, i.e. reported facts on features (e.g. mean object slope, mean NDVI) and thresholds that are commonly chosen by professionals for digital landslide mapping, is considered during classification. The applicability of a range of features is tested and the most promising parameters, i.e. features that produce appropriate results for both regions, are selected for landslide detection. However, minor adaptations of particular thresholds are necessary due to the distinct environmental conditions of the test sites. In order to reduce the number of required adjustments to a minimum, relational features and spectral indices are primarily used for classification. The obtained results are finally compared to manually digitized reference polygons and existing landslide inventories in order to quantify the applicability of the developed object-based landslide detection approach in different geographic regions.
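
    The rule-based classification step described above can be pictured as a handful of thresholded object features. The following Python sketch is illustrative only; the feature names and threshold values are assumptions, not the rule set used in the study:

    ```python
    # Classify one segmented image object as a landslide candidate or not.
    # Feature names and thresholds are hypothetical placeholders.
    def classify_object(obj):
        low_vegetation = obj["mean_ndvi"] < 0.2       # disturbed, sparsely vegetated ground
        steep_terrain  = obj["mean_slope_deg"] > 15   # landslide-prone slopes from the DEM
        bright_surface = obj["brightness"] > 120      # freshly exposed soil or rock
        return "landslide" if (low_vegetation and steep_terrain and bright_surface) else "other"

    segments = [
        {"mean_ndvi": 0.12, "mean_slope_deg": 28, "brightness": 140},
        {"mean_ndvi": 0.55, "mean_slope_deg": 10, "brightness": 80},
    ]
    print([classify_object(s) for s in segments])   # ['landslide', 'other']
    ```

    Relying mainly on relational features and spectral indices, as the authors do, keeps such thresholds comparatively stable across test sites.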

  16. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FPP) is described. The FPP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  17. Development of tools for safety analysis of control software in advanced reactors

    SciTech Connect

    Guarro, S.; Yau, M.; Motamed, M.

    1996-04-01

    Software-based control systems have gained a pervasive presence in a wide variety of applications, including nuclear power plant control and protection systems, which are within the oversight and licensing responsibility of the US Nuclear Regulatory Commission. While the cost effectiveness and flexibility of software-based plant process control is widely recognized, it is very difficult to achieve and prove high levels of demonstrated dependability and safety assurance for the functions performed by process control software, due to the very flexibility and potential complexity of the software itself. The development of tools to model, analyze, and test software designs and implementations in the context of the system that the software is designed to control can greatly assist the task of providing higher levels of assurance than those obtainable by software testing alone. This report presents and discusses the development of the Dynamic Flowgraph Methodology (DFM) and its application in the dependability and assurance analysis of software-based control systems. The features of the methodology and full-scale examples of application to both generic process and nuclear power plant control systems are presented and discussed in detail. The features of a workstation software tool developed to assist users in the application of DFM are also described.

  18. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  19. Inhibition of Return and Object-Based Attentional Selection

    ERIC Educational Resources Information Center

    List, Alexandra; Robertson, Lynn C.

    2007-01-01

    Visual attention research has revealed that attentional allocation can occur in space- and/or object-based coordinates. Using the direct and elegant design of R. Egly, J. Driver, and R. Rafal (1994), the present experiments tested whether space- and object-based inhibition of return (IOR) emerge under similar time courses. The experiments were…

  20. OBEST: The Object-Based Event Scenario Tree Methodology

    SciTech Connect

    WYSS, GREGORY D.; DURAN, FELICIA A.

    2001-03-01

    Event tree analysis and Monte Carlo-based discrete event simulation have been used in risk assessment studies for many years. This report details how features of these two methods can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant Object-Based Event Scenario Tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible (especially those that exhibit inconsistent or variable event ordering, which are difficult to represent in an event tree analysis). Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST method uses a recursive algorithm to solve the object model and identify all possible scenarios and their associated probabilities. Since scenario likelihoods are developed directly by the solution algorithm, they need not be computed by statistical inference based on Monte Carlo observations (as required by some discrete event simulation methods). Thus, OBEST is not only much more computationally efficient than these simulation methods, but it also discovers scenarios that have extremely low probabilities as a natural analytical result: scenarios that would likely be missed by a Monte Carlo-based method. This report documents the OBEST methodology and the demonstration software that implements it, and provides example OBEST models for several different application domains, including interactions among failing interdependent infrastructure systems, circuit analysis for fire risk evaluation in nuclear power plants, and aviation safety studies.
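
    The core idea of the recursive solution algorithm can be sketched compactly: probabilistic branches are attached to states, and a depth-first recursion enumerates every terminal path together with its probability. This Python sketch is a schematic illustration with invented states and branch probabilities, not the OBEST implementation:

    ```python
    # Each state maps to a list of (next_state, branch_probability) pairs.
    # States and probabilities here are purely illustrative.
    branches = {
        "start":   [("detect", 0.9), ("miss", 0.1)],
        "detect":  [("isolate", 0.7), ("spread", 0.3)],
        "miss":    [("spread", 1.0)],
        "isolate": [],   # terminal state
        "spread":  [],   # terminal state
    }

    def enumerate_scenarios(state, prob=1.0, path=()):
        path = path + (state,)
        if not branches[state]:            # terminal state: emit a complete scenario
            yield path, prob
            return
        for nxt, p in branches[state]:     # recurse over every probabilistic branch
            yield from enumerate_scenarios(nxt, prob * p, path)

    for path, prob in enumerate_scenarios("start"):
        print(" -> ".join(path), f"p = {prob:.3f}")
    ```

    Because every path is enumerated analytically, scenarios with vanishingly small probabilities appear in the output rather than being lost to sampling noise.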

  1. I-SPINE: a software package for advances in image-guided and minimally invasive spine procedures

    NASA Astrophysics Data System (ADS)

    Choi, Jae Jeong; Cleary, Kevin R.; Zeng, Jianchao; Gary, Kevin A.; Freedman, Matthew T.; Watson, Vance; Lindisch, David; Mun, Seong K.

    2000-05-01

    While image guidance is now routinely used in the brain in the form of frameless stereotaxy, it is beginning to be more widely used in other clinical areas such as the spine. At Georgetown University Medical Center, we are developing a program to provide advanced visualization and image guidance for minimally invasive spine procedures. This is a collaboration between an engineering-based research group and physicians from the radiology, neurosurgery, and orthopaedics departments. A major component of this work is the ISIS Center Spine Procedures Imaging and Navigation Engine, which is a software package under development as the base platform for technical advances.

  2. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  3. ATOS: Integration of advanced technology software within distributed Spacecraft Mission Operations Systems

    NASA Technical Reports Server (NTRS)

    Jones, M.; Wheadon, J.; Omullane, W.; Whitgift, D.; Poulter, K.; Niezette, M.; Timmermans, R.; Rodriguez, Ivan; Romero, R.

    1994-01-01

    The Advanced Technology Operations System (ATOS) is a program of studies into the integration of advanced applications (including knowledge based systems (KBS)) with ground systems for the support of spacecraft mission operations.

  4. Research in software allocation for advanced manned mission communications and tracking systems

    NASA Technical Reports Server (NTRS)

    Warnagiris, Tom; Wolff, Bill; Kusmanoff, Antone

    1990-01-01

    An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of Space Station Freedom (SSF) was performed. The intent of the assessment was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment process was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted from the assessment. It was concluded that the assessment process would not be completely successful for a system with changing requirements. It was also concluded that memory and hardware requirements were being modified to fit as a consequence of the change process, and that although throughput could not be quantified, potential problem areas could be identified. Finally, inherent flexibility of the system design was essential for the success of a system design with changing requirements. Recommendations resulting from the assessment included development of common software for some embedded controller functions, reduction of embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improvement in communications between software development personnel to enhance the integration process. Lastly, a critical observation was made that the software integration tasks did not appear to be addressed in the design process to the degree necessary for successful satisfaction of the system requirements.

  5. VIPER: an advanced software package to support high-throughput LC-MS peptide identification

    SciTech Connect

    Monroe, Matthew E.; Tolic, Nikola; Jaitly, Navdeep; Shaw, Jason L.; Adkins, Joshua N.; Smith, Richard D.

    2007-06-01

    High-throughput liquid chromatography-mass spectrometry (LC-MS) based proteomics analyses have necessitated the development of software to manipulate large volumes of detailed data and produce confident peptide/protein identifications. VIPER unites important data processing steps in a single software package that can be used to visualize peptide mass and LC elution (i.e. retention) time “feature” relationships from individual analyses, match these LC-MS features to accurate mass and time (AMT) tags of peptides previously identified in LC-MS/MS analyses, and identify and quantify pairs of isotopically labeled peptides.
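
    The feature-to-AMT-tag matching step can be illustrated with a simple tolerance test on mass (in ppm) and normalized elution time (NET). The tag values and tolerances below are invented for illustration; VIPER's actual matching is considerably more sophisticated:

    ```python
    # Hypothetical AMT tag database: peptide, monoisotopic mass, NET.
    amt_tags = [
        {"peptide": "PEPTIDER", "mass": 955.4624, "net": 0.31},
        {"peptide": "SAMPLEK",  "mass": 747.3712, "net": 0.52},
    ]

    def match_feature(feature, tags, ppm_tol=5.0, net_tol=0.02):
        """Return peptides whose AMT tag lies within mass and NET tolerances."""
        hits = []
        for tag in tags:
            ppm_err = abs(feature["mass"] - tag["mass"]) / tag["mass"] * 1e6
            if ppm_err <= ppm_tol and abs(feature["net"] - tag["net"]) <= net_tol:
                hits.append(tag["peptide"])
        return hits

    feature = {"mass": 955.4650, "net": 0.305}      # an observed LC-MS feature
    print(match_feature(feature, amt_tags))         # ['PEPTIDER'] (~2.7 ppm error)
    ```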

  6. The Center for Technology for Advanced Scientific Component Software (TASCS) Lawrence Livermore National Laboratory - Site Status Update

    SciTech Connect

    Epperly, T W

    2008-12-03

    This report summarizes LLNL's progress for the period April through September of 2008 for the Center for Technology for Advanced Scientific Component Software (TASCS) SciDAC. The TASCS project is organized into four major thrust areas: CCA Environment (72%), Component Technology Initiatives (16%), CCA Toolkit (8%), and User and Application Outreach & Support (4%). The percentage of LLNL's effort allocation is shown in parentheses for each thrust area. Major thrust areas are further broken down into activity areas; LLNL's effort directed to each activity is shown in Figure 1. Enhancements, Core Tools, and Usability are all part of CCA Environment, and Software Quality is part of Component Technology Initiatives. The balance of this report will cover our accomplishments in each of these activity areas.

  7. Developing Open Source Software To Advance High End Computing. Report to the President.

    ERIC Educational Resources Information Center

    National Coordination Office for Information Technology Research and Development, Arlington, VA.

    This is part of a series of reports to the President and Congress developed by the President's Information Technology Advisory Committee (PITAC) on key contemporary issues in information technology. This report defines open source software, explains PITAC's interest in this model, describes the process used to investigate issues in open source…

  8. Object-based attention guided by an invisible object.

    PubMed

    Zhang, Xilin; Fang, Fang

    2012-11-01

    Evidence for object-based attention typically comes from studies using displays with visible objects, and little is known about whether object-based attention can occur with invisible objects. We investigated this issue with a modified double-rectangle cuing paradigm, originally developed by Egly et al. (J Exp Psychol Gen 123:161-177, 1994). In this study, low-contrast rectangles were presented very briefly, which rendered them invisible to subjects. With the invisible rectangles, we found a classical object-based attentional effect as indexed by the same-object effect. We also found the instantaneous object effect: object-based attention was dependent on the orientation of the rectangles presented with the target, providing evidence for the dynamic updating hypothesis (Ho and Yeh in Acta Psychol 132:31-39, 2009). These results suggest that object-based attention can be guided by an invisible object in an automatic way, with a minimal influence from high-level top-down control. PMID:22990295

  9. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formulation can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
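
    The essence of a scenario-based stochastic formulation can be sketched in a few lines: candidate dispatch decisions are scored by their expected cost over probability-weighted renewable-output scenarios. All numbers below are invented, and the model is drastically simplified relative to a production MMS:

    ```python
    # Wind-output scenarios as (probability, wind MW); values are illustrative.
    scenarios = [(0.2, 10.0), (0.5, 30.0), (0.3, 50.0)]
    demand = 80.0            # MW of demand to be served
    gen_cost = 40.0          # $/MWh for scheduled conventional generation
    shortfall_cost = 500.0   # $/MWh penalty for unserved energy

    def expected_cost(gen):
        """Deterministic generation cost plus expected shortfall penalty."""
        cost = gen * gen_cost
        for prob, wind in scenarios:
            shortfall = max(0.0, demand - wind - gen)
            cost += prob * shortfall * shortfall_cost
        return cost

    # Pick the schedule (in whole MW) minimizing expected cost over all scenarios.
    best = min(range(0, 81), key=lambda g: expected_cost(float(g)))
    print(f"schedule {best} MW, expected cost ${expected_cost(float(best)):,.0f}")
    ```

    Even this toy version shows the key design choice: hedging against low-wind scenarios by scheduling more firm generation than the expected wind output alone would suggest.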

  10. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected systems accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
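
    At its core, multilateration reduces to nonlinear least squares on range residuals. The following Gauss-Newton sketch uses a synthetic station geometry and noiseless ranges; it is a schematic illustration, not MICRODOT's algorithm:

    ```python
    import numpy as np

    # Known station coordinates (meters) and a synthetic vehicle position.
    stations = np.array([[0.0, 0.0, 0.0],
                         [1000.0, 0.0, 0.0],
                         [0.0, 1000.0, 0.0],
                         [0.0, 0.0, 1000.0]])
    truth = np.array([300.0, 400.0, 500.0])
    ranges = np.linalg.norm(stations - truth, axis=1)   # noiseless range measurements

    x = np.array([100.0, 100.0, 100.0])                 # initial position guess
    for _ in range(10):
        d = np.linalg.norm(stations - x, axis=1)        # predicted ranges
        J = (x - stations) / d[:, None]                 # Jacobian of range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        x += dx                                         # Gauss-Newton update
    print(np.round(x, 3))   # converges to [300, 400, 500]
    ```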

  11. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of the subject DOE-sponsored research concerns parallel methods, algorithms, and software for complex applications such as those in coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration, and related issues. In addition to the fundamental questions related to developing new methods and mapping them to parallel computers, there are important software issues. The group has played a significant role in the development of software both for iterative solvers and for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large-scale applications (not computational kernels alone). The main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE-sponsored research, significant progress has been made. A report on the progression of the research is given, and at the end of the report is a list of related publications and presentations covering the entire grant period.

  12. User-specific interfaces for clinical data-management systems: an object-based approach.

    PubMed Central

    Wilton, R.

    1992-01-01

    Multiple user-specific visual interfaces are desirable in any computer-based clinical data-management system that is used by different people with different jobs to perform. The programming and maintenance problems of supporting multiple user interfaces to a single information system can be addressed by separating user-interface functionality from data-management subsystems, and by building user interfaces from object-based software components whose functionality is bound to an underlying server-client data-management architecture. Experience with this approach in a patient-tracking system suggests that this object-based approach is viable in the design of a user interface for a clinical information system. PMID:1482880

  13. Object based data access at the D0 experiment

    SciTech Connect

    Fuess, S.; D0 Collaboration

    1995-11-01

    The D0 Experiment at Fermilab is currently participating in the FNAL Computing Division's "Computing for Analysis Project" (CAP) to investigate object-based data storage and access. Following a short description of the CAP system architecture, the D0 data model is explored. A brief discussion of the method of operation of the CAP system leads into a concluding section.

  14. Crossmodal Object-Based Attention: Auditory Objects Affect Visual Processing

    ERIC Educational Resources Information Center

    Turatto, M.; Mazza, V.; Umilta, C.

    2005-01-01

    According to the object-based view, visual attention can be deployed to ''objects'' or perceptual units, regardless of spatial locations. Recently, however, the notion of object has also been extended to the auditory domain, with some authors suggesting possible interactions between visual and auditory objects. Here we show that task-irrelevant…

  15. Metadata management for CDP in object-based file system

    NASA Astrophysics Data System (ADS)

    Yao, Jie; Cao, Qiang; Huang, Jianzhong

    2009-08-01

    Object-based storage integrates the advantages of both NAS and SAN and can be applied in large-capacity, low-cost, large-scale storage systems built from commodity devices. Continuous data protection (CDP) is a methodology that continuously captures or tracks data modifications and stores the changes independent of the primary data, enabling recovery from any point in the past. An efficient file system optimized for CDP is needed to provide the CDP feature in an object-based storage system. In this paper, a new metadata management method is presented. First, all necessary metadata are recorded when changes happen to the file system, and a journal-like data placement algorithm stores these metadata. Second, the metadata management method provides both the CDP feature and the object-based feature; two types of write operations are analyzed to reduce storage space consumption, and the object-based data allocation algorithm takes advantage of the distributed file system to process CDP operations concurrently across storage nodes. Third, history revisions and recovery operations are discussed. Finally, experimental results are presented and analyzed.
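
    The journal-like metadata placement can be pictured as an append-only log of timestamped change records, from which the history of an object up to any recovery point can be replayed. The record schema below is an assumption for illustration, not the paper's on-disk format:

    ```python
    import json, time

    class MetadataJournal:
        """Append-only journal of per-object change records for CDP."""
        def __init__(self):
            self.records = []                 # kept in arrival order, never rewritten

        def log(self, object_id, op, offset, length):
            self.records.append({"t": time.time(), "obj": object_id,
                                 "op": op, "off": offset, "len": length})

        def history(self, object_id, until):
            """All changes to an object up to a chosen recovery point in time."""
            return [r for r in self.records
                    if r["obj"] == object_id and r["t"] <= until]

    j = MetadataJournal()
    j.log("obj-42", "write", 0, 4096)
    j.log("obj-42", "write", 4096, 512)
    print(json.dumps(j.history("obj-42", until=time.time()), indent=1))
    ```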

  16. Software Systems 2--Compiler and Operating Systems Lab--Advanced, Data Processing Technology: 8025.33.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL.

    The course outline has been prepared as a guide to help the student develop the skills and knowledge necessary to succeed in the field of data processing. By learning the purpose and principles of compiler programs and operating systems, the student will become familiar with advanced data processing procedures that are representative of computer…

  17. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed-memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand-challenge-class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  18. Multiscale quantification of urban composition from EO-1/Hyperion data using object-based spectral unmixing

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2016-05-01

    Quantification of the urban composition is important in urban planning and management. Previous research has primarily focused on unmixing medium-spatial-resolution multispectral imagery using spectral mixture analysis (SMA) in order to estimate the abundance of urban components. For this study, an object-based multiple endmember spectral mixture analysis (MESMA) approach was applied to unmix the 30-m Earth Observing-1 (EO-1)/Hyperion hyperspectral imagery. The abundance of two physical urban components (vegetation and impervious surface) was estimated and mapped at multiple scales and in two defined geographic zones. The estimation results were validated against a reference dataset generated from fine-spatial-resolution aerial photography. The object-based MESMA approach was compared with its pixel-based counterpart, and the EO-1/Hyperion hyperspectral data were compared with simulated EO-1/Advanced Land Imager (ALI) multispectral data in the unmixing modeling. The pros and cons of the object-based MESMA were evaluated. The results illustrate that object-based MESMA is promising for unmixing medium-spatial-resolution hyperspectral imagery to quantify the urban composition, and that it is an attractive alternative to traditional pixel-based mixture analysis for various applications.
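
    The unmixing step itself is a constrained linear inverse problem: each object's mean spectrum is modeled as a non-negative combination of endmember spectra. The sketch below uses synthetic endmembers and a single endmember set; MESMA additionally iterates over multiple candidate endmember combinations per object:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Synthetic endmember matrix: columns = vegetation, impervious surface;
    # rows = four spectral bands. Values are invented for illustration.
    E = np.array([[0.05, 0.40],
                  [0.45, 0.35],
                  [0.30, 0.30],
                  [0.60, 0.25]])
    true_fracs = np.array([0.7, 0.3])
    spectrum = E @ true_fracs                 # observed mean spectrum of one object

    fracs, resid = nnls(E, spectrum)          # non-negative least-squares unmixing
    fracs /= fracs.sum()                      # impose the sum-to-one constraint
    print(fracs)   # ~[0.7, 0.3]: vegetation and impervious abundances
    ```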

  19. Developing a Comprehensive Software Suite for Advanced Reactor Performance and Safety Analysis

    SciTech Connect

    Pointer, William David; Bradley, Keith S; Fischer, Paul F; Smith, Micheal A; Tautges, Timothy J; Ferencz, Robert M; Martineau, Richard C; Jain, Rajeev; Obabko, Aleksandr; Billings, Jay Jay

    2013-01-01

    This paper provides an introduction to the reactor analysis capabilities of the nuclear power reactor simulation tools that are being developed as part of the U.S. Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) Toolkit. The NEAMS Toolkit is an integrated suite of multi-physics simulation tools that leverage high-performance computing to reduce uncertainty in the prediction of the performance and safety of advanced reactor and fuel designs. The Toolkit effort comprises two major components: the Fuels Product Line (FPL), which provides tools for fuel performance analysis, and the Reactor Product Line (RPL), which provides tools for reactor performance and safety analysis. This paper provides an overview of the NEAMS RPL development effort.

  20. Advances in hardware, software, and automation for 193nm aerial image measurement systems

    NASA Astrophysics Data System (ADS)

    Zibold, Axel M.; Schmid, R.; Seyfarth, A.; Waechter, M.; Harnisch, W.; Doornmalen, H. v.

    2005-05-01

    A new, second-generation AIMS fab 193 system has been developed which is capable of emulating the lithographic imaging of any type of reticle, such as binary and phase shift masks (PSM), including resolution enhancement technologies (RET) such as optical proximity correction (OPC) or scatter bars. The system emulates the imaging process by adjustment of the lithography-equivalent illumination and imaging conditions of 193nm wafer steppers, including circular, annular, dipole and quadrupole type illumination modes. The AIMS fab 193 allows a rapid prediction of the wafer printability of critical mask features, including dense patterns and contacts, defects or repairs, by acquiring through-focus image stacks by means of a CCD camera followed by quantitative image analysis. Moreover, the technology can be readily applied to directly determine the process window of a given mask under stepper imaging conditions. Since data acquisition is performed electronically, AIMS in many applications replaces the need for costly and time-consuming wafer prints using a wafer stepper/scanner followed by CD SEM resist or wafer analysis. The AIMS fab 193 second-generation system is designed for 193nm lithography mask printing predictability down to the 65nm node. In addition to hardware improvements, a new modular AIMS software is introduced, allowing for a fully automated operation mode. Multiple pre-defined points can be visited and through-focus AIMS measurements can be executed automatically in a recipe-based mode. To increase the effectiveness of the automated operation mode, the throughput of the system to locate the area of interest and to acquire the through-focus images is increased by almost a factor of two in comparison with the first-generation AIMS systems. In addition, a new software plug-in concept is realised for the tools. One new feature has been successfully introduced as "Global CD Map", enabling automated investigation of global mask quality based on the local determination of

  1. Advanced software development workstation. Comparison of two object-oriented development methodologies

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report is an attempt to clarify some of the concerns raised about the OMT method, specifically that OMT is weaker than the Booch method in a few key areas. This interim report specifically addresses the following issues: (1) is OMT object-oriented or only data-driven?; (2) can OMT be used as a front-end to implementation in C++?; (3) the inheritance concept in OMT is in contradiction with the 'pure and real' inheritance concept found in object-oriented (OO) design; (4) low support for software life-cycle issues, for project and risk management; (5) uselessness of functional modeling for the ROSE project; and (6) problems with event-driven and simulation systems. The conclusion of this report is that both Booch's method and Rumbaugh's method are good OO methods, each with strengths and weaknesses in different areas of the development process.

  2. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  3. Advanced Communications Technology Satellite high burst rate link evaluation terminal experiment control and monitor software maintenance manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.

  4. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are key factors characterizing the distribution of natural vegetation. Topographic form is particularly influential on the living conditions of plants, such as soil moisture, sunlight, and wind exposure. Vegetation associations having similar characteristics are present in locations having similar topographic conditions, unless natural disturbances such as landslides and forest fires, or artificial disturbances such as deforestation and man-made plantation, bring about changes in such conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective than pixel-based classification for producing a vegetation map.

  5. An object-based methodology for knowledge representation in SGML

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows knowledge to be represented from multiple and varying viewpoints. The event construct allows actual uses of the knowledge to be represented. The SGML implementation of the methodology facilitates usability; structured, yet flexible, knowledge design; and the sharing and reuse of knowledge class libraries.

  6. Object-Based Epistemology at a Creationist Museum

    NASA Astrophysics Data System (ADS)

    Wendel, Paul J.

    2011-01-01

    In a regional young-earth creationist museum, objects are presented as if they speak for themselves, purportedly embodying proof that the earth is less than 10,000 years old, that humans have lived on earth throughout its history, and that dinosaurs and humans lived simultaneously. In public lectures, tours, and displays, museum associates emphasize direct observation over inference or theory. These emphases resonate closely with the "object-based epistemology" of the late nineteenth century described in Steven Conn's Museums and American Intellectual Life, 1876-1926. In Conn's description, museum objects, artfully arranged and displayed, were intended to speak for themselves, and observation and categorization were valued over experiment and theory. The regional young-earth creationist museum is observed to partly succeed and partly fail in implementing an object-based epistemology. Although object-based epistemology represents a nineteenth-century approach to knowledge and museum display, it is compatible with an inductive approach to biblical interpretation and it confers various rhetorical advantages to creationist arguments. It is concluded that a focus on the theory-laden nature of data would likely strengthen nature-of-science education efforts to increase public acceptance of evolution.

  7. Object-based Landslide Mapping: Examples, Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Eisank, Clemens; Friedl, Barbara; Chang, Kang-Tsung; Tsai, Tsai-Tsung; Birkefeldt Møller Pedersen, Gro; Betts, Harley; Cigna, Francesca; Chiang, Shou-Hao; Aubrey Robson, Benjamin; Bianchini, Silvia; Füreder, Petra; Albrecht, Florian; Spiekermann, Raphael; Weinke, Elisabeth; Blaschke, Thomas; Phillips, Chris

    2016-04-01

    Over the last decade, object-based image analysis (OBIA) has been increasingly used for mapping landslides that occur after triggering events such as heavy rainfall. The increasing availability and quality of Earth Observation (EO) data in terms of temporal, spatial and spectral resolution allows for comprehensive mapping of landslides at multiple scales. Most often very high resolution (VHR) or high resolution (HR) optical satellite images are used in combination with a digital elevation model (DEM) and its products such as slope and curvature. Semi-automated object-based mapping makes use of various characteristics of image objects that are derived through segmentation. OBIA enables numerous spectral, spatial, contextual and textural image object properties to be applied during an analysis. This is especially useful when mapping complex natural features such as landslides and constitutes an advantage over pixel-based image analysis. However, several drawbacks in the process of object-based landslide mapping have not been overcome yet. The developed classification routines are often rather complex and limited regarding their transferability across areas and sensors. There is still more research needed to further improve present approaches and to fully exploit the capabilities of OBIA for landslide mapping. In this study several examples of object-based landslide mapping from various geographical regions with different characteristics are presented. Examples from the Austrian and Italian Alps are shown, whereby one challenge lies in the detection of small-scale landslides on steep slopes while preventing the classification of false positives with similar spectral properties (construction areas, utilized land, etc.). Further examples feature landslides mapped in Iceland, where the differentiation of landslides from other landscape-altering processes in a highly dynamic volcanic landscape poses a very distinct challenge, and in Norway, which is exposed to multiple

  8. Advanced communications technology satellite high burst rate link evaluation terminal power control and rain fade software test plan, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters terminal to communicate with the ACTS for various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.

  9. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, Patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.

  10. An object-based methodology for knowledge representation

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.

  11. MeltDB 2.0–advances of the metabolomics software system

    PubMed Central

    Kessler, Nikolas; Neuweger, Heiko; Bonte, Anja; Langenkämper, Georg; Niehaus, Karsten; Nattkemper, Tim W.; Goesmann, Alexander

    2013-01-01

    Motivation: The research area of metabolomics has achieved tremendous popularity and development in the last couple of years. Owing to its unique interdisciplinarity, it requires knowledge from various scientific disciplines to be combined. Advances in high-throughput technology and the consequently growing quality and quantity of data put new demands on applied analytical and computational methods. Exploration of the datasets that are ultimately generated and analyzed furthermore relies on powerful tools for data mining and visualization. Results: To cover and keep up with these requirements, we have created MeltDB 2.0, a next-generation web application addressing the storage, sharing, standardization, integration and analysis of metabolomics experiments. New features improve both the efficiency and the effectiveness of the entire processing pipeline for chromatographic raw data, from pre-processing to the derivation of new biological knowledge. First, the generation of high-quality metabolic datasets has been vastly simplified. Second, the new statistics toolbox allows these datasets to be investigated according to a wide spectrum of scientific and explorative questions. Availability: The system is publicly available at https://meltdb.cebitec.uni-bielefeld.de. A login is required but freely available. Contact: nkessler@cebitec.uni-bielefeld.de PMID:23918246

  12. Observed bodies generate object-based spatial codes.

    PubMed

    Taylor, Alison; Flynn, Maria; Edmonds, Caroline J; Gardner, Mark R

    2016-09-01

    Contemporary studies of spatial and social cognition frequently use human figures as stimuli. The interpretation of such studies may be complicated by spatial compatibility effects that emerge when researchers employ spatial responses, and participants spontaneously code spatial relationships about an observed body. Yet, the nature of these spatial codes (whether they are location- or object-based, and coded from the perspective of the observer or the figure) has not been determined. Here, we investigated this issue by exploring spatial compatibility effects arising for objects held by a visually presented whole-bodied schematic human figure. In three experiments, participants responded to the colour of the object held in the figure's left or right hand, using left or right key presses. Left-right compatibility effects were found relative to the participant's egocentric perspective, rather than the figure's. These effects occurred even when the figure was rotated by 90° to the left or to the right, and the coloured objects were aligned with the participant's midline. These findings are consistent with spontaneous spatial coding from the participant's perspective and relative to the normal upright orientation of the body. This evidence for object-based spatial coding implies that the domain general cognitive mechanisms that result in spatial compatibility effects may contribute to certain spatial perspective-taking and social cognition phenomena. PMID:27235754

  13. Object-based attentional selection modulates anticipatory alpha oscillations.

    PubMed

    Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán

    2014-01-01

    Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object-category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection, similarly to spatial and feature-based attention, gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554

  14. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1-km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. GLCM texture measures are then computed to form the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference; the overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification and 15.7% higher than that of the traditional object-based classification method.
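
    The factor-importance step can be reproduced in outline with any random forest implementation. The sketch below uses synthetic stand-ins for DEM-derived terrain factors; the factor names and data are hypothetical, not the study's inputs:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                     # 4 hypothetical terrain factors
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)     # labels driven by factors 0 and 2

    # Fit a random forest and read off per-factor importances; factors 'relief'
    # and 'roughness' (0 and 2) should rank highest on this synthetic data.
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, imp in zip(["relief", "slope", "roughness", "curvature"],
                         rf.feature_importances_):
        print(f"{name:10s} importance = {imp:.3f}")
    ```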

  15. Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-01-01

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation. PMID:26551577

  16. Object-based morphometry of the cerebral cortex.

    PubMed

    Mangin, J F; Rivière, D; Cachia, A; Duchesnay, E; Cointepas, Y; Papadopoulos-Orfanos, D; Collins, D L; Evans, A C; Régis, J

    2004-08-01

    Most of the approaches dedicated to automatic morphometry rely on a point-by-point strategy based on warping each brain toward a reference coordinate system. In this paper, we describe an alternative object-based strategy dedicated to the cortex. This strategy relies on an artificial neuroanatomist performing automatic recognition of the main cortical sulci and parcellation of the cortical surface into gyral patches. A set of shape descriptors, which can be compared across subjects, is then attached to the sulcus and gyrus related objects segmented by this process. The framework is used to perform a study of 142 brains of the International Consortium for Brain Mapping (ICBM) database. This study reveals some correlates of handedness on the size of the sulci located in motor areas, which was not detected previously using standard voxel based morphometry. PMID:15338731

  17. Object-based strategy for morphometry of the cerebral cortex.

    PubMed

    Mangin, J F; Rivière, D; Cachia, A; Papadopoulos-Orfanos, D; Collins, D L; Evans, A C; Régis, J

    2003-07-01

    Most of the approaches dedicated to automatic morphometry rely on a point-by-point strategy based on warping each brain towards a reference coordinate system. In this paper, we describe an alternative object-based strategy dedicated to the cortex. This strategy relies on an artificial neuroanatomist performing automatic recognition of the main cortical sulci and parcellation of the cortical surface into gyral patches. A set of shape descriptors, which can be compared across subjects, is then attached to the sulcus and gyrus related objects segmented by this process. The framework is used to perform a study of 142 brains of the ICBM database. This study reveals some correlates of handedness on the size of the sulci located in motor areas, which seem to be beyond the scope of the standard voxel based morphometry. PMID:15344455

  18. Attentional rhythm: a temporal analogue of object-based attention.

    PubMed

    De Freitas, Julian; Liverence, Brandon M; Scholl, Brian J

    2014-02-01

    The underlying units of attention are often discrete visual objects. Perhaps the clearest form of evidence for this is the same-object advantage: Following a spatial cue, responses are faster to probes occurring on the same object than they are to probes occurring on other objects, while equating brute distance. Is this a fundamentally spatial effect, or can same-object advantages also occur in time? We explored this question using independently normed rhythmic temporal sequences, structured into phrases and presented either visually or auditorily. Detection was speeded when cues and probes both lay within the same rhythmic phrase, compared to when they spanned a phrase boundary, while equating brute duration. This same-phrase advantage suggests that object-based attention is a more general phenomenon than has been previously suspected: Perceptual structure constrains attention, in both space and time, and in both vision and audition. PMID:23586668

  19. Introducing shape constraints into object-based traveltime tomography

    NASA Astrophysics Data System (ADS)

    Gaullier, G.; Charbonnier, P.; Heitz, F.; Côte, P.

    2016-09-01

    Traveltime tomography is a difficult, ill-posed reconstruction problem due to the nonlinearity of the forward model and the limited number of measurements usually available. In such an adverse situation, pixel-based regularization methods are generally unable to provide satisfactory reconstructions. In this paper we propose a novel object-based reconstruction method that introduces prior information about the shape of the structures to be reconstructed, which yields high quality geoacoustic inversion. The proposed method approaches the forward model by a series of linear problems, leading to a sequence of minimizations during which the shape prior is introduced. The method is demonstrated on synthetic and real data, collected on a specific bench dedicated to non-destructive testing of civil engineering structures.
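
    Each linearized sub-problem reduces to a regularized least-squares solve over cell slownesses. The sketch below shows a generic Tikhonov-damped step on synthetic data; the paper's contribution is precisely to replace such a generic regularizer with shape priors during the sequence of minimizations:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    G = rng.random((20, 50))                     # ray-path lengths through 50 cells (synthetic)
    m_true = np.zeros(50)
    m_true[20:25] = 0.1                          # a compact slowness anomaly
    t = G @ m_true                               # synthetic traveltimes

    # Tikhonov-damped normal equations: (G^T G + lam I) m = G^T t.
    # With only 20 rays and 50 cells the problem is underdetermined,
    # so the estimate is a smeared, damped image of the anomaly.
    lam = 0.5
    A = G.T @ G + lam * np.eye(50)
    m_est = np.linalg.solve(A, G.T @ t)
    print(np.round(m_est[18:27], 3))
    ```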

  20. Advanced communications technology satellite high burst rate link evaluation terminal experiment control and monitor software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document.

  1. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A) software assurance plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert; Smith, Claude

    1994-01-01

This document defines the responsibilities of Software Quality Assurance (SQA) for the development of the flight software installed in EOS/AMSU-A instruments, and the ground support software used in the test and integration of the EOS/AMSU-A instruments.

  2. A proto-object-based computational model for visual saliency.

    PubMed

    Yanulevskaya, Victoria; Uijlings, Jasper; Geusebroek, Jan-Mark; Sebe, Nicu; Smeulders, Arnold

    2013-01-01

State-of-the-art bottom-up saliency models often assign high saliency values at or near high-contrast edges, whereas people tend to look within the regions delineated by those edges, namely the objects. To resolve this inconsistency, in this work we estimate saliency at the level of coherent image regions. According to object-based attention theory, the human brain groups similar pixels into coherent regions, which are called proto-objects. The saliency of these proto-objects is estimated and combined. As usual, attention is given to the most salient image regions. In this paper we employ state-of-the-art computer vision techniques to implement a proto-object-based model for visual attention. In particular, a hierarchical image segmentation algorithm is used to extract proto-objects. The two most powerful ways to estimate saliency, rarity-based and contrast-based saliency, are generalized to assess saliency at the proto-object level. The rarity-based saliency assesses whether the proto-object contains rare or outstanding details. The contrast-based saliency estimates how much the proto-object differs from its surroundings. However, not all image regions with high contrast to the surroundings attract human attention. We take this into account by distinguishing between external and internal contrast-based saliency. Where the external contrast-based saliency estimates the difference between the proto-object and the rest of the image, the internal contrast-based saliency estimates the complexity of the proto-object itself. We evaluate the performance of the proposed method and its components on two challenging eye-fixation datasets (Judd, Ehinger, Durand, & Torralba, 2009; Subramanian, Katti, Sebe, Kankanhalli, & Chua, 2010). The results show the importance of rarity-based and both external and internal contrast-based saliency in fixation prediction. Moreover, the comparison with state-of-the-art computational models for visual saliency demonstrates the

  3. Object-based change detection: dimension of damage in residential areas of Abu Suruj, Sudan

    NASA Astrophysics Data System (ADS)

    Demharter, Timo; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

Given the importance of change detection, especially in the field of crisis management, this paper discusses the advantages of object-based change detection. The project and the methods used provide an opportunity to coordinate relief actions strategically. The principal objective was to develop an algorithm for rapidly detecting damaged and destroyed buildings in the area of Abu Suruj, a Sudanese village located in West Darfur that has fallen victim to civil war. The eCognition Developer software was used to perform an object-based change detection on two panchromatic QuickBird-2 images from two different dates: the first shows the area before, and the second after, the massacres in this region. Classification of the huts of Abu Suruj was achieved by first segmenting the huts and then classifying them on the basis of geometrical and brightness-related values. The huts were classified as "new", "destroyed" and "preserved" with the help of an automated algorithm. Finally, the results were presented in the form of a map which displays the different conditions of the huts. The accuracy of the project was validated by an accuracy assessment resulting in an overall classification accuracy of 90.50 percent. These change detection results allow aid organizations to provide quick and efficient help where it is needed most.
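
    The eCognition ruleset itself is not given in the abstract; the sketch below only illustrates the overall logic of labelling huts "new", "destroyed" or "preserved" by comparing segments from the two dates. The brightness threshold and the crude blob segmentation are our assumptions, not the paper's method:

    ```python
    # Minimal sketch (not the paper's eCognition ruleset): classify huts as
    # "new", "destroyed" or "preserved" by comparing segment masks from two dates.
    import numpy as np
    from scipy import ndimage

    def hut_mask(pan_image, brightness_thresh=0.6):
        """Very crude stand-in for the paper's segmentation: bright blobs."""
        labels, n = ndimage.label(pan_image > brightness_thresh)
        return labels, n

    def classify_huts(pan_before, pan_after):
        before, n_before = hut_mask(pan_before)
        after, n_after = hut_mask(pan_after)
        status = {}
        for hut_id in range(1, n_after + 1):
            overlap = before[after == hut_id]
            # Present after but not before -> "new"; in both -> "preserved".
            status[("after", hut_id)] = "preserved" if overlap.max() > 0 else "new"
        for hut_id in range(1, n_before + 1):
            if after[before == hut_id].max() == 0:  # present before, gone after
                status[("before", hut_id)] = "destroyed"
        return status
    ```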

  4. Object-based classification of semi-arid wetlands

    NASA Astrophysics Data System (ADS)

    Halabisky, Meghan; Moskal, L. Monika; Hall, Sonia A.

    2011-01-01

Wetlands are valuable ecosystems that benefit society. However, throughout history wetlands have been converted to other land uses. For this reason, timely wetland maps are necessary for developing strategies to protect wetland habitat. The goal of this research was to develop a time-efficient, automated, low-cost method to map wetlands in a semi-arid landscape that could be scaled up for use at a county or state level, and could lay the groundwork for expanding to forested areas. Therefore, it was critical that the research project contain two components: accurate automated feature extraction and the use of low-cost imagery. For that reason, we tested the effectiveness of geographic object-based image analysis (GEOBIA) to delineate and classify wetlands using freely available true color aerial photographs provided through the National Agriculture Imagery Program. The GEOBIA method produced an overall accuracy of 89% (khat = 0.81), despite the absence of infrared spectral data. GEOBIA provides the automation that can save significant resources when scaled up while still providing sufficient spatial resolution and accuracy to be useful to state and local resource managers and policymakers.

  5. Geographic Object-Based Image Analysis - Towards a new paradigm

    NASA Astrophysics Data System (ADS)

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this constitutes a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, the role of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.

  6. Object-based rapid change detection for disaster management

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

Rapid change detection is used in cases of natural hazards and disasters. This analysis leads to quick information about areas of damage. In certain cases the lack of information after catastrophic events obstructs supporting measures within disaster management. Earthquakes, tsunamis, civil war, volcanic eruptions, droughts and floods have much in common: people are directly affected, and landscapes and buildings are destroyed. In every case geospatial data are necessary to gain knowledge as a basis for decision support. Where to go first? Which infrastructure is usable? How much area is affected? These are essential questions which need to be answered before appropriate, effective help can be established. This study presents an innovative strategy to retrieve post-event information by use of an object-based change detection approach. Within a transferable framework, the developed algorithms can be applied to a range of remote sensing data over different investigation areas. Several case studies form the basis for the results. Through a coarse division into statistical parts and segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborated normalized temporal change index (NTCI), panchromatic datasets are used to extract areas which are destroyed, areas which were not affected, and newly developing areas where rebuilding has already started. The results of the study are also applicable to monitoring urban growth.
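
    The abstract does not define the NTCI formula, so the normalized-difference form below is purely an assumption, shown only to make concrete the idea of comparing two co-registered panchromatic acquisitions:

    ```python
    # Assumed normalized-difference form; the paper's actual NTCI definition
    # is not given in the abstract.
    import numpy as np

    def normalized_temporal_change_index(pan_t1, pan_t2, eps=1e-6):
        """Illustrative normalized difference of two co-registered images."""
        pan_t1 = pan_t1.astype(float)
        pan_t2 = pan_t2.astype(float)
        return (pan_t2 - pan_t1) / (pan_t2 + pan_t1 + eps)
    ```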

  7. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
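
    The authors' model is neurally grounded and is not reproduced here; the toy sketch below merely makes the region-level idea concrete by treating SLIC superpixels as stand-ins for proto-objects and scoring each by its colour contrast to the scene. The segmentation method, segment count and contrast score are all our assumptions:

    ```python
    # Toy region-level saliency, NOT the authors' neural model: SLIC superpixels
    # stand in for proto-objects; each is scored by contrast to the global mean.
    import numpy as np
    from skimage.segmentation import slic

    def proto_object_saliency(image_rgb, n_segments=200):
        segments = slic(image_rgb, n_segments=n_segments, compactness=10.0)
        flat = image_rgb.reshape(-1, 3).astype(float)
        seg_flat = segments.ravel()
        global_mean = flat.mean(axis=0)
        saliency = np.zeros(segments.shape, dtype=float)
        for seg_id in np.unique(seg_flat):
            region = flat[seg_flat == seg_id]
            # Contrast of the proto-object's mean colour vs. the whole image.
            score = np.linalg.norm(region.mean(axis=0) - global_mean)
            saliency[segments == seg_id] = score
        return saliency / (saliency.max() + 1e-9)
    ```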

  8. Techniques for region coding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2004-01-01

    Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.

  9. Object-based detection of vehicles in airborne data

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang

    2015-10-01

Robust detection of vehicles in airborne data is a challenging task, since high variation in the object signatures - depending on data resolution - and often a small contrast between objects and background lead to high false classification rates and missed detections. Despite these facts, many applications require reliable results which can be obtained in a short time. In this paper, an object-based approach for vehicle detection in airborne laser scans (ALS) and photogrammetrically reconstructed 2.5D data is described. The focus of this paper lies on a robust object segmentation algorithm as well as the identification of features for a reliable separation between vehicles and background (all non-vehicle objects) in different scenes. The described method is based on three consecutive steps, namely object segmentation, feature extraction and supervised classification. In the first step, the 2.5D data is segmented and possible targets are identified. The segmentation process is based on morphological top-hat filtering, which retains areas that are smaller than a given filter size and higher (brighter) than their surroundings. This approach is chosen due to the low computational effort of the filter, which allows fast computation even for large areas. The next step is feature extraction. Based on the initial segmentation, features are extracted for every identified object. In addition to frequently used features like height above ground, object area, or point distribution, more complex features like object planarity, entropy in the intensity image, and lineness measures are used. The last step is the classification of each object. For this purpose, a random forest (RF) classifier using the normalized features extracted in the previous step is chosen. RFs are suitable for high-dimensional and nonlinear problems. In contrast to other approaches (e.g. the maximum likelihood classifier), RFs achieve good results even with relatively small training samples.
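
    A minimal sketch of the three steps, assuming scipy/scikit-learn equivalents of the operations named above; the filter size, the 0.5 m threshold and the three features are illustrative stand-ins for the paper's richer feature set:

    ```python
    # Sketch of segmentation -> feature extraction -> random forest, with
    # assumed parameter values; scipy's white top-hat stands in for the
    # paper's morphological top-hat filtering of the 2.5D height data.
    import numpy as np
    from scipy import ndimage
    from sklearn.ensemble import RandomForestClassifier

    def segment_candidates(height_raster, filter_size=15):
        """Keep structures smaller than filter_size and higher than surroundings."""
        tophat = ndimage.white_tophat(height_raster, size=filter_size)
        labels, n_objects = ndimage.label(tophat > 0.5)  # 0.5 m: assumed threshold
        return labels, n_objects

    def object_features(height_raster, labels, n_objects):
        feats = []
        for obj_id in range(1, n_objects + 1):
            mask = labels == obj_id
            heights = height_raster[mask]
            feats.append([mask.sum(),     # area (pixels)
                          heights.max(),  # approx. height above ground
                          heights.std()]) # crude planarity proxy
        return np.array(feats)

    # Supervised classification, as in the paper, with a random forest:
    # clf = RandomForestClassifier(n_estimators=100).fit(train_feats, train_labels)
    # predictions = clf.predict(object_features(raster, labels, n))
    ```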

  10. Object-Based Parallel Framework for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Pierce, Brian; Omelchenko, Y. A.

    1999-11-01

We have developed a library of software in Fortran 90 and MPI for running simulations on massively parallel facilities. This is modeled after Omelchenko's FLAME code, which was written in C++. With Fortran 90 we found several advantages, such as the array syntax and the intrinsic functions. The parallel portion of this library is achieved by dividing the data into subdomains and distributing the subdomains among the processors to be computed concurrently (with periodic updates of neighboring-region information as necessary). The library is flexible enough that one can use it to run simulations on any number of processors, and the user can divide the data among the processors in an arbitrary fashion. We have tested this library for correctness and speed by using it to conduct simulations on a parallel cluster at General Atomics and on a serial workstation.
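
    The library itself is Fortran 90 + MPI; this short Python/mpi4py fragment only illustrates the pattern it describes, i.e. a 1-D domain decomposition with periodic exchange of boundary ("ghost") cells between neighbouring ranks. The local array size is an arbitrary assumption:

    ```python
    # Conceptual mpi4py illustration of the Fortran 90/MPI library's pattern:
    # split a 1-D domain across ranks and exchange ghost cells with neighbours.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                # cells owned by this rank (assumed size)
    u = np.zeros(n_local + 2)    # +2 ghost cells, one at either end
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange ghost cells with neighbours before each compute step.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    ```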

  11. Earth Observing System (EOS) Advanced Microwave Sounding Unit-A2 (EOS/AMSU-A): EOS Software Test Report

    NASA Technical Reports Server (NTRS)

    1998-01-01

This document describes the results of the formal qualification test (FQT)/demonstration conducted on September 10 and 14, 1998, for the EOS AMSU-A2 instrument. The purpose of the report is to relate the results of the functional performance and interface tests of the software. This is the final submittal of the EOS/AMSU-A Software Test Report.

  12. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments, and the ground support software used in the test and integration of the AMSU-A instruments.

  13. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A): Software concept document

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This document presents the concept for two separate but closely related software systems. The first is the special test equipment (STE) software used at Aerojet for AMSU-A instrument testing, and the second is the workstation software used at the spacecraft integration facility to monitor the AMSU-A instrument when installed on the spacecraft. The primary use of the STE software is to monitor the data output from the AMSU-A instruments, to command the instruments, and to perform automated thermal-vacuum calibration testing. The primary use of the workstation software is to monitor the AMSU-A instrument's performance through an Ethernet link during the instrument/spacecraft integration process.

  14. The Cementitious Barriers Partnership Experimental Programs and Software Advancing DOE’s Waste Disposal/Tank Closure Efforts – 15436

    SciTech Connect

    Burns, Heather; Flach, Greg; Smith, Frank; Langton, Christine; Brown, Kevin; Mallick, Pramod

    2015-01-27

The U.S. Department of Energy Environmental Management (DOE-EM) Office of Tank Waste Management-sponsored Cementitious Barriers Partnership (CBP) is chartered with providing the technical basis for implementing cement-based waste forms and radioactive waste containment structures for long-term disposal. DOE needs in this area include the following to support progress in final treatment and disposal of legacy waste and closure of High-Level Waste (HLW) tanks in the DOE complex: long-term performance predictions, flow sheet development and flow sheet enhancements, and conceptual designs for new disposal facilities. The DOE-EM Cementitious Barriers Partnership is producing software and experimental programs resulting in new methods and data needed by end-users involved with environmental cleanup and waste disposal. Both the modeling tools and the experimental data have already benefited the DOE sites in the area of performance assessments by increasing confidence backed by modeling support, leaching methods, and transport properties developed for actual DOE materials. In 2014, the CBP Partnership released the CBP Software Toolbox, Version 2.0, which provides concrete degradation models for (1) sulfate attack, (2) carbonation, and (3) chloride-initiated rebar corrosion, and includes constituent leaching. These models are applicable and can be used by both DOE and the Nuclear Regulatory Commission (NRC) for service life and long-term performance evaluations and predictions of nuclear and radioactive waste containment structures across the DOE complex, including future SRS Saltstone and HLW tank performance assessments and special analyses, Hanford site HLW tank closure projects and other projects in which cementitious barriers are required, the Advanced Simulation Capability for Environmental Management (ASCEM) project which requires source terms from cementitious containment structures as input to their flow simulations, regulatory reviews of DOE performance

  15. Object-based change detection for landslide monitoring based on SPOT imagery

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens

    2014-05-01

The steadily increasing availability of Earth observation (EO) data from a wide range of sensors facilitates the long-term monitoring of mass movements and retrospective analysis. Pixel-based approaches are most commonly used for detecting changes based on optical remote sensing data. However, single pixels are not suitable for depicting natural phenomena such as landslides in their full complexity and their transformation over time. By applying semi-automated object-based change detection, limitations inherent to pixel-based methods can be overcome to a certain extent. For instance, the problem of varying spectral reflectance for the same pixel location in images from different points in time can be minimized. Therefore, atmospheric and radiometric correction of the input data sets - although highly recommended - appears less critical for developing a straightforward change detection approach based on object-based image analysis (OBIA). The object-based change detection approach was developed for a subset of the Baichi catchment, which is located in the Shihmen Reservoir watershed in northern Taiwan. The study area is characterized by mountainous terrain with steep slopes and is regularly affected by severe landslides and debris flows. Several optical satellite images, i.e. SPOT images from different years and seasons with a spatial resolution ranging from 2.5 to 6.25 m, have been used for monitoring the past evolution of landslides and landslide-affected areas. A digital elevation model (DEM) with 5 m spatial resolution was integrated in the analysis to support the differentiation of landslides and debris flows. The landslide changes were identified by comparing feature values of segmentation-derived image objects between two subsequent images in eCognition (Trimble) software. To increase the robustness and transferability of the approach we identified changes by using the relative difference in values of band-specific relational features, spectral

  16. Advancing the discussion about systematic classroom behavioral observation, a product review of Tenny, J. (2010). eCOVE observation software. Pacific City, OR: eCOVE Software, LLC.

    PubMed

    Froiland, John Mark; Smith, Liana

    2014-05-01

    Applied child psychologists and behavioral consultants often use systematic behavioral observations to inform the psychological assessment and intervention development process for children referred for attention and hyperactivity problems. This article provides a review of the 2010 version of the eCOVE classroom observation software in terms of its utility in tracking the progress of children with attention and hyperactive behaviors and its use in evaluating teacher behaviors that may impede or promote children's attention and positive behavior. The eCOVE shows promise as an efficient tool for psychologists and behavioral consultants who want to evaluate the effects of interventions for children with symptoms of ADHD, ODD, mood disorders and learning disorders; however, some research-based improvements for future models are suggested. The reviewers also share their firsthand experience in using eCOVE to evaluate teacher and student behavior exhibited on a television show about teaching urban high school students and during a movie about an eccentric new kindergarten teacher. Rich examples are provided of using strategic behavioral observations to reveal how to improve the classroom environment so as to facilitate attention, motivation and positive behavior among youth. Broader implications for enhancing the use of systematic behavioral observations in the assessment of children and adolescents with attention disorders and related behavioral problems are discussed. Key issues are examined such as the use of behavioral observations during psychological consultation to prevent the previously found gender bias in referrals for ADHD. Using behavioral observations to enhance differential diagnosis is also discussed. PMID:22408136

  17. Advanced Resistive Exercise Device (ARED) Flight Software (FSW): A Unique Approach to Exercise in Long Duration Habitats

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark

    2005-01-01

ARED flight instrumentation software is associated with an overall custom-designed resistive exercise system that will be deployed on the International Space Station (ISS). This innovative software application fuses together many diverse and new technologies into a robust and usable package. The software takes advantage of touchscreen user interface technology by providing a graphical user interface on a Windows-based tablet PC, meeting a design constraint of keyboard-less interaction with flight crewmembers. The software interacts with modified commercial data acquisition (DAQ) hardware to acquire multiple channels of sensor measurements from the ARED device. This information is recorded on the tablet PC and made available, via International Space Station (ISS) Wireless LAN (WLAN) and telemetry subsystems, to ground-based mission medics and trainers for analysis. The software includes a feature to accept electronically encoded prescriptions of exercises that guide crewmembers through a customized regimen of resistive weight training, based on personal analysis. These electronically encoded prescriptions are provided to the crew via the ISS WLAN and telemetry subsystems. All personal data is securely associated with an individual crew member, based on a PIN ID mechanism.

  18. An object-based approach to hierarchical classification of the Earth's topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Eisank, C.; Dragut, L.

    2012-04-01

Digital classification of the Earth's surface has significantly benefited from the availability of global DEMs and recent advances in image processing techniques. One such innovative approach is object-based analysis, which integrates multi-scale segmentation and rule-based classification. Since the classification is based on spatially configured objects and no longer solely on thematically defined cells, the resulting landforms or landform types are represented in a more realistic way. However, up to now, the object-based approach has not been adopted for broad-scale topographic modelling. Existing global to almost-global terrain classification systems have been implemented on per-cell schemes, accepting disadvantages such as the speckled character of outputs and the non-consideration of space. We introduce the first object-based method to automatically classify the Earth's surface as represented by the SRTM into a three-level hierarchy of topographic regions. The new method relies on the concept of decomposing land-surface complexity into ever more homogeneous domains. The SRTM elevation layer is automatically segmented and classified at three levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these recognised scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean elevation and the standard deviation of elevation, respectively. Results resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalisation requirements of maximising internal homogeneity while minimising external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully

  19. Combining TerraSAR-X and SPOT-5 data for object-based landslide detection

    NASA Astrophysics Data System (ADS)

    Friedl, B.; Hölbling, D.; Füreder, P.

    2012-04-01

Landslide detection and classification is an essential requirement in pre- and post-disaster hazard analysis. In earlier studies landslide detection was often achieved through time-consuming and cost-intensive field surveys and visual orthophoto interpretation. Recent studies show that Earth Observation (EO) data offer new opportunities for fast, reliable and accurate landslide detection and classification, which may contribute to effective landslide monitoring and landslide hazard management. To ensure the fast recognition and classification of landslides at a regional scale, a (semi-)automated object-based landslide detection approach is established for a study site situated in the Huaguoshan catchment, Southern Taiwan. The study site exhibits a high vulnerability to landslides and debris flows, which are predominantly typhoon-induced. Through the integration of optical satellite data (SPOT-5 with 2.5 m GSD), SAR (Synthetic Aperture Radar) data (TerraSAR-X Spotlight with 2.95 m GSD) and digital elevation information (DEM with 5 m GSD) including its derived products (e.g. slope, curvature, flow accumulation), landslides may be examined more efficiently than by relying on single data sources only. The combination of optical and SAR data in an object-based image analysis (OBIA) domain for landslide detection and classification has not been investigated so far, even though SAR imagery shows valuable properties for landslide detection that differ from those of optical data (e.g. high sensitivity to surface roughness and soil moisture). The main purpose of this study is to recognize and analyze existing landslides by applying object-based image analysis making use of eCognition software. OBIA provides a framework for examining features defined by spectral, spatial, textural, contextual as well as hierarchical properties. Objects are derived through image segmentation and serve as input for the classification process, which relies on transparent rulesets, representing knowledge

  20. Evaluation Software in Counseling.

    ERIC Educational Resources Information Center

    Sabella, Russell A.

Counselors today are presented with a wide range of application software. This article intends to advance the counselor's knowledge and consideration of the various aspects of application software. Included is a discussion of the software applications typically of help to counselors in (a) managing their work (computer managed counseling);…

  1. Urban Extent Mapping Using Object-Based Texture Classification and Landsat Data

    NASA Astrophysics Data System (ADS)

    Wang, P.; Huang, C.; Tilton, J. C.; Tan, B.; Brown de Colstoun, E. C.; Wolfe, R. E.; Phillips, J. T.; Ling, P. Y.

    2014-12-01

More than half of the world's population lives in urban areas, and the urban population is still growing rapidly. It is important to monitor, understand, and model the growth of urban land and population. One of the prerequisites of many urban studies is detailed urban maps. At continental to global scales, Landsat is an ideal source of data for urban extent mapping. However, it is difficult to map urban areas using spectral data only, mainly due to spectral similarity between some urban and non-urban objects and spectral variability within the urban and non-urban classes. Here we present an approach for mapping urban extent using object-based texture measures. Preprocessed Global Land Survey (GLS) 2010 Landsat surface reflectance images were segmented using a hierarchical segmentation software package, and texture features were extracted at multiple levels of the segmentation hierarchy. We used training data derived from high-resolution imagery to train random forest classifiers for different continents. The method was evaluated in different areas of Europe and North America. Rigorous cross-scene validation gave an overall accuracy of 91.1%. The classifier derived for GLS 2010 images was also tested on GLS 2000 images and generated good results. It is shown that the proposed approach has great potential for global-scale urban extent mapping.
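
    The exact texture measures are not listed in the abstract; as one concrete possibility, grey-level co-occurrence (GLCM) statistics can be computed per segment, as in this scikit-image sketch (the feature choice and the single distance/angle are our assumptions):

    ```python
    # Hedged sketch of per-segment texture features: a GLCM over each segment's
    # bounding box stands in for the paper's (unspecified) texture measures.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def segment_texture(gray_uint8, segment_mask):
        """GLCM contrast/homogeneity for one segment's bounding box."""
        rows, cols = np.where(segment_mask)
        patch = gray_uint8[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        return [graycoprops(glcm, "contrast")[0, 0],
                graycoprops(glcm, "homogeneity")[0, 0]]
    ```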

  2. Experimentation in software engineering

    NASA Technical Reports Server (NTRS)

    Basili, V. R.; Selby, R. W.; Hutchens, D. H.

    1986-01-01

Experimentation in software engineering supports the advancement of the field through an iterative learning process. In this paper, a framework for analyzing most of the experimental work performed in software engineering over the past several years is presented. A variety of experiments within the framework are described, and their contributions to the software engineering discipline are discussed. Some useful recommendations for the application of the experimental process in software engineering are included.

  3. Optimizing Object-Based Classification in Urban Environments Using Very High Resolution GEOEYE-1 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Vicente, R.; Aguilar, F. J.; Fernández, A.; Saldaña, M. M.

    2012-07-01

The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In fact, one of the most common applications of remote sensing images is the extraction of land cover information for digital image base maps by means of classification techniques. When VHR satellite images are used, an object-based classification strategy can potentially improve classification accuracy compared to pixel-based classification. The aim of this work is to carry out an assessment of classification accuracy in urban environments using pansharpened and panchromatic GeoEye-1 orthoimages. In this work, the influence of the sets of image object (IO) features used on object-based supervised classification accuracy is evaluated for the selected land cover classes. For the classification phase the nearest neighbour classifier and the eCognition v. 8 software were used, with seven sets of IO features, including texture, geometry and principal layer value features. The IOs were obtained in eCognition using a multiresolution segmentation approach, a bottom-up region-merging technique starting from one-pixel objects. Four different sets (repetitions) of training samples, each representing 10% of every class, were extracted from the IOs, while the remaining objects were used for accuracy validation. A statistical test was carried out in order to strengthen the conclusions. An overall accuracy of 79.4% was attained with the panchromatic, red, blue, green and near infrared (NIR) bands from the panchromatic and pansharpened orthoimages, the brightness computed for the red, blue, green and infrared bands, the Maximum Difference, the mean soil-adjusted vegetation index (SAVI), and, finally, the normalized Digital Surface Model or Object Model (nDSM) computed from LiDAR data. For building classification, the nDSM was the most important feature, attaining producer and user

  4. Payload software technology: Software technology development plan

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Programmatic requirements for the advancement of software technology are identified for meeting the space flight requirements in the 1980 to 1990 time period. The development items are described, and software technology item derivation worksheets are presented along with the cost/time/priority assessments.

  5. Object-based landslide mapping on satellite images from different sensors

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

Several studies have proven that object-based image analysis (OBIA) is a suitable approach for landslide mapping using remote sensing data. Mostly, optical satellite images are utilized in combination with digital elevation models (DEMs) for semi-automated mapping. The ability to consider spectral, spatial, morphometric and contextual features in OBIA constitutes a significant advantage over pixel-based methods, especially when analysing non-uniform natural phenomena such as landslides. However, many of the existing knowledge-based OBIA approaches for landslide mapping are rather complex and are tailored to specific data sets. These constraints lead to a lack of transferability of OBIA mapping routines. The objective of this study is to develop an object-based approach for landslide mapping that is robust against changing input data with different resolutions, i.e. optical satellite imagery from various sensors. Two study sites in Taiwan were selected for developing and testing the landslide mapping approach. One site is located around the Baolai village in the Huaguoshan catchment in the southern-central part of the island, the other is a sub-area of the Taimali watershed in Taitung County near the south-eastern Pacific coast. Both areas are regularly affected by severe landslides and debris flows. A range of very high resolution (VHR) optical satellite images was used for the object-based mapping of landslides and for testing the transferability across different sensors and resolutions: (I) SPOT-5, (II) Formosat-2, (III) QuickBird, and (IV) WorldView-2. Additionally, a digital elevation model (DEM) with 5 m spatial resolution and its derived products (e.g. slope, plan curvature) were used for supporting the semi-automated mapping, particularly for differentiating source areas and accumulation areas according to their morphometric characteristics. A focus was put on the identification of comparatively stable parameters (e.g. relative indices), which could be

  6. Support Vector Machine Classification of Object-Based Data for Crop Mapping, Using Multi-Temporal Landsat Imagery

    NASA Astrophysics Data System (ADS)

    Devadas, R.; Denham, R. J.; Pringle, M.

    2012-07-01

Crop mapping and time series analysis of agronomic cycles are critical for monitoring land use and land management practices, and for analysing the issues of agro-environmental impacts and climate change. Multi-temporal Landsat data can be used to analyse decadal changes in cropping patterns at the field level, owing to its medium spatial resolution and historical availability. This study attempts to develop robust remote sensing techniques, applicable across a large geographic extent, for state-wide mapping of cropping history in Queensland, Australia. In this context, traditional pixel-based classification was analysed in comparison with image object-based classification using advanced supervised machine-learning algorithms such as the Support Vector Machine (SVM). For the Darling Downs region of southern Queensland we gathered a set of Landsat TM images from the 2010-2011 cropping season. Landsat data, along with the vegetation index images, were subjected to multiresolution segmentation to obtain polygon objects. Object-based methods enabled the analysis of aggregated sets of pixels, and exploited shape-related and textural variation, as well as spectral characteristics. SVM models were chosen after examining three shape-based parameters, twenty-three textural parameters and ten spectral parameters of the objects. We found that the object-based methods were superior to the pixel-based methods for classifying four major land use/land cover classes, considering the complexities of within-field spectral heterogeneity and spectral mixing. Comparative analysis clearly revealed that a higher overall classification accuracy (95%) was observed with the object-based SVM than with traditional pixel-based classification (89%) using the maximum likelihood classifier (MLC). Object-based classification also resulted in speckle-free images. Further, object-based SVM models were used to classify different broadacre crop types for the summer and winter seasons. The influence of different shape

  7. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    NASA Astrophysics Data System (ADS)

    Hoffmann Barfod, Gry; Larsen, John Møller; Lichtenberger, Achim; Raja, Rubina

    2015-12-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge.

  8. Software for the Canadian Advanced Nanospace eXperiment-4/5 (CanX-4/-5) Mission

    NASA Astrophysics Data System (ADS)

    Leonard, Matthew Leigh

The CanX-4 and CanX-5 mission, currently under development at the University of Toronto Institute for Aerospace Studies Space Flight Laboratory (UTIAS/SFL), is a challenging formation-flying technology demonstration. Its requirement of sub-metre control accuracy has yet to be realized with nanosatellites. Many large technical challenges must be addressed in order to ensure the success of the CanX-4/5 mission. These include the development of software for an intersatellite communication system, the integration and optimization of key formation-flying algorithms on the Payload On-Board Computer, and the development of a Hardware-In-The-Loop simulator for full on-orbit mission simulations. This thesis provides background on the Space Flight Laboratory and its activities and the CanX-4/5 mission, and highlights the author's contributions to overcoming each of these technical challenges and ensuring the success of the CanX-4 and CanX-5 mission.

  9. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software.

    PubMed

    Hoffmann Barfod, Gry; Larsen, John Møller; Lichtenberger, Achim; Raja, Rubina

    2015-01-01

Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge. PMID:26648504

  10. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    PubMed Central

    Hoffmann Barfod, Gry; Larsen, John Møller; Raja, Rubina

    2015-01-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge. PMID:26648504

  11. Exploring the relationship between object realism and object-based attention effects.

    PubMed

    Roque, Nelson; Boot, Walter R

    2015-09-01

Visual attention prioritizes processing of locations in space, and evidence also suggests that the benefits of attention can be shaped by the presence of objects (object-based attention). However, the prevalence of object-based attention effects has been called into question recently by evidence from a large-sample study employing classic attention paradigms (Pilz et al., 2012). We conducted two experiments to explore factors that might determine when and if object-based attention effects are observed, focusing on the degree to which the concreteness and realism of objects might contribute to these effects. We adapted the classic attention paradigm first reported by Egly, Driver, and Rafal (1994) by replacing abstract bar stimuli in some conditions with objects that were more concrete and familiar to participants: items of silverware. Furthermore, we varied the realism of these items of silverware, presenting either cartoon versions or photo-realistic versions. Contrary to predictions, increased realism did not increase the size of object-based effects. In fact, no clear object-based effects were observed in either experiment, consistent with previous failures to replicate these effects in similar paradigms. While object-based attention may exist, and may have important influences on how we parse the visual world, these and other findings suggest that the two-object paradigm typically relied upon to study object-based effects may not be the best paradigm to investigate these issues. PMID:26256694

  12. Computer Software.

    ERIC Educational Resources Information Center

    Kay, Alan

    1984-01-01

    Discusses the nature and development of computer software. Programing, programing languages, types of software (including dynamic spreadsheets), and software of the future are among the topics considered. (JN)

  13. Status Report on the Development of Micro-Scheduling Software for the Advanced Outage Control Center Project

    SciTech Connect

    Shawn St. Germain; Kenneth Thomas; Ronald Farris; Jeffrey Joe

    2014-09-01

The long-term viability of existing nuclear power plants (NPPs) in the United States (U.S.) is dependent upon a number of factors, including maintaining high capacity factors, maintaining nuclear safety, and reducing operating costs, particularly those associated with refueling outages. Refueling outages typically take 20-30 days, and for existing light water NPPs in the U.S., the reactor cannot be in operation during the outage. Furthermore, given that many NPPs generate between $1-1.5 million/day in revenue when in operation, there is considerable interest in shortening the length of refueling outages. Yet, refueling outages are highly complex operations, involving multiple concurrent and dependent activities that are difficult to coordinate. Finding ways to improve refueling outage performance while maintaining nuclear safety has proven to be difficult. The Advanced Outage Control Center project is a research and development (R&D) demonstration activity under the Light Water Reactor Sustainability (LWRS) Program. LWRS is an R&D program that works with industry R&D programs to establish the technical foundations for the licensing and managing of long-term, safe, and economical operation of current NPPs. The Advanced Outage Control Center project has the goal of improving the management of commercial NPP refueling outages. To accomplish this goal, this INL R&D project is developing an advanced outage control center (OCC) that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report describes recent efforts to develop a capability called outage Micro-Scheduling. Micro-Scheduling is the ability to allocate and schedule outage support task resources on a sub-hour basis. Micro-Scheduling is the real-time fine-tuning of the outage schedule to react to the actual progress of the primary outage activities to ensure that support task resources are

  14. Problem-Solving Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

CBR Express software solves problems by adapting stored solutions to new problems specified by a user. It is applicable to a wide range of situations. The technology was originally developed by Inference Corporation for Johnson Space Center's Advanced Software Development Workstation. The project focused on the reuse of software designs, and Inference used CBR as part of the ACCESS prototype software. The commercial CBR Express is used as a "help desk" for customer support, enabling reuse of existing information when necessary. It has been adopted by several companies, among them American Airlines, which uses it to solve reservation system software problems.
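
    As a toy illustration of the case-based reasoning idea behind such a help desk (not Inference Corporation's actual implementation), a new problem description can be matched against stored cases and the closest case's solution reused; the case base and similarity measure here are invented for the example:

    ```python
    # Toy case-based reasoning: retrieve the most similar stored case and
    # reuse its solution. The cases and matcher are illustrative only.
    from difflib import SequenceMatcher

    CASE_BASE = [  # hypothetical help-desk cases
        ("printer does not respond", "check cable, then restart spooler"),
        ("reservation screen freezes", "clear terminal cache and re-login"),
    ]

    def solve(problem):
        best = max(CASE_BASE,
                   key=lambda c: SequenceMatcher(None, problem, c[0]).ratio())
        return best[1]  # adapt/reuse the closest case's solution

    print(solve("the printer is not responding"))
    ```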

  15. Payload software technology

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A software analysis was performed of known STS sortie payload elements and their associated experiments. This provided basic data for STS payload software characteristics and sizes. A set of technology drivers was identified based on a survey of future technology needs and an assessment of current software technology. The results will be used to evolve a planned approach to software technology development. The purpose of this plan is to ensure that software technology is advanced at a pace and a depth sufficient to fulfill the identified future needs.

  16. A novel mask proximity correction software combining accuracy and reduced writing time for the manufacturing of advanced photomasks

    NASA Astrophysics Data System (ADS)

    Schiavone, Patrick; Martin, Luc; Browning, Clyde; Farys, Vincent; Sundermann, Frank; Narukawa, Shogo; Takikawa, Tadahiko; Hayashi, Naoya

    2012-06-01

Each new generation of photomasks brings more challenges to the mask manufacturer. Maskshops face two conflicting requirements, namely improving pattern fidelity while reducing, or at least maintaining, acceptable writing time. These requirements are becoming more and more challenging as pattern sizes continuously shrink and data volumes continuously grow. Although classical dose-modulation proximity effect correction is able to provide sufficient process control for mainstream products, a growing body of published wafer data shows that the mask process is becoming a non-negligible contributor to 28 nm technology yield. We show in this paper that a novel approach to mask proximity effect correction is able to meet the dual challenge of the new generation of masks. Unlike the classical approach, the technique presented in this paper is based on a concurrent optimization of the dose and geometry of the fractured shots. Adding one more parameter provides the best possible compromise between accuracy and writing time, since energy latitude can be taken into account as well. This solution is implemented in the Inscale software package from Aselta Nanographics. We have assessed the capability of this technology on several levels of a 28 nm technology node. On this set, writing time was reduced by up to 25% without sacrificing accuracy, which at the same time was improved significantly compared to the existing process. The experiments presented in the paper confirm that a versatile proximity effect correction strategy combining dose and geometry modulation helps users trade off between resolution/accuracy and e-beam write time.

  17. Estimation of Trees Outside Forests using IRS High Resolution data by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Pujar, G. S.; Reddy, P. M.; Reddy, C. S.; Jha, C. S.; Dadhwal, V. K.

    2014-11-01

Assessment of trees outside forests (TOF) is widely recognized as a pivotal theme in sustainable natural resource management, owing to their role in offering a variety of goods, such as timber, fruit and fodder, as well as services such as water, carbon and biodiversity. Forest conservation efforts involving the reduction of deforestation and degradation may have to rely increasingly on alternatives provided by TOF in catering to economic demands at forest edges. Spatial information systems involving imaging, analysis and monitoring to achieve objectives under protocols like REDD+ require the incorporation of information from areas under forest as well as from trees outside forests, to aid holistic decisions. From this perspective, automated retrieval of the area under trees growing outside forests from high resolution imagery is essential to strengthen the measurement and verification of extant carbon pools. Retrieval of this tree cover is demonstrated here using object-based image analysis at a forest edge in the dry deciduous forests of the Eastern Ghats, in the Khammam district of Telangana state, India. IRS high resolution panchromatic 2.5 m data (orthorectified Cartosat-1), used in tandem with 5.8 m multispectral LISS-IV data, discern tree crowns and clusters at a detailed scale, and hence a semi-automated approach is attempted to classify TOF from a pair of images from a relatively crop- and cloud-free season. The object-based image analysis (OBIA) approach, as implemented in the commercial e-Cognition suite (Ver. 8.9), consists of segmentation at a user-defined scale followed by the application of a wide range of spectral, textural and object-geometry-based parameters for classification. The software offers an innovative blend of raster and vector features that can be juxtaposed flexibly across scales, horizontally or vertically. Segmentation was carried out at multiple scales to discern first the major land covers, such as forest, water and agriculture, followed by that at a finer

  18. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  19. Cathodoluminescence Spectrum Imaging Software

    Energy Science and Technology Software Center (ESTSC)

    2011-04-07

The software developed for spectrum imaging is applied to the analysis of the spectrum series generated by our cathodoluminescence instrumentation. This software provides advanced processing capabilities such as: reconstruction of photon intensity (resolved in energy) and photon energy maps, extraction of the spectrum from selected areas, a quantitative imaging mode, pixel-to-pixel correlation, spectrum line scans, ASCII output, filling routines, drift correction, etc.
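
    Two of the listed operations are easy to picture on a spectrum-image cube (rows x cols x energy channels). The numpy fragment below is a generic sketch, not the ESTSC code; the cube shape and window bounds are arbitrary assumptions:

    ```python
    # Generic spectrum-image operations on a (rows, cols, channels) cube.
    import numpy as np

    cube = np.random.rand(64, 64, 512)       # stand-in CL spectrum image

    # Photon-energy map: total counts per pixel within one energy window.
    energy_map = cube[:, :, 100:140].sum(axis=2)

    # Spectrum extracted from a selected rectangular area.
    area_spectrum = cube[10:20, 30:40, :].mean(axis=(0, 1))
    ```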

  20. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    PubMed

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. PMID:25602968

  1. Object-Based Greenhouse Classification from High Resolution Satellite Imagery: a Case Study Antalya-Turkey

    NASA Astrophysics Data System (ADS)

    Coslu, M.; Sonmez, N. K.; Koc-San, D.

    2016-06-01

    Pixel-based classification is widely used to detect land use and land cover with remote sensing technology. Recently, object-based classification methods have begun to be used alongside pixel-based methods on high-resolution satellite imagery, and studies indicate that object-based classification gives more successful results than other classification methods. Whereas pixel-based classification is performed according to the grey values of individual pixels, object-based classification is executed by generating an image segmentation together with updatable rule sets. In this study, the aim was to detect and map greenhouses by object-based classification of high-resolution satellite imagery. The study was carried out in the Antalya province, where greenhouses are used intensively. The study consists of three main stages: segmentation, classification and accuracy assessment. In the first stage, segmentation, the most important part of object-based image analysis, the image segmentation was generated using the basic spectral bands of high-resolution WorldView-2 satellite imagery. In the second stage, the classification process was executed by applying the nearest-neighbour classifier to the generated segments, and a result map of the study area was produced. Finally, accuracy assessment was performed using field studies and digital data of the area. According to the results, object-based greenhouse classification using high-resolution satellite imagery achieved over 80% accuracy.
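
    A compact sketch of the second stage, nearest-neighbour labelling of per-segment features, is shown below; the feature values, class names and training labels are invented for illustration.

    ```python
    # Nearest-neighbour classification of image objects by their mean band values.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    train_feats = np.array([[0.42, 0.35, 0.18],   # greenhouse sample
                            [0.20, 0.30, 0.45],   # vegetation sample
                            [0.55, 0.50, 0.40]])  # bare-soil sample
    train_labels = np.array(["greenhouse", "vegetation", "soil"])

    clf = KNeighborsClassifier(n_neighbors=1).fit(train_feats, train_labels)

    new_segments = np.array([[0.44, 0.36, 0.20],  # mean features of unlabelled objects
                             [0.22, 0.28, 0.43]])
    print(clf.predict(new_segments))              # ['greenhouse' 'vegetation']
    ```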

  2. The modulation of spatial congruency by object-based attention: analysing the "locus" of the modulation.

    PubMed

    Luo, Chunming; Lupiáñez, Juan; Funes, María Jesús; Fu, Xiaolan

    2011-12-01

    Earlier studies have demonstrated that spatial cueing differentially reduces stimulus-stimulus congruency (e.g., spatial Stroop) interference but not stimulus-response congruency (e.g., Simon; e.g., Lupiáñez & Funes, 2005). This spatial cueing modulation over spatial Stroop seems to be entirely attributable to object-based attention (e.g., Luo, Lupiáñez, Funes, & Fu, 2010). In the present study, two experiments were conducted to further explore whether the cueing modulation of spatial Stroop is object based and/or space based and to analyse the "locus" of this modulation. In Experiment 1, we found that the cueing modulation over spatial Stroop is entirely object based, independent of stimulus-response congruency. In Experiment 2, we observed that the modulation of object-based attention over the spatial Stroop only occurred at a short cue-target interval (i.e., stimulus onset asynchrony; SOA), whereas the stimulus-response congruency effect was not modulated either by object-based or by location-based attentional cueing. The overall pattern of results suggests that the spatial cueing modulation over spatial Stroop arises from object-based attention and occurs at the perceptual stage of processing. PMID:21923623

  3. Object-based glacier mapping in the Hohe Tauern Mountains of Austria

    NASA Astrophysics Data System (ADS)

    Aubrey Robson, Benjamin; Hölbling, Daniel; Nuth, Christopher; Olaf Dahl, Svein

    2015-04-01

    Up-to-date and frequent glacier outlines are a necessity for many applications within glaciology. While multispectral band ratios are a comparatively robust method for automatically classifying clean ice at the pixel level, semi- or fully automated glacier inventories are complicated by spectral similarities between classes such as debris-covered glacier ice and the surrounding bedrock and moraines, or between clean ice and turbid pro-glacial water. Most glacier inventories therefore require a great deal of manual correction. Here, we present a glacier inventory of the Hohe Tauern Mountains in the Central Eastern Alps in Austria. Numerous glaciers, including the Pasterze Glacier, the longest glacier in the Eastern Alps, shape this mountainous region. The mapping of glaciers is based on object-based image analysis (OBIA) using both high-resolution (HR) satellite imagery from Landsat 8 and a digital elevation model (DEM) derived from airborne laser scanning (ALS) data. We automatically classify clean ice, debris-covered ice and glacial lakes. Image objects are created by applying the multiresolution segmentation algorithm implemented in the eCognition (Trimble) software. The resulting image objects are classified using a combination of various features, with a focus on selecting robust features that are applicable for mapping large areas, for example spectral indices such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Snow and Ice Index (NDSI), the Normalized Difference Water Index (NDWI), a Land and Water Mask (LWK) and a ratio of the SWIR and NIR spectral bands. The ability of OBIA to incorporate optical and elevation data and to individually address data-specific characteristics helps differentiate debris-covered ice from surrounding features not only by using spectral properties but also based on morphological and topographic parameters, while the inclusion of rulesets relying on contextuality, size
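
    The band-ratio features named above are one-line computations; the sketch below evaluates NDSI, NDWI and a SWIR/NIR ratio on hypothetical Landsat 8 reflectance arrays, with an illustrative (tunable) clean-ice threshold.

    ```python
    # Spectral indices as object/pixel features for glacier mapping (toy data).
    import numpy as np

    green, nir, swir = (np.random.rand(512, 512) for _ in range(3))  # placeholders

    ndsi  = (green - swir) / (green + swir + 1e-9)   # snow/ice index
    ndwi  = (green - nir) / (green + nir + 1e-9)     # water index
    ratio = swir / (nir + 1e-9)                      # SWIR/NIR band ratio

    clean_ice = ndsi > 0.4   # a common starting threshold, tuned per scene
    ```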

  4. A comparison of pixel-based and object-based approaches for land use land cover classification in semi-arid areas, Sudan

    NASA Astrophysics Data System (ADS)

    Adam, H. E.; Csaplovics, E.; Elhaja, M. E.

    2016-06-01

    This paper compares the application of pixel-based and object-based approaches to land use/land cover classification in semi-arid areas of Sudan; a second aim is to assess the classification accuracy of each approach. The study was conducted in the gum arabic belt in North Kordofan State, which is affected by changes in the condition and composition of the vegetation cover. The study used an ASTER L1B registered-radiance-at-the-sensor image acquired on 19.10.2010. The image was radiometrically corrected using ENVI FLAASH software, and a subset with an area of 40,880 ha was created. Image classification (pixel-based and object-based) and accuracy assessment were then conducted. A total of 47 GCPs were surveyed and used in the accuracy assessment with ERDAS 9.1. Image segmentation was implemented using Definiens eCognition 7.1 software; segmentation level 4 with a scale parameter of 25 was selected for classification based on colour and form homogeneity. Land use/land cover classes were derived by classification using the nearest-neighbour classifier with membership functions (fuzzy logic) for each class. For both classification methods, forest dominated by Acacia senegal covers 20% of the area and residential area covers 1.50%. The bare and farm land, grass and bush land, and mixed woodland classes cover 6.69% and 1.63%, 18.62% and 15.16%, and 53% and 61% for the pixel-based and object-based methods, respectively. The overall accuracies and Kappa statistics of the pixel-based and object-based classifications were 72.92% and 54.17%, and 0.6259 and 0.3810, respectively. The pixel-based approach thus performed better than the object-based approach for land use/land cover classification in this semi-arid part of the gum arabic belt.
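
    The reported overall accuracies and Kappa statistics follow from a point-by-point comparison of reference and classified labels; a minimal sketch with invented labels:

    ```python
    # Accuracy assessment from reference vs. classified labels at ground control points.
    import numpy as np
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    reference  = np.array(["forest", "grass", "bare", "forest", "mixed", "grass"])
    classified = np.array(["forest", "grass", "mixed", "forest", "mixed", "bare"])

    print("Overall accuracy:", accuracy_score(reference, classified))
    print("Kappa:", cohen_kappa_score(reference, classified))
    ```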

  5. Keeping it Together: Advanced algorithms and software for magma dynamics (and other coupled multi-physics problems)

    NASA Astrophysics Data System (ADS)

    Spiegelman, M.; Wilson, C. R.

    2011-12-01

    A quantitative theory of magma production and transport is essential for understanding the dynamics of magmatic plate boundaries, intra-plate volcanism and the geochemical evolution of the planet. It also provides one of the most challenging computational problems in solid Earth science, as it requires consistent coupling of fluid and solid mechanics together with the thermodynamics of melting and reactive flows. Considerable work on these problems over the past two decades shows that small changes in assumptions about coupling (e.g. the relationship between melt fraction and solid rheology) can produce profound changes in the behavior of these systems, which in turn affects critical computational choices such as discretizations, solvers and preconditioners. Making progress in exploring and understanding this physically rich system requires a computational framework that allows a more flexible, high-level description of multi-physics problems as well as increased flexibility in composing efficient algorithms for solving the full non-linear coupled system. Fortunately, recent advances in available computational libraries and algorithms provide a platform for implementing such a framework. We present results from a new model building system that leverages functionality from both the FEniCS project (www.fenicsproject.org) and the PETSc libraries (www.mcs.anl.gov/petsc), along with a model-independent options system and GUI, Spud (amcg.ese.ic.ac.uk/Spud). Key features from FEniCS include fully unstructured FEM with a wide range of elements; a high-level language (UFL) and code generation compiler (FFC) for describing the weak forms of residuals; and automatic differentiation for calculating exact and approximate Jacobians. The overall strategy is to monitor and calculate residuals and Jacobians for the entire non-linear system of equations within a global non-linear solve based on PETSc's SNES routines. PETSc already provides a wide range of solvers and preconditioners, from
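
    As a flavour of this approach, the toy sketch below uses the legacy dolfin (FEniCS) interface to state a nonlinear residual in UFL, derive its Jacobian by automatic differentiation and hand the system to a global Newton-type solve; it is a stand-in nonlinear Poisson problem, not the magma dynamics equations themselves.

    ```python
    # Toy nonlinear Poisson problem in legacy dolfin/FEniCS.
    from dolfin import *

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)
    u = Function(V)            # the unknown
    v = TestFunction(V)

    # Residual in UFL: -div((1 + u^2) grad u) = 1
    F = (1 + u**2) * inner(grad(u), grad(v)) * dx - Constant(1.0) * v * dx
    J = derivative(F, u)       # exact Jacobian by automatic differentiation
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    # Global non-linear solve; with PETSc as backend this can dispatch to SNES
    solve(F == 0, u, bc, J=J)
    ```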

  6. Using digital photographs and object-based image analysis to estimate percent ground cover in vegetation plots

    USGS Publications Warehouse

    Luscier, J.D.; Thompson, W.L.; Wilson, J.M.; Gorham, B.E.; Dragut, L.D.

    2006-01-01

    Ground vegetation influences habitat selection and provides critical resources for survival and reproduction of animals. Researchers often employ visual methods to estimate ground cover, but these approaches may be prone to observer bias. We therefore evaluated a method using digital photographs of vegetation to objectively quantify percent ground cover of grasses, forbs, shrubs, litter, and bare ground within 90 plots of 2 m². We carried out object-based image analysis, using a software program called eCognition, to divide photographs into different vegetation classes (based on similarities among neighboring pixels) to estimate percent ground cover for each category. We used the Kappa index of agreement (KIA) to quantify correctly classified, randomly selected segments of all images. Our KIA values indicated strong agreement (> 80%) of all vegetation categories, with an average of 90-96% (SE = 5%) of shrub, litter, forb, and grass segments classified correctly. We also created artificial plots with known percentages of each vegetation category to evaluate the accuracy of software predictions. Observed differences between true cover and eCognition estimates for each category ranged from 1 to 4%. This technique provides a repeatable and reliable way to estimate percent ground cover that allows quantification of classification accuracy. © The Ecological Society of America.

  7. Object-based vegetation classification with high resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Yu, Qian

    Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. This evaluation analysis answered several interesting scientific questions

  8. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. The authors describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility; the parallelization approach, which uses a new portable gather/scatter library (PGSLib); and current capabilities and future plans; and they present preliminary performance results on a variety of platforms.
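
    For readers unfamiliar with sparse Krylov methods, the sketch below shows the kind of preconditioned conjugate-gradient solve such a package provides, rendered in Python/SciPy rather than Fortran 90.

    ```python
    # CG on a CSR-stored 1-D Poisson matrix with a Jacobi (diagonal) preconditioner.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    n = 1000
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())  # Jacobi
    x, info = cg(A, b, M=M, atol=1e-10)
    print("converged" if info == 0 else "not converged")
    ```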

  9. Object-based Conditional Random Fields for Road Extraction from Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Huang, Zhijian; Xu, Fanjiang; Lu, Lei; Nie, Hongshan

    2014-03-01

    To make full use of spatially contextual and topological information in object-based image analysis (OBIA), an object-based conditional random field is proposed and applied to road extraction. Objects are produced by an initial segmentation, and their neighbourhood relations are then constructed. Each object is represented by three kinds of features: colour, gradient histogram, and texture. Formulating road extraction as a binary classification problem, a conditional random field model is learned and then used for inference. The experimental results demonstrate that the proposed method is effective.
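
    A toy version of the object-based CRF objective is sketched below: unary road/non-road scores per image object plus a Potts-style pairwise term over the object adjacency graph. All potentials are invented; a real system learns them from data.

    ```python
    # Energy of a binary labelling over three image objects (toy CRF).
    import numpy as np

    unary = np.array([[0.9, 0.1],   # object 0: P(road), P(non-road)
                      [0.6, 0.4],
                      [0.2, 0.8]])
    edges = [(0, 1), (1, 2)]        # adjacent image objects
    beta = 0.5                      # smoothness weight

    def energy(labels):
        e = -np.log(unary[np.arange(len(labels)), labels]).sum()   # unary term
        e += beta * sum(labels[i] != labels[j] for i, j in edges)  # Potts term
        return e

    # Exhaustive inference is fine for three objects
    best = min(np.ndindex(2, 2, 2), key=lambda l: energy(np.array(l)))
    print(best)
    ```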

  10. Software Use in Psychometric Research

    ERIC Educational Resources Information Center

    Skaggs, Gary

    2004-01-01

    Research on psychometric methods is heavily dependent on software. The quality, availability, and documentation of such software are critical to the advancement of the field. In 2000, an ad hoc committee of NCME recommended that NCME adopt policies that promote greater availability and better documentation of software. This article follows the ad…

  11. Selecting Software.

    ERIC Educational Resources Information Center

    Pereus, Steven C.

    2002-01-01

    Describes a comprehensive computer software selection and evaluation process, including documenting district needs, evaluating software packages, weighing the alternatives, and making the purchase. (PKP)

  12. The Object-Based Simon Effect: Grasping Affordance or Relative Location of the Graspable Part?

    ERIC Educational Resources Information Center

    Cho, Dongbin; Proctor, Robert W.

    2010-01-01

    Reaction time is often shorter when the irrelevant graspable handle of an object corresponds with the location of a keypress response to the relevant attribute than when it does not. This object-based Simon effect has been attributed to an affordance for grasping the handle with the hand to the same side. Because a grasping affordance should…

  13. Using object-based image analysis to guide the selection of field sample locations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most challenging tasks for resource management and research is designing field sampling schemes to achieve unbiased estimates of ecosystem parameters as efficiently as possible. This study focused on the potential of fine-scale image objects from object-based image analysis (OBIA) to be u...

  14. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    NASA Astrophysics Data System (ADS)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  15. Feature selection methods for object-based classification of sub-decimeter resolution digital aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...

  16. Is Object-Based Attention Mandatory? Strategic Control over Mode of Attention

    ERIC Educational Resources Information Center

    Yeari, Menahem; Goldsmith, Morris

    2010-01-01

    Is object-based attention mandatory or under strategic control? In an adapted spatial cuing paradigm, participants focused initially on a central arrow cue that was part of a perceptual group (Experiment 1) or a uniformly connected object (Experiment 2), encompassing one of the potential target locations. The cue always pointed to an opposite,…

  17. Reuse Metrics for Object Oriented Software

    NASA Technical Reports Server (NTRS)

    Bieman, James M.

    1998-01-01

    One way to increase the quality of software products and the productivity of software development is to reuse existing software components when building new software systems. In order to monitor improvements in reuse, the level of reuse must be measured. In this NASA supported project we (1) derived a suite of metrics which quantify reuse attributes for object oriented, object based, and procedural software, (2) designed prototype tools to take these measurements in Ada, C++, Java, and C software, (3) evaluated the reuse in available software, (4) analyzed the relationship between coupling, cohesion, inheritance, and reuse, (5) collected object oriented software systems for our empirical analyses, and (6) developed quantitative criteria and methods for restructuring software to improve reusability.

  18. System For Retrieving Reusable Software

    NASA Technical Reports Server (NTRS)

    Van Warren, Lloyd; Beckman, Brian C.

    1993-01-01

    Encyclopedia of Software Components (ESC) is information-retrieval system of computer hardware and software providing access to generic reusable software tools and parts. Core of ESC is central tool base, which is repository of reusable software. It receives queries and submissions from user through local browser subsystem and receives authorized updates from maintenance subsystem. Sends retrievals to local browser subsystem and user's submissions to maintenance subsystem. Future versions will provide for advanced media, including voice and video, and will link system to database-management system. Programmers will not only retrieve software, but also modify, execute, and cross-link with other software.

  19. Reduced object-based perception in the near-hand space.

    PubMed

    Suh, Jihyun; Abrams, Richard A

    2015-12-01

    Previous studies have shown that hand proximity changes visual perception (Abrams et al. in Cognition 107(3):1035-1047, 2008). The present study examined the effects of hand proximity on object-based perception. In three experiments, participants viewed stimuli that were either near to or far from their hands. The target stimulus appeared, after a cue, in one of two rectangular objects: either at the location that had been previously cued, at the uncued end of the cued object, or in the uncued object. We found a significantly reduced same-object benefit in reaction time for stimuli near the hands in one experiment. Interestingly, we observed a same-object cost in sensitivity for stimuli near the hands in another experiment. The results reveal that object-based perception is disrupted in the near-hand space. This is consistent with previous findings revealing altered visual processing near the hands. PMID:26289483

  20. Developmental Changes in Mental Rotation: A Dissociation Between Object-Based and Egocentric Transformations.

    PubMed

    Kaltner, Sandra; Jansen, Petra

    2016-01-01

    The present study was conducted to investigate developmental changes of mental rotation performance. We compared children, adults, and older adults regarding their performance in object-based and egocentric transformations. Both children and older adults showed higher overall reaction times compared to adults. Results were interpreted against the background of impaired working memory capacity in both children and older adults. Since mental changes in working memory are mediated by age differences in cognitive processing speed, cognitive speed is supposed to be the underlying factor. Regarding both types of transformations, an advantage of egocentric over object-based human figures was only found in adults which led us to tentatively propose that children and older adults show deficits in perspective taking compared to adults. PMID:27512525

  1. Developmental Changes in Mental Rotation: A Dissociation Between Object-Based and Egocentric Transformations

    PubMed Central

    Kaltner, Sandra; Jansen, Petra

    2016-01-01

    The present study was conducted to investigate developmental changes of mental rotation performance. We compared children, adults, and older adults regarding their performance in object-based and egocentric transformations. Both children and older adults showed higher overall reaction times compared to adults. Results were interpreted against the background of impaired working memory capacity in both children and older adults. Since mental changes in working memory are mediated by age differences in cognitive processing speed, cognitive speed is supposed to be the underlying factor. Regarding both types of transformations, an advantage of egocentric over object-based human figures was only found in adults which led us to tentatively propose that children and older adults show deficits in perspective taking compared to adults. PMID:27512525

  2. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images

    PubMed Central

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which was not possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images acquired with a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide applications or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963

  3. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images.

    PubMed

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which was not possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images acquired with a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide applications or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963
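
    The final gridding phase lends itself to a short sketch: a (hypothetical) binary weed mask, with crop rows already removed, is aggregated into per-cell coverage and binned into the three categories used above.

    ```python
    # Weed-infestation grid map from a binary weed mask (toy data).
    import numpy as np

    weed_mask = np.random.rand(600, 600) < 0.05   # placeholder classification
    cell = 50                                     # grid cell size in pixels

    h, w = weed_mask.shape
    coverage = weed_mask.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))

    # Three categories: weed-free, low (<5 %), higher coverage
    categories = np.digitize(coverage, bins=[0.0001, 0.05])
    print(np.bincount(categories.ravel(), minlength=3))
    ```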

  4. Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Hölbling, Daniel

    2015-05-01

    This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.

  5. Effect of Pansharpened Image on Some of Pixel Based and Object Based Classification Accuracy

    NASA Astrophysics Data System (ADS)

    Karakus, P.; Karabork, H.

    2016-06-01

    Classification is the most important method for determining the crop types present in a region for agricultural planning. There are two types of classification: pixel-based and object-based. While pixel-based classification methods are based on the information in each pixel, object-based classification is based on image objects formed by combining information from sets of similar pixels. A multispectral image has a higher spectral resolution than a panchromatic image, while a panchromatic image has a higher spatial resolution than a multispectral image. Pan-sharpening is the process of merging high-spatial-resolution panchromatic and high-spectral-resolution multispectral imagery to create a single high-resolution colour image. The aim of the study was to compare the classification accuracy attainable with a pan-sharpened image. A SPOT 5 image dated April 2013 was used: the 5 m panchromatic and 10 m multispectral images were pan-sharpened. Four classification methods were investigated: maximum likelihood, decision tree and support vector machine at the pixel level, and an object-based classification method. The SPOT 5 pan-sharpened image was used to classify sunflower and corn in a study site located in the Kadirli region of Osmaniye, Turkey, and the effects of the pan-sharpened image on the classification results were examined. Accuracy assessment showed that object-based classification produced better overall accuracy values than the other methods. The results indicate that these classification methods can be used for identifying sunflower and corn and estimating crop areas.

  6. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. PMID:27291188

  7. Remote sensing clustering analysis based on object-based interval modeling

    NASA Astrophysics Data System (ADS)

    He, Hui; Liang, Tianheng; Hu, Dan; Yu, Xianchuan

    2016-09-01

    In object-based clustering, image data are segmented into objects (groups of pixels) and then clustered based on the objects' features. This method can be used to automatically classify high-resolution, remote sensing images, but requires accurate descriptions of object features. In this paper, we ascertain that an interval-valued data model is appropriate for describing clustering prototype features. With this in mind, we developed an object-based interval modeling method for high-resolution, multiband, remote sensing data. We also designed an adaptive interval-valued fuzzy clustering method. We ran experiments utilizing images from the SPOT-5 satellite sensor, for the Pearl River Delta region and Beijing. The results indicate that the proposed algorithm considers both the anisotropy of the remote sensing data and the ambiguity of objects. Additionally, we present a new dissimilarity measure for interval vectors, which better separates the interval vectors generated by features of the segmentation units (objects). This approach effectively limits classification errors caused by spectral mixing between classes. Compared with the object-based unsupervised classification method proposed earlier, the proposed algorithm improves the classification accuracy without increasing computational complexity.

  8. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  9. Information extraction with object based support vector machines and vegetation indices

    NASA Astrophysics Data System (ADS)

    Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun

    2016-07-01

    Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information; however, it is still a challenging task because several factors affect classification accuracy. The resolution of the imagery, the number and homogeneity of land cover classes, the purity of the training data and the characteristics of the adopted classifiers are just some of these factors. Object-based image classification has some superiority over pixel-based classification for high-resolution images, since it uses geometry and structure information besides spectral information. Vegetation indices are also commonly used in the classification process, since they provide additional spectral information for vegetation, forestry and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object-based Support Vector Machines were implemented for the classification of crop types in the study area, located in the Aegean region of Turkey. Results demonstrated that the incorporation of NDRE increased the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreased it from 79.96% to 78.90%. Moreover, object-based classification with RapidEye data gives promising results for crop type mapping and analysis.
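
    Both indices are one-line computations on the red, red-edge and NIR bands; the sketch below uses hypothetical reflectance arrays and appends NDRE as an extra feature layer before classification, in the spirit of the study.

    ```python
    # NDVI and NDRE as additional feature layers (toy RapidEye-like bands).
    import numpy as np

    red, red_edge, nir = (np.random.rand(256, 256) for _ in range(3))  # placeholders

    ndvi = (nir - red) / (nir + red + 1e-9)
    ndre = (nir - red_edge) / (nir + red_edge + 1e-9)

    # Stack the winning index as an extra band for the object-based SVM
    stack = np.dstack([red, red_edge, nir, ndre])
    ```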

  10. An object-based compression system for a class of dynamic image-based representations

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Ng, King-To; Chan, Shing-Chow; Shum, Heung-Yeung

    2005-07-01

    This paper proposes a new object-based compression system for a class of dynamic image-based representations called plenoptic videos (PVs). PVs are simplified dynamic light fields, where the videos are taken at regularly spaced locations along line segments instead of a 2-D plane. The proposed system employs an object-based approach, where objects at different depth values are segmented to improve the rendering quality as in the pop-up light fields. Furthermore, by coding the plenoptic video at the object level, desirable functionalities such as scalability of contents, error resilience, and interactivity with individual IBR objects can be achieved. Besides supporting the coding of the texture and binary shape maps for IBR objects with arbitrary shapes, the proposed system also supports the coding of gray-scale alpha maps as well as geometry information in the form of depth maps to respectively facilitate the matting and rendering of the IBR objects. To improve the coding performance, the proposed compression system exploits both the temporal redundancy and spatial redundancy among the video object streams in the PV by employing disparity-compensated prediction or spatial prediction in its texture, shape and depth coding processes. To demonstrate the principle and effectiveness of the proposed system, a multiple video camera system was built and experimental results show that considerable improvements in coding performance are obtained for both synthetic scene and real scene, while supporting the stated object-based functionalities.

  11. Nonretinotopic perception of orientation: Temporal integration of basic features operates in object-based coordinates.

    PubMed

    Wutz, Andreas; Drewes, Jan; Melcher, David

    2016-08-01

    Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three experiments supporting nonretinotopic feature integration even for the most paradigmatic example of retinotopically-defined features: orientation. We presented observers with a repeated series of T-P displays in which the perceived rotation of Gabor gratings indicates processing in either retinotopic or object-based coordinates. In Experiment 1, the frequency of perceived retinotopic rotations decreased exponentially for longer interstimulus intervals (ISIs) between T-P display frames, with object-based percepts dominating after about 150-250 ms. In a second experiment, we show that motion and rotation judgments depend on the perception of a moving object during the T-P display ISIs rather than only on temporal factors. In Experiment 3, we cued the observers' attentional state either toward a retinotopic or object motion-based reference frame and then tracked both the observers' eye position and the time course of the perceptual bias while viewing identical T-P display sequences. Overall, we report novel evidence for spatiotemporal integration of even basic visual features such as orientation in nonretinotopic coordinates, in order to support perceptual constancy across self- and object motion. PMID:27494545

  12. Sandia National Laboratories Advanced Simulation and Computing (ASC) : appraisal method for the implementation of the ASC software quality engineering practices: Version 1.0.

    SciTech Connect

    Turgeon, Jennifer; Minana, Molly A.

    2008-02-01

    This document provides a guide to the process of conducting software appraisals under the Sandia National Laboratories (SNL) ASC Program. The goal of this document is to describe a common methodology for planning, conducting, and reporting results of software appraisals, thereby enabling: development of an objective baseline on implementation of the software quality engineering (SQE) practices identified in the ASC Software Quality Plan across the ASC Program; feedback from project teams on SQE opportunities for improvement; identification of strengths and opportunities for improvement for individual project teams; and guidance to the ASC Program on the focus of future SQE activities. Document contents include process descriptions, templates to promote consistent conduct of appraisals, and an explanation of the relationship of this procedure to the SNL ASC software program.

  13. Landslide mapping with multi-scale object-based image analysis - a case study in the Baichi watershed, Taiwan

    NASA Astrophysics Data System (ADS)

    Lahousse, T.; Chang, K. T.; Lin, Y. H.

    2011-10-01

    We developed a multi-scale OBIA (object-based image analysis) landslide detection technique to map shallow landslides in the Baichi watershed, Taiwan, after the 2004 Typhoon Aere event. Our semi-automated detection method selected multiple scales through landslide size statistics analysis for successive classification rounds. The detection performance achieved a modified success rate (MSR) of 86.5% with the training dataset and 86% with the validation dataset. This performance level was due to the multi-scale aspect of our methodology, as the MSR for single-scale classification was substantially lower, even after spectral difference segmentation, with a maximum of 74%. Our multi-scale technique was capable of detecting landslides of varying sizes, including very small landslides down to 95 m². The method presented certain limitations: the thresholds we established for classification were specific to the study area, to the landslide type in the study area, and to the spectral characteristics of the satellite image. Because updating site-specific and image-specific classification thresholds is easy with OBIA software, our multi-scale technique is expected to be useful for mapping shallow landslides at the watershed level.

  14. Powerplant software

    SciTech Connect

    Elliott, T.C.

    1995-07-01

    Powerplants need software to thrive and compete. Covered here are many programs and applications -- an overview of the functions, tasks, and problem-solving software is used for today. Software or, more accurately, software-driven systems are pervasive. Their presence is felt in every nook and cranny of the powerplant -- from design and construction through operation and maintenance, even dismantling and decommissioning -- embracing whole systems but also focusing on individual pieces of equipment. No one software supplier or two or three dominates -- powerplant software is the purview of scores if not hundreds of suppliers ranging from the largest corporations to individual consultants and application developers.

  15. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  16. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function, such as photosynthesis and productivity, are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m²); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines); and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed including object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because
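
    A condensed sketch of such a multi-classifier comparison, scoring several statistical learners on one per-object feature table, is given below; all data are randomly generated placeholders.

    ```python
    # Cross-validated comparison of classifier families on object features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC

    X = np.random.rand(300, 10)        # object features (spectra, shape, texture)
    y = np.random.randint(0, 3, 300)   # placeholder 3-class PFT labels

    for clf in (LogisticRegression(max_iter=500), MLPClassifier(max_iter=500),
                DecisionTreeClassifier(), SVC()):
        score = cross_val_score(clf, X, y, cv=5).mean()
        print(type(clf).__name__, round(score, 3))
    ```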

  17. Proprietary software

    NASA Technical Reports Server (NTRS)

    Marnock, M. J.

    1971-01-01

    The protection of intellectual property by a patent, a copyright, or trade secrets is reviewed. The present and future use of computers and software are discussed, along with the governmental uses of software. The popularity of contractual agreements for sale or lease of computer programs and software services is also summarized.

  18. Comparison of Object-Based Image Analysis Approaches to Mapping New Buildings in Accra, Ghana Using Multi-Temporal QuickBird Satellite Imagery

    PubMed Central

    Tsai, Yu Hsin; Stow, Douglas; Weeks, John

    2013-01-01

    The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings based on QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Feature Analyst software based on a spatial contextual classifier and ENVI Feature Extraction that uses a true object-based image analysis approach of image segmentation and segment classification were evaluated. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results compared to the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R² = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover and land use and demographic change. PMID:24415810
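
    The bi-temporal layerstack idea reduces to stacking both dates into a single multi-band input so that one classification separates "new building" from unchanged cover; a minimal sketch with placeholder arrays and a trivial stand-in rule:

    ```python
    # Bi-temporal layerstack change detection (toy data and rule).
    import numpy as np

    t2002 = np.random.rand(400, 400, 4)   # placeholder QuickBird bands, date 1
    t2010 = np.random.rand(400, 400, 4)   # placeholder QuickBird bands, date 2

    layerstack = np.concatenate([t2002, t2010], axis=-1)   # 8-band stack

    # Any per-pixel or per-object classifier can now see both dates at once;
    # a crude brightness-increase rule stands in for it here
    new_building = (t2010.mean(-1) - t2002.mean(-1)) > 0.3
    ```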

  19. An object-based approach to weather analysis and its applications

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Diederich, Malte; Horvath, Akos; Simmer, Clemens; Kumjian, Matthew

    2013-04-01

    The research group 'Object-based Analysis and SEamless prediction' (OASE) within the Hans Ertel Centre for Weather Research programme (HErZ) pursues an object-based approach to weather analysis. The object-based tracking approach adopts the Lagrange perspective by identifying and following the development of convective events over the course of their lifetime. Prerequisites of the object-based analysis are a high-resolved observational data base and a tracking algorithm. A near real-time radar and satellite remote sensing-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. A 3D scale-space tracking identifies convective rain events in the dual-composite and monitors the development over the course of their lifetime. The OASE-group exploits the object-based approach in several fields of application: (1) For a better understanding and analysis of precipitation processes responsible for extreme weather events, (2) in nowcasting, (3) as a novel approach for validation of meso-γ atmospheric models, and (4) in data assimilation. Results from the different fields of application will be presented. The basic idea of the object-based approach is to identify a small set of radar- and satellite derived descriptors which characterize the temporal development of precipitation systems which constitute the objects. So-called proxies of the precipitation process are e.g. the temporal change of the brightband, vertically extensive columns of enhanced differential reflectivity ZDR or the cloud top temperature and heights identified in the 4D field of ground-based radar reflectivities and satellite retrievals generated by a cell during its life time. They quantify (micro-) physical differences among rain events and relate to the precipitation yield. Analyses on the informative content of ZDR columns as precursor for storm evolution for example will be presented to demonstrate

  20. Mapping gully-affected areas in the region of Taroudannt, Morocco based on Object-Based Image Analysis (OBIA)

    NASA Astrophysics Data System (ADS)

    d'Oleire-Oltmanns, Sebastian; Marzolff, Irene; Tiede, Dirk; Blaschke, Thomas

    2015-04-01

    The need for area-wide landform mapping approaches, especially in terms of land degradation, can be ascribed to the fact that area-wide approaches consider the (spatial) context of erosional landforms by providing additional information on the physiography neighboring the distinct landform. This study presents an approach for the detection of gully-affected areas by applying object-based image analysis in the region of Taroudannt, Morocco, which is highly affected by gully erosion while simultaneously representing a major region of agro-industry with a high demand for arable land. Various sensors provide readily available high-resolution optical satellite data with a much better temporal resolution than 3D terrain data, which led to the development of an area-wide mapping approach to extract gully-affected areas using only optical satellite imagery. The classification rule-set was developed with a clear focus on virtual spatial independence within the software environment of eCognition Developer. This allows the incorporation of knowledge about the target objects under investigation. Only optical QuickBird-2 satellite data and freely available OpenStreetMap (OSM) vector data were used as input data. The OSM vector data were incorporated in order to mask out plantations and residential areas. Optical input data are more readily available to a broad range of users than terrain data, which is considered a major advantage. The methodology additionally incorporates expert knowledge and freely available vector data in a cyclic object-based image analysis approach. This connects the two fields of geomorphology and remote sensing. The classification results allow conclusions on the current distribution of gullies. The results of the classification were checked against manually delineated reference data incorporating expert knowledge based on several field campaigns in the area, resulting in an overall classification accuracy of 62

  1. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
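
    A small sketch of one such transformation is shown below: replacing the raw bands with their first principal components before segmentation (placeholder image; SLIC stands in for the segmentation algorithm).

    ```python
    # Segmenting principal-component images instead of raw bands (toy data).
    import numpy as np
    from sklearn.decomposition import PCA
    from skimage import segmentation

    img = np.random.rand(128, 128, 6)                  # placeholder 6-band image
    flat = img.reshape(-1, img.shape[-1])
    pcs = PCA(n_components=3).fit_transform(flat).reshape(128, 128, 3)
    pcs = (pcs - pcs.min()) / (np.ptp(pcs) + 1e-9)     # rescale to [0, 1]

    segments = segmentation.slic(pcs, n_segments=400, compactness=10, start_label=1)
    print(segments.max(), "objects from transformed input")
    ```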

  2. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips. PMID:3536223

  3. Topic Modelling for Object-Based Classification of Vhr Satellite Images Based on Multiscale Segmentations

    NASA Astrophysics Data System (ADS)

    Shen, Li; Wu, Linmei; Li, Zhipeng

    2016-06-01

    Multiscale segmentation is a key prerequisite step for object-based classification methods. However, it is often not possible to determine a sole optimal scale for the image to be classified because in many cases different geo-objects, and even an identical geo-object, may appear at different scales in one image. In this paper, an object-based classification method based on multiscale segmentation results in the framework of topic modelling is proposed to classify VHR satellite images in an entirely unsupervised fashion. In the stage of topic modelling, grayscale histogram distributions for each geo-object class and each segment are learned in an unsupervised manner from multiscale segments. In the stage of classification, each segment is allocated a geo-object class label by a similarity comparison between the grayscale histogram distributions of the segment and each geo-object class. Experimental results show that the proposed method can perform better than traditional methods based on topic modelling.

  4. Joint source-channel coding for wireless object-based video communications utilizing data hiding.

    PubMed

    Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K

    2006-08-01

    In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection. PMID:16900673

  5. An Integrative Object-Based Image Analysis Workflow for UAV Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
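
    A minimal sketch of the over-segmentation step that initializes the Binary Partition Tree, using scikit-image's SLIC implementation; the sample image and parameter values are placeholders for the mosaicked UAV panorama:

      from skimage.data import astronaut
      from skimage.segmentation import slic

      image = astronaut()              # stand-in for a mosaicked panoramic image
      superpixels = slic(image, n_segments=500, compactness=10.0, start_label=0)
      print("initial partition:", superpixels.max() + 1, "SLIC superpixels")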

  6. Preliminary Results of Earthquake-Induced Building Damage Detection with Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Sabuncu, A.; Uca Avci, Z. D.; Sunar, F.

    2016-06-01

    Earthquakes are among the most destructive natural disasters, resulting in massive loss of life, infrastructure damage and financial losses. Earthquake-induced building damage detection is a very important step after an earthquake, since building damage is one of the most critical threats to cities and countries in terms of the extent of damage, the rate of collapsed buildings, the damage grade near the epicenter, and the building damage types across all constructions. The Van-Ercis (Turkey) earthquake (Mw = 7.1) occurred on October 23rd, 2011, at 10:41 UTC (13:41 local time), centered at 38.75 N, 43.36 E, which places the epicenter about 30 kilometers north of the city of Van. It is recorded that 604 people died and approximately 4000 buildings collapsed or were seriously damaged by the earthquake. In this study, high-resolution satellite images of Van-Ercis, acquired by Quickbird-2 (Digital Globe Inc.) after the earthquake, were used to detect debris areas using an object-based image classification. Two different land surfaces, one with homogeneous and one with heterogeneous land cover, were selected as case study areas. As a first step of the object-based image processing, segmentation was applied with suitable scale and homogeneity criterion parameters. As a next step, condition-based classification was used. In the final step of this preliminary study, the outputs were compared with street-view images and orthophotos to verify and evaluate the classification accuracy.

  7. Uncertain Training Data Edition for Automatic Object-Based Change Map Extraction

    NASA Astrophysics Data System (ADS)

    Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.

    2013-09-01

    Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and to assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. Available maps of the study area are among the most important sources for this purpose. Although old databases and maps are a great resource, it is more than likely that training data extracted from them contain errors, which affects the classification procedure; as a result, editing the training samples is essential. Due to the urban nature of the study area and the problems that arise with pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Due to the age of the map, these samples are uncertain and contain erroneous data. To handle this issue, an editing process is proposed based on the K-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, the classified image is compared with the existing map and the changed areas are detected.
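
    The record combines K-nearest neighbour and k-means editing; the sketch below shows only the KNN side (Wilson-style editing), assuming a sample is suspect when its label disagrees with the majority of its neighbours. Data and names are synthetic:

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def edit_training_set(X, y, k=5):
          # Drop samples whose label disagrees with the majority of their
          # k nearest neighbours (the sample itself is excluded).
          knn = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)
          neighbours = knn.kneighbors(X, return_distance=False)[:, 1:]
          keep = np.array([np.bincount(y[nb]).argmax() == label
                           for nb, label in zip(neighbours, y)])
          return X[keep], y[keep]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
      y = np.array([0] * 50 + [1] * 50)
      y[::17] = 1 - y[::17]            # inject label noise from an "old map"
      Xc, yc = edit_training_set(X, y)
      print(len(y) - len(yc), "suspect samples removed")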

  8. Object based change detection of Central Asian Tugai vegetation with very high spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Gärtner, Philipp; Förster, Michael; Kurban, Alishir; Kleinschmit, Birgit

    2014-09-01

    Ecological restoration of degraded riparian Tugai forests in north-western China is a key driver to combat desertification in this region. Recent restoration efforts attempt to recover the forest along with its most dominant tree species, Populus euphratica. The present research observed the response of natural vegetation using an object-based change detection method on QuickBird (2005) and WorldView-2 (2011) data. We applied a region growing approach to derived Normalized Difference Vegetation Index (NDVI) values in order to identify single P. euphratica trees, delineate tree crown areas and quantify crown diameter changes. Results were compared to 59 reference trees. The findings confirmed positive tree crown growth and suggest a crown diameter increase of 1.14 m on average. On a single-tree basis, the diameters of larger crowns were generally underestimated, while small crowns were slightly underestimated in QuickBird and overestimated in WorldView-2 images. The results of the automated tree crown delineation show a moderate relation to field reference data, with R² = 0.36 for 2005 and R² = 0.48 for 2011. The object-based image analysis (OBIA) method proved to be applicable in sparse riparian Tugai forests and showed great suitability for evaluating ecological restoration efforts in an endangered ecosystem.
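
    A minimal sketch of the NDVI layer that the region-growing crown delineation operates on; the band arrays and the seed threshold are illustrative placeholders:

      import numpy as np

      def ndvi(nir, red):
          # Normalized Difference Vegetation Index, safe against zero division.
          nir, red = nir.astype(float), red.astype(float)
          return (nir - red) / np.maximum(nir + red, 1e-6)

      rng = np.random.default_rng(1)
      nir = rng.integers(50, 255, (100, 100))
      red = rng.integers(10, 120, (100, 100))
      v = ndvi(nir, red)
      crown_seeds = v > 0.5    # candidate crown pixels to seed region growing
      print("seed pixels:", int(crown_seeds.sum()))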

  9. Object-based cropland degradation identification: a case study in Uzbekistan

    NASA Astrophysics Data System (ADS)

    Dubovyk, Olena; Menz, Gunter; Conrad, Christopher; Khamzina, Asia

    2012-10-01

    Sustainability of irrigated agriculture-based economies, such as those in Central Asia, is threatened by cropland degradation. The field-based identification of degraded agricultural areas can aid in developing appropriate land rehabilitation and monitoring programs. This paper combined object-based change detection and spectral mixture analysis to develop an approach for identifying parcels of irrigated degraded cropland in Northern Uzbekistan, Central Asia. Linear spectral unmixing, followed by object-based change vector analysis, was applied to Landsat TM images acquired in 1987 and 2009. Considering the spectral dimensionality of Landsat TM, a 4-endmember model (green vegetation, water, dark soil, and bright soil) was set up for the analysis. The spectral unmixing results were valid, as indicated by overall root mean square errors of <2.5% reflectance for all images. The change detection results revealed that about 33% (84,540 ha) of the cropland in the study area was affected by degradation processes to varying degrees. The spatial distribution of degraded fields was mainly associated with abandoned fields and lands with inherently low-fertility soils. The proposed approach could be elaborated for field-based monitoring of cropland degradation in similar landscapes of Central Asia and elsewhere.
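
    A minimal sketch of linear spectral unmixing with a sum-to-one constraint, solved here by augmented least squares; the 6-band endmember spectra below are invented stand-ins for the study's green vegetation, water, dark soil and bright soil endmembers:

      import numpy as np

      def unmix(pixel, endmembers, weight=1e3):
          # Solve pixel ~ endmembers @ fractions, with fractions summing to
          # one enforced by an appended, heavily weighted constraint row.
          n = endmembers.shape[1]
          A = np.vstack([endmembers, weight * np.ones(n)])
          b = np.append(pixel, weight)
          fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
          return fractions

      E = np.array([[0.05, 0.04, 0.07, 0.40, 0.30, 0.20],   # green vegetation
                    [0.02, 0.03, 0.02, 0.01, 0.01, 0.01],   # water
                    [0.08, 0.10, 0.12, 0.15, 0.18, 0.20],   # dark soil
                    [0.20, 0.25, 0.30, 0.35, 0.40, 0.45]]).T  # bright soil
      pixel = E @ np.array([0.5, 0.1, 0.3, 0.1])   # synthetic mixed pixel
      print(np.round(unmix(pixel, E), 3))          # ~ [0.5, 0.1, 0.3, 0.1]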

  10. Object-based land-cover classification for metropolitan Phoenix, Arizona, using aerial photography

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxiao; Myint, Soe W.; Zhang, Yujia; Galletti, Christopher; Zhang, Xiaoxiang; Turner, Billie L.

    2014-12-01

    Detailed land-cover mapping is essential for a range of research issues addressed by the sustainability and land system sciences and planning. This study uses an object-based approach to create a 1 m land-cover classification map of the expansive Phoenix metropolitan area through the use of high spatial resolution aerial photography from the National Agriculture Imagery Program. It employs an expert knowledge decision rule set and incorporates a cadastral GIS vector layer as auxiliary data. The classification rule set was established on a hierarchical image object network, and the properties of parcels in the vector layer were used to establish land cover types. Image segmentations were initially utilized to separate the aerial photos into parcel-sized objects, and were further used for detailed land type identification within the parcels. Characteristics of image objects, from contextual and geometrical aspects, were used in the decision rule set to reduce the spectral limitations of the four-band aerial photography. Classification results include 12 land-cover classes and subclasses that may be assessed from the sub-parcel to the landscape scale, facilitating examination of scale dynamics. The proposed object-based classification method provides robust results, uses minimal and readily available ancillary data, and reduces computational time.

  11. Shadows remain segmented as selectable regions in object-based attention paradigms

    PubMed Central

    de-Wit, Lee; Milner, David; Kentridge, Robert

    2012-01-01

    It is unclear how shadows are processed in the visual system. Whilst shadows are clearly used as an important cue to localise the objects that cast them, there is mixed evidence regarding the extent to which shadows influence the recognition of those objects. Furthermore, experiments exploring the perception of shadows per se have provided evidence that the visual system has less efficient access to the detailed form of a region if it is interpreted as a shadow. The current study sought to clarify our understanding of the manner in which shadows are represented by the visual system by exploring how they influence attention in two different object-based attention paradigms. The results provide evidence that cues to interpret a region as a shadow do not reduce the extent to which that region will result in a within-'object' processing advantage. Thus, whilst there is evidence that shadows are processed differently at higher stages of object perception, the present result shows that they are still represented as distinctly segmented regions as far as the allocation of attention is concerned. This result is consistent with the idea that object-based attention phenomena result from region-based scene segmentation rather than from the representations of objects per se. PMID:23145275

  12. Subjective time dilation: spatially local, object-based, or a global visual experience?

    PubMed

    New, Joshua J; Scholl, Brian J

    2009-01-01

    Time can appear to slow down in certain brief real-life events, e.g. during car accidents or critical moments of athletes' performances. Such time dilation can also be produced to a smaller degree in the laboratory by 'oddballs' presented in series of otherwise identical stimuli. We explored the spatial distribution of subjective time dilation: Does time expand only for the oddball objects themselves, only for the local spatial region including the oddball, or for the entire visual field? Because real-life traumatic events provoke an apparently global visual experience of time expansion, we predicted, and observed, that a locally discrete oddball would also dilate the apparent duration of other concurrent events in other parts of the visual field. This 'dilation at a distance' was not diminished by increasing spatial separation between the oddball and target events, and was not influenced by manipulations of objecthood that drive object-based attention. In addition, behaviorally 'urgent' oddballs (looming objects) yielded time dilation, but visually similar receding objects did not. We interpret these results in terms of the influence of attention on time perception, where attention reflects general arousal and faster internal pacing rather than spatial or object-based selection per se. As a result, attention influences subjective time dilation as a global visual experience. PMID:19271914

  13. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quickbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers shape, texture and content information associated with the image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters were analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with optimal scale, shape and compactness parameters defined after an extensive trial process on the data sets. A nearest neighbour classifier was applied to the segmented images, and accuracy assessment was then performed. Results show that segmentation parameters have a direct effect on classification accuracy, and that low values of scale-shape combinations produce the highest classification accuracies. The compactness parameter was found to have minimal effect on the construction of image objects; hence it can be set to a constant value in image classification.
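
    The study tunes eCognition's proprietary multi-resolution segmentation, so the sketch below substitutes scikit-image's graph-based Felzenszwalb segmentation to illustrate the same kind of trial-based scale-parameter analysis; the image and scale values are illustrative:

      from skimage.data import astronaut
      from skimage.segmentation import felzenszwalb

      image = astronaut()      # stand-in for a VHR scene
      for scale in (50, 100, 200, 400):
          # Larger scale values merge more pixels into fewer, bigger objects.
          labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=50)
          print(f"scale={scale:4d} -> {labels.max() + 1:5d} image objects")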

  14. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    requirements. This allows the projects leeway to meet these requirements in many forms that best suit a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. This update also incorporated advances in the state of the practice of software safety from academia and private industry. It addresses some of the more common issues now facing software developers in the NASA environment, such as the use of Commercial-Off-the-Shelf Software (COTS), Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update, and it has had both NASA-wide internal review by the software engineering, quality, safety, and project management disciplines and expert external review. This presentation and paper will discuss the new NASA Software Safety Standard, its organization, and key features. It will start with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes. It will then give a brief overview of the NASA Software Safety Process. This will include an overview of the key personnel responsibilities and functions that must be performed for safety-critical software.

  15. Hierarchical Object-based Image Analysis approach for classification of sub-meter multispectral imagery in Tanzania

    NASA Astrophysics Data System (ADS)

    Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.

    2015-12-01

    Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches for the use of multi-scale and multi-source remote sensing data, as well as ground-based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link ground-based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object-Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania this is an important concern, and the traditional pixel-based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.
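
    A minimal sketch of the supervised step, assuming per-object feature vectors (spectral means, texture, context) have already been extracted: a random forest is trained and agreement is measured with Cohen's kappa. Features and labels are synthetic:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import cohen_kappa_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.normal(size=(600, 10))                  # 10 hypothetical object features
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # two toy land-cover classes
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
      print("kappa:", round(cohen_kappa_score(yte, rf.predict(Xte)), 3))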

  16. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories and with secured redshift. The algorithm retrieved 91% of the galaxies with only 9% false detection. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method is when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).
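
    For reference, the Sérsic profile mentioned above is the standard surface-brightness law (a textbook formula, not quoted from the paper):

      I(r) = I_e \exp\left\{ -b_n \left[ \left( r / r_e \right)^{1/n} - 1 \right] \right\}

    where r_e is the effective (half-light) radius, I_e the intensity at r_e, n the Sérsic index, and b_n a normalization constant chosen so that r_e encloses half of the total light; n = 1 gives an exponential disc and n = 4 the de Vaucouleurs profile.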

  17. Mapping Arctic Ocean Coastline Change With Landsat Archive Data And Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Hulslander, D.

    2010-12-01

    The melting of arctic permafrost is a significant effect of climate change. The combination of rising sea level, longer periods of ice-free conditions in the Arctic Ocean and melting permafrost can greatly accelerate coastline changes in general and arctic coastal erosion in particular. Anderson et al. (2009; Geology News) have measured erosion rates of 15 m per year at sites along the Alaskan Arctic Ocean coastline dominated by ice-cemented peats and silt-rich permafrost. With over 45,000 km of Arctic Ocean coastline, it is important that coastline movement and transgressive oceanic regimes be mapped and tracked with accurate data. Determining historic coastal erosion rates for this region is as important as mapping the current extent of the phenomenon to create as complete a picture as possible and locate where rapid erosion is an emergent process. The extent of the area involved combined with its inaccessibility and inhospitable conditions makes geologic remote sensing an appropriate tool for characterizing Arctic Ocean coastal erosion. Traditional weaknesses associated with using remote sensing in the geosciences have included a lack of historical data or baseline information as well as difficulties in systematization of feature mapping. Using object-based image analysis on Landsat archive data can overcome these issues and may allow for a potential multi-decadal map of Arctic Ocean coastline changes. The Landsat family of sensors (MSS 1-3 and TM/ETM 4, 5, and 7) has been providing imagery as frequently as every 16 days since July 1972. The frequent revisits maximize the chance of getting cloud-free imagery at least once per year in most study areas. Also, Landsat data are well characterized, extensively studied, and freely available from the USGS EROS Data Center Archive, making it an ideal and stable source of data for mapping the Arctic Ocean coastline. Delineating large sections of coastline from imagery by hand digitization would be impractical due to the

  18. 1986 Educational Software Preview Guide.

    ERIC Educational Resources Information Center

    School Science and Mathematics, 1986

    1986-01-01

    Presents a list of favorably reviewed microcomputer software for K-12 student instruction. It is not a buying guide, but an aid in locating programs for review. Software is listed under such headings as computer awareness, advanced mathematics, algebra, arithmetic, problem-solving/logic, biology, astronomy, chemistry, environmental…

  19. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude is required: looking at what you do NOT want software to do along with what you want it to do, and assuming things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially eliminating complexity, can be very helpful.

  20. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    NASA Astrophysics Data System (ADS)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km² each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominant land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. ALS point clouds and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m², with additional information about intensity and encoded RGB values. The orthophotomaps had a spatial resolution of 10 cm. From the point clouds, two raster maps were generated, (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters. 15 homogeneous training areas representing each cover class were chosen. Classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which integrates both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage due to
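
    A minimal sketch of the nDSM derivation used as a classification layer above: the normalised Digital Surface Model is surface elevation minus ground elevation, so buildings and vegetation appear as positive heights. The rasters here are tiny synthetic arrays standing in for the ALS-derived grids:

      import numpy as np

      dsm = np.array([[102.1, 103.4], [108.9, 102.2]])   # surface elevation (m)
      dtm = np.array([[101.9, 102.0], [102.1, 102.0]])   # ground elevation (m)
      ndsm = dsm - dtm            # object height above ground (m)
      print(ndsm)                 # buildings/trees stand out as large positive values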

  1. Building and Programming a Smart Robotic System for Distinguishing Objects Based on their Shape and Colour

    NASA Astrophysics Data System (ADS)

    Sharari, T. M.

    2015-03-01

    This paper presents a robotic system designed for holding and placing objects based on their colour and shape. The presented robot is given a complete set of instructions of positions and orientation angles for each manipulation motion. The main feature of this paper is that the developed robot uses a combination of vision and motion systems for holding and placing the work-objects, mounted on a flat work-plane, based on their shapes and colours. This combination improves the flexibility of manipulation, which may help eliminate the use of some expensive manipulation tasks in a variety of industrial applications. The robotic system presented in this paper is designed as an educational robot that possesses the ability for holding-and-placing operations with a limited load. To process the various instructions for holding and placing the work objects, a main control unit, the Manipulation Control Unit (MCU), is used together with a slave unit that performs the actual instructions from the MCU.

  2. Mental rotation performance in soccer players and gymnasts in an object-based mental rotation task.

    PubMed

    Jansen, Petra; Lehmann, Jennifer

    2013-01-01

    In this study, the effect of motor expertise on an object-based mental rotation task was investigated. 60 males and 60 females (40 soccer players, 40 gymnasts, and 40 non-athletes, with equal numbers of males and females in each group) solved a psychometric mental rotation task with both cube and human figures. The results revealed that all participants had a higher mental rotation accuracy for human figures compared to cube figures, that the gender difference was reduced with human figures, and that gymnasts demonstrated a better mental rotation performance than non-athletes. The results are discussed against the background of the existing literature on motor experts and mental rotation performance, as well as the importance of the testing situation and the test construction. PMID:23833695

  3. Object formation in visual working memory: Evidence from object-based attention.

    PubMed

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects. In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. PMID:27253863

  4. Object-Based Forest Change Detection Using High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Chehata, N.; Orny, C.; Boukir, S.; Guyon, D.

    2011-04-01

    An object-based approach for forest disaster change detection using High Resolution (HR) satellite images is proposed. An automatic feature selection process is used to optimize image segmentation via an original calibration-like procedure. A multitemporal classification then enables the separation of wind-fall from intact areas based on a new descriptor that depends on the level of fragmentation of the detected regions. The mean shift algorithm was used in both the segmentation and the classification processes. The method was tested on a high resolution Formosat-2 multispectral satellite image pair acquired before and after the Klaus storm. The obtained results are encouraging and the contribution of high resolution images for forest disaster mapping is discussed.
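
    A minimal sketch of mean-shift clustering, the algorithm the paper applies in both segmentation and classification, here run with scikit-learn on toy two-dimensional feature vectors; all values are illustrative:

      import numpy as np
      from sklearn.cluster import MeanShift

      rng = np.random.default_rng(2)
      features = np.vstack([rng.normal(0.2, 0.05, (200, 2)),    # intact forest
                            rng.normal(0.7, 0.05, (200, 2))])   # wind-fall candidates
      labels = MeanShift(bandwidth=0.2).fit_predict(features)
      print("modes found:", labels.max() + 1)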

  5. Dense estimation and object-based segmentation of the optical flow with robust techniques.

    PubMed

    Mémin, E; Pérez, P

    1998-01-01

    In this paper, we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curves equipped with different kinds of priors can easily be supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning. PMID:18276286
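
    A generic form of the two-term robust objective described above (notation ours; a sketch of the standard formulation rather than the authors' exact model):

      E(w) = \sum_{s \in S} \rho_1\!\left( \nabla I(s) \cdot w(s) + I_t(s) \right)
             + \alpha \sum_{\langle s, r \rangle} \rho_2\!\left( \lVert w(s) - w(r) \rVert \right)

    where the first term robustly enforces the optical flow constraint at each site s, the second is a discontinuity-preserving smoothness prior over neighbouring sites <s, r>, rho_1 and rho_2 are robust penalty functions, and alpha balances the two terms.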

  6. A proto-object based saliency model in three-dimensional space.

    PubMed

    Hu, Brian; Kane-Jackson, Ralinkae; Niebur, Ernst

    2016-02-01

    Most models of visual saliency operate on two-dimensional images, using elementary image features such as intensity, color, or orientation. The human visual system, however, needs to function in complex three-dimensional environments, where depth information is often available and may be used to guide the bottom-up attentional selection process. In this report we extend a model of proto-object based saliency to include depth information and evaluate its performance on three separate three-dimensional eye tracking datasets. Our results show that the additional depth information provides a small, but statistically significant, improvement in the model's ability to predict perceptual saliency (eye fixations) in natural scenes. The computational mechanisms of our model have direct neural correlates, and our results provide further evidence that proto-objects help to establish perceptual organization of the scene. PMID:26739278

  7. Mapping potential Blanding's turtle habitat using aerial orthophotographic imagery and object based classification

    NASA Astrophysics Data System (ADS)

    Barker, Rebecca

    Blanding's turtle (Emydoidea blandingii) is a threatened species in southern Quebec that is being inventoried to determine abundance and potential habitat by the Quebec Ministry of Natural Resources and Wildlife. In collaboration with that program and using spring leaf-off aerial orthophotos of Gatineau Park, attributes associated with known habitat criteria were analyzed: wetlands with open water, vegetation mounds for camouflage and thermoregulation, and logs for spring sun-basking. Pixel-based classification to separate wetlands from other land cover types was followed by object-based segmentation and rule-based classification of within-wetland vegetation and logs. Classifications integrated several image characteristics including texture, context, shape, area and spectral attributes. Field data and visual interpretation showed the accuracies of wetland and within wetland habitat feature classifications to be over 82.5%. The wetland classification results were used to develop a ranked potential habitat suitability map for Blanding's turtle that can be employed in conservation planning and management.

  8. Object-based change detection on multiscale fusion for VHR remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-12-01

    This paper presents a novel object-based, context-sensitive technique for unsupervised change detection in very high spatial resolution (VHR) remote sensing images. The proposed technique models the scene at different segmentation levels, defining multiscale-level image objects. Multiscale-level image object change features are helpful for improving the discriminability between the changed and unchanged classes. First, a set of optimal scales is determined according to the classification principle of "homogeneity within class, heterogeneity between classes". Then a multiscale-level change vector analysis, applied to each pixel of the considered images and implemented on fused multiscale features, helps improve the accuracy and the degree of automation. The technique properly analyzes the context information of multiscale-level image objects at the considered spatial position. The adaptive nature of the optimal multiscale image objects and their multilevel representation allow proper modeling of complex scenes in the investigated region. Experimental results confirm the effectiveness of the proposed approach.
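
    A minimal sketch of the change vector analysis step under simplifying assumptions: per-object (or per-pixel) feature vectors from the two dates are differenced, the change magnitude is the Euclidean norm, and a simple global threshold separates changed from unchanged. All data are synthetic:

      import numpy as np

      rng = np.random.default_rng(3)
      f_date1 = rng.normal(0.4, 0.05, (1000, 4))             # object features, date 1
      f_date2 = f_date1 + rng.normal(0.0, 0.02, (1000, 4))   # mostly unchanged
      f_date2[:50] += 0.3                                    # inject genuine change

      magnitude = np.linalg.norm(f_date2 - f_date1, axis=1)  # change vector magnitude
      changed = magnitude > magnitude.mean() + 2 * magnitude.std()
      print("objects flagged as changed:", int(changed.sum()))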

  9. Studies on pansharpening and object-based classification of Worldview-2 multispectral image

    NASA Astrophysics Data System (ADS)

    Wyczałek, I.; Wyczałek, E.

    2013-12-01

    The new information contained in the four additional spectral bands of high-resolution images from the satellite sensor WorldView-2 should provide a visible improvement in the quality of analysis of large-scale phenomena occurring at the ground. A selected part of an image of Poznan was analyzed in order to verify these possibilities in relation to the urban environment. It includes a riverside green area and a number of adjacent buildings. Attention was focused on two components of object-oriented analysis: sharpening the image and its classification. In terms of pansharpening, the aim was to obtain a clear picture of terrain objects in detail, which should lead to a correct division of the image into homogeneous segments and a subsequent fine classification. The intention was to make it possible to separate small field objects within the set of classes. The task was carried out using various computer programs that enable the development and analysis of raster data (IDRISI Andes, ESRI ArcGIS 9.3, eCognition Developer 8) and some of our own computational modules. The main scientific objective of this study was to determine how much the information from the new spectral image layers, after pansharpening, affects the quality of object-based classification of land cover in green and built-up areas of the city. The above-mentioned ability to use additional data from the new spectral bands of the WorldView-2 image served as the basis for improving classification quality. To assess the quality of the classification, we used a test that examines only the uncertain areas of the picture, that is, those lying on the boundaries between differently classified types of land cover. The outcome of the assessment confirmed the thesis of a positive, albeit small, impact of the additional spectral channels on the result of object-based classification; pansharpening itself, however, only slightly improves the quality of the classified image.
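
    A minimal sketch of one common pansharpening scheme (the Brovey transform), shown because the record does not specify which algorithm each program applies; the bands are synthetic and assumed co-registered and resampled to the panchromatic grid:

      import numpy as np

      rng = np.random.default_rng(4)
      ms = rng.uniform(0.1, 0.9, (4, 64, 64))   # 4 multispectral bands (upsampled)
      pan = ms.mean(axis=0) + rng.normal(0, 0.01, (64, 64))   # panchromatic band

      # Each band is rescaled by the ratio of the pan band to the band sum,
      # injecting the pan band's spatial detail into the multispectral bands.
      sharpened = ms * pan / np.maximum(ms.sum(axis=0), 1e-6)
      print(sharpened.shape)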

  10. Top-down facilitation of visual object recognition: object-based and context-based contributions.

    PubMed

    Fenske, Mark J; Aminoff, Elissa; Gronau, Nurit; Bar, Moshe

    2006-01-01

    The neural mechanisms subserving visual recognition are traditionally described in terms of bottom-up analysis, whereby increasingly complex aspects of the visual input are processed along a hierarchical progression of cortical regions. However, the importance of top-down facilitation in successful recognition has been emphasized in recent models and research findings. Here we consider evidence for top-down facilitation of recognition that is triggered by early information about an object, as well as by contextual associations between an object and other objects with which it typically appears. The object-based mechanism is proposed to trigger top-down facilitation of visual recognition rapidly, using a partially analyzed version of the input image (i.e., a blurred image) that is projected from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC information that is back-projected as "initial guesses" to the temporal cortex where it presensitizes the most likely interpretations of the input object. In addition to this object-based facilitation, a context-based mechanism is proposed to trigger top-down facilitation through contextual associations between objects in scenes. These contextual associations activate predictive information about which objects are likely to appear together, and can influence the "initial guesses" about an object's identity. We have shown that contextual associations are analyzed by a network that includes the parahippocampal cortex and the retrosplenial complex. The integrated proposal described here is that object- and context-based top-down influences operate together, promoting efficient recognition by framing early information about an object within the constraints provided by a lifetime of experience with contextual associations. PMID:17027376

  11. Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.

    2012-10-01

    The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were first created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except for the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.

  12. A bio-inspired method and system for visual object-based attention and segmentation

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.

  13. Object-based class modelling for multi-scale riparian forest habitat mapping

    NASA Astrophysics Data System (ADS)

    Strasser, Thomas; Lang, Stefan

    2015-05-01

    Object-based class modelling allows for mapping complex, hierarchical habitat systems. The riparian zone, including forests, represents such a complex ecosystem. Forests within riparian zones are biologically highly productive and characterized by a rich biodiversity; they are thus considered of high community interest, with an imperative to be protected and regularly monitored. Satellite earth observation (EO) provides tools for capturing the current state of forest habitats, such as forest composition including the intermixture of non-native tree species. Here we present a semi-automated object-based image analysis (OBIA) approach for the mapping of riparian forests by applying class modelling of habitats based on the European Nature Information System (EUNIS) habitat classifications and the European Habitats Directive (HabDir) Annex 1. A very high resolution (VHR) WorldView-2 satellite image provided the required spatial and spectral details for a multi-scale image segmentation and rule-base composition to generate a six-level hierarchical representation of riparian forest habitats. Habitats were thereby hierarchically represented within an image object hierarchy as forest stands, stands of homogeneous tree species and single trees represented by sunlit tree crowns. 522 EUNIS level 3 (EUNIS-3) habitat patches with a mean patch size (MPS) of 12,349.64 m² were modelled from 938 forest stand patches (MPS = 6868.20 m²) and 43,742 tree stand patches (MPS = 140.79 m²). The delineation quality of the modelled EUNIS-3 habitats (focal level) was quantitatively assessed against an expert-based visual interpretation, showing a mean deviation of 11.71%.

  14. Object-based "dynamic cover types" - a new framework for monitoring landscape-level ecosystem change

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Wang, L.; Gong, P.; Zhong, L.

    2012-12-01

    Traditional analyses of ecosystem change with remote sensing data often focus on transitions between 'static' landscape cover types. However, in dynamic landscapes with frequent disturbance long-term surface trends may be obscured by intermediate shorter-term variation. Availability of high-quality remote sensing data is often inconsistent among change periods, which contributes to the uncertainty in change detection among 'static' classes. Alternatively, we propose Dynamic Cover Types (DCTs) to characterize highly variable areas based on their nested change regimes shaped by climate, phenology and disturbance. We define DCTs as sequences of surface transformations that have distinct temporal trajectories observable across landscapes within a given change period. To illustrate and test this concept, we combined multispectral and microwave satellite imagery to classify DCTs for a large complex seasonally inundated freshwater wetland in China in 2007-2008. Instead of using pixels, we mapped DCTs using object-based image analysis and supervised machine-learning algorithms to characterize common change types based on their spatial and temporal context. Spatial distributions of mapped DCTs simultaneously reflected several key drivers of wetland change, including broad-scale changes in submersion times, vegetation phenology and prevalence of plant cover and localized fine-scale disturbance. We further examined DCT response to a hypothetical scenario of a warmer wetter early spring by substituting spring 2008 images with 2007 ones. In this comparison, the strongest response was detected from DCTs that were closely associated with the water body and represented critical habitat for wintering migratory waterbirds in this area. Results indicate that object-based dynamic class boundaries may provide useful spatial units to highlight characteristic types of landscape change for environmental research, ecosystem monitoring and management considerations.

  15. Evaluation of satellite-based precipitation estimates in winter season using an object-based approach

    NASA Astrophysics Data System (ADS)

    Li, J.; Hsu, K.; AghaKouchak, A.; Sorooshian, S.

    2012-12-01

    Verification has become an integral component of satellite precipitation algorithms and products. A number of object-based verification methods have been proposed to provide diagnostic information regarding a precipitation product's ability to capture the spatial pattern, intensity, and placement of precipitation. However, most object-based methods are not capable of investigating precipitation objects at the storm scale. In this study, an image processing approach known as watershed segmentation was adopted to detect storm-scale rainfall objects. Then, a fuzzy logic-based technique was utilized to diagnose and analyze storm-scale object attributes, including centroid distance, area ratio, intersection area ratio and orientation angle difference. Three verification metrics (false alarm ratio, missing ratio and overall membership score) were generated for validation and verification. Three satellite-based precipitation products (PERSIANN, CMORPH, and 3B42RT) were evaluated against the NOAA stage IV MPE multi-sensor composite rain analysis at 0.25° by 0.25° on a daily scale for the winter season of 2010 over the contiguous United States. The winter season is dominated by frontal systems, which usually have larger area coverage. All three products and the stage IV observation tend to find large storm objects. With respect to the evaluation attributes, PERSIANN tends to obtain a larger area ratio and consequently has a larger centroid distance to the stage IV observations, while 3B42RT is found to be closer to stage IV in object size. All evaluated products give small orientation angle differences but vary significantly in missing ratio and false alarm ratio. This implies that satellite estimates can fail to detect storms in winter. The overall membership scores are close for all three products, which indicates that all three satellite-based precipitation products perform well in capturing the spatial and geometric characteristics of
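
    A minimal sketch of the watershed-based detection of storm-scale rain objects described above: local maxima of a smoothed rain field seed a watershed on the inverted field. The rain field is synthetic and all parameter values are illustrative:

      import numpy as np
      from scipy import ndimage
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      rng = np.random.default_rng(5)
      rain = ndimage.gaussian_filter(rng.random((120, 120)), sigma=6)  # toy field
      mask = rain > rain.mean()                      # raining area
      peaks = peak_local_max(rain, min_distance=15, labels=mask.astype(int))
      markers = np.zeros_like(rain, dtype=int)
      markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
      objects = watershed(-rain, markers, mask=mask)  # one label per storm object
      print("storm objects:", objects.max())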

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1990-01-01

    Reviews three computer software: (1) "Elastic Lines: The Electronic Geoboard" on elementary geometry; (2) "Wildlife Adventures: Whales" on environmental science; and (3) "What Do You Do with a Broken Calculator?" on computation and problem solving. Summarizes the descriptions, strengths and weaknesses, and applications of each software. (YP)

  17. Software Repository

    NASA Technical Reports Server (NTRS)

    Merwarth, P., D.

    1983-01-01

    The Common Software Module Repository (CSMR) is computerized library system with high product and service visibility to potential users. Online capabilities of system allow both librarian and user to interact with library. Librarian is responsible for maintaining information in CSMR library. User searches library to locate software modules that meet his or her current needs.

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Miller, Anne, Ed.; Radziemski, Cathy, Ed.

    1988-01-01

    Reviews two software packages for the Macintosh series. "Course Builder 2.0," a courseware authoring system, allows the user to create programs which stand alone and may be used independently in the classroom. "World Builder," an artificial intelligence software package, allows creative thinking, problem-solving, and decision-making. (YP)

  19. Software Bridge

    NASA Technical Reports Server (NTRS)

    1995-01-01

    I-Bridge is a commercial version of software developed by I-Kinetics under a NASA Small Business Innovation Research (SBIR) contract. The software allows users of Windows applications to gain quick, easy access to databases, programs and files on UNIX servers. Information goes directly onto spreadsheets and other applications; users need not manually locate, transfer and convert data.

  20. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen

    1988-01-01

    Presents reviews of six computer software programs for teaching science. Provides the publisher, grade level, cost, and descriptions of software, including: (1) "Recycling Logic"; (2) "Introduction to Biochemistry"; (3) "Food for Thought"; (4) "Watts in a Home"; (5) "Geology in Action"; and (6) "Biomes." All are for Apple series microcomputers.…

  1. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called 'objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and

  2. Object-based classification as an alternative approach to the traditional pixel-based classification to identify potential habitat of the grasshopper sparrow.

    PubMed

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image, creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided. PMID:17985180

  3. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    NASA Astrophysics Data System (ADS)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image, creating homogeneous polygons with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  4. Quantification of Impervious Surfaces Along the Wasatch Front, Utah: An Object-Based Image Analysis Approach to Identifying an Indicator for Wetland Stress

    NASA Astrophysics Data System (ADS)

    Leydsman-McGinty, E. I.; Ramsey, R. D.; McGinty, C.

    2013-12-01

    The Remote Sensing/GIS Laboratory at Utah State University, in cooperation with the United States Environmental Protection Agency, is quantifying impervious surfaces for three watershed sub-basins in Utah. The primary objective of developing watershed-scale quantifications of impervious surfaces is to provide an indicator of potential impacts to wetlands that occur within the Wasatch Front and along the Great Salt Lake. A geospatial layer of impervious surfaces can assist state agencies involved with Utah's Wetlands Program Plan (WPP) in understanding the impacts of impervious surfaces on wetlands, as well as support them in carrying out goals and actions identified in the WPP. The three watershed sub-basins, Lower Bear-Malad, Lower Weber, and Jordan, span the highly urbanized Wasatch Front and are consistent with focal areas in need of wetland monitoring and assessment as identified in Utah's WPP. Geospatial layers of impervious surface currently exist in the form of national and regional land cover datasets; however, these datasets are too coarse to be utilized in fine-scale analyses. In addition, the pixel-based image processing techniques used to develop these coarse datasets have proven insufficient in smaller scale or detailed studies, particularly when applied to high-resolution satellite imagery or aerial photography. Therefore, object-based image analysis techniques are being implemented to develop the geospatial layer of impervious surfaces. Object-based image analysis techniques employ a combination of both geospatial and image processing methods to extract meaningful information from high-resolution imagery. Spectral, spatial, textural, and contextual information is used to group pixels into image objects and then subsequently used to develop rule sets for image classification. eCognition, an object-based image analysis software program, is being utilized in conjunction with one-meter resolution National Agriculture Imagery Program (NAIP) aerial

  5. Software Smarts

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under an SBIR (Small Business Innovation Research) contract with Johnson Space Center, Knowledge Based Systems Inc. (KBSI) developed an intelligent software environment for modeling and analyzing mission planning activities, simulating behavior, and, using a unique constraint propagation mechanism, updating plans with each change in mission planning activities. KBSI developed this technology into a commercial product, PROJECTLINK, a two-way bridge between PROSIm, KBSI's process modeling and simulation software, and leading project management software like Microsoft Project and Primavera's SureTrak Project Manager.

  6. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
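
    As one concrete illustration of such Python-based tooling, the sketch below shows a minimal test module for pytest, one commonly used test runner; the function under test is invented for the example.

      # test_stats.py -- run with `pytest test_stats.py`
      # Minimal example of Python unit tests; the function under test
      # is invented for illustration.
      import pytest

      def weighted_mean(values, weights):
          if len(values) != len(weights) or not values:
              raise ValueError("values and weights must be equal-length, non-empty")
          return sum(v * w for v, w in zip(values, weights)) / sum(weights)

      def test_weighted_mean_uniform_weights():
          assert weighted_mean([1.0, 2.0, 3.0], [1, 1, 1]) == 2.0

      def test_weighted_mean_rejects_mismatched_input():
          with pytest.raises(ValueError):
              weighted_mean([1.0], [1, 2])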

  7. Object Based Image Analysis Combining High Spatial Resolution Imagery and Laser Point Clouds for Urban Land Cover

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    With the rapid development of sensor technology, high spatial resolution imagery and airborne Lidar point clouds can now be captured, making classification, extraction, evaluation and analysis of a broad range of object features available. High resolution imagery, Lidar datasets and parcel maps can be widely used for classification as information carriers. Therefore, refinement of object classification is made possible for urban land cover. The paper presents an approach to object based image analysis (OBIA) combining high spatial resolution imagery and airborne Lidar point clouds. The advanced workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and the nDSM, respectively. Secondly, image objects are created via multi-resolution image segmentation, integrating the scale parameter and the colour and shape properties with a compactness criterion, so that the image can be subdivided into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a rule set in the form of a knowledge decision tree. The image objects are classified into six classes: water bodies, low vegetation/grass, tree, low building, high building and road. Finally, in order to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area focuses on test site Vaihingen/Enz, and a patch of test datasets comes from the benchmark of the ISPRS WG III/4 test project. The classification results show high overall accuracy for most types of urban land cover. Overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high
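
    The accuracy assessment step described above can be reproduced generically. The following sketch computes overall accuracy and Cohen's kappa from a confusion matrix; the 3x3 matrix is invented for illustration, not the paper's data.

      import numpy as np

      # Overall accuracy and Cohen's kappa from a confusion matrix
      # (rows = reference, columns = classification; values invented).
      cm = np.array([[50,  2,  1],
                     [ 3, 45,  4],
                     [ 0,  5, 40]], dtype=float)

      n = cm.sum()
      overall_accuracy = np.trace(cm) / n
      # Chance agreement from the product of row and column marginals.
      p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
      kappa = (overall_accuracy - p_expected) / (1.0 - p_expected)

      print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")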

  8. Semi-Automated Classification of Gray Scale Aerial Photographs using Geographic Object Based Image Analysis (GEOBIA) Technique

    NASA Astrophysics Data System (ADS)

    Harb Rabia, Ahmed; Terribile, Fabio

    2013-04-01

    Aerial photography is an important source of high resolution remotely sensed data. Before 1970, aerial photographs were the only remote sensing data source for land use and land cover classification. Using these old aerial photographs improves the final output of land use and land cover change detection. However, classic techniques of aerial photograph classification, like manual interpretation or on-screen digitization, require great experience, long processing time and vast effort. A new technique needs to be developed in order to reduce processing time and effort and to give better results. Geographic object based image analysis (GEOBIA) is a newly developed area of Geographic Information Science and remote sensing in which automatic segmentation of images into objects of similar spectral, temporal and spatial characteristics is undertaken. Unlike pixel-based techniques, GEOBIA deals with object properties such as texture, square fit, roundness and many other properties that can improve classification results. The GEOBIA technique can be divided into two main steps: segmentation and classification. The segmentation process groups adjacent pixels into objects of similar spectral and spatial characteristics. The classification process assigns classes to the generated objects based on the characteristics of the individual objects. This study aimed to use the GEOBIA technique to develop a novel approach for land use and land cover classification of aerial photographs that saves time and effort and gives improved results. Aerial photographs from 1954 of Valle Telesina in Italy were used in this study. Images were rectified and georeferenced in ArcMap using topographic maps. Images were then processed in eCognition software to generate the land use and land cover map of 1954. A decision tree rule set was developed in eCognition to classify the images, and finally nine classes of general land use and land cover in the study area were recognized (forest, trees stripes, agricultural

  9. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement to study past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R² > 0.85) with the diameters of manually detected impact craters.
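
    The paper's ruleset is not reproduced in the abstract; as a hedged sketch of the kind of shape criterion such OBIA rules commonly use, circularity (4*pi*area/perimeter², equal to 1 for a perfect circle) can flag near-circular objects as crater candidates. The threshold value below is an assumption, not the paper's.

      import math

      # Hypothetical shape criterion of the kind used in OBIA crater rules:
      # circularity = 4*pi*area / perimeter^2 (1.0 for a perfect circle).
      def is_crater_candidate(area_m2, perimeter_m, threshold=0.85):
          circularity = 4.0 * math.pi * area_m2 / perimeter_m**2
          return circularity >= threshold

      print(is_crater_candidate(area_m2=660.5, perimeter_m=91.1))   # True: near-circular
      print(is_crater_candidate(area_m2=400.0, perimeter_m=120.0))  # False: elongated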

  10. An object-based approach for areal rainfall estimation and validation of atmospheric models

    NASA Astrophysics Data System (ADS)

    Trömel, Silke; Simmer, Clemens

    2012-02-01

    Integral radar volume descriptors (IRVDs) are introduced and discussed as a pathway to an object-based characterization of precipitation systems. A set of IRVD values characterizes the temporal development of the precipitation systems, which constitute the objects. The IRVDs are based on the temporal evolution of the three-dimensional distribution of radar reflectivities produced by the objects. In a first step, a set of candidate descriptors is postulated that characterize a precipitating system observable by a scanning radar, e.g. the mean echo-top height or the temporal change of the bright-band depth of a raining system. In a second step, a statistical analysis identifies the descriptors that bear the most significant information about the surface precipitation yield of the system; these are the IRVDs whose values describe the objects. IRVDs are derived both from pseudo-radar observations retrieved from a weather forecast model and from real radar observations. Since different sets of IRVDs also suggest different precipitation generation mechanisms acting in the model and in reality, the IRVD concept is proposed as a more process-oriented approach to model validation. Finally, the potential of IRVDs to improve estimates of radar-derived precipitation system yields when used on top of Z-R relations is demonstrated.

  11. Geographic Object-Based Image Analysis – Towards a new paradigm

    PubMed Central

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been, and still is, sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects and ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and on an analysis of peer-reviewed scientific literature, we conclude that GEOBIA is a new and evolving paradigm. PMID:24623958

  12. A supervised method for object-based 3D building change detection on aerial stereo images

    NASA Astrophysics Data System (ADS)

    Qin, R.; Gruen, A.

    2014-08-01

    There is a great demand for studying the changes of buildings over time. The current trend in building change detection combines orthophotos and DSMs (Digital Surface Models). Pixel-based change detection methods are very sensitive to the quality of the images and DSMs, while object-based methods are more robust to these problems. In this paper, we propose a supervised method for building change detection. After a segment-based SVM (Support Vector Machine) classification with features extracted from the orthophoto and DSM, we focus on detecting building changes between different periods by measuring their height and texture differences, as well as their shapes. A decision tree analysis is used to assess the probability of change for each building segment, and a traffic-light system is used to indicate the statuses "change", "non-change" and "uncertain change" for building segments. The proposed method is applied to scanned aerial photos of the city of Zurich from 2002 and 2007, and the results demonstrate that our method is able to achieve high detection accuracy.

  13. Data Mining for Knowledge Discovery from Object-Based Segmentation of VHR Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Djerriri, K.; Malki, M.

    2013-04-01

    The success of the object-based image analysis (OBIA) paradigm can be attributed to the fact that the regions obtained by means of the segmentation process are depicted with a variety of spectral, shape, texture and context characteristics. These representative object attributes can be assigned to different land-cover/land-use types by means of two options. The first is to use supervised classifiers such as K-nearest neighbors (KNN) and Support Vector Machines (SVM); the second is to create classification rules. Supervised classifiers perform very well and generally have higher accuracies. However, one of their drawbacks is that they provide no explicit knowledge in understandable and interpretable forms. Building the rule set is generally based on domain expert knowledge when dealing with a small number of classes and a small number of attributes, but having dozens of continuous-valued attributes attached to each image object makes it a tedious task, and experts quickly become overwhelmed. This is where data mining techniques for knowledge discovery help to understand the hidden relationships between classes and their attached attributes. The aim of this paper is to highlight the benefits of using knowledge discovery and data-mining tools, especially rule induction algorithms, for useful and accurate information extraction from high spatial resolution remotely sensed imagery.
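
    As a minimal sketch of rule induction on object attributes, a shallow decision tree can be trained and its splits printed as explicit, human-readable rules; the feature names, values and classes below are invented, and scikit-learn is one of several libraries that could be used.

      from sklearn.tree import DecisionTreeClassifier, export_text

      # Rule induction sketch on synthetic object attributes: a shallow
      # decision tree yields explicit split rules, in contrast to opaque
      # classifiers such as KNN or SVM.
      X = [[0.12, 450], [0.15, 520], [0.61, 300],   # [mean_NDVI, brightness]
           [0.58, 280], [0.70, 260], [0.10, 600]]
      y = ["built-up", "built-up", "vegetation",
           "vegetation", "vegetation", "built-up"]

      tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=["mean_NDVI", "brightness"]))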

  14. Fully automated objective-based method for master recession curve separation.

    PubMed

    Posavec, Kristijan; Parlov, Jelena; Nakić, Zoran

    2010-01-01

    The fully automated objective-based method for master recession curve (MRC) separation was developed using a Microsoft Excel spreadsheet and Visual Basic for Applications (VBA) code. The core of the program code constructs an MRC using the adapted matching strip method (Posavec et al. 2006). Criteria for separating the MRC into two or three segments are determined from the flow-duration curve and are represented as a probable range of percent of flow rate duration. Successive separations into two- and three-segment MRCs are performed automatically using sets of percent of flow rate duration from the selected ranges, and the optimal separation model scenario, having the highest average coefficient of determination R², is selected as the most appropriate one. The resulting separated master recession curves are presented graphically, whereas the statistics are presented numerically, all in separate sheets. Examples of field data obtained from two springs in Istria, Croatia, are used to illustrate its application. The freely available Excel spreadsheet and VBA program ensure ease of use and applicability to larger data sets. PMID:20100291
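
    The published tool is an Excel/VBA workbook; the following Python sketch reproduces only its core selection rule under simplifying assumptions: candidate two-segment separations are scored by the average R² of piecewise log-linear fits, and the best-scoring breakpoint wins. The synthetic recession limb is invented for illustration.

      import numpy as np

      def r_squared(t, log_q):
          """R^2 of a linear fit of log(Q) against time."""
          slope, intercept = np.polyfit(t, log_q, 1)
          resid = log_q - (slope * t + intercept)
          return 1.0 - np.sum(resid**2) / np.sum((log_q - log_q.mean())**2)

      def best_two_segment_split(t, q, candidates):
          """Pick the breakpoint with the highest average segment R^2."""
          log_q = np.log(q)
          best = None
          for i in candidates:
              avg_r2 = 0.5 * (r_squared(t[:i], log_q[:i]) +
                              r_squared(t[i:], log_q[i:]))
              if best is None or avg_r2 > best[1]:
                  best = (i, avg_r2)
          return best

      # Synthetic two-regime recession: steep decay, then slow decay.
      t = np.arange(30, dtype=float)
      q = np.where(t < 12, 10 * np.exp(-0.30 * t),
                   10 * np.exp(-0.30 * 12) * np.exp(-0.05 * (t - 12)))
      print(best_two_segment_split(t, q, candidates=range(5, 25)))
      # Expect a breakpoint near index 12, where the regime changes.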

  15. A Framework for Geographic Object-Based Image Analysis (GEOBIA) based on geographic ontology

    NASA Astrophysics Data System (ADS)

    Gu, H. Y.; Li, H. T.; Yan, L.; Lu, X. J.

    2015-06-01

    GEOBIA (Geographic Object-Based Image Analysis) is not only a hot topic in current remote sensing and geographical research; it is also believed to be a paradigm in remote sensing and GIScience. The lack of a systematic approach designed to conceptualize and formalize class definitions makes GEOBIA a highly subjective method that is difficult to reproduce. This paper aims to put forward a framework for GEOBIA based on geographic ontology theory that can implement a faithful "geographic entities - image objects - geographic objects" correspondence. It consists of three steps: first, geographic entities are described by a geographic ontology; second, a semantic network model is built based on OWL (Web Ontology Language); finally, geographic objects are classified with decision rules or other classifiers. A case study of a farmland ontology was conducted to illustrate the framework. The strength of this framework is that it provides interpretation strategies and a global framework for GEOBIA with objective and universal properties, which avoids inconsistencies caused by differences in experts' experience and provides an objective model for image analysis.

  16. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    PubMed

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores is used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm achieved 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
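
    The HMM decoding step can be illustrated with a minimal Viterbi sketch. The two states and the transition and emission probabilities below are invented, and the paper's training procedure (Viterbi path counting, SVI) is not reproduced.

      import numpy as np

      def viterbi(obs, pi, A, B):
          """Most likely state sequence for a list of observation indices."""
          n_states, T = A.shape[0], len(obs)
          delta = np.zeros((T, n_states))           # best log-probabilities
          psi = np.zeros((T, n_states), dtype=int)  # back-pointers
          delta[0] = np.log(pi) + np.log(B[:, obs[0]])
          for t in range(1, T):
              scores = delta[t - 1][:, None] + np.log(A)
              psi[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(psi[t][path[-1]]))
          return path[::-1]

      # Invented two-state model: 0 = stable, 1 = emergency-prone.
      pi = np.array([0.8, 0.2])                     # initial distribution
      A = np.array([[0.9, 0.1], [0.3, 0.7]])        # transition matrix
      B = np.array([[0.7, 0.3], [0.2, 0.8]])        # emissions (2 symbols)
      print(viterbi([0, 0, 1, 1, 1], pi, A, B))     # -> [0, 0, 1, 1, 1]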

  17. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture

    PubMed Central

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-01-01

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: whenever one feature dimension is selected for entry into VWM, the others are also extracted. To date, most studies revealing OBE have probed an ‘irrelevant-change distracting effect’, where changes of irrelevant features dramatically affect performance on the target feature. However, the existence of irrelevant-feature change may affect participants’ processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM by probing whether irrelevant features guided the deployment of attention in visual search. The participants memorized an object’s colour yet ignored its shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM. PMID:26956084

  18. Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.

    2016-06-01

    This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) for the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which carries a consumer digital camera, is used to collect dynamic videos to overcome its limited endurance capacity. A set of 3D point clouds is then generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information beyond the reach of the UAV imaging, e.g., parts of the building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm is introduced that uses local terrain-invariant regions in the combined registration. The experimental study was conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
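
    As a hedged illustration of the registration step, the sketch below implements a basic nearest-neighbour ICP with an SVD-based rigid alignment; the paper's variant additionally restricts correspondences to local terrain-invariant regions, which is not reproduced here. The test point cloud is synthetic.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src -> dst."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, c_dst - R @ c_src

      def icp(src, dst, iters=30):
          """Basic ICP: closest-point correspondences + rigid alignment."""
          tree = cKDTree(dst)
          cur = src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)
              R, t = best_rigid_transform(cur, dst[idx])
              cur = cur @ R.T + t
          return cur

      rng = np.random.default_rng(0)
      dst = rng.random((200, 3))
      theta = 0.1
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]])
      src = (dst - 0.05) @ Rz.T               # misaligned copy of dst
      print(np.abs(icp(src, dst) - dst).max())  # should shrink towards 0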

  19. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    NASA Astrophysics Data System (ADS)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image; no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is then developed based on the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of the city of Sherbrooke (Canada). An overall extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.

  20. Object-Based Fusion of Envisat ASAR and HJ-1 Multispectral Images for Urban Landcover Mapping

    NASA Astrophysics Data System (ADS)

    Vu, Tuong-Thuy; Ban, Yifang

    2010-12-01

    The objective of this research is to investigate the synergy of ESA ENVISAT ASAR data and Chinese HJ-1 multispectral data for urban land cover mapping in Beijing, China. A newly developed fusion approach is applied to map a complex urban area, which consists of both old and new built-up areas, using ENVISAT ASAR data acquired on 31 July 2008 and HJ-1B multispectral imagery acquired on 12 May 2009. First, the radar and optical images are classified and segmented separately using scale-space analysis integrated with shape analysis, hybrid pixel/object-based unsupervised classification and a histogram intersection technique. Second, based on a set of predefined rules, the objects extracted in the first step are fused to produce the final classified map. The developed fusion method is fully automatic with little user interaction and would be a suitable tool for operational use if satisfactory accuracy can be achieved. Future research is planned to improve the algorithm's performance with the addition of multi-temporal SAR data.

  1. Glandular object based tumor morphometry in H&E biopsy samples for prostate cancer prognosis

    NASA Astrophysics Data System (ADS)

    Fogarasi, Stephen I.; Khan, Faisal M.; Pang, Ho-Yuen H.; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo

    2011-03-01

    Morphological and architectural characteristics of primary prostate tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide critical information for cancer diagnosis, prognosis and therapeutic response prediction. The subjective and variable Gleason grade assessed by expert pathologists in Hematoxylin and Eosin (H&E) stained specimens has been the standard for prostate cancer diagnosis and prognosis. We propose a novel morphometric, glandular object-oriented image analysis approach for the robust quantification of H&E prostate biopsy images. We demonstrate the utility of features extracted through the proposed method in predicting disease progression post treatment in a multi-institution cohort of 1027 patients. The biopsy-based features were univariately predictive of clinical response post therapy, with concordance indexes (CI) <= 0.4 or >= 0.6. In multivariate analysis, a glandular object feature quantifying tumor epithelial cells not directly associated with an intact tumor gland was selected in a model incorporating preoperative clinical data, a protein biomarker and morphological imaging features. The model achieved a CI of 0.73 in validation, significantly higher than the CI of 0.69 for the standard multivariate model based solely on the clinical features currently used in clinical practice. This work presents one of the first demonstrations of glandular object-based morphological features in H&E stained biopsy specimens predicting disease progression post primary treatment. Additionally, it is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
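
    For readers unfamiliar with the concordance index, the sketch below computes a basic CI on uncensored data; the outcome and score values are invented, and survival data with censoring requires a more careful definition of usable pairs. Values near 0.5 indicate chance-level prediction; values near 0 or 1 indicate strong (anti-)concordance.

      from itertools import combinations

      def concordance_index(times, scores):
          """Fraction of usable pairs whose score ordering matches the
          outcome ordering (ties in score count 0.5)."""
          concordant, usable = 0.0, 0
          for (t1, s1), (t2, s2) in combinations(zip(times, scores), 2):
              if t1 == t2:
                  continue                    # pair not usable
              usable += 1
              if (s1 - s2) * (t1 - t2) > 0:
                  concordant += 1.0           # same ordering
              elif s1 == s2:
                  concordant += 0.5           # tied prediction
          return concordant / usable

      times = [5.0, 8.0, 3.0, 12.0]           # e.g. months to progression
      scores = [4.8, 7.5, 3.9, 10.0]          # model predictions
      print(concordance_index(times, scores)) # 1.0: perfectly concordant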

  2. Object-based attention benefits reveal selective abnormalities of visual integration in autism.

    PubMed

    Falter, Christine M; Grant, Kate C Plaisted; Davis, Greg

    2010-06-01

    A pervasive integration deficit could provide a powerful and elegant account of cognitive processing in autism spectrum disorders (ASD). However, in the case of visual Gestalt grouping, typically assessed by tasks that require participants explicitly to introspect on their own grouping perception, clear evidence for such a deficit remains elusive. To resolve this issue, we adopt an index of Gestalt grouping from the object-based attention literature that does not require participants to assess their own grouping perception. Children with ASD and mental- and chronological-age matched typically developing children (TD) performed speeded orientation discriminations of two diagonal lines. The lines were superimposed on circles that were either grouped together or segmented on the basis of color, proximity or these two dimensions in competition. The magnitude of performance benefits evident for grouped circles, relative to ungrouped circles, provided an index of grouping under various conditions. Children with ASD showed comparable grouping by proximity to the TD group, but reduced grouping by similarity. ASD seems characterized by a selective bias away from grouping by similarity combined with typical levels of grouping by proximity, rather than by a pervasive integration deficit. PMID:20578070

  3. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    PubMed

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-01-01

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: whenever one feature dimension is selected for entry into VWM, the others are also extracted. To date, most studies revealing OBE have probed an 'irrelevant-change distracting effect', where changes of irrelevant features dramatically affect performance on the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM by probing whether irrelevant features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored its shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM. PMID:26956084

  4. Object-based approach to national land cover mapping using HJ satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Xiaosong; Yuan, Quanzhi; Liu, Yu

    2014-01-01

    To meet the needs of carbon storage estimation in ecosystems for a national carbon strategy, we introduce a consistent database of China land cover. The Chinese Huan Jing (HJ) satellite has proven efficient in the cloud-free acquisition of seasonal image series in a monsoon region and in vegetation identification for mesoscale land cover mapping. Thirty-eight classes of level II land cover are generated based on the Land Cover Classification System of the United Nations Food and Agriculture Organization, which follows a standard and quantitative definition. Twenty-four layers of derivative spectral, environmental, and spatial features compose the classification database. An object-based approach characterizing additional nonspectral features is applied throughout the mapping, and multiscale segmentations are used to match object boundaries to real-world conditions. This method fully exploits spatial information, in addition to spectral characteristics, to improve classification accuracy. A hierarchical classification algorithm is employed to follow step-by-step procedures that effectively control classification quality. The algorithm is divided into a dual structure of universal and local trees. Consistent universal trees suitable for most regions are applied first, followed by local trees that depend on the specific features of nine climate stratifications. Independent validation indicates that the overall accuracy reaches 86%.

  5. Object-based image analysis for the impact of sewage pollution in Malad Creek, Mumbai, India.

    PubMed

    Shirke, Shivani; Pinto, Shannon M; Kushwaha, Vikash K; Mardikar, Trupti; Vijay, Ritesh

    2016-02-01

    Today, object-based image analysis provides an option for integrating spatial information beyond conventional pixel-based classifications for high-resolution imagery. Because it has rarely been applied to pollution assessment, an attempt has been made to assess the spatial extent of sewage pollution in Malad Creek, Mumbai, India. Based on multiresolution segmentation of an IRS P6 (LISS IV) image and the Normalized Difference Turbidity Index (NDTI), the various water quality regions in the creek were classified. The existing literature implies that the reflectance of turbid water is similar to that of bare soil, which gives positive NDTI values. In contrast, negative values of NDTI are observed in the present study due to the presence of organic matter, which absorbs light and imparts turbidity; this is supported by the significant correlation between NDTI and turbidity. A strong relationship is observed between turbidity and water quality parameters, implying the impact of organic matter through discharges of sewage into the creek. Based on the classified regions and the water quality parameters, the extent of pollution was ranked as high, moderate, low and least. The methodology developed in the present study was successfully applied to an IKONOS image of the same study area but a different time frame. The approach will help in the impact assessment of sewage pollution and its spatial extent in other water bodies. PMID:26780414
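
    The NDTI referred to above is commonly computed from red and green reflectances. A minimal sketch, assuming the usual (Red - Green) / (Red + Green) formulation and invented band values:

      import numpy as np

      def ndti(red, green, eps=1e-9):
          """Normalized Difference Turbidity Index, (R - G) / (R + G)."""
          red, green = np.asarray(red, float), np.asarray(green, float)
          return (red - green) / (red + green + eps)

      # Invented reflectances: green-dominated water yields a negative
      # value, red-dominated (turbid, soil-like) yields a positive one.
      print(ndti([0.09, 0.20], [0.14, 0.12]))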

  6. Object-based forest classification to facilitate landscape-scale conservation in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Mitchell, Michael; Wilson, R. Randy; Twedt, Daniel J.; Mini, Anne E.; James, J. Dale

    2016-01-01

    The Mississippi Alluvial Valley is a floodplain along the southern extent of the Mississippi River extending from southern Missouri to the Gulf of Mexico. This area once encompassed nearly 10 million ha of floodplain forests, most of which have been converted to agriculture over the past two centuries. Conservation programs in this region revolve around the protection of existing forest and the reforestation of converted lands. Therefore, an accurate and up-to-date classification of forest cover is essential for conservation planning, including efforts that prioritize areas for conservation activities. We used object-based image analysis with Random Forest classification to quickly and accurately classify forest cover. We used Landsat band, band ratio, and band index statistics to identify and define similar objects as our training sets instead of selecting individual training points. This provided a single rule set that was used to classify each of the 11 Landsat 5 Thematic Mapper scenes that encompass the Mississippi Alluvial Valley. We classified 3,307,910±85,344 ha (32% of this region) as forest. Our overall classification accuracy was 96.9% with a Kappa statistic of 0.96. Because this method of forest classification is rapid and accurate, the assessment of forest cover can be regularly updated, and progress toward the forest habitat goals identified in conservation plans can be periodically evaluated.
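
    A minimal sketch of object-based Random Forest classification in the spirit of this workflow, using scikit-learn on synthetic per-object band statistics; the features, class balance and parameters are invented, not the study's.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import cohen_kappa_score

      # Synthetic "object statistics" (e.g. red, NIR, NDVI means per
      # segment) for two classes; values are invented for illustration.
      rng = np.random.default_rng(42)
      n = 400
      forest_feats = rng.normal([0.05, 0.30, 0.8], 0.05, size=(n, 3))
      nonforest_feats = rng.normal([0.15, 0.25, 0.3], 0.05, size=(n, 3))
      X = np.vstack([forest_feats, nonforest_feats])
      y = np.array(["forest"] * n + ["non-forest"] * n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      rf.fit(X_tr, y_tr)
      print("accuracy:", rf.score(X_te, y_te))
      print("kappa:", cohen_kappa_score(y_te, rf.predict(X_te)))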

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six computer software packages including "Lunar Greenhouse,""Dyno-Quest,""How Weather Works,""Animal Trackers,""Personal Science Laboratory," and "The Skeletal and Muscular Systems." Availability, functional, and hardware requirements are discussed. (CW)

  8. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane

    1990-01-01

    Reviews two programs: (1) "The Weather Machine" on understanding weather and weather forecasting and (2) "The Mystery of the Hotel Victoria" on problem solving in mathematics. Presents the descriptions, advantages, and weaknesses of the software. (YP)

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Davis, Shelly J., Ed.; Knaupp, Jon, Ed.

    1984-01-01

    Reviewed is computer software on: (1) classification of living things, a tutorial program for grades 5-10; and (2) polynomial practice using tiles, a drill-and-practice program for algebra students. (MNS)

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Provides a review of four science software programs. Includes topics such as plate tectonics, laboratory experiment simulations, the human body, and light and temperature. Contains information on ordering and reviewers' comments. (ML)

  11. Software Reviews.

    ERIC Educational Resources Information Center

    Dwyer, Donna; And Others

    1989-01-01

    Reviewed are seven software packages for Apple and IBM computers. Included are: "Toxicology"; "Science Corner: Space Probe"; "Alcohol and Pregnancy"; "Science Tool Kit Plus"; Computer Investigations: Plant Growth"; "Climatrolls"; and "Animal Watch: Whales." (CW)

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Kimball, Jeffrey P.; And Others

    1987-01-01

    Describes a variety of computer software. The packages reviewed include a variety of simulations, a spreadsheet, a printer driver, and an alternative operating system for IBM PCs and compatibles. (BSR)

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Eugene T., Ed.

    1988-01-01

    Presents reviews by classroom teachers of software for teaching science. Includes material on the work of geologists, genetics, earth science, classification of living things, astronomy, endangered species, skeleton, drugs, and heartbeat. Provides information on availability and equipment needed. (RT)

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are three computer software packages including "Martin Luther King, Jr.: Instant Replay of History,""Weeds to Trees," and "The New Print Shop, School Edition." Discussed are hardware requirements, costs, grade levels, availability, emphasis, strengths, and weaknesses. (CW)

  15. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1988

    1988-01-01

    Presents reviews of six software packages. Includes (1) "Plain Vanilla Statistics"; (2) "MathCAD 2.0"; (3) "GrFx"; (4) "Trigonometry"; (5) "Algebra II"; (6) "Algebra Drill and Practice I, II, and III." (PK)

  16. Astronomy Software

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Software Bisque's TheSky, SkyPro and Remote Astronomy Software incorporate technology developed for the Hubble Space Telescope. TheSky and SkyPro work together to orchestrate locating, identifying and acquiring images of deep sky objects. With all three systems, the user can directly control computer-driven telescopes and charge coupled device (CCD) cameras through serial ports. Through the systems, astronomers and students can remotely operate a telescope at the Mount Wilson Observatory Institute.

  17. SAR Product Control Software

    NASA Astrophysics Data System (ADS)

    Meadows, P. J.; Hounam, D.; Rye, A. J.; Rosich, B.; Börner, T.; Closa, J.; Schättler, B.; Smith, P. J.; Zink, M.

    2003-03-01

    As SAR instruments and their operating modes become more complex, as new applications place more and more demands on image quality, and as our understanding of their imperfections becomes more sophisticated, there is increasing recognition that SAR data quality has to be controlled more completely to keep pace. The SAR product CONtrol software (SARCON) is a comprehensive SAR product control software suite tailored to the latest generation of SAR sensors. SARCON benefits from the most up-to-date thinking on SAR image performance derived from other spaceborne and airborne SAR projects and is based on the newest applications. This paper gives an overview of the structure and the features of this new software tool, which is a product of a co-operation between teams at the BAE SYSTEMS Advanced Technology Centre and DLR under contract to ESA (ESRIN). Work on SARCON began in 1999 and is continuing.

  18. Sensor Validation Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Under a Small Business Innovation Research contract from Lewis Research Center, Expert Microsystems, Inc. developed SureSense, real-time sensor data validation software. This ultra-reliable control and sensing system product was produced through a partnership in 1994 between Expert Microsystems and Intelligent Software Associates, Inc. SureSense was created in response to a NASA need for verifying the reliability of sensor input that operated advanced automation and control systems. The immediate applications included improving the safety and reliability of Space Shuttle Main Engine operations. The company has structured the software to enable application to virtually any process control environment, such as computer integrated manufacturing, power plants, and hazardous gas sensing and control systems.

  19. Fault tolerant software modules for SIFT

    NASA Technical Reports Server (NTRS)

    Hecht, M.; Hecht, H.

    1982-01-01

    The implementation of software fault tolerance is investigated for critical modules of the Software Implemented Fault Tolerance (SIFT) operating system to support the computational and reliability requirements of advanced fly-by-wire transport aircraft. Fault tolerant designs generated for the error reporter and global executive are examined. A description of the alternate routines, implementation requirements, and software validation is included.

  20. Addressing Software Security

    NASA Technical Reports Server (NTRS)

    Bailey, Brandon

    2015-01-01

    Historically, security within organizations was thought of as an IT function (web sites/servers, email, workstation patching, etc.). The threat landscape has since evolved (script kiddies, hackers, advanced persistent threats (APT), nation states, etc.), and the attack surface has expanded as networks have become interconnected. Security posture factors include the network layer (routers, firewalls, etc.), computer network defense (IPS/IDS, sensors, continuous monitoring, etc.), industrial control systems (ICS), and software security (COTS, FOSS, custom code, etc.).

  1. A Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of single pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size and the characteristics of the study area, is crucial for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for each of eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). A comparison of region-based and image-based segmentation, carried out on the classified images, found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
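
    The LV-RoC logic can be sketched generically: the rate of change (RoC) of local variance (LV) across increasing scale parameters is computed, and its local peaks mark candidate segmentation scales. The LV series below is invented; in practice, each LV value comes from segmenting the image at that scale.

      import numpy as np

      # Invented LV values for scale parameters 10, 20, ..., 100.
      scales = np.arange(10, 101, 10)
      lv = np.array([4.1, 5.9, 6.4, 8.8, 9.1, 9.3, 11.9, 12.1, 12.2, 12.3])

      # Rate of change of LV between consecutive scales, in percent.
      roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]

      # Local peaks in RoC suggest candidate scales (roc[i] belongs to
      # scales[i + 1], the upper scale of each transition).
      peaks = [scales[i + 1] for i in range(1, len(roc))
               if roc[i] > roc[i - 1]
               and (i + 1 == len(roc) or roc[i] > roc[i + 1])]
      print("candidate scales:", peaks)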

  2. An object-based image analysis of pinyon and juniper woodlands treated to reduce fuels.

    PubMed

    Hulet, April; Roundy, Bruce A; Petersen, Steven L; Jensen, Ryan R; Bunting, Stephen C

    2014-03-01

    Mechanical and prescribed fire treatments are commonly used to reduce fuel loads and maintain or restore sagebrush steppe rangelands across the Great Basin where pinyon (Pinus) and juniper (Juniperus) trees are encroaching and infilling. Geospatial technologies, particularly remote sensing, could potentially be used in these ecosystems to (1) evaluate the longevity of fuel reduction treatments, (2) provide data for planning and designing future fuel-reduction treatments, and (3) assess the spatial distribution of horizontal fuel structure following fuel-reduction treatments. High-spatial-resolution color-infrared imagery (0.06-m pixels) was acquired for pinyon and juniper woodland plots where fuels were reduced by either prescribed fire, tree cutting, or mastication at five sites in Oregon, California, Nevada, and Utah. Imagery was taken with a Vexcel UltraCam X digital camera in June 2009. Within each treatment plot, ground cover was measured as part of the Sagebrush Steppe Treatment Evaluation Project. Trimble eCognition Developer was used to classify land cover classes using object-based image analysis (OBIA) techniques. Differences between cover estimates using OBIA and ground measurements were not consistently higher or lower for any land cover class and, when evaluated for individual sites, were within ±5 % of each other. The overall accuracy and the K-hat statistic for the classified thematic maps for each treatment were: prescribed burn, 85 % and 0.81; cut and fell, 82 % and 0.77; and mastication, 84 % and 0.80. Although cover assessments from OBIA differed somewhat from ground measurements, they are sufficiently accurate to evaluate treatment success and to support a broad range of management concerns. PMID:24402578

  3. An Object-Based Image Analysis of Pinyon and Juniper Woodlands Treated to Reduce Fuels

    NASA Astrophysics Data System (ADS)

    Hulet, April; Roundy, Bruce A.; Petersen, Steven L.; Jensen, Ryan R.; Bunting, Stephen C.

    2014-03-01

    Mechanical and prescribed fire treatments are commonly used to reduce fuel loads and maintain or restore sagebrush steppe rangelands across the Great Basin where pinyon (Pinus) and juniper (Juniperus) trees are encroaching and infilling. Geospatial technologies, particularly remote sensing, could potentially be used in these ecosystems to (1) evaluate the longevity of fuel reduction treatments, (2) provide data for planning and designing future fuel-reduction treatments, and (3) assess the spatial distribution of horizontal fuel structure following fuel-reduction treatments. High-spatial-resolution color-infrared imagery (0.06-m pixels) was acquired for pinyon and juniper woodland plots where fuels were reduced by either prescribed fire, tree cutting, or mastication at five sites in Oregon, California, Nevada, and Utah. Imagery was taken with a Vexcel UltraCam X digital camera in June 2009. Within each treatment plot, ground cover was measured as part of the Sagebrush Steppe Treatment Evaluation Project. Trimble eCognition Developer was used to classify land cover classes using object-based image analysis (OBIA) techniques. Differences between cover estimates using OBIA and ground measurements were not consistently higher or lower for any land cover class and, when evaluated for individual sites, were within ±5 % of each other. The overall accuracy and the K-hat statistic for the classified thematic maps for each treatment were: prescribed burn, 85 % and 0.81; cut and fell, 82 % and 0.77; and mastication, 84 % and 0.80. Although cover assessments from OBIA differed somewhat from ground measurements, they are sufficiently accurate to evaluate treatment success and to support a broad range of management concerns.

  4. Coregistration refinement of hyperspectral images and DSM: An object-based approach using spectral information

    NASA Astrophysics Data System (ADS)

    Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe

    2015-02-01

    For image fusion in remote sensing applications, the georeferencing accuracy achieved using position, attitude, and camera calibration measurements can be insufficient. Thus, image processing techniques should be employed for precise coregistration of images. In this article, a method for multimodal object-based image coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in the HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from the HSI and DSM, respectively. We refer to urban objects as objects which are typical in urban environments, and we focus on buildings by describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and in the determination of the transformation parameters. Hence, a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and the quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for the coregistration of HSI and DSM. The extraction of building outlines, in comparison to line cue extraction, provides a larger number of assigned lines between the images and is more robust to outliers, i.e. false matches.

  5. Mapping Eroded Areas on Mountain Grassland with Terrestrial Photogrammetry and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens

    2016-06-01

    In the Alps, as well as in other mountain regions, steep grassland is frequently affected by shallow erosion. Often, small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare earth surface patches within the grass-covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper, we propose to delineate the boundary of eroded areas for selected scenes of a study area using close-range photogrammetric data. Striving for an efficient, objective and reproducible workflow for this task, we developed an approach for the automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested on ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The method proved to be insensitive to differences in the illumination of the scenes and the greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded area in the field with a high level of detail and quality. In future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser resolution aerial orthophotos acquired at the same time.
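
    A per-pixel sketch of the ExG-plus-automated-threshold idea, assuming Otsu's method for threshold selection; the paper works on segmented objects and its exact threshold selection procedure may differ, and the RGB array below is invented.

      import numpy as np
      from skimage.filters import threshold_otsu

      # Invented RGB reflectance array standing in for an orthophoto tile.
      rgb = np.random.default_rng(1).random((100, 100, 3))
      r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

      exg = 2.0 * g - r - b              # Excess Green Vegetation Index

      t = threshold_otsu(exg)            # automated threshold selection
      grass_mask = exg > t               # high ExG -> vegetated (grass)
      print("grass fraction:", grass_mask.mean())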

  6. An Object-Based Method for Estimation of River Discharge from Remotely-Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Burgett, D. A.; Blesius, L.; Davis, J. D.

    2010-12-01

    High resolution satellite and aerial imagery of fluvial systems contains much information about planform river channel features. However, not much is known about how these forms are quantitatively related to river channel processes, specifically discharge. This research explores methods for remote, image-based river discharge estimation through Object-Based Image Analysis (OBIA) and GIS techniques. Previous efforts in image-based discharge estimation have relied primarily on manual delineation of river features and the input of reach-averaged values of these features into statistically based models for estimation. In addition to analyzing OBIA techniques for channel feature delineation and measurement, this approach investigates techniques for discharge estimation model design, validation, and correction along a reach, utilizing variation in “standard” channel features (e.g. water surface width) along with less tangible channel feature metrics derived from OBIA. Rather than predefining the channel unit of analysis, this work also considers the accuracy of model parameters derived from a range of channel scales, from longer reach-averaged to cross-sectional. High resolution (1 m) color infrared orthoimagery from the 2005 and 2009 National Agricultural Inventory Program (NAIP) covering 50 river reaches (ranging in discharge from approximately 13 m3s-1 to 856 m3s-1) was utilized for this analysis. These reaches, all near United States Geological Survey (USGS) river gages in California, USA, were split randomly and evenly into 25 reaches each for model design and validation, respectively. This approach allows better isolation of error resulting from user bias in channel feature measurement, and moves toward a more precise, standardized system of measurement for remotely observable channel form.
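
    The paper's statistical models are not given in the abstract. As a hedged baseline of the kind often used in image-based discharge estimation, a hydraulic-geometry power law can be fit between width and discharge on the design reaches and inverted for validation reaches; all numbers below are synthetic, not the NAIP/USGS data.

      import numpy as np

      # Synthetic design set: reach-averaged widths (m) and discharges.
      rng = np.random.default_rng(7)
      width = rng.uniform(20, 200, size=25)
      discharge = 0.05 * width**1.7 * rng.lognormal(0, 0.1, 25)

      # Fit log Q = a + b * log W, then invert for new reaches.
      b, a = np.polyfit(np.log(width), np.log(discharge), 1)

      def estimate_q(w):
          return np.exp(a) * w**b

      print("fitted exponent b = %.2f" % b)   # should be near 1.7
      print("Q estimate for a 100 m wide reach: %.1f m^3/s"
            % estimate_q(100.0))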

  7. Mapping temporal changes in connectivity using high-resolution aerial data and object based image analysis

    NASA Astrophysics Data System (ADS)

    Masselink, Rens; Anders, Niels; Keesstra, Saskia; Seeger, Manuel

    2014-05-01

    Within the field of geomorphology, mapping has always been an important tool for interpreting the spatial and temporal distributions of phenomena and processes at the surface. In the field of connectivity, however, although many articles have been published over the past decade, there are only very few that address the mapping of connectivity. This study aimed at developing a new, automated method for mapping connectivity within agricultural catchments. The method, a combination of Object-Based Image Analysis (OBIA) and traditional geomorphological field mapping, was applied to two agricultural catchments in Navarre, Spain, both with an area of approximately 2 sq. km. An unmanned aerial vehicle (UAV) was used to take aerial photographs with a resolution of 6 cm, from which a DEM with a 12 cm resolution was created using structure-from-motion photogrammetry. Connectivity was mapped within the study areas using OBIA in a top-down manner, meaning that connectivity was mapped at different scale levels, starting at the largest scale. First, sub-catchments were automatically delineated, after which several characteristics and features that affect connectivity within the sub-catchments were classified, e.g. land use, landslides, rills, gullies, riparian vegetation, changes in slope, ploughing direction, etc. Photographs were taken in two consecutive years (2013-2014), and the connectivity of both catchments in the two years will be compared. Future work will include a quantification of the mapped connectivity (highly connected years vs. low connected years), the causes and consequences of these differences in connectivity, comparison to existing connectivity indices, and comparison of mapped connectivity in sub-catchments with measured discharge.

  8. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters accounts for a very small proportion of studies in the field. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos using classification methods such as neural networks and rapid classification via down-sampling. In this study, bathymetric data obtained using a multi-beam echo sounder (MBES) were intended to serve as complementary data to the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with higher overall accuracy, 93.78±0.85%, compared to pixel-based methods, which had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
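
    The area thresholds reported above translate directly into an OBIA-style rule set. The sketch below encodes them; the standard-deviation cut-off used to separate textured objects is a placeholder, since the abstract does not give its value.

```python
# Rule set from the abstract: <= 700 px -> fish, 700-10,000 px -> rubble,
# with a standard-deviation (texture) test. The texture cut-off is assumed.
def classify_object(area_px: int, stddev: float, texture_cut: float = 25.0) -> str:
    """Assign a benthic class to one segmented image object."""
    if area_px <= 700 and stddev > texture_cut:
        return "fish"
    if 700 < area_px <= 10_000 and stddev > texture_cut:
        return "rubble"
    return "other"  # e.g. coral or sand, resolved by further rules

print(classify_object(450, 30.0))    # -> fish
print(classify_object(5000, 30.0))   # -> rubble
```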

  9. Modelling of cooperating robotized systems with the use of object-based approach

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Today's robotized manufacturing systems are characterized by high efficiency. The emphasis is placed mainly on the simultaneous work of machines. This can manifest in many ways, the most spectacular being the cooperation of several robots working on the same part. Moreover, dual-arm robots that can mimic the manipulative skills of human hands have recently come into use. As a result, it is often hard to deal with situations where it is necessary to maintain not only sufficient precision, but also the coordination and proper sequence of movements of the individual robots' arms. The successful completion of this task depends on the individual robot control systems and their respective programs, but also on well-functioning communication between robot controllers. A major problem in the case of cooperating robots is the possibility of collision between particular links of the robots' kinematic chains. This is not a simple issue, because the manufacturers of robotic systems do not disclose the details of their control algorithms, so such situations are hard to detect. Another problem with the cooperation of robots is how to inform the other units about the start or completion of a part of the task, so that the other robots can take further actions. This paper focuses on communication between cooperating robotic units, assuming that every robot is represented by an object-based model. This problem requires developing a form of communication protocol that the objects can use for collecting information about their environment. The approach presented in the paper is not limited to robots and could be used more widely, for example when modelling a complete workcell or production line.
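
    A minimal sketch of the kind of start/completion notification the paper argues for is shown below; the message fields, class names, and the shared queue are illustrative assumptions, not the protocol actually developed.

```python
# Illustrative object-based coordination message between robot models.
from dataclasses import dataclass
from queue import Queue

@dataclass
class TaskMessage:
    sender: str   # robot issuing the notification
    event: str    # e.g. "task_started" or "task_done"
    task_id: int  # which part of the shared job this refers to

class RobotModel:
    """Object-based robot model that signals task progress to its peers."""
    def __init__(self, name: str, bus: Queue):
        self.name, self.bus = name, bus

    def finish_task(self, task_id: int) -> None:
        # Inform the cooperating units so they can take further actions.
        self.bus.put(TaskMessage(self.name, "task_done", task_id))

bus = Queue()
RobotModel("robot_A", bus).finish_task(7)
msg = bus.get()
print(f"{msg.sender}: {msg.event} (task {msg.task_id})")
```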

  10. Object-based image analysis and data mining for building ontology of informal urban settlements

    NASA Astrophysics Data System (ADS)

    Khelifa, Dejrriri; Mimoun, Malki

    2012-11-01

    During recent decades, unplanned settlements have appeared around the big cities of most developing countries and, as a consequence, numerous problems have emerged. The identification of different kinds of settlements is thus a major concern and challenge for the authorities of many countries. Very High Resolution (VHR) remotely sensed imagery has proved to be a very promising way to detect different kinds of settlements, especially through the use of the new object-based image analysis (OBIA). The most important key is understanding what characteristics make unplanned settlements differ from planned ones: most experts characterize unplanned urban areas by small building sizes at high densities, no orderly road arrangement, and a lack of green spaces. Knowledge about different kinds of settlements can be captured as a domain ontology, which has the potential to organize knowledge in a formal, understandable, and sharable way. In this work we focus on extracting knowledge from VHR images and expert knowledge. We used an object-based strategy, segmenting a VHR image taken over an urban area into regions of homogeneous pixels at an adequate scale level and then computing spectral, spatial, and textural attributes for each region to create objects. Genetic-based data mining was applied to generate highly predictive and comprehensible classification rules based on selected samples from the OBIA result. Optimized intervals of relevant attributes are found and linked with land-use types to form classification rules. The unplanned areas were separated from the planned ones through analysis of the line segments detected in the input image. Finally, a simple ontology was built based on the previous processing steps. The approach has been tested on VHR images of one of the biggest Algerian cities, which has grown considerably in recent decades.
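
    The mined rules take the form of optimized attribute intervals linked to land-use types. A hypothetical example of one such rule is sketched below; the attribute names and interval bounds are invented, not the paper's results.

```python
# Hypothetical interval rule of the kind a genetic miner could produce.
def is_unplanned(mean_building_area_m2: float, built_density: float,
                 green_fraction: float) -> bool:
    """Rule: small buildings at high density with little green space."""
    return (mean_building_area_m2 < 80.0   # optimized interval bound (assumed)
            and built_density > 0.55       # built-up share of the object
            and green_fraction < 0.05)     # vegetation share of the object

print(is_unplanned(60.0, 0.70, 0.02))  # -> True
```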

  11. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the earth's surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for a successful analysis it is desirable to use images from the same sensor, acquired at the same time of season and time of day and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis can be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for buildings destroyed as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.
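
    Although the abstract does not give the DPC formula, a feature of this kind can be sketched as the share of a building's vector outline that coincides with edges detected in the post-event image; the edge tolerance and toy data below are assumptions for illustration.

```python
# Sketch of a 'Detected Part of Contour' (DPC) style feature.
import numpy as np

def dpc(contour_pixels: np.ndarray, edge_map: np.ndarray, tol: int = 1) -> float:
    """Fraction of contour pixels lying within `tol` pixels of an edge."""
    hits = 0
    for r, c in contour_pixels:
        window = edge_map[max(r - tol, 0):r + tol + 1,
                          max(c - tol, 0):c + tol + 1]
        hits += bool(window.any())
    return hits / len(contour_pixels)

# Toy example: a 3-pixel building outline against a tiny edge map.
edges = np.zeros((5, 5), dtype=bool)
edges[2, 1:4] = True
contour = np.array([[2, 1], [2, 2], [0, 4]])
print(f"DPC = {dpc(contour, edges):.2f}")  # a low DPC hints at destruction
```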

  12. Neural activity associated with self, other, and object-based counterfactual thinking

    PubMed Central

    De Brigard, Felipe; Spreng, R. Nathan; Mitchell, Jason P.; Schacter, Daniel L.

    2016-01-01

    Previous research has shown that autobiographical episodic counterfactual thinking—i.e., mental simulations about alternative ways in which one's life experiences could have occurred—engages the brain's default network (DN). However, it remains unknown whether the DN is also engaged during impersonal counterfactual thoughts, specifically those involving other people or objects. The current study compares brain activity during counterfactual simulations involving the self, others, and objects. In addition, counterfactual thoughts involving others were manipulated in terms of similarity and familiarity with the simulated characters. The results indicate greater involvement of the DN during person-based (i.e., self and other) as opposed to object-based counterfactual simulations. However, the involvement of different regions of the DN during other-based counterfactual simulations was modulated by how close and/or similar the simulated character was perceived to be by the participant. Simulations involving unfamiliar characters preferentially recruited dorsomedial prefrontal cortex. Simulations involving unfamiliar similar characters, i.e., characters with whom participants identified in terms of personality traits, recruited lateral temporal gyrus. Finally, our results also revealed differential coupling of the right hippocampus with lateral prefrontal and temporal cortex during counterfactual simulations involving familiar similar others, but with left transverse temporal gyrus and medial frontal and inferior temporal gyri during counterfactual simulations involving either oneself or unfamiliar dissimilar others. These results suggest that different brain mechanisms are involved in the simulation of personal and impersonal counterfactual thoughts, and that the extent to which regions associated with autobiographical memory are recruited during the simulation of counterfactuals involving others depends on the perceived similarity and familiarity with the simulated individuals.

  13. Software engineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan

    1993-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.

  14. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  15. Control Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Real-Time Innovations, Inc. (RTI) collaborated with Ames Research Center, the Jet Propulsion Laboratory and Stanford University to leverage NASA research to produce ControlShell software. RTI is the first "graduate" of Ames Research Center's Technology Commercialization Center. The ControlShell system was used extensively on a cooperative project to enhance the capabilities of a Russian-built Marsokhod rover being evaluated for eventual flight to Mars. RTI's ControlShell is complex, real-time command and control software, capable of processing information and controlling mechanical devices. One ControlShell tool is StethoScope. As a real-time data collection and display tool, StethoScope allows a user to see how a program is running without changing its execution. RTI has successfully applied its software savvy in other arenas, such as telecommunications, networking, video editing, semiconductor manufacturing, automobile systems, and medical imaging.

  16. Antiterrorist Software

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture, and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.

  17. Object-based Image Classification of Arctic Sea Ice and Melt Ponds through Aerial Photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.; Li, Z.; Lei, R.

    2013-12-01

    The last six years have marked the lowest Arctic summer sea ice extents in the modern era, with a new record summer minimum (3.4 million km2) set on 13 September 2012. It has been predicted that the Arctic could be free of summer ice within the next 25-30 years. The loss of Arctic summer ice could have serious consequences, such as higher water temperatures due to the positive albedo feedback, more powerful and frequent storms, rising sea levels, diminished habitats for polar animals, and more pollution due to fossil fuel exploitation and/or increased traffic through the Northwest/Northeast Passage. In these processes, melt ponds play an important role in Earth's radiation balance, since they strongly absorb solar radiation rather than reflecting it as snow and ice do. Therefore, it is necessary to develop the ability to predict sea ice and melt pond extents and their space-time evolution, which is pivotal for preparing for the variation and uncertainty of future environmental, political, economic, and military needs. Much effort has been put into Arctic sea ice modeling to simulate sea ice processes. However, these sea ice models were initiated and developed based on limited field surveys and aircraft or satellite image data. It is therefore necessary to collect high resolution sea ice aerial photos in a systematic way to tune, validate, and improve the models. Many sea ice aerial photos are currently available, such as those from the Chinese Arctic Research Expeditions (CHINARE 2008, 2010, 2012), SHEBA 1998, and HOTRAX 2005. However, manual delineation of sea ice and melt ponds from these images is time-consuming and labor-intensive. In this study, we use an object-based remote sensing classification scheme to extract sea ice and melt ponds efficiently from 1,727 aerial photos taken during CHINARE 2010. The algorithm includes three major steps as follows. (1) Image segmentation groups the neighboring pixels into objects according to the similarity of spectral and texture

  18. Multi-modal multi-fractal boundary encoding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2006-08-01

    The compact representation of region boundary contours is key to efficient representation and compression of digital images using object-based compression (OBC). In OBC, regions are coded in terms of their texture, color, and shape. Given the appropriate representation scheme, high compression ratios (e.g., 500:1 <= CR <= 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that cherished boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned CR range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at CR >= 500:1. Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting CR = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher CR for a wide variety of boundary types. In particular, our algorithm smoothes a region boundary B, then extracts its inflection or control points P, which are compactly represented. The fractal dimension D is computed locally for the detrended B. By appropriate subsampling, one efficiently segments disjoint clusters of D values subject to a preselected tolerance, thereby partitioning B into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains P, a spatial marker, and a D value
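
    The local fractal dimension D at the heart of the scheme can be estimated by box counting. The sketch below shows that standard estimate on a binary boundary image; the grid sizes and test image are illustrative, and none of Siddiqui's or the authors' refinements are reproduced.

```python
# Box-counting estimate of a boundary's fractal dimension D.
import numpy as np

def box_count_dimension(boundary: np.ndarray, sizes=(1, 2, 4, 8)) -> float:
    counts = []
    h, w = boundary.shape
    for s in sizes:
        # Count boxes of side s containing at least one boundary pixel.
        n = sum(boundary[r:r + s, c:c + s].any()
                for r in range(0, h, s) for c in range(0, w, s))
        counts.append(n)
    # D is the slope of log N(s) versus log (1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

line = np.zeros((16, 16), dtype=bool)
line[8, :] = True   # a straight line, so D should be close to 1
print(f"D = {box_count_dimension(line):.2f}")
```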

  19. Identification of GCM Uncertainty of Dynamical Cores and Physical Parameterizations by Object-Based Methods

    NASA Astrophysics Data System (ADS)

    Yorgun, M. S.; Rood, R. B.

    2012-12-01

    This approach makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. We posit that this approach will provide sound identification of model uncertainty by comparison to observations (i.e., GPCC gauge-based data), will intrinsically link local, weather-scale phenomena to important climatological features, and will provide a quantitative bridge between weather and climate.

  20. Quantifying the Relationship between Dynamical Cores and Physical Parameterizations by Object-Based Methods

    NASA Astrophysics Data System (ADS)

    Yorgun, M. S.; Rood, R. B.

    2011-12-01

    The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. Deterministic weather predictions are often validated with feature-by-feature comparison. Probabilistic weather forecasts and climate projections are evaluated with statistical methods. We seek to develop model evaluation strategies that identify "objects" - coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model. These are the Spectral (Eulerian), which relies on global basis functions, and the Finite Volume (FV), which uses only local information. We introduce the concept of "meteorological realism": that is, do local representations of large-scale phenomena, for example fronts and orographic precipitation, look like the observations? A follow-on question is, does the representation of these phenomena improve with resolution? Our approach to quantifying meteorological realism starts with the identification and isolation of key features of orographic precipitation that are represented differently by the Spectral and FV models, using objective pattern recognition methods. Then we aim to quantitatively compare

  1. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are two computer software packages: "Super Solvers Midnight Rescue!" a problem-solving program for IBM PCs; and "Interactive Physics," a simulation program for the Macintosh computer. The functions of the package are discussed including strengths and weaknesses and teaching suggestions. (CW)

  2. Reviews, Software.

    ERIC Educational Resources Information Center

    Science Teacher, 1988

    1988-01-01

    Reviews two software programs for Apple series computers. Includes "Orbital Mech," a basic planetary orbital simulation for the Macintosh, and "START: Stimulus and Response Tools for Experiments in Memory, Learning, Cognition, and Perception," a program that demonstrates basic psychological principles and experiments. (CW)

  3. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1989-01-01

    Six software packages are described in this review. Included are "Molecules and Atoms: Exploring the Essence of Matter"; "Heart Probe"; "GM Sunraycer"; "Six Puzzles"; "Information Laboratory--Life Science"; and "Science Test Builder." Hardware requirements, prices, and a summary of the abilities of each program are presented. (CW)

  4. Educational Software.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The third session of IT@EDU98 consisted of five papers on educational software and was chaired by Tran Van Hao (University of Education, Ho Chi Minh City, Vietnam). "Courseware Engineering" (Nguyen Thanh Son, Ngo Ngoc Bao Tran, Quan Thanh Tho, Nguyen Hong Lam) briefly describes the use of courseware. "Machine Discovery Theorems in Geometry: A…

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Computing Teacher, 1985

    1985-01-01

    Reprinted from "The Computing Teacher," this document contains software reviews for 23 computer programs that educators could use in the classroom or for administrative purposes. Each review describes the program by listing the program title, subject, producer, grade level (if applicable), hardware required, cost, and reviewer's name and…

  6. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1989-01-01

    Reviews three software packages: (1) "The Weather Machine Courseware Kit" for grades 7-12; (2) "Exploring Measurement, Time, and Money--Level I," for primary level mathematics; and (3) "Professor DOS with SmartGuide for DOS" providing an extensive tutorial covering DOS 2.1 to 4.0. Discusses the strengths and weaknesses of each package. (YP)

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1987-01-01

    Reviewed are three computer software programs: the Astronomer (astronomy program for middle school students and older); Hands-on-Statistics: Explorations with a Microcomputer (statistics program for secondary school students and older); and CATGEN (a genetics program for secondary school students and older). Each review provides information on:…

  8. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are seven computer software packages for IBM and/or Apple Computers. Included are "Windows on Science: Volume 1--Physical Science"; "Science Probe--Physical Science"; "Wildlife Adventures--Grizzly Bears"; "Science Skills--Development Programs"; "The Clean Machine"; "Rock Doctor"; and "Geology Search." Cost, quality, hardware, and…

  9. Software Review.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed is a computer software package entitled "Audubon Wildlife Adventures: Grizzly Bears" for Apple II and IBM microcomputers. Included are availability, hardware requirements, cost, and a description of the program. The murder-mystery flavor of the program is stressed in this program that focuses on illegal hunting and game management. (CW)

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1989-01-01

    Presents comments by classroom teachers on software for science teaching including topics on: the size of a molecule, matter, leaves, vitamins and minerals, dinosaurs, and collecting and measuring data. Each is an Apple computer series. Availability and costs are included. (RT)

  11. Software Comparison

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Software Comparison Package (SCP) compares similar files. Normally, these are 90-character files produced by the CDC UPDATE utility from program libraries that contain FORTRAN source code plus identifiers. SCP is also used to compare load maps, cross-reference outputs, and UPDATE correction sets. Helps wherever line-by-line comparison of similarly structured files is required.
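
    The line-by-line comparison SCP performs can be illustrated with Python's standard difflib; this is only an analogy to the brief's description, not SCP itself, and the file contents are invented.

```python
# difflib analogy to SCP's comparison of similarly structured files.
import difflib

old = ["C     FLIGHT DYNAMICS", "      X = 1.0", "      Y = 2.0"]
new = ["C     FLIGHT DYNAMICS", "      X = 1.5", "      Y = 2.0"]

for line in difflib.unified_diff(old, new, "libA", "libB", lineterm=""):
    print(line)
```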

  12. Software Patents.

    ERIC Educational Resources Information Center

    Burke, Edmund B.

    1994-01-01

    Outlines basic patent law information that pertains to computer software programs. Topics addressed include protection in other countries; how to obtain patents; kinds of patents; duration; classes of patentable subject matter, including machines and processes; patentability searches; experimental use prior to obtaining a patent; and patent…

  13. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Programs in use today generally have all of the function and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. The Johnson Space Center (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to modern languages such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. A brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem is provided. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB experience in software reengineering are followed by the STB new FORTRAN standards, methodology, and the concepts for a software environment.

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1988-01-01

    Reviews two software packages, "Solutions Unlimited" and "BASIC Data Base System." Provides a description, summary, strengths and weaknesses, availability and costs. Includes reviews of three structured BASIC packages: "True BASIC (2.0)"; "Turbo BASIC (1.0)"; and "QuickBASIC (3.0)." Explains significant features such as graphics, costs,…

  15. Design Software

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A NASA contractor and Small Business Innovation Research (SBIR) participant has converted its research into commercial software products for auto design, structural analysis and other applications. ViGYAN, Inc., utilizing the aeronautical research principle of computational fluid dynamics, has created - with VGRID3D and VPLOT3D - an easier alternative to conventional structured grids for fluid dynamic calculations.

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews six software packages for use with school age children ranging from grade 3 to grade 12. Includes "The Microcomputer Based Lab Project: Motion, Sound"; "Genetics"; "Geologic History"; "The Microscope Simulator"; and "Wiz Works" all for Apple II and "Reading for Information: Level II" for IBM. (CW)

  17. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1989

    1989-01-01

    Reviews of seven software packages are presented including "The Environment I: Habitats and EcoSystems; II Cycles and Interactions"; "Super Sign Maker"; "The Great Knowledge Race: Substance Abuse"; "Exploring Science: Temperature"; "Fast Food Calculator and RD Aide"; "The Human Body: Circulation and Respiration" and "Forces in Liquids and Gases."…

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are seven computer software packages including "Frog Dissection Lab Report,""Backyard Birds,""LEGO TC Logo,""Alcohol--Four Interactive Programs,""Windows on Science--Life Science,""Climate and Weather/Our Town Database," and "Weeds to Trees." Discussed are availability, features, strengths, and weaknesses. (CW)

  19. Star Software.

    ERIC Educational Resources Information Center

    Kloza, Brad

    2000-01-01

    Presents a collection of computer software programs designed to spark learning enthusiasm at every grade level and across the curriculum. They include Reader Rabbit's Learn to Read, Spelling Power, Mind Twister Math, Community Construction Kit, Breaking the Code, Encarta Africana 2000, Virtual Serengeti, Operation: Frog (Deluxe), and My First…

  20. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed are two computer software programs for Apple II computers on weather for upper elementary and middle school grades. "Weather" introduces the major factors (temperature, humidity, wind, and air pressure) affecting weather. "How Weather Works" uses simulation and auto-tutorial formats on sun, wind, fronts, clouds, and storms. (YP)

  1. Statistical Software.

    ERIC Educational Resources Information Center

    Callamaras, Peter

    1983-01-01

    This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)

  2. Software Reviews.

    ERIC Educational Resources Information Center

    Teles, Elizabeth, Ed.; And Others

    1990-01-01

    Reviewed are two computer software packages for Macintosh microcomputers including "Phase Portraits," an exploratory graphics tool for studying first-order planar systems; and "MacMath," a set of programs for exploring differential equations, linear algebra, and other mathematical topics. Features, ease of use, cost, availability, and hardware…

  3. Reviews: Software.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Reviews four computer software packages including: "The Physical Science Series: Sound" which demonstrates making waves, speed of sound, doppler effect, and human hearing; "Andromeda" depicting celestial motions in any direction; "Biology Quiz: Humans" covering chemistry, cells, viruses, and human biology; and "MacStronomy" covering information on…

  4. Software Reviews.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Describes computer software for use with various age groups. Topics include activities involving temperature, simulations, earth science, the circulatory system, human body, reading in science, and ecology. Provides information on equipment needed, availability, package contents, and price. Comments of reviews are presented by classroom teachers.…

  5. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, C.; Crook, J.

    1998-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for advanced engine control systems that will result in lower software maintenance (operations) costs. It effectively accommodates software requirement changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives, benefits, and status of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. MRECS was recently modified to support a Space Shuttle Main Engine (SSME) hot-fire test. Cold Flow and Flight Readiness Testing were completed before the test was cancelled. Currently, the program is focused on supporting NASA MSFC in accomplishing development testing of the Fastrac Engine, part of NASA's Low Cost Technologies (LCT) Program. MRECS will be used for all engine development testing.

  6. Modular Rocket Engine Control Software (MRECS)

    NASA Technical Reports Server (NTRS)

    Tarrant, Charlie; Crook, Jerry

    1997-01-01

    The Modular Rocket Engine Control Software (MRECS) Program is a technology demonstration effort designed to advance the state-of-the-art in launch vehicle propulsion systems. Its emphasis is on developing and demonstrating a modular software architecture for a generic, advanced engine control system that will result in lower software maintenance (operations) costs. It effectively accommodates software requirements changes that occur due to hardware technology upgrades and engine development testing. Ground rules directed by MSFC were to optimize modularity and implement the software in the Ada programming language. MRECS system software and the software development environment utilize Commercial-Off-the-Shelf (COTS) products. This paper presents the objectives and benefits of the program. The software architecture, design, and development environment are described. MRECS tasks are defined and timing relationships given. Major accomplishments are listed. MRECS offers benefits to a wide variety of advanced technology programs in the areas of modular software architecture, software reuse, and reduced software reverification time related to software changes. Currently, the program is focused on supporting MSFC in accomplishing a Space Shuttle Main Engine (SSME) hot-fire test at Stennis Space Center and the Low Cost Boost Technology (LCBT) Program.

  7. Computer-Aided Software Engineering - An approach to real-time software development

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  8. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets in terms of total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
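
    The IEQ figure that both counting methods report normalizes each islet's volume to a reference islet of 150 µm diameter. One common continuous form of this conversion is sketched below; the sample diameters are hypothetical.

```python
# Sketch of islet-equivalent (IEQ) normalization: each islet's volume is
# expressed relative to a 150-um-diameter reference islet.
def ieq(diameters_um) -> float:
    return sum((d / 150.0) ** 3 for d in diameters_um)

sample = [60, 120, 150, 210, 300]    # um, e.g. from automated image analysis
print(f"IEQ = {ieq(sample):.2f}")    # a volume-weighted islet count
```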

  9. Moving object-based depth map estimation using relabeling and hybrid matching

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Woo; Shin, Jitae; Oh, Byung Tae

    2014-03-01

    Schemes to enhance human visual perception in three-dimensional (3-D) video applications with depth map data are proposed. Depth estimation is an important part of free viewpoint television and 3-DTV because the accuracy of depth information directly affects the synthesized video quality at an intermediate viewpoint. However, generating an accurate depth map is a complex computational process that makes real-time implementation challenging. In order to obtain accurate depth information with low complexity, a depth map relabeling algorithm and a hybrid matching algorithm are proposed for the depth estimation step. These depth map acquisition techniques are based on human perception, which is more sensitive to moving objects than to a static background, and they take into account the importance of appropriate processing of object boundaries. Experimental results demonstrate that the proposed schemes provide a synthesized view with both higher subjective visual quality and better objective quality in terms of peak signal-to-noise ratio than legacy depth estimation reference software.
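
    The paper's relabeling and hybrid matching refine a conventional stereo matcher. As background, the sketch below shows the generic sum-of-absolute-differences block matching such pipelines build on; it does not reproduce the proposed algorithms, and all data are synthetic.

```python
# Generic SAD block matching: the baseline that depth estimators refine.
import numpy as np

def sad_disparity(left, right, row, col, block=4, max_d=16):
    """Disparity at (row, col) minimizing SAD over a horizontal search."""
    ref = left[row:row + block, col:col + block].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_d, col) + 1):
        cand = right[row:row + block, col - d:col - d + block].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(0)
right_img = rng.integers(0, 255, (32, 32))
left_img = np.roll(right_img, 5, axis=1)           # left view shifted by 5 px
print(sad_disparity(left_img, right_img, 10, 20))  # -> 5
```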

  10. The EOSDIS software challenge

    NASA Astrophysics Data System (ADS)

    Jaworski, Allan

    1993-08-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds that of any previous scientific data system in its breadth and its operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many gigaflops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optics communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems which have several orders of magnitude greater complexity and performance than those built in the 1960's, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capabilities to achieve economically the types of highly interoperable, responsive, integrated, and productive environments which are needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering

  11. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  12. A comparison of three feature selection methods for object-based classification of sub-decimeter resolution UltraCam-L imagery

    NASA Astrophysics Data System (ADS)

    Laliberte, A. S.; Browning, D. M.; Rango, A.

    2012-04-01

    The availability of numerous spectral, spatial, and contextual features with object-based image analysis (OBIA) renders the selection of optimal features a time-consuming and subjective process. While several feature selection methods have been used in conjunction with OBIA, a robust comparison of the utility and efficiency of approaches would facilitate broader and more effective implementation. In this study, we evaluated three feature selection methods: (1) Jeffreys-Matusita distance (JM), (2) classification tree analysis (CTA), and (3) feature space optimization (FSO), for object-based vegetation classifications with sub-decimeter digital aerial imagery in arid rangelands of the southwestern U.S. We assessed strengths, weaknesses, and best uses for each method using the criteria of ease of use, ability to rank and/or reduce input features, and classification accuracies. For the five sites tested, JM resulted in the highest overall classification accuracies for three sites, while CTA yielded the highest accuracies for two sites. FSO resulted in the lowest accuracies. CTA offered ease of use and the ability to rank and reduce features, while JM had the advantage of assessing class separation distances. FSO allowed for determining features relatively quickly, because it operates within the OBIA software used in this analysis (eCognition). However, the feature ranking in FSO is not transparent and accuracies were relatively low. While all methods offered an objective approach for determining suitable features for classifications of sub-decimeter resolution aerial imagery, we concluded that CTA was best suited for this particular application. We explore the limitations, assumptions, and appropriate uses for this and other datasets.
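
    For reference, the JM measure has a closed form under a Gaussian class model: JM = 2(1 - e^{-B}), with B the Bhattacharyya distance between the class distributions. The sketch below implements that common remote-sensing form; the feature statistics are made-up stand-ins for OBIA features.

```python
# Jeffreys-Matusita distance between two Gaussian classes (range 0-2).
import numpy as np

def jm_distance(mu1, cov1, mu2, cov2) -> float:
    """JM = 2 * (1 - exp(-B)), B = Bhattacharyya distance."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_mean = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    b = (diff @ np.linalg.solve(cov_mean, diff) / 8.0
         + 0.5 * np.log(np.linalg.det(cov_mean)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return 2.0 * (1.0 - np.exp(-b))

# Two hypothetical vegetation classes in a two-feature space.
print(jm_distance([0.2, 0.5], np.eye(2) * 0.01,
                  [0.4, 0.7], np.eye(2) * 0.02))
```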

  13. Application of Object Based Image Analysis (OBIA) in detecting and quantifying forest loss caused by artisanal gold mining activities in Upper Mazaruni River Basin, Guyana

    NASA Astrophysics Data System (ADS)

    Mengisteab, B. S.; Blesius, L.; Hennessy, L.

    2014-12-01

    Artisanal gold mining in Guyana is mostly done in forest-covered areas, causing forest loss and land degradation. Data from the Guyana Geology and Mines Commission show an increase of 2074% between 1986 and 2009. Our analysis of Landsat data between 1986 and 2013 for a part of the Upper Mazaruni area shows an increase in mined area from 34.2 to 278.4 hectares, which amounts to more than 800%. While the frequent coverage of Landsat data is useful for multitemporal studies, its lower resolution may not be adequate for accurate detection of mining sites. Therefore, RapidEye imagery from 2011 at a resolution of 5 m was used to detect gold mining activity and to compare the results with the Landsat study. Processing was conducted in eCognition, an object-based image analysis (OBIA) software package. OBIA is an image processing technique that has proven advantageous over traditional pixel-based techniques, its primary advantage being the ability to combine spatial and spectral information. The satellite image was subjected to segmentation at multiple scales and classified using fuzzy membership functions. Classification explicitly incorporated the different scales in order to accommodate different sizes of real-world objects, and spatial relationships were utilized to establish connections between related objects. For example, the presence or absence of water in pits, or the existence of sediments in the river, may serve as additional indicators of mining sites besides the spectral components. Preliminary results show that the OBIA approach was able to successfully detect and quantify small-scale mining activities in the basin, and that the Landsat data gave an acceptable estimate of mining sites over time. Keywords: Object-Based Image Analysis, Gold Mining, Remote Sensing, Guyana
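
    A minimal sketch of the fuzzy membership logic such eCognition rule sets rely on is given below; the NDVI and brightness bounds are invented for illustration and are not the study's parameters.

```python
# Illustrative fuzzy membership for a 'mining pit' class.
def ramp(x: float, lo: float, hi: float) -> float:
    """Linear membership rising from 0 at `lo` to 1 at `hi` (clamped)."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

# Low vegetation (NDVI) combined with bright bare ground raises membership.
ndvi, brightness = 0.08, 0.62
score = min(1.0 - ramp(ndvi, 0.05, 0.30), ramp(brightness, 0.40, 0.70))
print(f"mining-pit membership = {score:.2f}")
```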

  14. IMAGE information monitoring and applied graphics software environment. Volume 2. Software description

    SciTech Connect

    Hallam, J.W.; Ng, K.B.; Upham, G.L.

    1986-09-01

    The EPRI Information Monitoring and Applied Graphics Environment (IMAGE) system is designed for 'fast prototyping' of advanced concepts for computer-aided plant operations tools. It is a flexible software system which can be used for rapidly creating, dynamically driving and evaluating advanced operator aid displays. The software is written to be both host computer and graphic device independent.

  15. Automated object-based landslide and debris flow identification - a case study for supporting disaster management in Taiwan

    NASA Astrophysics Data System (ADS)

    Hölbling, D.; Füreder, P.; Tsai, V.; Friedl, B.

    2012-04-01

    Landslides and debris flows are a major hazard in almost all mountainous regions of the world and claim the lives of many people every year. Taiwan is one of the countries heavily affected by landslides and debris flows; the latter are especially dangerous, as they often affect settlements and cause enormous damage to infrastructure. These hardly predictable mass movements are mainly triggered by typhoons, which hit the island several times a year. Much effort in Taiwanese landslide research is directed towards effective and sustainable disaster management. Rapid assessment after landslide events is as much a major issue as the continuous and reliable updating of existing databases and landslide inventory maps. So far, this information is usually retrieved by visual image interpretation; only little research has been done on automated analysis to identify landslides and debris flows. With traditional pixel-based image analysis methods it is hard to distinguish landslides and debris flows from bare land, e.g. due to the inability to consider context information, yet this knowledge is very useful for disaster mitigation and recovery. It is therefore necessary to establish reliable, fast, and accurate methods for detecting and classifying landslides and debris flows that make use of the amount of Earth Observation (EO) data available nowadays, which implies the need to implement semi- or even fully automated methods. Object-based image analysis (OBIA) constitutes a powerful framework for fulfilling these needs and seems to be suitable for the identification and characterization of such complex natural features. The automated landslide and debris flow detection was performed for a study site of approximately 30 km2 in Southern Taiwan using eCognition software. By applying a semi-automated approach using OBIA and integrating optical data (SPOT-5 with 2.5 m GSD), digital elevation data with 5 m GSD as well as derivatives from the DEM (e

  16. Hierarchical object-based classification of ultra-high-resolution digital mapping camera (DMC) imagery for rangeland mapping and assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ultra high resolution digital aerial photography has great potential to complement or replace ground measurements of vegetation cover for rangeland monitoring and assessment. We investigated object-based image analysis (OBIA) techniques for classifying vegetation in southwestern U.S. arid rangelands...

  17. Same Old Story: The Problem of Object-Based Thinking as a Basis for Teaching Distant Places

    ERIC Educational Resources Information Center

    Martin, Fran

    2013-01-01

    The English Geography National Curriculum encourages primary teachers to focus on similarities and differences when teaching distant places. The issues this raises are particularly acute when teaching geography in the context of the Global South. In this article I argue that comparisons based on object-based thinking can lead to views of the…

  18. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  19. Software Architecture Design Reasoning

    NASA Astrophysics Data System (ADS)

    Tang, Antony; van Vliet, Hans

    Despite recent advancements in software architecture knowledge management and design rationale modeling, industrial practice is behind in adopting these methods. The lack of empirical proof and the lack of a practical process that can easily be incorporated by practitioners are some of the hindrances to adoption. In particular, a process to support systematic design reasoning is not available. To rectify this issue, we propose a design reasoning process to help architects cope with an architectural design environment where design concerns are cross-cutting and diversified. We use an industrial case study to validate that the design reasoning process can help improve the quality of software architecture design. The results indicate that associating design concerns and identifying design options are important steps in design reasoning.

  20. Object-based visual attention in 8-month-old infants: evidence from an eye-tracking study.

    PubMed

    Bulf, Hermann; Valenza, Eloisa

    2013-10-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is surprising, given the importance of objects in driving infants' attention and predispositions to attend to object information. Here, we investigated the object-based component of attention in 8-month-old infants. An eye tracker measured the saccade latencies to find a target that could appear in a previously cued end of 2 bars (valid targets), in the other end of the cued bar (invalid same-object targets), or in the other bar but at the same distance from the cue (invalid different-object targets). Bars were unoccluded or partly occluded; if attention is object based, it should also operate on objects that require perceptual completion. After verifying in a sample of adults (Experiment 1) that a measure of saccade latency suitably assessed space-based and object-based attention, we tested 8-month-old infants (Experiment 2) using the same procedure. The results showed that in both adults and infants, target detection was faster for valid targets than for invalid ones (space-based effect). Moreover, for both the unoccluded and partly occluded conditions, detection was faster on invalid within-object trials than on invalid between-objects trials (object-based effect). These findings demonstrate that visual objects can operate as units of attention for infants by the age of 8 months, offering implications for cognitive development. PMID:23276133

  1. Analysis Software

    NASA Technical Reports Server (NTRS)

    1994-01-01

    General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.

  2. Seminar Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Society for Computer Simulation International is a professional technical society that distributes information on methodology techniques and uses of computer simulation. The society uses NETS, a NASA-developed program, to assist seminar participants in learning to use neural networks for computer simulation. NETS is a software system modeled after the human brain; it is designed to help scientists exploring artificial intelligence to solve pattern matching problems. Examples from NETS are presented to seminar participants, who can then manipulate, alter or enhance them for their own applications.

  3. Simulation Software

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Various NASA Small Business Innovation Research grants from Marshall Space Flight Center, Langley Research Center and Ames Research Center were used to develop the 'kernel' of COMCO's modeling and simulation software, the PHLEX finite element code. NASA needed it to model designs of flight vehicles; one of many customized commercial applications is UNISIM, a PHLEX-based code for analyzing underground flows in oil reservoirs for Texaco, Inc. COMCO's products simulate a computational mechanics problem, estimate the solution's error and produce the optimal hp-adapted mesh for the accuracy the user chooses. The system is also used as a research or training tool in universities and in mechanical design in industrial corporations.

  4. Software Surrogate

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In 1994, Blackboard Technology received a NASA Phase I SBIR award entitled "A Blackboard-Based Framework for Mixed-Initiative, Crewed-Space-System Applications." This research continued in Phase II at JSC, where a generic architecture was developed in which a software surrogate serves as the operator's representative in the fast-paced realm of nearly autonomous, intelligent systems. This SBIR research effort addressed the need to support human-operator monitoring and intervention with intelligent systems such as those being developed for NASA's crewed space program.

  5. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  6. Archetyping: A software generation and management methodology

    NASA Technical Reports Server (NTRS)

    Rothman, Hugh B.; Przybylinski, Stanley M.

    1988-01-01

    Many knowledge-based software generation methods have been proposed to improve software quality and programmer productivity. Several government and industry initiatives have focused on software reusability as one solution to these problems. DARTS (trademark), a General Dynamics proprietary symbolic processing technology, provides a unique solution to the reuse problem: archetyping. Archetyping is the embedding of high-order language statements in text files. An advanced macroprocessor uses the text files to generate new versions of complex software systems. A DARTS program, the Software Generation and Configuration Management (SGCM) System, automates the archetyping process and maintenance cycle. The DARTS technology is briefly discussed, archetyping is described, and the SGCM system is presented in detail.

  7. Advanced computing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Advanced concepts in hardware, software and algorithms are being pursued for application in next generation space computers and for ground based analysis of space data. The research program focuses on massively parallel computation and neural networks, as well as optical processing and optical networking which are discussed under photonics. Also included are theoretical programs in neural and nonlinear science, and device development for magnetic and ferroelectric memories.

  8. A Mathematics Software Database Update.

    ERIC Educational Resources Information Center

    Cunningham, R. S.; Smith, David A.

    1987-01-01

    Contains an update of an earlier listing of software for mathematics instruction at the college level. Topics are: advanced mathematics, algebra, calculus, differential equations, discrete mathematics, equation solving, general mathematics, geometry, linear and matrix algebra, logic, statistics and probability, and trigonometry. (PK)

  9. A Multidimensional Software Engineering Course

    ERIC Educational Resources Information Center

    Barzilay, O.; Hazzan, O.; Yehudai, A.

    2009-01-01

    Software engineering (SE) is a multidimensional field that involves activities in various areas and disciplines, such as computer science, project management, and system engineering. Though modern SE curricula include designated courses that address these various subjects, an advanced summary course that synthesizes them is still missing. Such a…

  10. Automated Digital Image Analysis of islet cell mass using Nikon's inverted Eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2013-04-29

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter/intra-observer variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n=42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (P<0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r² ≤ 0.91) and total islet number (r² = 0.88), which increased to r² = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intra-observer reproducibility compared to the standard manual method (P<0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (P<0.001). The ADIA method also detected small islets between 10 and 50 μm in size. Automated digital image analysis utilizing the Nikon Instruments (Nikon) software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of
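
    The kind of measurement the ADIA pipeline automates can be sketched as follows, assuming segmentation of the dithizone-stained islets has already produced a binary mask: label the islets, take each one's equivalent diameter, and convert to IEQ by normalizing islet volume to the standard 150-μm-diameter islet. This is an illustrative sketch, not the NIS-Elements implementation; the mask and pixel size are synthetic.

        import numpy as np
        from skimage import measure

        def islet_stats(mask, microns_per_pixel):
            """Count islets in a binary mask and convert to islet equivalents (IEQ)."""
            labels = measure.label(mask)
            diameters = [r.equivalent_diameter * microns_per_pixel
                         for r in measure.regionprops(labels)]
            # IEQ: each islet's volume expressed relative to a 150-um-diameter islet.
            ieq = sum((d / 150.0) ** 3 for d in diameters)
            return len(diameters), ieq

        mask = np.zeros((200, 200), dtype=bool)
        mask[50:80, 50:80] = True  # one synthetic "islet"
        print(islet_stats(mask, microns_per_pixel=5.0))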

  11. Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition

    NASA Astrophysics Data System (ADS)

    Xiao, Pengfeng; Zhang, Xueliang; Wang, Dongguang; Yuan, Min; Feng, Xuezhi; Kelly, Maggi

    2016-09-01

    This study proposed a new framework that combines pixel-level change detection and object-level recognition to detect changes of built-up land from high-spatial-resolution remote sensing images. First, an adaptive differencing method was designed to detect changes at the pixel level based on both spectral and textural features. Next, the changed pixels were subjected to a set of morphological operations to improve the completeness and to generate changed objects, achieving the transition of change detection from the pixel level to the object level. The changed objects were further recognised through the difference of the morphological building index in two phases to indicate changed objects on built-up land. The transformation from changed pixels to changed objects makes the proposed framework distinct from both the pixel-based and the object-based change detection methods. Compared with the pixel-based methods, the proposed framework can improve the change detection capability through the transformation and successive recognition of objects. Compared with the object-based method, the proposed framework avoids the issue of multitemporal segmentation and can generate changed objects directly from changed pixels. The experimental results show the effectiveness of the transformation from changed pixels to changed objects and of the successive object-based recognition in improving the detection accuracy, which justifies the application potential of the proposed change detection framework.
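
    The pixel-to-object transition described above is easy to picture in code. The following minimal sketch (not the authors' implementation) uses scikit-image: a thresholded difference image supplies the changed pixels, morphological operations clean them up, and connected-component labeling yields the changed objects. The threshold, structuring-element size, and minimum object size are illustrative assumptions.

        import numpy as np
        from skimage import measure, morphology

        def changed_objects(img_t1, img_t2, diff_thresh=0.2, min_size=64):
            """Label change objects from two co-registered grayscale images."""
            # Pixel-level detection: threshold the absolute difference image.
            diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
            changed = diff > diff_thresh
            # Morphological operations improve the completeness of changed regions.
            changed = morphology.binary_closing(changed, morphology.disk(3))
            changed = morphology.remove_small_objects(changed, min_size=min_size)
            # Connected-component labeling: changed pixels become changed objects.
            labels = measure.label(changed)
            return labels, measure.regionprops(labels)

        rng = np.random.default_rng(0)
        t1 = rng.random((256, 256))
        t2 = t1.copy()
        t2[100:140, 80:160] += 0.5            # synthetic built-up change
        labels, props = changed_objects(t1, t2)
        print(labels.max(), "changed object(s) detected")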

  12. Exploitation of GeoEye-1 Data for Land Use/Land Cover Analysis Using Object-Based Image Analysis in the Eastern Nile Basin of Sudan

    NASA Astrophysics Data System (ADS)

    Guma Biro Turk, Khalid; Pradhan, Biswajeet

    Recently, object-oriented classification techniques based on image segmentation approaches have been studied using high-resolution satellite images to extract various thematic information. In this study, different land use/land cover (LULC) types were analysed by applying an object-based image analysis approach to GeoEye-1 data from the Eastern Nile Basin of Sudan. For that purpose, the multi-resolution segmentation (MRS) of the Definiens software was used to create the image objects. Using the feature space optimisation (FSO) tool, the attributes of the GeoEye-1 image were optimised in order to obtain the best separability among classes for the LULC mapping. The best separation distance of the tested spectral, shape and textural features showed different variations among the discriminated LULC classes. The classification scheme yielded an overall accuracy of 94% with a kappa value of 0.92, while accuracy differences among the classes were kept minimal. Finally, the results highlighted the importance of using very high-resolution satellite images for delineating the complex land use cover in the drylands of the Eastern Nile Basin. Key Words: Multi-resolution segmentation; feature space optimisation; land use/land cover mapping; Eastern Nile Basin
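
    For readers unfamiliar with these accuracy figures, the short sketch below shows how an overall accuracy and a kappa value of the kind reported here are typically computed from reference and classified labels; the label lists are made up for illustration.

        from sklearn.metrics import accuracy_score, cohen_kappa_score

        reference  = ["crop", "water", "urban", "crop", "bare", "urban"]
        classified = ["crop", "water", "urban", "crop", "urban", "urban"]

        print("overall accuracy:", accuracy_score(reference, classified))
        print("kappa:", cohen_kappa_score(reference, classified))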

  13. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets.

    PubMed

    Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2011-06-01

    Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., "meow"), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., "meow") of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., "meow") should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory

  14. Sandia software guidelines: Software quality planning

    SciTech Connect

    Not Available

    1987-08-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies procedures to follow in producing a Software Quality Assurance Plan for an organization or a project, and provides an example project SQA plan. 2 figs., 4 tabs.

  15. Modeling and MBL: Software Tools for Science.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    Recent technological advances and new software packages put unprecedented power for experimenting and theory-building in the hands of students at all levels. Microcomputer-based laboratory (MBL) and model-solving tools illustrate the educational potential of the technology. These tools include modeling software and three MBL packages (which are…

  16. Office Computer Software: A Comprehensive Review of Software Programs.

    ERIC Educational Resources Information Center

    Secretary, 1992

    1992-01-01

    Describes types of software including system software, application software, spreadsheets, accounting software, graphics packages, desktop publishing software, database, desktop and personal information management software, project and records management software, groupware, and shareware. (JOW)

  17. Advanced Concept Modeling

    NASA Technical Reports Server (NTRS)

    Chaput, Armand; Johns, Zachary; Hodges, Todd; Selfridge, Justin; Bevirt, Joeben; Ahuja, Vivek

    2015-01-01

    Advanced Concepts Modeling software validation, analysis, and design. This was a National Institute of Aerospace contract comprising many components. Efforts ranged from software development and validation for structures and aerodynamics, through flight control development and aeropropulsive analysis, to UAV piloting services.

  18. Antenna Controller Replacement Software

    NASA Technical Reports Server (NTRS)

    Chao, Roger Y.; Morgan, Scott C.; Strain, Martha M.; Rockwell, Stephen T.; Shimizu, Kenneth J.; Tehrani, Barzia J.; Kwok, Jaclyn H.; Tuazon-Wong, Michelle; Valtier, Henry; Nalbandi, Reza; Wert, Michael; Leung, Patrick

    2010-01-01

    The Antenna Controller Replacement (ACR) software accurately points and monitors the Deep Space Network (DSN) 70-m and 34-m high-efficiency (HEF) ground-based antennas that are used to track primarily spacecraft and, periodically, celestial targets. To track a spacecraft, or other targets, the antenna must be accurately pointed at the spacecraft, which can be very far away with very weak signals. ACR's conical scanning capability collects the signal in a circular pattern around the target, calculates the location of the strongest signal, and adjusts the antenna pointing to point directly at the spacecraft. A real-time, closed-loop servo control algorithm performed every 0.02 second allows accurate positioning of the antenna in order to track these distant spacecraft. Additionally, this advanced servo control algorithm provides better antenna pointing performance in windy conditions. The ACR software provides high-level commands that provide a very easy user interface for the DSN operator. The operator only needs to enter two commands to start the antenna and subreflector, and Master Equatorial tracking. The most accurate antenna pointing is accomplished by aligning the antenna to the Master Equatorial, which, because of its small size and sheltered location, has the most stable pointing. The antenna has hundreds of digital and analog monitor points. The ACR software provides compact displays to summarize the status of the antenna, subreflector, and the Master Equatorial. The ACR software has two major functions. First, it performs all of the steps required to accurately point the antenna (and subreflector and Master Equatorial) at the spacecraft (or celestial target). This involves controlling the antenna/subreflector/Master-Equatorial hardware, initiating and monitoring the correct sequence of operations, calculating the position of the spacecraft relative to the antenna, executing the real-time servo control algorithm to maintain the correct position, and
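
    As an illustration of the conical-scanning idea, the sketch below fits a sinusoid to power samples taken around the scan circle; the fitted phase gives the direction of the pointing error, and the modulation depth grows with its magnitude. This is a hedged toy model, not the ACR algorithm, and the scale factor converting modulation depth to an offset angle is deliberately left out.

        import numpy as np

        def conscan_offset(scan_angles, powers):
            """Fit P(t) ~ p0 + a*cos(t) + b*sin(t); return error direction and depth."""
            A = np.column_stack([np.ones_like(scan_angles),
                                 np.cos(scan_angles),
                                 np.sin(scan_angles)])
            p0, a, b = np.linalg.lstsq(A, powers, rcond=None)[0]
            direction = np.arctan2(b, a)       # bearing of the pointing error
            depth = np.hypot(a, b) / p0        # modulation depth, grows with error
            return direction, depth

        rng = np.random.default_rng(0)
        theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
        power = 1.0 + 0.05 * np.cos(theta - 0.8) + 0.001 * rng.standard_normal(50)
        print(conscan_offset(theta, power))    # direction ~0.8 rad, depth ~0.05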

  19. A microkernel design for component-based parallel numerical software systems.

    SciTech Connect

    Balay, S.

    1999-01-13

    What is the minimal software infrastructure and what type of conventions are needed to simplify development of sophisticated parallel numerical application codes using a variety of software components that are not necessarily available as source code? We propose an opaque object-based model where the objects are dynamically loadable from the file system or network. The microkernel required to manage such a system needs to include, at most: (1) a few basic services, namely--a mechanism for loading objects at run time via dynamic link libraries, and consistent schemes for error handling and memory management; and (2) selected methods that all objects share, to deal with object life (destruction, reference counting, relationships), and object observation (viewing, profiling, tracing). We are experimenting with these ideas in the context of extensible numerical software within the ALICE (Advanced Large-scale Integrated Computational Environment) project, where we are building the microkernel to manage the interoperability among various tools for large-scale scientific simulations. This paper presents some preliminary observations and conclusions from our work with microkernel design.
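
    A minimal sketch of such an opaque object-based model is given below (in Python rather than the ALICE microkernel itself): objects share the selected life-cycle methods (reference counting and destruction) and an observation hook, and implementations are loaded dynamically at run time. The module and class names in the usage comment are hypothetical.

        import importlib

        class KernelObject:
            """Base class providing the methods all microkernel objects share."""
            def __init__(self):
                self.refcount = 1
            def retain(self):
                self.refcount += 1
                return self
            def release(self):
                self.refcount -= 1
                if self.refcount == 0:
                    self.destroy()
            def destroy(self):  # object life: destruction
                print(f"{type(self).__name__} destroyed")
            def view(self):     # object observation: viewing
                print(f"{type(self).__name__}(refcount={self.refcount})")

        def load_object(spec):
            """Load 'package.module:ClassName' at run time and instantiate it."""
            module_name, class_name = spec.split(":")
            cls = getattr(importlib.import_module(module_name), class_name)
            obj = cls()
            assert isinstance(obj, KernelObject), "components must share the base API"
            return obj

        # In-process demo; a real component would come from load_object(), e.g.
        # solver = load_object("solvers.krylov:GMRESSolver")  # hypothetical names
        class Mesh(KernelObject):
            pass

        m = Mesh()
        m.view()       # Mesh(refcount=1)
        m.retain()
        m.release()
        m.release()    # prints "Mesh destroyed"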

  20. Software Model Of Software-Development Process

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Synott, Debra J.; Levary, Reuven R.

    1990-01-01

    Collection of computer programs constitutes software tool for simulation of medium- to large-scale software-development projects. Necessary to include easily identifiable and more-readily quantifiable characteristics like costs, times, and numbers of errors. Mathematical model incorporating these and other factors of dynamics of software-development process implemented in the Software Life Cycle Simulator (SLICS) computer program. Simulates dynamics of software-development process. In combination with input and output expert software systems and knowledge-based management software system, develops information for use in managing large software-development project. Intended to aid managers in planning, managing, and controlling software-development processes by reducing uncertainties in budgets, required personnel, and schedules.

  1. Object-based classification of residential land use within Accra, Ghana based on QuickBird satellite data

    PubMed Central

    STOW, D.; LOPEZ, A.; LIPPITT, C.; HINTON, S.; WEEKS, J.

    2009-01-01

    A segmentation and hierarchical classification approach applied to QuickBird multispectral satellite data was implemented, with the goal of delineating residential land use polygons and identifying low and high socio-economic status of neighbourhoods within Accra, Ghana. Two types of object-based classification strategies were tested, one based on spatial frequency characteristics of multispectral data, and the other based on proportions of Vegetation–Impervious–Soil sub-objects. Both approaches yielded residential land-use maps with similar overall percentage accuracy (75%) and kappa index of agreement (0.62) values, based on test objects from visual interpretation of QuickBird panchromatic imagery. PMID:19424445

  2. Software to Manage the Unmanageable

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In 1995, NASA's Jet Propulsion Laboratory (JPL) contracted Redmond, Washington-based Lucidoc Corporation to design a technology infrastructure to automate the intersection between policy management and operations management with advanced software that automates document workflow, document status, and uniformity of document layout. JPL had very specific parameters for the software. It expected to store and catalog over 8,000 technical and procedural documents integrated with hundreds of processes. The project ended in 2000, but NASA still uses the resulting highly secure document management system, and Lucidoc has managed to help other organizations, large and small, with integrating document flow and operations management to ensure a compliance-ready culture.

  3. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as the accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, consists of items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  4. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  5. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper, a new object-based framework is developed to automate scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to get the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improves the overall accuracy of classification from 79% to 80%.
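
    The final classification stage can be pictured as follows: each image object is reduced to a feature vector, for example the mean NDVI and mean nDSM height the paper optimizes against, and fed to a random forest. The sketch below is an illustrative stand-in for the paper's classifier, with made-up feature values and class labels.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # One row per image object: [mean NDVI, mean nDSM height in metres]
        X_train = np.array([[0.70, 0.3],    # low vegetation
                            [0.65, 12.0],   # tree
                            [0.05, 8.0],    # building
                            [0.10, 0.1]])   # impervious surface
        y_train = ["grass", "tree", "building", "impervious"]

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X_train, y_train)
        print(rf.predict([[0.08, 10.0]]))   # -> ['building']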

  6. Object-based task-level control: A hierarchical control architecture for remote operation of space robots

    NASA Technical Reports Server (NTRS)

    Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.

    1994-01-01

    Expanding man's presence in space requires dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control where the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.

  7. Object-based change detection in rapid urbanization regions with remotely sensed observations: a case study of Shenzhen, China

    NASA Astrophysics Data System (ADS)

    He, Lihuang; Dong, Guihua; Wang, Wei-Min; Yang, Lijun; Liang, Hong

    2013-10-01

    China, the most populous country on Earth, has experienced rapid urbanization, which is one of the main causes of many environmental and ecological problems. Therefore, the monitoring of rapid urbanization regions and the environment is of critical importance for their sustainable development. In this study, object-based classification is employed to detect the change of land cover in Shenzhen, which is located in South China and has been urbanized rapidly over the past three decades. First, four Landsat TM images, acquired in 1990, 2000 and 2010, are selected from the image database. Atmospheric corrections are conducted on these images with an improved dark-object subtraction technique and surface meteorological observations. Geometric correction is processed with ground control points derived from topographic maps. Second, a region-growing multi-resolution segmentation and a soft nearest neighbour classifier are used to complete the object-based classification. After analyzing the fractions of the different classes over the time series, we conclude that the comparison of the derived land cover classes with socio-economic statistics demonstrates a strong positive correlation between built-up classes and urban population, as well as gross GDP and the GDP of the secondary and tertiary industries. Two different mechanisms of urbanization, namely new land development and redevelopment, are revealed. Consequently, we found that the districts of Shenzhen were urbanized through different mechanisms.

  8. Attention, segregation, and textons: bridging the gap between object-based attention and texton-based segregation.

    PubMed

    Ben-Shahar, Ohad; Scholl, Brian J; Zucker, Steven W

    2007-03-01

    Studies of object-based attention (OBA) have suggested that attentional selection is intimately associated with discrete objects. However, the relationship of this association to the basic visual features ('textons') which guide the segregation of visual scenes into 'objects' remains largely unexplored. Here we study this hypothesized relationship for one of the most conspicuous features of early vision: orientation. To do so we examine how attention spreads through uniform (one 'object') orientation-defined textures (ODTs), and across texture-defined boundaries in discontinuous (two 'objects') ODTs. Using the divided-attention paradigm we find that visual events that are known to trigger orientation-based texture segregation, namely perceptual boundaries defined by high orientation and/or curvature gradients, also induce a significant cost on attentional selection. At the same time we show that no effect is incurred by the absolute value of the textons, i.e., by the general direction (or the 'grain') of the texture, in conflict with previous findings in the OBA literature. Collectively these experiments begin to reveal the link between object-based attention and texton-based segregation, a link which also offers important cross-disciplinary methodological advantages. PMID:17239914

  9. The NASA Software Management and Assurance Program

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Wilson, W. M.

    1983-01-01

    A committee of experienced managers representing each major NASA organization guides a major development effort which seeks to ensure the acquisition of economical and reliable software for more complex future programs. The primary, near-term development objectives of the NASA Software Management and Assurance Program are: (1) to provide the best management and technical guidance available; (2) to facilitate the use of proven tools, techniques, and information; and (3) to maintain a pool of highly qualified software personnel. The software development tasks addressed involve such problems as satellite end-to-end architecture and advanced aircraft guidance and control systems.

  10. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1992-01-01

    The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.
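
    LV counter processors produce nonuniformly sampled velocity data (a sample arrives whenever a particle crosses the probe volume), so classical FFT spectra do not apply directly. One standard remedy for such data, sketched below on a synthetic signal, is the Lomb-Scargle periodogram; this illustrates the class of methods referred to, not the LV package itself.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0, 10, 500))   # random particle arrival times (s)
        x = np.sin(2 * np.pi * 3.0 * t) + 0.3 * rng.standard_normal(t.size)

        freqs_hz = np.linspace(0.1, 10, 1000)
        pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs_hz)  # wants rad/s
        print("spectral peak near", freqs_hz[np.argmax(pgram)], "Hz")  # ~3 Hz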

  11. Software attribute visualization for high integrity software

    SciTech Connect

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  12. Exploring the Process of Adult Computer Software Training Using Andragogy, Situated Cognition, and a Minimalist Approach

    ERIC Educational Resources Information Center

    Hurt, Andrew C.

    2007-01-01

    With technology advances, computer software becomes increasingly difficult to learn. Adults often rely on software training to keep abreast of these changes. Instructor-led software training is frequently used to teach adults new software skills; however there is limited research regarding the best practices in adult computer software training.…

  13. Software Vulnerability Taxonomy Consolidation

    SciTech Connect

    Polepeddi, S

    2004-12-08

    In today's environment, computers and networks are increasingly exposed to a number of software vulnerabilities. Information about these vulnerabilities is collected and disseminated via various large publicly available databases such as BugTraq, OSVDB and ICAT. Each of these databases, individually, does not cover all aspects of a vulnerability, and they lack a standard format among them, making it difficult for end-users to easily compare various vulnerabilities. A central database of vulnerabilities has not been available until today for a number of reasons, such as the non-uniform methods by which current vulnerability database providers receive information, disagreement over which features of a particular vulnerability are important and how best to present them, and the non-utility of the information presented in many databases. The goal of this software vulnerability taxonomy consolidation project is to address the need for a universally accepted vulnerability taxonomy that classifies vulnerabilities in an unambiguous manner. A consolidated vulnerability database (CVDB) was implemented that coalesces and organizes vulnerability data from disparate data sources. Based on the work done in this paper, there is strong evidence that a consolidated taxonomy encompassing and organizing all relevant data can be achieved. However, three primary obstacles remain: the lack of a common "primary key" for referencing, unstructured and free-form descriptions of necessary vulnerability data, and the lack of data on all aspects of a vulnerability. This work has only considered data that can be unambiguously extracted from various data sources by straightforward parsers. It is felt that even with the use of more advanced information mining tools, which can wade through the sea of unstructured vulnerability data, this current integration methodology would still provide repeatable, unambiguous, and exhaustive results. Though the goal of coalescing all available data, which would be of use to
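
    The consolidation step can be sketched as a merge of records parsed from different sources, keyed on a common identifier; here a CVE ID stands in for the "primary key" the report notes is often missing. The field names and records below are hypothetical.

        from collections import defaultdict

        bugtraq = [{"cve": "CVE-2004-0001", "title": "Kernel flaw", "severity": "high"}]
        icat    = [{"cve": "CVE-2004-0001", "cvss": 7.5, "vendor": "ExampleOS"}]

        def consolidate(*sources):
            merged = defaultdict(dict)
            for source in sources:
                for record in source:
                    key = record.get("cve")
                    if key is None:
                        continue                # unkeyed records need fuzzier matching
                    merged[key].update(record)  # later sources fill in missing fields
            return dict(merged)

        print(consolidate(bugtraq, icat))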

  14. Report: Scientific Software.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1985-01-01

    Discusses various aspects of scientific software, including evaluation and selection of commercial software products; program exchanges, catalogs, and other information sources; major data analysis packages; statistics and chemometrics software; and artificial intelligence. (JN)

  15. Controlling Software Piracy.

    ERIC Educational Resources Information Center

    King, Albert S.

    1992-01-01

    Explains what software manufacturers are doing to combat software piracy, recommends how managers should deal with this problem, and provides a role-playing exercise to help students understand the issues in software piracy. (SR)

  16. GENI: Grid Hardware and Software

    SciTech Connect

    2012-01-09

    GENI Project: The 15 projects in ARPA-E’s GENI program, short for “Green Electricity Network Integration,” aim to modernize the way electricity is transmitted in the U.S. through advances in hardware and software for the electric grid. These advances will improve the efficiency and reliability of electricity transmission, increase the amount of renewable energy the grid can utilize, and provide energy suppliers and consumers with greater control over their power flows in order to better manage peak power demand and cost.

  17. Software Engineering Guidebook

    NASA Technical Reports Server (NTRS)

    Connell, John; Wenneson, Greg

    1993-01-01

    The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.

  18. Proceedings, Conference on the Computing Environment for Mathematical Software

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Recent advances in software and hardware technology which make it economical to create computing environments appropriate for specialized applications are addressed. Topics included software tools, FORTRAN standards activity, and features of languages, operating systems, and hardware that are important for the development, testing, and maintenance of mathematical software.

  19. Computer Software for Displaying Map Projections and Comparing Distortions.

    ERIC Educational Resources Information Center

    Wikle, Thomas

    1991-01-01

    Discusses software that educators can use to teach about distortions associated with alternative map projections. Examines the Projection, MicroCAM, World, and Atlas-GIS software programs. Suggests using the software in either introductory or more advanced courses dealing with map design or thematic cartography. Explains how to obtain the…

  20. MemAxes Visualization Software

    Energy Science and Technology Software Center (ESTSC)

    2014-08-28

    Hardware advancements such as Intel's PEBS and AMD's IBS, as well as software developments such as the perf_event API in Linux have made available the acquisition of memory access samples with performance information. MemAxes is a visualization and analysis tool for memory access sample data. By mapping the samples to their associated code, variables, node topology, and application dataset, MemAxes provides intuitive views of the data.

  1. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increasing classification accuracies over using imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification of the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.

  2. Semiautomated object-based classification of rain-induced landslides with VHR multispectral images on Madeira Island

    NASA Astrophysics Data System (ADS)

    Heleno, Sandra; Matias, Magda; Pina, Pedro; Sousa, António Jorge

    2016-04-01

    A method for semiautomated landslide detection and mapping, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a support vector machine classifier and is tested using a GeoEye-1 multispectral image, sensed 3 days after a major damaging landslide event that occurred on Madeira Island (20 February 2010), and a pre-event lidar digital terrain model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area, with commission errors below 26% and omission errors below 24%. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.

  3. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application

    PubMed Central

    Maxwell, Susan K.

    2010-01-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917

  4. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application.

    PubMed

    Maxwell, Susan K

    2010-12-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917

  5. Automated object-based classification of rain-induced landslides with VHR multispectral images in Madeira Island

    NASA Astrophysics Data System (ADS)

    Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.

    2015-09-01

    A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier on a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira Island (20 February 2010), with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.

  6. Fault-tolerant software for the FTMP

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Hecht, M.

    1984-01-01

    The work reported here provides protection against software failures in the task dispatcher of the FTMP, a particularly critical portion of the system software. Faults in other system modules and application programs can be handled by similar techniques but are not covered in this effort. Goals of the work reported here are: (1) to develop provisions in the software design that will detect and mitigate software failures in the dispatcher portion of the FTMP Executive and, (2) to propose the implementation of specific software reliability measures in other parts of the system. Beyond the specific support to the FTMP project, the work reported here represents a considerable advance in the practical application of the recovery block methodology for fault tolerant software design.

  7. An object-based multisensoral approach for the derivation of urban land use structures in the city of Rostock, Germany

    NASA Astrophysics Data System (ADS)

    Lindner, Martin; Hese, Sören; Berger, Christian; Schmullius, Christiane

    2011-11-01

    The present work is part of the Enviland-2 research project, which investigates the synergy between radar and optical satellite data for ENVIronment and LAND use applications. The urban work package of Enviland aims at the combined analysis of RapidEye and TerraSAR-X data for the parameterization of different urban land use structures. This study focuses on the development of a transferable, object-based rule set for the derivation of urban land use structures at block level. The database consists of RapidEye and TerraSAR-X imagery, as well as height information from a LiDAR nDSM (normalized Digital Surface Model) and object boundaries from ATKIS (Official Topographic Cartographic Information System) vector data for a study area in the city of Rostock, Germany. The classification of various land cover units forms the basis of the analysis. Therefore, an object-based land cover classification is implemented that uses feature-level fusion to combine the information of all available input data. Besides spectral values, shape and context features are also employed to characterize and extract specific land cover objects as indicators of the prevalent land use. The different land use structures are then determined by typical combinations and constellations of the extracted land use indicators and land cover proportions. Accuracy assessment is done by utilizing the available ATKIS information. From this analysis, the land use structure classes residential, industrial/commercial, other built-up, allotments, sports facility, forest, grassland, other green spaces, squares/parking areas and water are distinguished with an overall accuracy of 63.2%.
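
    The rule-set idea, typical combinations of land cover proportions mapped to land use structures, can be sketched as a decision function over per-block cover fractions. The thresholds, feature names, and class assignments below are illustrative assumptions, not the study's actual rules.

        def land_use_structure(cover):
            """Map per-block land cover fractions to a land use structure class."""
            if cover.get("water", 0) > 0.5:
                return "water"
            if cover.get("tree", 0) > 0.6:
                return "forest"
            if cover.get("grass", 0) > 0.6:
                return "grassland"
            if cover.get("building", 0) > 0.2 and cover.get("impervious", 0) > 0.5:
                return "industrial/commercial"
            if cover.get("building", 0) > 0.3:
                return "residential"
            return "other built-up"

        print(land_use_structure({"building": 0.35, "impervious": 0.3,
                                  "grass": 0.2, "tree": 0.1}))   # -> residential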

  8. Object-Based Classification of Wetlands Using Optical and SAR Data with a Compound Kernel in Support Vector Machine (SVM)

    NASA Astrophysics Data System (ADS)

    Salehi, B.; Mahdavi, S.; Brisco, B.; Huang, W.

    2015-12-01

    Both Synthetic Aperture RADAR (SAR) and optical imagery play a pivotal role in many applications. Thus it is desirable to fuse the two independent sources of data congruously. Many of the fusion methods, however, fail to consider the different nature of SAR and optical data. Moreover, it is not straightforward to adjust the contribution of the two data sources with respect to the application. Support Vector Machine (SVM) is one of the classification methods which can combine the two kinds of images while considering their different nature. It is particularly useful when object-based classification is used, in which case features extracted from SAR and optical images can be treated differently. This paper aims to develop an object-based classification method using both optical and SAR data which treats the two data sources independently. For the implementation of the method, a RapidEye and a RADARSAT-2 quad-polarimetric image over the Avalon Peninsula in Newfoundland, Canada, will be used for wetland classification. The RapidEye image will be segmented using the multiresolution algorithm in eCognition™. Because of speckle, segmentation of SAR images does not give robust results, so the segmentation result from the RapidEye image is superimposed on the RADARSAT-2 image. Then useful SAR and optical features are extracted. Integrating the features extracted from the optical and SAR data, a compound kernel in SVM is applied for classification. This kernel is a combination of two kernels with different weights, each of which operates on the features of one of the data sources. Using a compound kernel can outperform using the same kernel for both images. The proposed method has two main advantages. First, the different nature of optical and SAR images, which results from dissimilar dynamic range, resolution, etc., is considered. Second, as the two data sources are combined with different weights, it is possible to adjust the role of each data source for varying applications.
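
    The compound kernel is simple to express: one kernel on the optical features, another on the SAR features, summed with a weight and passed to the SVM as a precomputed kernel (a weighted sum of valid kernels is itself a valid kernel). The sketch below uses scikit-learn with random stand-in data; the gammas and the weight are illustrative, not the paper's values.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_opt = rng.random((40, 5))     # per-object optical features
        X_sar = rng.random((40, 3))     # per-object SAR features
        y = rng.integers(0, 2, 40)      # wetland / non-wetland labels

        w = 0.6                         # relative weight of the optical kernel
        K = w * rbf_kernel(X_opt, gamma=0.5) + (1 - w) * rbf_kernel(X_sar, gamma=1.0)

        clf = SVC(kernel="precomputed").fit(K, y)
        print("training accuracy:", clf.score(K, y))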

  9. Object Based Image Analysis to Support Environmental Monitoring Under the European Habitat Directive: a Case Study from Decover

    NASA Astrophysics Data System (ADS)

    Buck, O.; Peter, B.; Völker, A.; Donning, A.

    2011-09-01

    DeCOVER serves as a national extension of the European Global Monitoring for Environment and Security (GMES) initiative. It was initiated to develop land cover information services adapted to German user needs. One of its three service development pillars is the application of remote sensing to support environmental monitoring schemes under the European Habitats Directive. Within two DeCOVER test sites located in North Rhine-Westphalia, Germany, an object-based indicator classification approach is currently being developed to monitor heath habitats of importance under the Habitats Directive. While many previous remote sensing projects have focused on the discrete classification of habitat types to replace fieldwork, our approach is embedded in a strong operational context to a) focus and direct fieldwork efforts by pre-field-visit assessment of habitat changes (change detection) and b) support fieldwork by contributing quality parameters and GIS-ready geometries. Using GeoEye satellite data (VHR component) and RapidEye satellite images (multi-temporal HR component) together with existing habitat and biotope maps (knowledge and post-classification component), an image analysis approach is realised using object-based classification routines based on data mining tools to derive training information. To extract meaningful objects of heath, sand and grassland from the VHR data, training sample areas have to be assigned. Thresholds and appropriate features for describing these samples are analysed by statistical algorithms and are used in the following classification. A multi-temporal approach for the acquisition of tree habitat areas integrates two RapidEye scenes into the classification process. To validate classification accuracies and potential, transects were sampled in the field and analyzed for their structural composition using top-view field photos of 1 m². First results demonstrate the realistic option to directly support the fieldwork or reduce its post

  10. Interaction Between Object-Based Attention and Pertinence Values Shapes the Attentional Priority Map of a Multielement Display

    PubMed Central

    2016-01-01

    Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the “pertinence” or behavioral importance of the elements’ features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen’s Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the “paired response index” (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements’ features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene. PMID:26752732

  11. Interaction between object-based attention and pertinence values shapes the attentional priority map of a multielement display.

    PubMed

    Gillebert, Celine R; Petersen, Anders; Van Meel, Chayenne; Müller, Tanja; McIntyre, Alexandra; Wagemans, Johan; Humphreys, Glyn W

    2016-06-01

    Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the "pertinence" or behavioral importance of the elements' features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen's Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the "paired response index" (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements' features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene. (PsycINFO Database Record) PMID:26752732

  12. Advance Directives and Do Not Resuscitate Orders

    MedlinePlus

    ... a form. Call a lawyer. Use a computer software package for legal documents. Advance directives and living ... you write by yourself or with a computer software package should follow your state laws. You may ...

  13. Software Configuration Management Guidebook

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.

  14. Software distribution using xnetlib

    SciTech Connect

    Dongarra, J.J.; Rowan, T.H.; Wade, R.C.

    1993-06-01

    Xnetlib is a new tool for software distribution. Whereas its predecessor netlib uses e-mail as the user interface to its large collection of public-domain mathematical software, xnetlib uses an X Window interface and socket-based communication. Xnetlib makes it easy to search through a large distributed collection of software and to retrieve requested software in seconds.

  15. Agile Software Development

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2008-01-01

    Many software development firms are now adopting the agile software development method. This method involves the customer at every level of software development, thus reducing the impact of change in the requirement at a later stage. In this article, the principles of the agile method for software development are explored and there is a focus on…

  16. Software productivity improvement through software engineering technology

    NASA Technical Reports Server (NTRS)

    Mcgarry, F. E.

    1985-01-01

    It has been estimated that NASA expends anywhere from 6 to 10 percent of its annual budget on the acquisition, implementation, and maintenance of computer software. Although researchers have produced numerous software engineering approaches over the past 5-10 years, each claiming to be more effective than the others, there is very limited quantitative information verifying the measurable impact that any of these technologies may have in a production environment. At NASA/GSFC, an extended research effort aimed at identifying and measuring software techniques that favorably impact the productivity of software development has been active over the past 8 years. Specific, measurable software development technologies have been applied and measured in a production environment. The resulting software development approaches have been shown to be effective in improving both quality and productivity in this one environment.

  17. Software Formal Inspections Standard

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.

  18. Mapping Arctic Coastline Change With Object-Based Image Analysis of Temporally and Geographically Distributed Landsat Archive Data

    NASA Astrophysics Data System (ADS)

    Hulslander, D.

    2011-12-01

    As a global phenomenon, climate change produces global effects. However, many of these effects are more intense in coastal and high latitude regions. Current longer periods of ice-free conditions, in combination with a rising sea level and thawing permafrost, can result in accelerated Arctic Ocean coastline change and erosion. Areas dominantly composed of ice-cemented peats and silt-rich permafrost have proven to be especially susceptible to rapid erosion. Anderson et al. (2009; Geology News) have measured erosion rates at sites along the Alaskan Arctic Ocean coast of 15 m per year. The continental scope of these changes, as well as the remote and inhospitable nature of the study area, makes geologic remote sensing techniques particularly well suited for studying coastal erosion along the 45,000 km of Arctic Ocean coastline. While it is valuable to determine current patterns of erosion, it is equally important to map historic rates in order to determine if coastal erosion is accelerating, if it is in a new behavioral regime, if there are areas of emergent erosion patterns, or if what is currently measured is only a single instance in a complex and constantly shifting pattern of an overall balance of erosion and deposition at high latitudes. Even in relatively stable conditions, coastline processes are dynamic and complex, making it especially important to ensure the best possible accuracy in a study of this kind. Remote sensing solutions in the earth sciences have often run into obstacles concerning a lack of historic data and baselines as well as issues in the systematization of accurate feature mapping. Using object-based image analysis techniques on Landsat archive data allows for the possibility of a multi-decadal map of Arctic Ocean coastline changes. Landsat data (from sensors MSS 1-3 and TM/ETM 4, 5, and 7) provide imagery as frequently as every 16 days since July 1972, are well-calibrated both radiometrically and geometrically, and are freely available from

  19. VOUS Software Facilitates Development Of Other Software

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Pichumani, Ramini; Ponceleon, Dulce

    1992-01-01

    Visual Object Oriented Unification System (VOUS) computer program provides facility for development of other, high-level software. Does not replace, but rather extends, preexisting software tools for development of other software. Provides comprehensive, graphical, interactive medium for all phases in development of computer code from early exploration of concepts, through detailed coding-and-error-checking process, to final reporting of finished code and compilation of instruction manual for its use. Simplifies and partly automates programmer's task.

  20. Segmentation-based filtering and object-based feature extraction from airborne LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Chang, Jie

    Three dimensional (3D) information about ground and above-ground features such as buildings and trees is important for many urban and environmental applications. Recent developments in Light Detection And Ranging (LiDAR) technology provide promising alternatives to conventional techniques for acquiring such information. The focus of this dissertation research is to effectively and efficiently filter massive airborne LiDAR point cloud data and to extract main above-ground features such as buildings and trees in the urban area. A novel segmentation algorithm for point cloud data, namely the 3D k mutual nearest neighborhood (kMNN) segmentation algorithm, was developed based on the improvement to the kMNN clustering algorithm by employing distances in 3D space to define mutual nearest neighborhoods. A set of optimization strategies, including dividing the dataset into multiple blocks and small size grids, and using distance thresholds in x and y, was implemented to improve the efficiency of the segmentation algorithm. A segmentation-based filtering method was then employed to filter the generated segments, which first generates segment boundaries using Voronoi polygon and dissolving operations, and then labels the segments as ground and above-ground based on their size and relative heights to the surrounding segments. An object-based feature extraction approach was also devised to extract buildings and trees from the above-ground segments based on derived object-level statistics, which were input to a rule-based classification system developed by either human experts or an inductive machine-learning algorithm. Case studies were conducted with four different LiDAR datasets to evaluate the effectiveness and efficiency of the proposed approaches. The proposed segmentation algorithm proved to be not only effective in separating ground and above-ground measurements into different segments, but also efficient in processing large datasets. The segmentation-based filtering and
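
    The mutual-nearest-neighbor idea at the heart of the kMNN algorithm can be sketched briefly: connect two points only if each appears among the other's k nearest neighbors in 3D, then take connected components as segments. The sketch below uses a synthetic point cloud and hypothetical parameters, and omits the dissertation's blockwise and grid optimizations:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import connected_components

    # Synthetic cloud: flat ground plus an elevated "roof" cluster.
    rng = np.random.default_rng(0)
    ground = rng.uniform([0, 0, 0.0], [50, 50, 0.3], (500, 3))
    roof = rng.uniform([20, 20, 8.0], [30, 30, 9.0], (200, 3))
    pts = np.vstack([ground, roof])

    k = 8
    _, nbr = cKDTree(pts).query(pts, k=k + 1)  # column 0 is the point itself
    rows = np.repeat(np.arange(len(pts)), k)
    cols = nbr[:, 1:].ravel()
    knn = coo_matrix((np.ones(len(rows)), (rows, cols)),
                     shape=(len(pts), len(pts))).tocsr()
    mutual = knn.multiply(knn.T)               # keep only reciprocal neighbors

    n_seg, labels = connected_components(mutual, directed=False)
    print(n_seg, "segments")  # the elevated roof never merges with the ground
    ```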

  1. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
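
    To make the modeling step concrete, the sketch below fits the Goel-Okumoto reliability growth curve, a classic model of the kind such tools apply, to made-up cumulative failure counts; it is an illustration, not CASRE's actual code or model set:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # Expected cumulative failures by time t: a * (1 - exp(-b t)).
        return a * (1.0 - np.exp(-b * t))

    t = np.arange(1, 11, dtype=float)  # test weeks (hypothetical)
    cum = np.array([5, 9, 12, 15, 17, 18, 19, 20, 20, 21], dtype=float)

    (a, b), _ = curve_fit(goel_okumoto, t, cum, p0=(25.0, 0.3))
    print(f"estimated total defects a = {a:.1f}, detection rate b = {b:.2f}")
    print(f"estimated defects remaining: {a - cum[-1]:.1f}")
    ```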

  2. Starlink Software Submission

    NASA Astrophysics Data System (ADS)

    Lawden, M. D.

    The definition, structure and management of the Starlink Software Collection is described in SGP/20 which should be read by everyone involved in the production of software for the Starlink project. The Collection is managed by the Starlink Software Librarian (username STAR) who decides where new software should be stored in the existing structure and who has editorial control of Starlink documentation. This paper describes the principles governing the preparation and submission of software for inclusion in the Collection.

  3. Responsibility for unreliable software

    SciTech Connect

    Wahl, N.J.

    1994-12-31

    Unreliable software exposes software developers and distributors to legal risks. Under certain circumstances, the developer and distributor of unreliable software can be sued. To avoid lawsuits, software developers should do the following: determine what the risks are, understand the extent of the risks, and identify ways of avoiding the risks and lessening the consequences of the risks. Liability issues associated with unreliable software are explored in this article.

  4. Software component quality evaluation

    NASA Technical Reports Server (NTRS)

    Clough, A. J.

    1991-01-01

    The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.

  5. Object-based Dune Analysis: Automated dune mapping and pattern characterization for Ganges Chasma and Gale crater, Mars

    NASA Astrophysics Data System (ADS)

    Vaz, David A.; Sarmento, Pedro T. K.; Barata, Maria T.; Fenton, Lori K.; Michaels, Timothy I.

    2015-12-01

    A method that enables the automated mapping and characterization of dune fields on Mars is described. Using CTX image mosaics, the introduced Object-based Dune Analysis (OBDA) technique produces an objective and reproducible mapping of dune morphologies over extensive areas. The data set thus obtained integrates a large variety of data, allowing a simple cross-analysis of dune patterns, spectral and morphometric information, and mesoscale wind models. Two dune fields, located in Gale crater and Ganges Chasma, were used to test and validate the methodology. The segmentation of dune-related morphologies is highly efficient, reaching overall accuracies of 95%. In addition, we show that the automated segmentation of slipface traces is also possible with expected accuracies of 85-90%. A qualitative and quantitative comparison of the final outputs with photointerpretations is performed, and the precision of the directional characterization of the dune patterns is evaluated. We demonstrate a good agreement between the OBDA outputs and the photointerpreted dune morphologies, with local trend deviations below 45° for 80-95% of the mapped areas. Because the developed algorithm is tuned for the recognition of linear features from the imagery, the slipfaces of small barchans can be preferentially overlooked owing to their small extent at the spatial resolution of the CTX mosaics. Dune types composed of longer linear morphologies are much better represented, including correct mapping of secondary structures. Having proved the effectiveness and accuracy of the mapping procedure, we discuss its future applications for the improvement of dune catalogs on Mars.

  6. A Markov random field approach for topology-preserving registration: application to object-based tomographic image interpolation.

    PubMed

    Cordero-Grande, Lucilio; Vegas-Sánchez-Ferrero, Gonzalo; Casaseca-de-la-Higuera, Pablo; Alberola-López, Carlos

    2012-04-01

    This paper proposes a topology-preserving multiresolution elastic registration method based on a discrete Markov random field of deformations and a block-matching procedure. The method is applied to the object-based interpolation of tomographic slices. For that purpose, the fidelity of a given deformation to the data is established by a block-matching strategy based on intensity- and gradient-related features, the smoothness of the transformation is favored by an appropriate prior on the field, and the deformation is guaranteed to maintain the topology by imposing some hard constraints on the local configurations of the field. The resulting deformation is defined as the maximum a posteriori configuration. Additionally, the relative influence of the fidelity and smoothness terms is weighted by the unsupervised estimation of the field parameters. In order to obtain an unbiased interpolation result, the registration is performed both in the forward and backward directions, and the resulting transformations are combined by using the local information content of the deformation. The method is applied to magnetic resonance and computed tomography acquisitions of the brain and the torso. Quantitative comparisons offer an overall improvement in performance with respect to related works in the literature. Additionally, the application of the interpolation method to cardiac magnetic resonance images has shown that the removal of any of the main components of the algorithm results in a decrease in performance which has proven to be statistically significant. PMID:21997265

  7. Seismic-zonation of Port-au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    USGS Publications Warehouse

    Yong, A.; Hough, S.E.; Cox, B.R.; Rathje, E.M.; Bachhuber, J.; Dulberg, R.; Hulslander, D.; Christiansen, L.; Abrams, M.J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, Vs30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available Vs30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data. © 2011 American Society for Photogrammetry and Remote Sensing.

  8. Seismic zonation of Port-Au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    USGS Publications Warehouse

    Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; Abrams, Michael J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from imagery-based methods to results from traditional geologic-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful to refine detailed site maps with sparse local data.

  9. Object-based locust habitat mapping using high-resolution multispectral satellite data in the southern Aral Sea basin

    NASA Astrophysics Data System (ADS)

    Navratil, Peter; Wilps, Hans

    2013-01-01

    Three different object-based image classification techniques are applied to high-resolution satellite data for the mapping of the habitats of the Asian migratory locust (Locusta migratoria migratoria) in the southern Aral Sea basin, Uzbekistan. A set of panchromatic and multispectral Système Pour l'Observation de la Terre-5 satellite images was spectrally enhanced by normalized difference vegetation index and tasseled cap transformation and segmented into image objects, which were then classified by three different classification approaches: a rule-based hierarchical fuzzy threshold (HFT) classification method was compared to a supervised nearest neighbor classifier and to classification tree analysis by the quick, unbiased, efficient statistical trees algorithm. Special emphasis was laid on the discrimination of locust feeding and breeding habitats due to the significance of this discrimination for practical locust control. Field data on vegetation and land cover, collected at the time of satellite image acquisition, were used to evaluate classification accuracy. The results show that a robust HFT classifier outperformed the two automated procedures by 13% overall accuracy. The classification method allowed a reliable discrimination of locust feeding and breeding habitats, which is of significant importance for applying the resulting data to economically and environmentally sound control of locust pests, because exact spatial knowledge of the habitat types allows more effective surveying and use of pesticides.

  10. Copula regression analysis of simultaneously recorded frontal eye field and inferotemporal spiking activity during object-based working memory.

    PubMed

    Hu, Meng; Clark, Kelsey L; Gong, Xiajing; Noudoost, Behrad; Li, Mingyao; Moore, Tirin; Liang, Hualou

    2015-06-10

    Inferotemporal (IT) neurons are known to exhibit persistent, stimulus-selective activity during the delay period of object-based working memory tasks. Frontal eye field (FEF) neurons show robust, spatially selective delay period activity during memory-guided saccade tasks. We present a copula regression paradigm to examine neural interaction of these two types of signals between areas IT and FEF of the monkey during a working memory task. This paradigm is based on copula models that can account for both the marginal distribution over spiking activity of individual neurons within each area and the joint distribution over ensemble activity of neurons between areas. Considering the popular generalized linear models (GLMs) as marginal models, we developed a general and flexible likelihood framework that uses the copula to integrate separate GLMs into a joint regression analysis. Such joint analysis essentially leads to a multivariate analog of the marginal GLM theory and hence efficient model estimation. In addition, we show that Granger causality between spike trains can be readily assessed via the likelihood ratio statistic. The performance of this method is validated by extensive simulations, and compared favorably to the widely used GLMs. When applied to spiking activity of simultaneously recorded FEF and IT neurons during the working memory task, we observed significant Granger causality influence from FEF to IT, but not in the opposite direction, suggesting the role of the FEF in the selection and retention of visual information during working memory. The copula model has the potential to provide unique neurophysiological insights about network properties of the brain. PMID:26063909
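
    The likelihood-ratio assessment of Granger causality can be sketched with ordinary Poisson GLMs, leaving out the copula coupling that is the paper's actual contribution; the simulated counts below are synthetic, with FEF driving IT at a one-bin lag:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    # Synthetic spike counts: IT's rate depends on FEF's previous bin.
    rng = np.random.default_rng(1)
    T = 2000
    fef = rng.poisson(2.0, T).astype(float)
    it = rng.poisson(np.exp(0.2 + 0.25 * np.roll(fef, 1)))

    y = it[1:]                  # IT counts to predict
    own_hist = it[:-1]          # IT's own one-bin history
    fef_hist = fef[:-1]         # FEF's one-bin history

    X_red = sm.add_constant(own_hist)                          # reduced model
    X_full = sm.add_constant(np.column_stack([own_hist, fef_hist]))

    ll_red = sm.GLM(y, X_red, family=sm.families.Poisson()).fit().llf
    ll_full = sm.GLM(y, X_full, family=sm.families.Poisson()).fit().llf

    lr = 2.0 * (ll_full - ll_red)          # likelihood-ratio statistic
    p = chi2.sf(lr, df=1)
    print(f"LR = {lr:.1f}, p = {p:.3g}")   # small p: FEF Granger-causes IT
    ```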

  11. High-resolution tree canopy mapping for New York City using LIDAR and object-based image analysis

    NASA Astrophysics Data System (ADS)

    MacFaden, Sean W.; O'Neil-Dunne, Jarlath P. M.; Royar, Anna R.; Lu, Jacqueline W. T.; Rundle, Andrew G.

    2012-01-01

    Urban tree canopy is widely believed to have myriad environmental, social, and human-health benefits, but a lack of precise canopy estimates has hindered quantification of these benefits in many municipalities. This problem was addressed for New York City using object-based image analysis (OBIA) to develop a comprehensive land-cover map, including tree canopy to the scale of individual trees. Mapping was performed using a rule-based expert system that relied primarily on high-resolution LIDAR, specifically its capacity for evaluating the height and texture of aboveground features. Multispectral imagery was also used, but shadowing and varying temporal conditions limited its utility. Contextual analysis was a key part of classification, distinguishing trees according to their physical and spectral properties as well as their relationships to adjacent, nonvegetated features. The automated product was extensively reviewed and edited via manual interpretation, and overall per-pixel accuracy of the final map was 96%. Although manual editing had only a marginal effect on accuracy despite requiring a majority of project effort, it maximized aesthetic quality and ensured the capture of small, isolated trees. Converting high-resolution LIDAR and imagery into usable information is a nontrivial exercise, requiring significant processing time and labor, but an expert system-based combination of OBIA and manual review was an effective method for fine-scale canopy mapping in a complex urban environment.

  12. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects which have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects and are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  13. Space Station Software Recommendations

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor)

    1985-01-01

    Four panels of invited experts and NASA representatives focused on the following topics: software management, software development environment, languages, and software standards. Each panel deliberated in private, held two open sessions with audience participation, and developed recommendations for the NASA Space Station Program. The major thrusts of the recommendations were as follows: (1) The software management plan should establish policies, responsibilities, and decision points for software acquisition; (2) NASA should furnish a uniform modular software support environment and require its use for all space station software acquired (or developed); (3) The language Ada should be selected for space station software, and NASA should begin to address issues related to the effective use of Ada; and (4) The space station software standards should be selected (based upon existing standards where possible), and an organization should be identified to promulgate and enforce them. These and related recommendations are described in detail in the conference proceedings.

  14. Software For Simulation Of Development Of Software

    NASA Technical Reports Server (NTRS)

    Tausworthe, Robert C.

    1991-01-01

    SOFTREL is prototype software package simulating creation, detection, and repair of defects and faults during software-development project. Personnel, resources, errors, and other realistic factors represented in simulation. Available in executable form only for IBM PC. SOFTREL is copyrighted work with all copyright vested in NASA.

  15. Software Engineering Program: Software Process Improvement Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this document is to provide experience-based guidance in implementing a software process improvement program in any NASA software development or maintenance community. This guidebook details how to define, operate, and implement a working software process improvement program. It describes the concept of the software process improvement program and its basic organizational components. It then describes the structure, organization, and operation of the software process improvement program, illustrating all these concepts with specific NASA examples. The information presented in the document is derived from the experiences of several NASA software organizations, including the SEL, the SEAL, and the SORCE. Their experiences reflect many of the elements of software process improvement within NASA. This guidebook presents lessons learned in a form usable by anyone considering establishing a software process improvement program within his or her own environment. This guidebook attempts to balance general and detailed information. It provides material general enough to be usable by NASA organizations whose characteristics do not directly match those of the sources of the information and models presented herein. It also keeps the ideas sufficiently close to the sources of the practical experiences that have generated the models and information.

  16. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multicore, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which assembles the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
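
    A toy Python analogue of this flow-graph architecture is sketched below, with thread-safe queues standing in for the POSIX pipes of the real C/C++ system; each block runs in its own thread, and the "signal processing" is deliberately trivial:

    ```python
    import threading
    import queue

    class Block(threading.Thread):
        """One processing step, joined to its neighbors by bounded FIFOs."""
        def __init__(self, work, inq, outq):
            super().__init__(daemon=True)
            self.work, self.inq, self.outq = work, inq, outq

        def run(self):
            while True:
                item = self.inq.get()
                if item is None:              # shutdown sentinel
                    if self.outq is not None:
                        self.outq.put(None)   # propagate shutdown downstream
                    return
                out = self.work(item)
                if self.outq is not None:
                    self.outq.put(out)

    # Assemble a 3-stage flow graph: scale -> offset -> print.
    q1, q2, q3 = (queue.Queue(maxsize=64) for _ in range(3))
    blocks = [
        Block(lambda x: 2 * x, q1, q2),
        Block(lambda x: x + 1, q2, q3),
        Block(print, q3, None),
    ]
    for b in blocks:
        b.start()
    for sample in range(5):
        q1.put(sample)
    q1.put(None)            # shut the whole graph down
    blocks[-1].join()
    ```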

  17. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which assembles the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.

  18. Achieving strategic surety for high consequence software

    SciTech Connect

    Pollock, G.M.

    1996-09-01

    A strategic surety roadmap for high consequence software systems under the High Integrity Software (HIS) Program at Sandia National Laboratories guides research in identifying methodologies to improve software surety. Selected research tracks within this roadmap are identified and described, detailing current technology and outlining advancements to be pursued over the coming decade to reach HIS goals. The tracks discussed herein focus on Correctness by Design and System Immunology™. Specific projects are discussed in greater detail, including Correct Specification via Visualization, Synthesis, & Analysis; Visualization of Abstract Objects; and Correct Implementation of Components.

  19. Engine Structures Modeling Software System (ESMOSS)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Engine Structures Modeling Software System (ESMOSS) is a project to develop a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components, and substructures which can be transferred to finite element analysis programs such as NASTRAN. The NASA Lewis Engine Structures Program is concerned with the development of technology for the rational structural design and analysis of advanced gas turbine engines, with emphasis on advanced structural analysis, structural dynamics, structural aspects of aeroelasticity, and life prediction. Fundamental and common to all of these developments is the need for geometric and analytical model descriptions at various engine assembly levels, which are generated using ESMOSS.

  20. Land cover changes assessment using object-based image analysis in the Binah River watershed (Togo and Benin)

    NASA Astrophysics Data System (ADS)

    Badjana, Hèou Maléki; Helmschrot, Jörg; Selsam, Peter; Wala, Kpérkouma; Flügel, Wolfgang-Albert; Afouda, Abel; Akpagana, Koffi

    2015-10-01

    In this study, land cover changes between 1972 and 2013 were investigated in the Binah River watershed (North of Togo and Benin) using remote sensing and geographic information system technologies. Multitemporal satellite images—Landsat MSS (1972), TM (1987), and OLI-TIRS (2013)—were processed using object-based image analysis and post-classification comparison methods, including landscape metrics and change trajectory analysis. Land cover maps referring to five main land cover classes, namely, agricultural land, forest land, savannah, settlements, and water bodies, were produced for each acquisition date. The overall accuracies were 76.64% (1972), 83.52% (1987), and 88.84% (2013) with respective Kappa statistics of 0.69, 0.78, and 0.86. The assessment of the spatiotemporal pattern of land cover changes indicates that savannah, the main vegetation type, has undergone the most dominant change, decreasing from 67% of the basin area in 1972 to 56% in 1987 and 33% in 2013. At the same time, agricultural land has significantly increased from 15% in 1972 to 24% in 1987 and 43% in 2013, while some proportions of agricultural land were converted to savannah relating to fallow agriculture. In total, more than 55% of the landscape experienced changes between 1972 and 2013. These changes are primarily due to human activities and population growth. In addition, agricultural activities significantly contributed to the increase in the number of patches, degree of division, and splitting index of forest and savannah vegetation and the decrease in their effective mesh sizes. These results indicate further fragmentation of forest and savannah vegetation between 1972 and 2013. Further research is needed to quantitatively evaluate the influences of individual factors of human activities and to separate these from the impacts of climate change-driven disturbances.
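
    At its core, the post-classification comparison step reduces to cross-tabulating the classified maps from two dates into a transition matrix; a minimal sketch with tiny synthetic rasters and hypothetical class codes:

    ```python
    import numpy as np

    # Synthetic classified rasters; 0=agricultural land, 1=savannah, 2=forest.
    lc_1972 = np.array([[1, 1, 2],
                        [1, 2, 2],
                        [0, 1, 1]])
    lc_2013 = np.array([[0, 0, 2],
                        [1, 2, 1],
                        [0, 0, 1]])

    n_classes = 3
    transitions = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(transitions, (lc_1972.ravel(), lc_2013.ravel()), 1)

    # transitions[i, j] = pixels that moved from class i to class j; the
    # off-diagonal entries quantify changes such as savannah -> agriculture.
    print(transitions)
    ```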

  1. Object-Based Classification of Ikonos Imagery for Mapping Large-Scale Vegetation Communities in Urban Areas

    PubMed Central

    Mathieu, Renaud; Aryal, Jagannath; Chong, Albert K.

    2007-01-01

    Effective assessment of biodiversity in cities requires detailed vegetation maps. To date, most remote sensing of urban vegetation has focused on thematically coarse land cover products. Detailed habitat maps are created by manual interpretation of aerial photographs, but this is time consuming and costly at large scale. To address this issue, we tested the effectiveness of object-based classifications that use automated image segmentation to extract meaningful ground features from imagery. We applied these techniques to very high resolution multispectral Ikonos images to produce vegetation community maps in Dunedin City, New Zealand. An Ikonos image was orthorectified and a multi-scale segmentation algorithm used to produce a hierarchical network of image objects. The upper level included four coarse strata: industrial/commercial (commercial buildings), residential (houses and backyard private gardens), vegetation (vegetation patches larger than 0.8/1ha), and water. We focused on the vegetation stratum that was segmented at more detailed level to extract and classify fifteen classes of vegetation communities. The first classification yielded a moderate overall classification accuracy (64%, κ = 0.52), which led us to consider a simplified classification with ten vegetation classes. The overall classification accuracy from the simplified classification was 77% with a κ value close to the excellent range (κ = 0.74). These results compared favourably with similar studies in other environments. We conclude that this approach does not provide maps as detailed as those produced by manually interpreting aerial photographs, but it can still extract ecologically significant classes. It is an efficient way to generate accurate and detailed maps in significantly shorter time. The final map accuracy could be improved by integrating segmentation, automated and manual classification in the mapping process, especially when considering important vegetation classes with limited

  2. Optimization of a lossless object-based compression embedded on GAIA, a next-generation space telescope

    NASA Astrophysics Data System (ADS)

    Oseret, Emmanuel; Timsit, Claude

    2007-09-01

    Until now, space telescopes like Hubble did not require strong data compression. In fact, images were captured on demand and their proximity to Earth gave them sufficient downlink bandwidth. Yet the next-generation space telescopes like GAIA (ESA) and the James Webb Space Telescope (JWST, ESA & NASA) will observe even wider sky fields at even higher resolutions. Moreover, they will be dramatically farther from Earth than Hubble (1.5 million versus 600 kilometers). This will imply a poor downlink bandwidth, and thus require a fast, on-board strong data compression (better than 1:200 ratios). To achieve GAIA scientific objectives, a real-time «selectively lossless» compression is needed. With standard schemes, it is simply not possible today, even without time constraints (because of the entropy limit...). This paper explains why the GAIA Compression, which is based on Object-Based Compression (OBC), is efficient for stellar images. Since the baseline implementation did not meet all the ESA requirements (compression speed and ratio), we have also brought our contribution to optimize the GAIA Compression. It consists mainly in using (i) non-rectangular regions for large objects and (ii) (inter-object) differential predictive coding to improve the efficiency of the final lossless compression. We have tested our algorithms on the GAIA sky generator (GIBIS), which simulates flight-realistic conditions (CCD read-noise, cosmic rays...). Without any loss of signal, we have obtained promising ratios up to 1:270 for the worst-case sky.
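
    The differential predictive coding idea can be illustrated in a few lines: encoding each sample as its difference from a prediction concentrates values near zero, which the final lossless stage exploits. The sketch below uses a synthetic signal and a deliberately simpler predictor (the previous sample) than GAIA's inter-object scheme:

    ```python
    import numpy as np
    import zlib

    # Synthetic 16-bit detector counts: a slowly drifting random walk.
    rng = np.random.default_rng(2)
    signal = (1000 + np.cumsum(rng.integers(-3, 4, 4096))).astype(np.int16)

    # Prediction = previous sample; residuals are small and highly repetitive.
    residuals = np.diff(signal.astype(np.int64), prepend=0).astype(np.int16)

    raw = zlib.compress(signal.tobytes(), 9)
    pred = zlib.compress(residuals.tobytes(), 9)
    print(len(raw), "->", len(pred), "bytes")  # residuals compress better

    # Decoding is exact (lossless): cumulative sum inverts the differencing.
    assert np.array_equal(
        np.cumsum(residuals.astype(np.int64)).astype(np.int16), signal)
    ```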

  3. Mapping trees outside forests using high-resolution aerial imagery: a comparison of pixel- and object-based classification approaches.

    PubMed

    Meneguzzo, Dacia M; Liknes, Greg C; Nelson, Mark D

    2013-08-01

    Discrete trees and small groups of trees in nonforest settings are considered an essential resource around the world and are collectively referred to as trees outside forests (ToF). ToF provide important functions across the landscape, such as protecting soil and water resources, providing wildlife habitat, and improving farmstead energy efficiency and aesthetics. Despite the significance of ToF, forest and other natural resource inventory programs and geospatial land cover datasets that are available at a national scale do not include comprehensive information regarding ToF in the United States. Additional ground-based data collection and acquisition of specialized imagery to inventory these resources are expensive alternatives. As a potential solution, we identified two remote sensing-based approaches that use free high-resolution aerial imagery from the National Agriculture Imagery Program (NAIP) to map all tree cover in an agriculturally dominant landscape. We compared the results obtained using an unsupervised per-pixel classifier (independent component analysis [ICA]) and an object-based image analysis (OBIA) procedure in Steele County, Minnesota, USA. Three types of accuracy assessments were used to evaluate how each method performed in terms of: (1) producing a county-level estimate of total tree-covered area, (2) correctly locating tree cover on the ground, and (3) how tree cover patch metrics computed from the classified outputs compared to those delineated by a human photo interpreter. Both approaches were found to be viable for mapping tree cover over a broad spatial extent and could serve to supplement ground-based inventory data. The ICA approach produced an estimate of total tree cover more similar to the photo-interpreted result, but the output from the OBIA method was more realistic in terms of describing the actual observed spatial pattern of tree cover. PMID:23255169

  4. An object-based approach to delineate wetlands across landscapes of varied disturbance with high spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Mui, Amy; He, Yuhong; Weng, Qihao

    2015-11-01

    Mapping wetlands across both natural and human-altered landscapes is important for the management of these ecosystems. Though they are considered important landscape elements providing both ecological and socioeconomic benefits, accurate wetland inventories do not exist in many areas. In this study, a multi-scale geographic object-based image analysis (GEOBIA) approach was employed to segment three high spatial resolution images acquired over landscapes of varying heterogeneity due to human-disturbance to determine the robustness of this method to changing scene variability. Multispectral layers, a digital elevation layer, normalized-difference vegetation index (NDVI) layer, and a first-order texture layer were used to segment images across three segmentation scales with a focus on accurate delineation of wetland boundaries and wetland components. Each ancillary input layer contributed to improving segmentation at different scales. Wetlands were classified using a nearest neighbor approach across a relatively undisturbed park site and an agricultural site using GeoEye1 imagery, and an urban site using WorldView2 data. Successful wetland classification was achieved across all study sites with an accuracy above 80%, though results suggest that overall a higher degree of landscape heterogeneity may negatively affect both segmentation and classification. The agricultural site suffered from the greatest amount of over and under segmentation, and lowest map accuracy (kappa: 0.78) which was partially attributed to confusion among a greater proportion of mixed vegetated classes from both wetlands and uplands. Accuracy of individual wetland classes based on the Canadian Wetland Classification system varied between each site, with kappa values ranging from 0.64 for the swamp class and 0.89 for the marsh class. This research developed a unique approach to mapping wetlands of various degrees of disturbance using GEOBIA, which can be applied to study other wetlands of similar

  5. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments

    NASA Astrophysics Data System (ADS)

    Li, Manchun; Ma, Lei; Blaschke, Thomas; Cheng, Liang; Tiede, Dirk

    2016-07-01

    Geographic Object-Based Image Analysis (GEOBIA) is becoming more prevalent in remote sensing classification, especially for high-resolution imagery. Many supervised classification approaches are applied to objects rather than pixels, and several studies have been conducted to evaluate the performance of such supervised classification techniques in GEOBIA. However, these studies did not systematically investigate all relevant factors affecting the classification (segmentation scale, training set size, feature selection and mixed objects). In this study, statistical methods and visual inspection were used to compare these factors systematically in two agricultural case studies in China. The results indicate that Random Forest (RF) and Support Vector Machines (SVM) are highly suitable for GEOBIA classifications in agricultural areas and confirm the expected general tendency, namely that the overall accuracies decline with increasing segmentation scale. All other investigated methods except for RF and SVM are more prone to lower accuracy due to broken objects at fine scales. In contrast to some previous studies, the RF classifiers yielded the best results and the k-nearest neighbor classifier the worst, in most cases. Likewise, the RF and Decision Tree classifiers are the most robust with or without feature selection. The results of the training sample analyses indicated that RF and AdaBoost.M1 possess a superior generalization capability, except when dealing with small training sample sizes. Furthermore, the classification accuracies were directly related to the homogeneity/heterogeneity of the segmented objects for all classifiers. Finally, it was suggested that RF should be considered in most cases for agricultural mapping.
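
    The kind of comparison performed in such studies can be sketched with scikit-learn: train Random Forest and SVM classifiers on per-object features and compare cross-validated accuracies. The features below are random stand-ins for real segment statistics, so this reproduces the setup only in outline:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Stand-in object features: 300 segments with 6 attributes each
    # (e.g., mean band values, texture); labels are synthetic crop/non-crop.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    for name, clf in [
        ("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("SVM", SVC(kernel="rbf", C=1.0)),
    ]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.2f}")
    ```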

  6. Mapping landslide source and transport areas in VHR images with Object-Based Analysis and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Heleno, Sandra; Matias, Magda; Pina, Pedro

    2015-04-01

    Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, such as is the case of rainfall-induced shallow landslides. Applying automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches are rapidly increasing. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, Geoeye and Worldview, are increasingly being considered as the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events on VHR optical images. In this work we develop and test a methodology for semi-automatic landslide recognition and mapping of landslide source and transport areas. The method combines object-based image analysis and a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event in Madeira Island, together with a pre-event LiDAR DEM. Our approach has proved successful in the recognition of landslides on a 15 km2-wide study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance (true positive rate above 60% and false positive rate below 36% in both validation regions) in the internal mapping of landslide source and transport areas, in particular in the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.

  7. Multiscale object-based drought monitoring and comparison in rainfed and irrigated agriculture from Landsat 8 OLI imagery

    NASA Astrophysics Data System (ADS)

    Ozelkan, Emre; Chen, Gang; Ustundag, Burak Berk

    2016-02-01

    Drought is a rapidly rising environmental issue that can cause damage to nature and socio-economies that is difficult or impossible to repair. This is especially true for regions with an arid/semi-arid climate, including Turkey's most important agricultural district - Southeast Anatolia. In this area, we examined the uncertainties of applying Landsat 8 Operational Land Imager (OLI) NDVI data to estimate meteorological drought - the Standardized Precipitation Index (SPI) - measured from 31 in-situ agro-meteorological monitoring stations during spring and summer of 2013 and 2014. Our analysis was designed to address two important, yet under-examined questions: (i) how does the co-existence of rainfed and irrigated agriculture affect remote sensing drought monitoring in an arid/semi-arid region? (ii) What is the role of spatial scale in drought monitoring using a GEOBIA (geographic object-based image analysis) framework? Results show that spatial scale exerted a strong impact on drought monitoring, especially in the drier year, 2013, during which small scales were found to outperform large scales in general. In addition, considering irrigated and rainfed areas separately ensured a better performance in drought analysis. In contrast to the positive correlations between SPI and NDVI over the rainfed areas, negative correlations were determined over the irrigated agricultural areas. Finally, the time-lag effect was evident in the study, i.e., strong correlations between spring SPI and summer NDVI in both 2013 and 2014. This reflects the fact that spring watering is crucial for the growth and yield of the major crops (i.e., winter wheat, barley and lentil) cultivated in the region.
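
    The station-wise analysis boils down to correlating SPI with object-mean NDVI, pairing spring SPI with summer NDVI to capture the time-lag effect; a minimal sketch with hypothetical values for the rainfed case:

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical station values: spring SPI and summer object-mean NDVI.
    spring_spi = np.array([-1.2, -0.5, 0.1, 0.8, 1.5, -0.9, 0.3, 1.1])
    noise = np.random.default_rng(4).normal(0, 0.02, 8)
    summer_ndvi_rainfed = 0.45 + 0.08 * spring_spi + noise

    r, p = pearsonr(spring_spi, summer_ndvi_rainfed)
    print(f"rainfed: r = {r:.2f} (p = {p:.3f})")  # wetter spring, greener summer
    # Over irrigated fields the study finds the opposite (negative) sign,
    # since irrigation decouples the summer NDVI response from spring drought.
    ```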

  8. Remote sensing of seasonal variability of fractional vegetation cover and its object-based spatial pattern analysis over mountain areas

    NASA Astrophysics Data System (ADS)

    Yang, Guijun; Pu, Ruiliang; Zhang, Jixian; Zhao, Chunjiang; Feng, Haikuan; Wang, Jihua

    2013-03-01

    Fractional vegetation cover (FVC) is an important indicator of mountain ecosystem status. A study on the seasonal changes of FVC can be beneficial for regional eco-environmental security, which contributes to the assessment of mountain ecosystem recovery and supports mountain forest planning and landscape reconstruction around megacities, for example, Beijing, China. Remote sensing has been demonstrated to be one of the most powerful and feasible tools for the investigation of mountain vegetation. However, topographic and atmospheric effects can produce enormous errors in the quantitative retrieval of FVC data from satellite images of mountainous areas. Moreover, the most commonly used analysis approach for assessing FVC seasonal fluctuations is based on per-pixel analysis regardless of the spatial context, which results in pixel-based FVC values that are less suitable for landscape and ecosystem applications. To solve these problems, we proposed a new method that incorporates the use of a revised physically based (RPB) model to correct both atmospheric and terrain-caused illumination effects on Landsat images, an improved vegetation index (VI)-based technique for estimating the FVC, and an adaptive mean shift approach for object-based FVC segmentation. An array of metrics for segmented FVC analyses, including a variety of area metrics, patch metrics, shape metrics and diversity metrics, was generated. On the basis of the individual segmented FVC values and landscape metrics from multiple images of different dates, remote sensing of the seasonal variability of FVC was conducted over the mountainous area of Beijing, China. The experimental results indicate that (a) the mean value of the RPB-NDVI in all seasons was increased by approximately 10% compared with that of the atmospheric correction-NDVI; (b) a strong consistency was demonstrated between ground-based FVC observations and FVC estimated through remote sensing technology (R2 = 0.8527, RMSE = 0.0851); and (c
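
    VI-based FVC estimation builds on the dimidiate pixel model, FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil); the sketch below applies it to a hypothetical NDVI grid and assumes the RPB correction has already been applied to the imagery upstream:

    ```python
    import numpy as np

    # Hypothetical (topographically corrected) NDVI values for a small tile.
    ndvi = np.array([[0.12, 0.35, 0.62],
                     [0.20, 0.55, 0.80],
                     [0.05, 0.48, 0.71]])

    # Bare-soil and full-vegetation NDVI endpoints (scene-dependent; made up).
    ndvi_soil, ndvi_veg = 0.05, 0.85

    # Dimidiate pixel model, clipped to the physically meaningful range.
    fvc = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)
    print(fvc.round(2))
    ```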

  9. The Legacy of Space Shuttle Flight Software

    NASA Technical Reports Server (NTRS)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  10. Methodology for automating software systems. Task 1 of the foundations for automating software systems

    NASA Technical Reports Server (NTRS)

    Moseley, Warren

    1989-01-01

    The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool is based on the Apple Macintosh system, employing available software including Focal Point II, Hypercard, XRefText, and Macproject. These programs are functional in themselves, but through advanced linking are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI). In the first prototype of the PMCT, however, the sections of AI are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.

  11. Design software for reuse

    NASA Technical Reports Server (NTRS)

    Tracz, Will

    1990-01-01

    Viewgraphs are presented on the designing of software for reuse. Topics include terminology, software reuse maxims, the science of programming, an interface design example, a modularization example, and reuse and implementation guidelines.

  12. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  13. Guidelines for software inspections

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Quality control inspections are software problem-finding procedures that provide defect removal as well as improvements in software functionality, maintenance, quality, and development and testing methodology. The many side benefits include education, documentation, training, and scheduling.

  14. Commercial Data Mining Software

    NASA Astrophysics Data System (ADS)

    Zhang, Qingyu; Segall, Richard S.

    This chapter discusses selected commercial software for data mining, supercomputing data mining, text mining, and web mining. The selected software packages are compared by their features and also applied to available data sets. The software for data mining are SAS Enterprise Miner, Megaputer PolyAnalyst 5.0, PASW (formerly SPSS Clementine), IBM Intelligent Miner, and BioDiscovery GeneSight. The software for supercomputing are Avizo by Visualization Science Group and JMP Genomics from SAS Institute. The software for text mining are SAS Text Miner and Megaputer PolyAnalyst 5.0. The software for web mining are Megaputer PolyAnalyst and SPSS Clementine. Background on related literature and software is presented. Screen shots of each of the selected software packages are presented, as are conclusions and future directions.

  15. Software Engineering Improvement Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  16. Artificial intelligence and expert systems in-flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  17. The Software Architecture of the Upgraded ESA DRAMA Software Suite

    NASA Astrophysics Data System (ADS)

    Kebschull, Christopher; Flegel, Sven; Gelhaus, Johannes; Mockel, Marek; Braun, Vitali; Radtke, Jonas; Wiedemann, Carsten; Vorsmann, Peter; Sanchez-Ortiz, Noelia; Krag, Holger

    2013-08-01

    In the beginnings of man's space flight activities there was the belief that space is so big that everybody could use it without any repercussions. However, during the last six decades the increasing use of Earth's orbits has led to a rapid growth in the space debris environment, which has a major influence on current and future space missions. For this reason ESA issued the "Requirements on Space Debris Mitigation for ESA Projects" [1] in 2008, which apply to all ESA missions henceforth. The DRAMA (Debris Risk Assessment and Mitigation Analysis) software suite had been developed to support the planning of space missions to comply with these requirements. During the last year the DRAMA software suite has been upgraded under ESA contract by TUBS and DEIMOS to include additional tools and increase the performance of existing ones. This paper describes the overall software architecture of the ESA DRAMA software suite. Specifically, the new graphical user interface, which manages the five main tools ARES (Assessment of Risk Event Statistics), MIDAS (MASTER-based Impact Flux and Damage Assessment Software), OSCAR (Orbital Spacecraft Active Removal), CROC (Cross Section of Complex Bodies) and SARA (Re-entry Survival and Risk Analysis), is discussed. The advancements are highlighted, as well as the challenges that arise from the integration of the five tool interfaces. A framework had been developed at the ILR and was used for MASTER-2009 and PROOF-2009. The Java-based GUI framework enables cross-platform deployment, and its underlying model-view-presenter (MVP) software pattern meets the strict design requirements necessary to ensure a robust and reliable method of operation in an environment where the GUI is separated from the processing back-end. While the GUI framework evolved with each project, allowing an increasing degree of integration of services like validators for input fields, it has also increased in complexity. The paper will conclude with an outlook on
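
    DRAMA's GUI framework is Java-based; purely to illustrate the model-view-presenter separation the abstract refers to, here is a minimal Python sketch (all class names and the toy computation are invented, not part of DRAMA):

        class Model:
            """Processing back-end: pure computation, no GUI knowledge."""
            def assess_risk(self, altitude_km):
                return "high" if altitude_km < 2000 else "low"

        class View:
            """Presentation only; a real GUI toolkit would replace print()."""
            def __init__(self):
                self.presenter = None
            def user_entered(self, text):
                self.presenter.on_input(text)      # forward raw input
            def show(self, message):
                print(message)

        class Presenter:
            """Mediates between View and Model, validating input."""
            def __init__(self, model, view):
                self.model, self.view = model, view
                view.presenter = self
            def on_input(self, text):
                try:
                    altitude = float(text)
                except ValueError:
                    self.view.show("invalid altitude")
                    return
                self.view.show(f"debris risk: {self.model.assess_risk(altitude)}")

        view = View()
        Presenter(Model(), view)
        view.user_entered("800")    # -> debris risk: high

    Because the presenter holds all the glue logic, the view stays a thin shell that can be swapped per platform, which is the property the abstract credits for separating the GUI from the processing back-end.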

  18. Evaluating software testing strategies

    NASA Technical Reports Server (NTRS)

    Selby, R. W., Jr.; Basili, V. R.; Page, J.; Mcgarry, F. E.

    1984-01-01

    The strategies of code reading, functional testing, and structural testing are compared in three aspects of software testing: fault detection effectiveness, fault detection cost, and classes of faults detected. The major results are the following: (1) Code readers detected more faults than did those using the other techniques, while functional testers detected more faults than did structural testers; (2) Code readers had a higher fault detection rate than did those using the other methods, while there was no difference between functional testers and structural testers; (3) Subjects testing the abstract data type detected the most faults and had the highest fault detection rate, while individuals testing the database maintainer found the fewest faults and spent the most effort testing; (4) Subjects of intermediate and junior expertise were not different in number or percentage of faults found, fault detection rate, or fault detection effort; (5) Subjects of advanced expertise found a greater number of faults than did the others, found a greater percentage of faults than did just those of junior expertise, and were not different from the others in either fault detection rate or effort; and (6) Code readers and functional testers both detected more omission faults and more control faults than did structural testers, while code readers detected more interface faults than did those using the other methods.

  19. Software verification and testing

    NASA Technical Reports Server (NTRS)

    1985-01-01

    General procedures for software verification and validation are provided as a guide for managers, programmers, and analysts involved in software development. The verification and validation procedures described are based primarily on testing techniques. Testing refers to the execution of all or part of a software system for the purpose of detecting errors. Planning, execution, and analysis of tests are outlined in this document. Code reading and static analysis techniques for software verification are also described.

  20. Agent Building Software

    NASA Technical Reports Server (NTRS)

    2000-01-01

    AgentBuilder is a software component developed under an SBIR contract between Reticular Systems, Inc., and Goddard Space Flight Center. AgentBuilder allows software developers without experience in intelligent agent technologies to easily build software applications using intelligent agents. Agents are components of software that will perform tasks automatically, with no intervention or command from a user. AgentBuilder reduces the time and cost of developing agent systems and provides a simple mechanism for implementing high-performance agent systems.

  1. Standard Annunciator Software overview

    SciTech Connect

    Anspach, D.A.; Fox, E.T.; Kissock, P.S.

    1990-01-01

    The Standard Annunciator Software is responsible for maintaining a current display of system status conditions. The software interfaces with other systems -- IACS, CCTV, UPS, and portable PC -- to determine their status and then displays this information at the operator's console. This manual describes the software organization, operation, and generation mechanisms for development and target environments.

  2. Java for flight software

    NASA Technical Reports Server (NTRS)

    Benowitz, E.; Niessner, A.

    2003-01-01

    This work involves developing representative mission-critical spacecraft software using the Real-Time Specification for Java (RTSJ). The work currently leverages the design of actual flight software flown on NASA's Deep Space 1 (DS1) mission, which launched in 1998.

  3. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  4. CALIPSO Data Read Software

    Atmospheric Science Data Center

    2015-11-02

      CALIPSO Data Read Software: callable routines in Interactive Data Language (IDL) ... CALIPSO_READERS_3.5v1 Software (IDL): tar (611.5 KB), zip (261 KB); CALIPSO_READERS_3.4v1 Software (IDL): tar (612.9 KB), zip (261.4 KB).

  5. SPEAR 3 Commissioning Software

    SciTech Connect

    Corbett, W.J.; Portmann, G.J.; Safranek, J.A.; Terebilo, A. (SLAC, SSRL)

    2005-05-09

    The short SPEAR 3 startup time required precommissioned software for machine setup, beam measurements and data analysis. To accomplish this goal, we used Matlab with the Accelerator Toolbox (AT), the Channel Access Toolbox (MCA) and Middle Layer software to integrate code and streamline production. This paper outlines the software architecture, describes the Middle Layer component and provides examples from SPEAR 3 commissioning.

  6. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  7. Software Shopper. Revised.

    ERIC Educational Resources Information Center

    Davis, Sandra Hart, Comp.

    This annotated index describes and illustrates a wide selection of public domain instructional software that may be useful in the education of deaf students and provides educators with a way to order the listed programs. The software programs are designed for use on Apple computers and their compatibles. The software descriptions are presented in…

  8. 3D-Modeling of deformed halite hopper crystals: Object based image analysis and support vector machine, a first evaluation

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-05-01

    Halite hopper crystals are thought to develop by displacive growth in unconsolidated mud (Gornitz & Schreiber, 1984). The Alpine Haselgebirge, but also e.g. the salt deposits of the Rhine graben (mined at the beginning of the 20th century), comprise hopper crystals with shapes of cuboids, parallelepipeds and rhombohedrons (Görgey, 1912). Evidently, they deformed under oriented stress, which previous work attempted to reconstruct with respect to the sedimentary layering (Leitner et al., 2013). In the present work, deformed halite hopper crystals embedded in mudrock were reconstructed automatically. Object-based image analysis (OBIA) has previously been used successfully for 2D images in remote sensing; the present study represents the first time the method was used for the reconstruction of three-dimensional geological objects. First, a reference (gold standard) was created manually by redrawing the contours of the halite crystals on each HRXCT scanning slice. Then, for OBIA, the computer program eCognition was used and a rule set was developed for the automated reconstruction. The strength of OBIA was to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. In a second step, all objects unsuitable for a structural deformation analysis (clusters, polyhalite-coated crystals and spherical halites) were dismissed using a support vector machine (SVM). The SVM simultaneously reduced the number of halites drastically: of 184 OBIA objects, 67 well-shaped crystals remained, which comes close to the 52 manually pre-selected objects. To assess the accuracy of the automated reconstruction, the results before and after the SVM were compared to the reference, i.e. the gold standard, and state-of-the-art per-scene statistics were extended to per-object statistics. Görgey R (1912) Zur Kenntnis der Kalisalzlager von Wittelsheim im Ober-Elsaß. Tschermaks Mineral Petrogr Mitt 31:339-468. Gornitz VM, Schreiber BC (1981) Displacive halite hoppers from the dead sea
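
    As an illustration of the second step, the following Python sketch shows how an SVM might filter segmented objects by shape features (the feature set, placeholder labels, and scikit-learn pipeline are assumptions for illustration; the study's objects came from eCognition):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Invented per-object features: [volume, sphericity, surface roughness]
        X = rng.random((184, 3))
        # 1 = well-shaped single hopper crystal, 0 = cluster/coated/spherical
        y = (X[:, 1] < 0.5).astype(int)          # placeholder training labels

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, y)
        keep = clf.predict(X).astype(bool)
        print(f"{keep.sum()} of {len(X)} objects retained for strain analysis")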

  9. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    PubMed Central

    Rutzinger, Martin; Höfle, Bernhard; Hollaus, Markus; Pfeifer, Norbert

    2008-01-01

    Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs, but excludes grassland and herbage. In the applied procedure, FWF ALS echoes are segmented by a seeded region growing procedure; all echoes, sorted in descending order by their surface roughness, are used as seed points, and segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification, a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data of three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms the proposed 3D point classification works on the original measurements
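
    A minimal Python sketch of seeded region growing of the kind described, with seeds visited in descending roughness order and growth gated by echo width homogeneity (the radius and tolerance values are invented; this is not the authors' implementation):

        import numpy as np
        from scipy.spatial import cKDTree

        def grow_segments(xyz, echo_width, roughness, radius=1.0, tol=0.5):
            """Seeded region growing over an (n, 3) point cloud.

            A neighbor joins the current segment if its echo width differs
            from the running segment mean by less than `tol`.
            """
            xyz = np.asarray(xyz)
            tree = cKDTree(xyz)
            labels = np.full(len(xyz), -1)
            for seed in np.argsort(-roughness):        # roughest points first
                if labels[seed] != -1:
                    continue
                label = labels.max() + 1
                labels[seed] = label
                stack, total, count = [seed], echo_width[seed], 1
                while stack:
                    i = stack.pop()
                    for j in tree.query_ball_point(xyz[i], radius):
                        if labels[j] == -1 and abs(echo_width[j] - total / count) < tol:
                            labels[j] = label
                            total, count = total + echo_width[j], count + 1
                            stack.append(j)
            return labels

        pts = np.random.default_rng(2).random((200, 3)) * 10   # toy cloud
        labels = grow_segments(pts, echo_width=pts[:, 2], roughness=pts[:, 0])
        print(f"{labels.max() + 1} segments found")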

  10. Multi-criteria objective based climate change impact assessment for multi-purpose multi-reservoir systems

    NASA Astrophysics Data System (ADS)

    Müller, Ruben; Schütze, Niels

    2014-05-01

    Water resources systems with reservoirs are expected to be sensitive to climate change. Assessment studies that analyze the impact of climate change on the performance of reservoirs can be divided into two groups: (1) studies that simulate the operation under projected inflows with the current set of operational rules; because the rules are not adapted, the future performance of these reservoirs can be underestimated and the impact overestimated; and (2) studies that optimize the operational rules to best adapt the system to the projected conditions before the assessment of the impact. The latter allows future performance to be estimated more realistically, and adaptation strategies based on new operation rules are available if required. Multi-purpose reservoirs serve various, often conflicting functions. If all functions cannot be served simultaneously at a maximum level, an effective compromise between the multiple objectives of the reservoir operation has to be provided. Yet under climate change the historically preferred compromise may no longer be the most suitable compromise in the future. Therefore a multi-objective climate change impact assessment approach for multi-purpose multi-reservoir systems is proposed in this study. Projected inflows are provided in a first step using a physically based rainfall-runoff model. In a second step, a time series model is applied to generate long-term inflow time series. Finally, the long-term inflow series are used as driving variables for a simulation-based multi-objective optimization of the reservoir system in order to derive optimal operation rules. As a result, the adapted Pareto-optimal set of diverse best-compromise solutions can be presented to the decision maker in order to assist in assessing climate change adaptation measures with respect to the future performance of the multi-purpose reservoir system. The approach is tested on a multi-purpose multi-reservoir system in a mountainous catchment in Germany. A
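
    The core output of such an assessment is the Pareto-optimal set of compromise operation rules. A minimal Python sketch of non-dominated filtering (the candidate objectives are invented; a real study would couple this to the simulation-based optimizer):

        import numpy as np

        def pareto_front(objectives):
            """Return indices of non-dominated rows (all objectives minimized).

            A row is dominated if another row is no worse in every objective
            and strictly better in at least one.
            """
            obj = np.asarray(objectives)
            keep = []
            for i, row in enumerate(obj):
                dominated = np.any(np.all(obj <= row, axis=1) &
                                   np.any(obj < row, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        # Invented per-candidate objectives: [flood risk, supply deficit]
        candidates = [[0.30, 0.20], [0.25, 0.40], [0.20, 0.25], [0.35, 0.50]]
        print(pareto_front(candidates))   # -> [0, 2] are the best compromises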

  11. CHP Installed Capacity Optimizer Software

    Energy Science and Technology Software Center (ESTSC)

    2004-11-30

    The CHP Installed Capacity Optimizer is a Microsoft Excel spreadsheet application that determines the most economic amount of capacity of distributed generation and thermal utilization equipment (e.g., absorption chillers) to install for any user-defined set of load and cost data. Installing the optimum amount of capacity is critical to the life-cycle economic viability of a distributed generation/cooling heat and power (CHP) application. Using advanced optimization algorithms, the software accesses the loads, utility tariffs, equipment costs, etc., and provides to the user the most economic amount of system capacity to install.
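
    A toy Python sketch of the underlying idea, choosing the capacity that minimizes a life-cycle cost over an hourly load profile (all prices, the annualization factor, and the load data are invented; the actual tool is an Excel application with far more detailed tariff handling):

        def life_cycle_cost(capacity_kw, loads_kw, capital_per_kw=900.0,
                            run_cost_per_kwh=0.05, grid_price_per_kwh=0.11):
            """Annualized cost of serving hourly loads with `capacity_kw` of CHP.

            On-site generation covers load up to capacity; the grid covers
            the remainder.
            """
            cost = capital_per_kw * capacity_kw * 0.10     # crude annualization
            for load in loads_kw:                          # one value per hour
                onsite = min(load, capacity_kw)
                cost += onsite * run_cost_per_kwh
                cost += (load - onsite) * grid_price_per_kwh
            return cost

        loads = [400, 650, 900, 700] * 2190                # toy 8760-hour profile
        best = min(range(0, 1001, 50), key=lambda kw: life_cycle_cost(kw, loads))
        print(f"most economic capacity: {best} kW")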

  12. Object-based Land Cover Classification and Change Analysis in the Baltimore Metropolitan Area Using Multitemporal High Resolution Remote Sensing Data

    PubMed Central

    Zhou, Weiqi; Troy, Austin; Grove, Morgan

    2008-01-01

    Accurate and timely information about land cover pattern and change in urban areas is crucial for urban land management decision-making, ecosystem monitoring and urban planning. This paper presents the methods and results of an object-based classification and post-classification change detection of multitemporal high-spatial-resolution Emerge aerial imagery in the Gwynns Falls watershed from 1999 to 2004. The Gwynns Falls watershed includes portions of Baltimore City and Baltimore County, Maryland, USA. An object-based approach was first applied to implement the land cover classification separately for each of the two years. The overall accuracies of the classification maps of 1999 and 2004 were 92.3% and 93.7%, respectively. Following the classification, we conducted a comparison of two different land cover change detection methods: traditional (i.e., pixel-based) post-classification comparison and object-based post-classification comparison. The results from our analyses indicated that an object-based approach provides a better means for change detection than a pixel-based method because it provides an effective way to incorporate spatial information and expert knowledge into the change detection process. The overall accuracy of the change map produced by the object-based method was 90.0%, with a Kappa statistic of 0.854, whereas the overall accuracy and Kappa statistic of that by the pixel-based method were 81.3% and 0.712, respectively.
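
    The accuracy figures above pair overall accuracy with the Kappa statistic; a short Python sketch of both computations from a confusion matrix (the example matrix is invented):

        import numpy as np

        def overall_accuracy_and_kappa(confusion):
            """Overall accuracy and Cohen's Kappa from a confusion matrix
            (rows = reference classes, columns = mapped classes)."""
            m = np.asarray(confusion, dtype=float)
            n = m.sum()
            p_o = np.trace(m) / n                                 # observed agreement
            p_e = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2    # chance agreement
            return p_o, (p_o - p_e) / (1 - p_e)

        # Invented 3-class example (e.g., tree canopy / grass / impervious)
        conf = [[50, 3, 2], [4, 45, 6], [1, 5, 60]]
        acc, kappa = overall_accuracy_and_kappa(conf)
        print(f"overall accuracy={acc:.3f}, kappa={kappa:.3f}")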

  13. Object-based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    NASA Astrophysics Data System (ADS)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band WorldView-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) classifiers and object-based classification with the random forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs. 83.59%; pixel-based MD: 69.37% vs. 67.2%; pixel-based MLC: 64.03% vs. 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
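
    A hedged Python sketch of the band-contribution comparison with a random forest (all features and labels are synthetic placeholders; the study's actual training samples came from visual interpretation of WV2 imagery):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 600
        X8 = rng.random((n, 8))          # 8 invented WV2 band means per object
        # Placeholder labels driven mostly by the four new bands, so their
        # contribution shows up in the accuracy comparison below.
        signal = 3 * X8[:, 4:].sum(axis=1) + X8[:, :4].sum(axis=1)
        y = np.digitize(signal, np.quantile(signal, np.linspace(0, 1, 8)[1:-1]))

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        acc_8band = cross_val_score(rf, X8, y, cv=5).mean()
        acc_4band = cross_val_score(rf, X8[:, :4], y, cv=5).mean()
        print(f"8 bands: {acc_8band:.3f}  vs  4 legacy bands: {acc_4band:.3f}")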

  14. Tracker 300 Software

    SciTech Connect

    Wysor, R. Wes

    2006-01-12

    The Tracker300 software is downloaded to an off-the-shelf product called the RCM3400/RCM3410, made by Rabbit Semiconductor. The software is a closed-loop controller that computes the sun's position and provides stability compensation. Using the RCM3400/RCM3410 module, the software stores and retrieves parameters from the onboard flash. The software also allows for communication with a host: it allows parameters to be downloaded or uploaded, shows the status of the controller, provides real-time feedback, and sends command acknowledgements. The software captures the GPS response and ensures the internal clock is set correctly.
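
    For illustration, a simplified sun-position computation of the kind such a controller might perform (a textbook declination/hour-angle approximation in Python; the Tracker300's actual algorithm, stability compensation, and clock corrections are not described in this abstract):

        import math

        def solar_elevation_deg(day_of_year, solar_hour, latitude_deg):
            """Rough solar elevation from a standard approximation.

            Deliberately simplified: no equation-of-time or refraction
            corrections, which a real tracker would apply.
            """
            decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
            hour_angle = 15.0 * (solar_hour - 12.0)       # degrees from solar noon
            lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
            sin_el = (math.sin(lat) * math.sin(dec) +
                      math.cos(lat) * math.cos(dec) * math.cos(ha))
            return math.degrees(math.asin(sin_el))

        # Near the summer solstice at solar noon, 35 deg N: about 78 degrees
        print(f"{solar_elevation_deg(172, 12.0, 35.0):.1f} deg")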

  15. Object-Based Image Analysis of WORLDVIEW-2 Satellite Data for the Classification of Mangrove Areas in the City of SÃO LUÍS, MARANHÃO State, Brazil

    NASA Astrophysics Data System (ADS)

    Kux, H. J. H.; Souza, U. D. V.

    2012-07-01

    Taking into account the importance of mangrove environments for the biodiversity of coastal areas, the objective of this paper is to classify the different types of irregular human occupation in the areas of mangrove vegetation in São Luís, capital of Maranhão State, Brazil, following the OBIA (Object-Based Image Analysis) approach with WorldView-2 satellite data and using InterIMAGE, a free image analysis software package. A methodology for the study of the area covered by mangroves at the northern portion of the city was proposed to identify the main targets of this area, such as: marsh areas (known locally as Apicum), mangrove forests, tidal channels, blockhouses (irregular constructions), embankments, paved streets and different condominiums. Initially, a databank including information on the main types of occupation and environments was established for the area under study. An image fusion (multispectral bands with the panchromatic band) was performed to improve the information content of the WorldView-2 data. Next, the dataset was ortho-rectified in order to compare it with cartographic data from the municipality, using Ground Control Points (GCPs) collected during a field survey. Using the data mining software GEODMA, a series of attributes which characterize the targets of interest was established. Afterwards the classes were structured, a knowledge model was created and the classification performed. The OBIA approach eased the mapping of such sensitive areas, showing the irregular occupations and embankments of mangrove forests, which reduce their area and damage the marine biodiversity.

  16. HAZARD ANALYSIS SOFTWARE

    SciTech Connect

    Sommer, S; Tinh Tran, T

    2008-04-08

    Washington Safety Management Solutions, LLC developed web-based software to improve the efficiency and consistency of hazard identification and analysis, control selection and classification, and to standardize analysis reporting at Savannah River Site. In the new nuclear age, information technology provides methods to improve the efficiency of the documented safety analysis development process, which includes hazard analysis activities. This software provides a web interface that interacts with a relational database to support analysis, record data, and ensure reporting consistency. A team of subject matter experts participated in a series of meetings to review the associated processes and procedures for requirements and standard practices. Through these meetings, a set of software requirements was developed and compiled into a requirements traceability matrix from which software could be developed. The software was tested to ensure compliance with the requirements. Training was provided to the hazard analysis leads. Hazard analysis teams using the software have verified its operability. The software has been classified as NQA-1, Level D, as it supports the analysis team but does not perform the analysis. The software can be transported to other sites with alternate risk schemes. The software is being used to support the development of 14 hazard analyses. User responses have been positive, with a number of suggestions for improvement which are being incorporated as time permits. The software has enforced a uniform implementation of the site procedures and has significantly improved the efficiency and standardization of the hazard analysis process.

  17. Healthcare Software Assurance

    PubMed Central

    Cooper, Jason G.; Pauley, Keith A.

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA’s software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance discussed, and the necessity for enhancements to the current processes shall be highlighted. PMID:17238324

  18. Object-Based Image Classification of Floating Ice Used as Habitat for Harbor Seals in a Tidewater Glacier Fjord in Alaska

    NASA Astrophysics Data System (ADS)

    McNabb, R. W.; Womble, J. N.; Prakash, A.; Gens, R.; Ver Hoef, J.

    2014-12-01

    Tidewater glaciers play an important role in many landscape and ecosystem processes in fjords, terminating in the sea and calving icebergs and discharging meltwater directly into the ocean. Tidewater glaciers provide floating ice for use as habitat for harbor seals (Phoca vitulina richardii) for resting, pupping, nursing, molting, and avoiding predators. Tidewater glaciers are found in high concentrations in Southeast and Southcentral Alaska; currently, many of these glaciers are retreating or have stabilized in a retracted state, raising questions about the future availability of ice in these fjords as habitat for seals. Our primary objective is to investigate the relationship between harbor seal distribution and ice availability at an advancing tidewater glacier in Johns Hopkins Inlet, Glacier Bay National Park, Alaska. To this end, we use a combination of visible and infrared aerial photographs, object-based image analysis (OBIA), and statistical modeling techniques. We have developed a workflow to automate the processing of the imagery and the classification of the fjordscape (e.g., individual icebergs, brash ice, and open water), providing quantitative information on ice coverage as well as properties not typically found in traditional pixel-based classification techniques, such as block angularity and seal density across the fjord. Reflectance variation in the red channel of the optical images has proven to be the most important first-level criterion to separate open water from floating ice. This first-level criterion works well in areas without dense brash ice, but tends to misclassify dense brash ice as single icebergs. Isolating these large misclassified regions and applying a higher reflectance threshold as a second-level criterion helps to isolate individual ice blocks surrounded by dense brash ice. We present classification results from surveys taken during June and August, 2007-2013, as well as preliminary results from statistical modeling of the
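
    A minimal Python sketch of the two-level reflectance thresholding described (the threshold values are invented, and the real workflow operates on segmented image objects rather than raw pixels):

        import numpy as np

        def classify_fjord(red, t_water=0.25, t_iceberg=0.60):
            """Two-level thresholding of a red-channel image (values in [0, 1]).

            Level 1 separates open water from ice; level 2 reapplies a
            stricter threshold inside ice regions so that bright single
            icebergs stand out from duller dense brash ice.
            """
            classes = np.zeros(red.shape, dtype=np.uint8)   # 0 = open water
            ice = red >= t_water
            classes[ice] = 1                                # 1 = brash ice
            classes[ice & (red >= t_iceberg)] = 2           # 2 = iceberg
            return classes

        img = np.array([[0.1, 0.3, 0.7], [0.2, 0.5, 0.9]])  # toy red channel
        print(classify_fjord(img))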

  1. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mbaya, Timmy; Mengshoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft and UAVs, depend more and more on highly complex safety-critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We focus on the approach to developing reliable and robust health models for the combined software and sensor systems.
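
    As a toy illustration of the Bayesian reasoning involved, a two-node network computing the posterior health of a monitored component from one sensor reading (all probabilities are invented; the paper's models are far larger and richer):

        # Minimal discrete BN: Health -> SensorReading.
        p_healthy = 0.95                           # prior P(component healthy)
        p_reading_ok = {True: 0.98, False: 0.20}   # P(reading ok | health state)

        def posterior_healthy(reading_ok):
            """P(healthy | observed reading) via Bayes' rule."""
            like = p_reading_ok if reading_ok else \
                   {h: 1.0 - p for h, p in p_reading_ok.items()}
            num = like[True] * p_healthy
            den = num + like[False] * (1.0 - p_healthy)
            return num / den

        print(f"P(healthy | bad reading) = {posterior_healthy(False):.3f}")  # ~0.322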

  2. Simple solution to the medical instrumentation software problem

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Suzanne B.; Leif, Stephanie H.; Bingue, E.

    1995-04-01

    Medical devices now include a substantial software component, which is both difficult and expensive to produce and maintain. Medical software must be developed according to 'Good Manufacturing Practices' (GMP). Good Manufacturing Practices, as specified by the FDA and ISO, require the definition of and compliance with a software process which ensures quality products by specifying a detailed method of software construction. The software process should be based on accepted standards. US Department of Defense software standards and technology can both facilitate the development and improve the quality of medical systems. We describe the advantages of employing Mil-Std-498, Software Development and Documentation, and the Ada programming language. Ada provides the very broad range of functionalities, from embedded real-time to management information systems, required by many medical devices. It also includes advanced facilities for object-oriented programming and software engineering.

  3. Selection of features within and without objects: effects of gestalt appearance and object-based instruction on behavior and event-related brain potentials.

    PubMed

    Verleger, Rolf; Groen, Margriet; Heide, Wolfgang; Sobieralska, Kinga; Jaśkowski, Piotr

    2008-05-01

    We studied how physical and instructed embedding of features in gestalts affects perceptual selection. Four ovals on the horizontal midline were either unconnected or pairwise connected by circles, forming ears of left and right heads (gestalts). Relevant to responding was the position of one colored oval, either within its pair or relative to fixation ("object-based" or "fixation-based" instruction). Responses were faster under fixation- than object-based instruction, less so with gestalts. Previously reported increases of N1 when evoked by features within objects were replicated for fixation-based instruction only. There was no effect of instruction on N2pc. However P1 increased under the adequate instruction, object-based for gestalts, fixation-based for unconnected items, which presumably indicated how foci of attention were set by expecting specific stimuli under instructions that specified how to bind these stimuli to objects. PMID:18266806

  4. Developing Software For Monitoring And Diagnosis

    NASA Technical Reports Server (NTRS)

    Edwards, S. J.; Caglayan, A. K.

    1993-01-01

    Expert-system software shell produces executable code. Report discusses beginning phase of research directed toward development of artificial intelligence for real-time monitoring of, and diagnosis of faults in, complicated systems of equipment. Motivated by need for onboard monitoring and diagnosis of electronic sensing and controlling systems of advanced aircraft. Also applicable to such equipment systems as refineries, factories, and powerplants.

  5. Software For Generation Of ASTER Data Products

    NASA Technical Reports Server (NTRS)

    Murray, Alexander T.; Eng, Bjorn T.; Voge, Charles C.

    1996-01-01

    Software functioning in EOS-DIS computing environment developed to generate data products from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Processes high-resolution image data from visible and near infrared (VNIR), short-wavelength infrared (SWIR), and thermal infrared (TIR) radiometric readings to generate data on radiative and thermal properties of atmosphere and surface of Earth.

  6. Object-based image analysis for scaling properties of rangeland ecosystems: Linking field and image data for management decision making

    NASA Astrophysics Data System (ADS)

    Karl, Jason William

    Management of semi-arid shrub-steppe ecosystems (i.e., rangelands) requires accurate information over large landscapes, and remote sensing is an attractive option for collecting such data. To successfully use remotely-sensed data in landscape-level rangeland management, questions as to the relevance of image data to landscape patterns and optimal scales of analysis must be addressed. Object-based image analysis (OBIA), which segments image pixels into homogeneous regions, or objects, has been suggested as a way to increase accuracy of remotely-sensed products, but little research has gone into how to determine sizes of image objects with regard to scaling of ecosystem properties. The purpose of my dissertation was to determine if OBIA could be used to generate observational scales to match ecological scales in rangelands and to explore the potential for OBIA to generate accurate and repeatable remote-sensing products for managers. The work presented here was conducted in southern Idaho's Snake River Plain region. By comparing OBIA segmentation of satellite imagery into successively coarser objects to pixel-based aggregation methods, I found that canonical correlations between field-collected and image data were similar at the finest scales, but higher for image segmentation as scale increased. I also detected scaling thresholds with image segmentation that were confirmed via semi-variograms of field data. This approach proved useful for evaluating the overall utility of an image to address an objective, and identifying scaling limits for analysis. I next used observations of percent bare-ground cover from 346 field sites to consider how hierarchies of image objects created through OBIA could be used to discover appropriate scales for analysis given a specific objective. Using a regression-based approach, I found that segmentation levels whose predictions of bare-ground cover had spatial dependence that most closely matched the spatial dependence of the field
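
    A short Python sketch of the empirical semivariogram used here to confirm scaling thresholds (the estimator form is standard; the bin edges and data handling below are assumptions):

        import numpy as np

        def empirical_semivariogram(coords, values, lags):
            """Empirical semivariance gamma(h) for 2D sample points.

            gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs whose separation
            falls in the lag bin; a flattening of gamma with distance marks
            the range, one indicator of an ecological scaling threshold.
            """
            coords, values = np.asarray(coords), np.asarray(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            gammas = []
            for lo, hi in zip(lags[:-1], lags[1:]):
                mask = (d > lo) & (d <= hi)
                gammas.append(sq[mask].mean() if mask.any() else np.nan)
            return gammas

        rng = np.random.default_rng(3)
        pts = rng.random((100, 2)) * 100                       # toy plot locations
        vals = pts[:, 0] * 0.01 + rng.normal(0, 0.1, 100)      # trend plus noise
        print(empirical_semivariogram(pts, vals, lags=[0, 10, 20, 40, 80]))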

  7. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  8. The environmental control and life support system advanced automation project

    NASA Technical Reports Server (NTRS)

    Dewberry, Brandon S.

    1991-01-01

    The objectives of the ECLSS Advanced Automation project include reducing the risk associated with the integration of new, beneficial software techniques. Demonstrations of this software to baseline engineering and test personnel will show the benefits of these techniques. The advanced software will be integrated into ground testing and ground support facilities, familiarizing key personnel with its usage.

  9. Scientific Software for the Macintosh.

    ERIC Educational Resources Information Center

    Watkins, Douglas; Gabaldon, Diana J.

    1985-01-01

    Lists and describes representative software for Macintosh microcomputers. Included are Apple University Consortium software, word processing software, statistics packages, integrated packages, database management systems, and others. Source of software and costs (when available) are included. (JN)

  10. Interoperability of Neuroscience Modeling Software

    PubMed Central

    Cannon, Robert C.; Gewaltig, Marc-Oliver; Gleeson, Padraig; Bhalla, Upinder S.; Cornelis, Hugo; Hines, Michael L.; Howell, Fredrick W.; Muller, Eilif; Stiles, Joel R.; Wils, Stefan; De Schutter, Erik

    2009-01-01

    Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance. PMID:17873374

  11. HydroShare: Applying professional software engineering to a new NSF-funded large software project

    NASA Astrophysics Data System (ADS)

    Idaszak, R.; Tarboton, D. G.; Ames, D.; Saleem Arrigo, J. A.; Band, L. E.; Bedig, A.; Castronova, A. M.; Christopherson, L.; Coposky, J.; Couch, A.; Dash, P.; Gan, T.; Goodall, J.; Gustafson, K.; Heard, J.; Hooper, R. P.; Horsburgh, J. S.; Jackson, S.; Johnson, H.; Maidment, D. R.; Mbewe, P.; Merwade, V.; Miles, B.; Reeder, S.; Russell, T.; Song, C.; Taylor, A.; Thakur, S.; Valentine, D. W.; Whiteaker, T. L.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models as part of the NSF's Software Infrastructure for Sustained Innovation (SI2) program (NSF collaborative award numbers 1148453 and 1148090). HydroShare involves a large software development effort requiring cooperative research and distributed software development between domain scientists, professional software engineers (here 'professional' denotes previous commercial experience in the application of modern software engineering), and university software developers. HydroShare expands upon the data sharing capabilities of the Hydrologic Information System of the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI) by broadening the classes of data accommodated, expanding capability to include the sharing of models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. With a goal of enabling better science concomitant with improved sustainable software practices, we describe our approach, experiences, and lessons learned thus far in applying professional software engineering to a large NSF-funded software project from the project's onset.

  12. 2006 XSD Scientific Software Workshop report.

    SciTech Connect

    Evans, K., Jr.; De Carlo, F.; Jemian, P.; Lang, J.; Lienert, U.; Maclean, J.; Newville, M.; Tieman, B.; Toby, B.; van Veenendaal, B.; Univ. of Chicago

    2006-01-22

    In May of 2006, a committee was formed to assess the fundamental needs and opportunities in scientific software for x-ray data reduction, analysis, modeling, and simulation. This committee held a series of discussions throughout the summer, conducted a poll of the members of the x-ray community, and held a workshop. This report details the findings and recommendations of the committee. Each experiment performed at the APS requires three crucial ingredients: the powerful x-ray source, an optimized instrument to perform measurements, and computer software to acquire, visualize, and analyze the experimental observations. While the APS has invested significant resources in the accelerator, investment in other areas such as scientific software for data analysis and visualization has lagged behind. This has led to the adoption of a wide variety of software with variable levels of usability. In order to maximize the scientific output of the APS, it is essential to support the broad development of real-time analysis and data visualization software. As scientists attack problems of increasing sophistication and deal with larger and more complex data sets, software is playing an ever more important role. Furthermore, our need for excellent and flexible scientific software can only be expected to increase, as the upgrade of the APS facility and the implementation of advanced detectors create a host of new measurement capabilities. New software analysis tools must be developed to take full advantage of these capabilities. It is critical that the APS take the lead in software development and the implementation of theory to software to ensure the continued success of this facility. The topics described in this report are relevant to the APS today and critical for the APS upgrade plan. Implementing these recommendations will have a positive impact on the scientific productivity of the APS today and will be even more critical in the future.

  13. COTS software selection process.

    SciTech Connect

    Watkins, William M. (Strike Wire Technologies, Louisville, CO); Lin, Han Wei; McClelland, Kelly (U.S. Security Associates, Livermore, CA); Ullrich, Rebecca Ann; Khanjenoori, Soheil; Dalton, Karen; Lai, Anh Tri; Kuca, Michal; Pacheco, Sandra; Shaffer-Gant, Jessica

    2006-05-01

    Today's need for rapid software development has generated great interest in employing Commercial-Off-The-Shelf (COTS) software products as a way of managing cost, development time, and effort. With an abundance of COTS software packages to choose from, the problem now is how to systematically evaluate, rank, and select a COTS product that best meets the software project requirements and at the same time can leverage the current corporate information technology architectural environment. This paper describes a systematic process for decision support in evaluating and ranking COTS software. Performed right after the requirements analysis, this process provides the evaluators with concise, structured, step-by-step activities for determining the best COTS software product with manageable risk. In addition, the process is presented in phases that are flexible enough to allow for customization or tailoring to meet various projects' requirements.
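
    A minimal Python sketch of the weighted-scoring step such ranking processes commonly include (criteria, weights, and scores are invented; the paper's actual phases are more elaborate):

        def rank_cots(candidates, weights):
            """Weighted-sum scoring of COTS packages, highest score first."""
            scored = [(sum(weights[c] * s for c, s in scores.items()), name)
                      for name, scores in candidates.items()]
            return sorted(scored, reverse=True)

        weights = {"fit_to_requirements": 0.4, "vendor_support": 0.2,
                   "interoperability": 0.25, "cost": 0.15}
        candidates = {
            "Package A": {"fit_to_requirements": 8, "vendor_support": 6,
                          "interoperability": 9, "cost": 5},
            "Package B": {"fit_to_requirements": 7, "vendor_support": 9,
                          "interoperability": 6, "cost": 8},
        }
        for score, name in rank_cots(candidates, weights):
            print(f"{name}: {score:.2f}")          # A: 7.40, B: 7.30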

  14. Remedial Education of Embedded Software Specialists for Working People

    NASA Astrophysics Data System (ADS)

    Yamamoto, Masaki; Kawaguchi, Nobuo; Agusa, Kiyoshi; Mase, Kenji; Takada, Hiroaki; Tomiyama, Hiroyuki; Honda, Shinya; Kaneko, Nobuyuki

    Recently, the development of embedded software has been increasing in industry. The need for training embedded software engineers within companies is rising, and expectations of universities are high. We carry out embedded software engineering courses for working people at a university. The courses are short-term and are classified by job type and technological level. We have carried out eight kinds of courses, spanning an introductory course, intermediate courses, and advanced courses.

  15. Integrating legacy software toolkits into China-VO system

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Qian; Cui, Chen-Zhou; Zhao, Yong-Heng

    2005-12-01

    A Virtual Observatory (VO) is a collection of data archives and software toolkits. It aims to provide astronomers with research resources through uniform interfaces, using advanced information technologies. In this article, we first discuss the necessity and feasibility of integrating legacy software toolkits into the China-VO system, and then analyze the granularity of such integration. Three general integration methods are given in detail. Finally, we introduce an instance of integrating "Image Magick", a software package for image processing, and discuss VO integration further.

  16. Standard Annunciator software overview

    SciTech Connect

    Anspach, D.A.; Fox, E.T.; Kissock, P.S.

    1992-10-01

    The Standard Annunciator Software is responsible for controlling the AN/GSS-41 and AN/GSS-44 Annunciator Systems. The software interfaces with other systems-ACS, ECS, CCTV, UPS-to determine current alarm, tamper, and hardware status. Current system status conditions are displayed at the operator's console and on display maps. This manual describes the organization and functionality of the software as well as the generation mechanisms for development and target environments.

  17. Gammasphere software development

    SciTech Connect

    Piercey, R.B.

    1993-01-01

    Activities of the nuclear physics group are described. Progress was made in organizing the Gammasphere Software Working Group, establishing a nuclear computing facility, participating in software development at Lawrence Berkeley, developing a common data file format, and adapting the ORNL UPAK software to run at Gammasphere. A universal histogram object was developed that defines a file format and provides for an object-oriented programming model. An automated liquid nitrogen fill system was developed for Gammasphere (110 Ge detectors comprise the sphere).
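
    Purely as an illustration of what a histogram object carrying its own file format might look like, a toy Python sketch (the class, format, and bin math are invented and are not the Gammasphere design):

        class Histogram:
            """Toy 1D histogram object with a trivially simple text format."""
            def __init__(self, nbins, lo, hi, title=""):
                self.title, self.nbins, self.lo, self.hi = title, nbins, lo, hi
                self.counts = [0] * nbins

            def fill(self, value):
                """Increment the bin containing `value`, ignoring out-of-range hits."""
                if self.lo <= value < self.hi:
                    i = int((value - self.lo) / (self.hi - self.lo) * self.nbins)
                    self.counts[i] += 1

            def save(self, path):
                with open(path, "w") as f:
                    f.write(f"{self.title}\n{self.nbins} {self.lo} {self.hi}\n")
                    f.write(" ".join(map(str, self.counts)) + "\n")

        h = Histogram(4096, 0.0, 4096.0, title="Ge energy spectrum")
        for e in (511.0, 511.3, 1460.8):       # invented energies in keV
            h.fill(e)
        h.save("spectrum.hist")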

  18. IRAS software analysis library

    NASA Technical Reports Server (NTRS)

    Domik, Gitta; Merkle, C. Scott

    1991-01-01

    The goal of this project was to collect 'research software' written in Interactive Data Language (IDL) to support analysis of data from the Infrared Astronomical Satellite (IRAS) and make it available to the larger community. 'Research software' describes software created by researchers and staff for a specific research goal, but lacking sufficient documentation, easy-to-use interfaces, and rigorous debugging. Additionally, most of the IDL/IRAS code available needed to be ported to a (largely) hardware-independent new version of IDL.

  19. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.
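
    As one concrete example of the kind of software reliability growth model tools like SMERFS'3 implement, a Python sketch fitting the classic Goel-Okumoto NHPP model to cumulative failure counts (the data are invented; SMERFS'3 itself supports many more models):

        import numpy as np
        from scipy.optimize import curve_fit

        def goel_okumoto(t, a, b):
            """Mean cumulative failures m(t) = a * (1 - exp(-b * t))."""
            return a * (1.0 - np.exp(-b * t))

        # Invented test data: cumulative failures observed at weekly intervals
        t = np.arange(1, 11, dtype=float)
        m = np.array([8, 14, 19, 23, 26, 28, 30, 31, 32, 33], dtype=float)

        (a, b), _ = curve_fit(goel_okumoto, t, m, p0=(40.0, 0.2))
        print(f"estimated total faults a={a:.1f}, detection rate b={b:.3f}")
        print(f"predicted failures by week 15: {goel_okumoto(15.0, a, b):.1f}")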