Science.gov

Sample records for advanced object-based software

  1. Advanced flight software reconfiguration

    NASA Technical Reports Server (NTRS)

    Porcher, Bryan

    1991-01-01

    Information is given in viewgraph form on advanced flight software reconfiguration. Reconfiguration is defined as identifying mission and configuration specific requirements, controlling mission and configuration specific data, binding this information to the flight software code to perform specific missions, and the release and distribution of the flight software. The objectives are to develop, demonstrate, and validate advanced software reconfiguration tools and techniques; to demonstrate reconfiguration approaches on Space Station Freedom (SSF) onboard systems displays; and to interactively test onboard systems displays, flight software, and flight data.

  2. Advanced Software Development Workstation Project

    NASA Technical Reports Server (NTRS)

    Lee, Daniel

    1989-01-01

    The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, based on a cognitive theory of retrieval from very long-term memory in humans, and using an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.

  3. Advanced fingerprint verification software

    NASA Astrophysics Data System (ADS)

    Baradarani, A.; Taylor, J. R. B.; Severin, F.; Maev, R. Gr.

    2016-05-01

    We have developed a fingerprint software package that can be used in a wide range of applications from law enforcement to public and private security systems, and to personal devices such as laptops, vehicles, and door locks. The software and processing units are a unique implementation of new and sophisticated algorithms that compete with the current best systems in the world. Development of the software package has been in line with the third generation of our ultrasonic fingerprinting machine [1]. Solid and robust performance is achieved in the presence of misplaced and low quality fingerprints.

  4. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    SciTech Connect

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed

  5. A flexible object-based software framework for modeling complex systems with interacting natural and societal processes.

    SciTech Connect

    Christiansen, J. H.

    2000-06-15

    The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.

  6. Advances in cardiac processing software.

    PubMed

    Gordon DePuey, Ernest

    2014-07-01

    New software methods that incorporate iterative reconstruction, resolution recovery, and noise compensation now provide the ability to maintain or improve myocardial perfusion SPECT image quality with conventional sodium iodide cameras. Despite lower image counting statistics associated with significantly decreased injected radiopharmaceutical doses or shortened acquisition times or both, image quality is preserved or even improved compared with conventional processing methods. The ability to prescribe a desired myocardial count density by preselecting a SPECT acquisition time now avoids additional patient radiation exposure associated with "weight-based" dosing. More recent advancements, including temporal correlation among the gated perfusion frames and higher resolution SPECT acquisitions, hold promise to further improve image quality and diagnostic accuracy. Phase analysis of gated perfusion SPECT provides the ability to assess cardiac dyssynchrony and to select those patients who will most benefit from resynchronization therapy. In combination with the higher counting statistics afforded by the new solid-state dedicated cardiac cameras, these software advancements allow for even further decreased patient radiation doses or acquisition times or both. List-mode software allows for refinement of myocardial perfusion SPECT by interrogating particular data from selected cardiac cycles. Rejection of frames degraded by arrhythmic cardiac cycles or excessive extracardiac uptake can be excluded for reconstruction. Respiratory gating, which diminishes cardiac motion and potentially decreases diaphragmatic attenuation, has been demonstrated to improve diagnostic specificity. With high-count first-pass list-mode acquisitions at rest and during pharmacologic vasodilatation, it may be possible to measure global and regional myocardial perfusion reserve to more accurately diagnose coronary artery disease and avoid false-negative studies owing to balanced ischemia.

  7. Advanced Software Development Workstation Project, phase 3

    NASA Technical Reports Server (NTRS)

    1991-01-01

    ACCESS provides a generic capability to develop software information system applications which are explicitly intended to facilitate software reuse. In addition, it provides the capability to retrofit existing large applications with a user friendly front end for preparation of input streams in a way that will reduce required training time, improve the productivity of even experienced users, and increase accuracy. Current and past work shows that ACCESS will be scalable to much larger object bases.

  8. Advanced cardiac life support refresher course using standardized objective-based Mega Code testing.

    PubMed

    Kaye, W; Mancini, M E; Rallis, S F

    1987-01-01

    The American Heart Association (AHA) recommends that those whose daily work requires knowledge and skills in advanced cardiac life support (ACLS) not only be trained in ACLS, but also be given refresher training at least every 2 yr. However, the AHA offers no recommended course for retraining, and no systematic studies of retraining have been conducted on which to base such recommendations. In this paper we review and present our recommendation for a standardized approach to refresher training. Using the goals and objectives of the ACLS training program as evaluation criteria, we used the Mega Code to test a sample population who had previously been trained in ACLS. The results revealed deficiencies in ACLS knowledge and skills in the areas of assessment, defibrillation, drug therapy, and determining the cause of an abnormal blood gas value. We combined this information with our knowledge of other deficiencies identified during actual resuscitation attempts and other basic life-support and ACLS teaching experiences. We then designed a refresher course which was consistent with the overall goals and objectives of the ACLS training program, but which placed emphasis on the deficiencies identified in the pretesting. We taught our newly designed refresher course in three sessions, which included basic life support, endotracheal intubation, arrhythmia recognition and therapeutic modalities, defibrillation, and Mega Code practice. In a fourth session, using Mega Code testing, we evaluated knowledge and skill learning immediately after training. We similarly tested retention 2 to 4 months later. Performance immediately after refresher training showed improvement in all areas where performance had been weak. (ABSTRACT TRUNCATED AT 250 WORDS)

  9. The School Advanced Ventilation Engineering Software (SAVES)

    EPA Pesticide Factsheets

    The School Advanced Ventilation Engineering Software (SAVES) package is a tool to help school designers assess the potential financial payback and indoor humidity control benefits of Energy Recovery Ventilation (ERV) systems for school applications.

  10. Student Produced Advanced Mathematical Software.

    ERIC Educational Resources Information Center

    Hogben, Leslie

    The intent of this project was to develop a course for mathematics graduate students at Iowa State University. They would design and write computer programs for use by undergraduate mathematics students, and then offer the course and actually produce the software. Phase plane graphics for ordinary differential equations was selected as the topic.…

  11. Advanced Extravehicular Mobility Unit Informatics Software Design

    NASA Technical Reports Server (NTRS)

    Wright, Theodore

    2014-01-01

    This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.

  12. The advanced software development workstation project

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Pitman, Charles L.

    1991-01-01

    The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phases of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.

  13. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
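
    A minimal sketch of the two-subprogram command flow described above, assuming hypothetical command names, mode labels, and reply fields (the actual AVGS flight code is embedded software, not Python):

```python
# Hypothetical sketch of the AVGS command flow: the first processor's
# subprogram validates commands and forwards image-processing work; the
# second processor's subprogram is modeled here as a plain function.
from enum import Enum

class Mode(Enum):
    SELF_TEST = 0
    STANDBY = 1
    ACQUISITION = 2
    TRACKING = 3

VALID_COMMANDS = {"STANDBY": Mode.STANDBY,
                  "ACQUIRE": Mode.ACQUISITION,
                  "TRACK": Mode.TRACKING}

class AVGSController:
    """Models the subprogram on the first processor."""
    def __init__(self):
        self.mode = Mode.SELF_TEST
        self.run_self_test()

    def run_self_test(self):
        # Basic functionality tests on power-up, then transition to standby.
        self.mode = Mode.STANDBY

    def handle_command(self, command, image_params=None):
        # Check the command for correctness before acting on it.
        if command not in VALID_COMMANDS:
            return {"status": "REJECTED", "reason": "unknown command"}
        self.mode = VALID_COMMANDS[command]
        reply = {"status": "OK", "mode": self.mode.name}
        if image_params is not None:
            # Image-data-processing parts go to the second processor.
            reply["image_result"] = second_processor(self.mode, image_params)
        return reply

def second_processor(mode, image_params):
    """Stand-in for the image-data-processing subprogram."""
    return f"processed {image_params!r} in {mode.name}"

controller = AVGSController()
print(controller.handle_command("ACQUIRE", image_params={"gain": 2}))
```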

  14. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.
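
    A sketch of the second approach (segmentation followed by SVM classification of segments), assuming scikit-image and scikit-learn as stand-ins for OTB/Monteverdi; quickshift substitutes for OTB's mean-shift segmentation, a sample image substitutes for the UAV orthomosaic, and the training labels are fabricated:

```python
# Illustrative OBIA pipeline: segment first, then classify whole segments.
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import quickshift
from sklearn.svm import SVC

rgb = img_as_float(data.astronaut())          # stand-in for a UAV orthomosaic
segments = quickshift(rgb, kernel_size=5, max_dist=10)

# One feature vector per segment: mean R, G, B. Only RGB bands are
# available, mirroring the poor spectral resolution noted in the abstract.
ids = np.unique(segments)
features = np.array([rgb[segments == i].mean(axis=0) for i in ids])

# Pretend an analyst labeled the first few segments (regions of interest).
train_idx = np.arange(8)
train_labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])   # fake class labels

clf = SVC(kernel="rbf").fit(features[train_idx], train_labels)
predicted = clf.predict(features)                    # one class per segment

# Map per-segment classes back to image space for the land cover map.
lookup = np.zeros(ids.max() + 1, dtype=int)
lookup[ids] = predicted
classified_map = lookup[segments]
print(classified_map.shape, np.unique(predicted))
```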

  15. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
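
    A toy illustration of the reduced-order-model idea, with a cheap analytic function standing in for the CFD code and a quadratic response surface as the ROM (the actual APECS ROM forms are not specified here):

```python
# Run the expensive model (a stand-in for a CFD code) at a few operating
# points, train a cheap surrogate, then query the surrogate from the
# flowsheet instead of re-running CFD.
import numpy as np

def cfd_model(inlet_temp_K):
    """Stand-in for a full CFD simulation (minutes-to-hours per run)."""
    return 0.95 - 2.0e-4 * (inlet_temp_K - 650.0) ** 2   # fake efficiency

# "Training" runs at a handful of operating points.
train_T = np.linspace(500.0, 800.0, 7)
train_eff = np.array([cfd_model(T) for T in train_T])

# Reduced-order model: a quadratic response surface fit to the CFD samples.
rom = np.poly1d(np.polyfit(train_T, train_eff, deg=2))

# Flowsheet-side evaluation is now effectively free.
for T in (612.0, 703.5):
    print(f"T={T:.1f} K  ROM eff={rom(T):.4f}  CFD eff={cfd_model(T):.4f}")
```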

  16. The Advanced Software Development and Commercialization Project

    SciTech Connect

    Gallopoulos, E. (Center for Supercomputing Research and Development); Canfield, T.R.; Minkoff, M.; Mueller, C.; Plaskacz, E.; Weber, D.P.; Anderson, D.M.; Therios, I.U.; Aslam, S.; Bramley, R.; Chen, H.-C.; Cybenko, G.; Gallopoulos, E.; Gao, H.; Malony, A.; Sameh, A. (Center for Supercomputing Research and Development)

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for both sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  17. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  18. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.
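
    A minimal sketch of a software parts composition system of the kind described, with an invented three-part catalog and a naive generator; the ASDW's actual modeling language and catalog are far richer:

```python
# A catalog of parts with typed ports, a specification that wires parts
# together, and a generator that emits code for the target language.
# All part names and the output language (Python) are illustrative.
CATALOG = {
    "reader": {"provides": "records", "code": "records = load('{src}')"},
    "filter": {"needs": "records", "provides": "records",
               "code": "records = [r for r in records if r.ok]"},
    "writer": {"needs": "records", "code": "save(records, '{dst}')"},
}

def generate(spec):
    """Emit a program from a composition spec, checking port compatibility."""
    lines, available = [], set()
    for part_name, params in spec:
        part = CATALOG[part_name]
        need = part.get("needs")
        if need and need not in available:        # interface check
            raise ValueError(f"{part_name} needs '{need}' before it")
        if "provides" in part:
            available.add(part["provides"])
        lines.append(part["code"].format(**params))
    return "\n".join(lines)

print(generate([("reader", {"src": "in.dat"}),
                ("filter", {}),
                ("writer", {"dst": "out.dat"})]))
```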

  19. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System. This introduction and overview section is provided to briefly outline the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  20. Byonic: Advanced Peptide and Protein Identification Software

    PubMed Central

    Bern, Marshall; Kil, Yong J.; Becker, Christopher

    2013-01-01

    Byonic™ is the name of a software package for peptide and protein identification by tandem mass spectrometry. This software, which has only recently become commercially available, facilitates a much wider range of search possibilities than previous search software such as SEQUEST and Mascot. Byonic allows the user to define an essentially unlimited number of variable modification types. Byonic also allows the user to set a separate limit on the number of occurrences of each modification type, so that a search may consider only one or two chance modifications such as oxidations and deamidations per peptide, yet allow three or four biological modifications such as phosphorylations, which tend to cluster together. Hence Byonic can search for 10s or even 100s of modification types simultaneously without a prohibitively large combinatorial explosion. Byonic’s Wildcard Search™ allows the user to search for unanticipated or even unknown modifications alongside known modifications. Finally, Byonic’s Glycopeptide Search allows the user to identify glycopeptides without prior knowledge of glycan masses or glycosylation sites. PMID:23255153
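
    The per-type occurrence caps can be illustrated with a small enumeration sketch; the modification table, caps, and masses below are illustrative, not Byonic's actual values:

```python
# Enumerate candidate modification assignments for a peptide while capping
# occurrences per modification type, so many types can be searched without
# a combinatorial explosion.
from itertools import combinations, product

PEPTIDE = "MNSTMK"
MODS = {  # type: (residues it applies to, mass shift in Da, max occurrences)
    "oxidation":   ("MW", 15.9949, 1),   # chance modification: cap low
    "deamidation": ("NQ", 0.9840, 1),
    "phospho":     ("STY", 79.9663, 3),  # biological: allowed to cluster
}

def candidate_modforms(peptide):
    per_mod_choices = []
    for name, (residues, mass, cap) in MODS.items():
        sites = [i for i, aa in enumerate(peptide) if aa in residues]
        choices = [()]                       # zero occurrences always allowed
        for k in range(1, cap + 1):
            choices += list(combinations(sites, k))
        per_mod_choices.append([(name, mass, c) for c in choices])
    for combo in product(*per_mod_choices):
        used = [site for (_, _, sites) in combo for site in sites]
        if len(used) == len(set(used)):      # one modification per residue
            yield combo

print(len(list(candidate_modforms(PEPTIDE))), "candidate modified forms")
```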

  1. Advanced Software Methods for Physics Analysis

    NASA Astrophysics Data System (ADS)

    Lista, L.

    2006-01-01

    Unprecedented data analysis complexity is experienced in modern High Energy Physics experiments. The complexity arises from the growing size of recorded data samples, the large number of data analyses performed by different users in each single experiment, and the level of complexity of each single analysis. For this reason, the requirements on software for data analysis impose a very high level of reliability. We present two concrete examples: the first from BaBar's experience with the migration to a new Analysis Model, including the definition of a new model for the Event Data Store; the second a toolkit for multivariate statistical and parametric Monte Carlo analysis developed using generic programming.

  2. Advances in the REDCAT software package

    PubMed Central

    2013-01-01

    Background Residual Dipolar Couplings (RDCs) have emerged in the past two decades as an informative source of experimental restraints for the study of structure and dynamics of biological macromolecules and complexes. The REDCAT software package was previously introduced for the analysis of molecular structures using RDC data. Here we report additional features that have been included in this software package in order to expand the scope of its analyses. We first discuss the features that enhance REDCAT's user-friendly nature, such as the integration of a number of analyses into one single operation and enabling convenient examination of a structural ensemble in order to identify the most suitable structure. We then describe the new features which expand the scope of RDC analyses, performing exercises that utilize both synthetic and experimental data to illustrate and evaluate different features with regard to structure refinement and structure validation. Results We establish the seamless interaction that takes place between REDCAT, VMD, and Xplor-NIH in demonstrations that utilize our newly developed REDCAT-VMD and XplorGUI interfaces. These modules enable visualization of RDC analysis results on the molecular structure displayed in VMD and refinement of structures with Xplor-NIH, respectively. We also highlight REDCAT's Error-Analysis feature in reporting the localized fitness of a structure to RDC data, which provides a more effective means of recognizing local structural anomalies. This allows for structurally sound regions of a molecule to be identified, and for any refinement efforts to be focused solely on locally distorted regions. Conclusions The newly engineered REDCAT software package, which is available for download via the WWW from http://ifestos.cse.sc.edu, has been developed in the Object Oriented C++ environment. Our most recent enhancements to REDCAT serve to provide a more complete RDC analysis suite, while also accommodating a more user
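
    At the core of RDC analysis tools of this kind is a linear least-squares fit of the five independent Saupe order-tensor elements, one equation per measured coupling. A sketch with synthetic internuclear vectors and couplings (the interaction constant and noise level are arbitrary):

```python
# Each RDC gives one linear equation in the five independent Saupe
# order-tensor elements (tracelessness eliminates Sxx).
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.normal(size=(20, 3))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)   # unit N-H vectors
d_max = 24350.0                                       # illustrative constant

def design_row(v):
    x, y, z = v
    # Unknowns are (Syy, Szz, Sxy, Sxz, Syz).
    return [y*y - x*x, z*z - x*x, 2*x*y, 2*x*z, 2*y*z]

A = d_max * np.array([design_row(v) for v in vecs])
s_true = np.array([3e-4, -8e-4, 1e-4, -2e-4, 5e-5])
rdcs = A @ s_true + rng.normal(scale=0.2, size=len(vecs))  # noisy "data"

s_fit, *_ = np.linalg.lstsq(A, rdcs, rcond=None)
rmsd = np.sqrt(np.mean((A @ s_fit - rdcs) ** 2))
print("fitted Saupe elements:", np.round(s_fit, 6), " RMSD:", round(rmsd, 3))
```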

  3. Intelligent Software Tools for Advanced Computing

    SciTech Connect

    Baumgart, C.W.

    2001-04-03

    Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.
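
    A compact illustration of the extract-evaluate-train workflow the toolbox integrates, using synthetic signals and scikit-learn components as stand-ins for the FET's own modules:

```python
# Extract candidate features from raw signals, evaluate and rank them,
# then hand the best ones to a pattern recognition tool.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
signals = rng.normal(size=(200, 256))           # 200 raw 256-sample signals
labels = rng.integers(0, 2, size=200)
signals[labels == 1] += 0.3                     # class-dependent offset

# Candidate feature extraction: simple statistics plus spectral energy.
feats = np.column_stack([
    signals.mean(axis=1), signals.std(axis=1),
    signals.min(axis=1), signals.max(axis=1),
    np.abs(np.fft.rfft(signals, axis=1))[:, 1:5],   # low-frequency energy
])

# Feature evaluation: keep the k most discriminating features.
selector = SelectKBest(f_classif, k=3).fit(feats, labels)
best = selector.transform(feats)

# Pattern recognition tool (a small neural network, one of the options
# the report names).
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(best, labels)
print("selected feature columns:", selector.get_support(indices=True))
print("training accuracy:", round(clf.score(best, labels), 3))
```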

  4. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  5. Advanced transport operating system software upgrade: Flight management/flight controls software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Debure, Kelly R.; Dickson, Richard W.; Heaphy, William J.; Parks, Mark A.; Slominski, Christopher J.; Wolverton, David A.

    1988-01-01

    The Flight Management/Flight Controls (FM/FC) software for the Norden 2 (PDP-11/70M) computer installed on the NASA 737 aircraft is described. The software computes navigation position estimates and guidance commands, the latter being the commands issued to the control surfaces to direct the aircraft in flight, based on the modes selected on the Advanced Guidance Control System (AGCS) mode panel and the flight path selected via the Navigation Control/Display Unit (NCDU).

  6. Object-based media and stream-based computing

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.

    1998-03-01

    Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. This paper outlines examples of object-based media algorithms and applications developed by my group, and presents new hardware architectures and software methods that we have developed to meet the computational requirements of object-based and other advanced media representations. In particular we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks even given heterogeneous computational resources.
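
    A simple sketch of the stream-based idea: a signal-processing task expressed over independent, slightly overlapping chunks of a stream, so a runtime can parallelize it across whatever workers exist; the chunked FIR filter below is illustrative, not the paper's architecture:

```python
# Split a 1-D filtering task into overlapping chunks and let a process
# pool parallelize it at run time.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

TAPS = np.array([0.25, 0.5, 0.25])            # simple smoothing FIR filter

def process_chunk(chunk):
    """Filter one chunk; 'valid' mode plus overlap keeps edges exact."""
    return np.convolve(chunk, TAPS, mode="valid")

def stream_filter(signal, n_chunks=4):
    pad = len(TAPS) - 1                        # overlap between neighbors
    bounds = np.linspace(0, len(signal) - pad, n_chunks + 1, dtype=int)
    chunks = [signal[b:e + pad] for b, e in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor() as pool:        # run-time parallelization
        parts = list(pool.map(process_chunk, chunks))
    return np.concatenate(parts)

if __name__ == "__main__":
    x = np.random.default_rng(2).normal(size=10_000)
    y = stream_filter(x)
    # The chunked result matches a single sequential pass.
    assert np.allclose(y, np.convolve(x, TAPS, mode="valid"))
    print(len(y), "filtered samples")
```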

  7. Advances in Games Technology: Software, Models, and Intelligence

    ERIC Educational Resources Information Center

    Prakash, Edmond; Brindle, Geoff; Jones, Kevin; Zhou, Suiping; Chaudhari, Narendra S.; Wong, Kok-Wai

    2009-01-01

    Games technology has undergone tremendous development. In this article, the authors report the rapid advancement that has been observed in the way games software is being developed, as well as in the development of games content using game engines. One area that has gained special attention is modeling the game environment such as terrain and…

  8. Advanced software development workstation. OOPSLA 1992 Conference. Trip report

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report summarizes the main trends observed at the Object Oriented Programming Systems, Languages, and Applications Conference held in Vancouver, British Columbia. This conference is the main object-oriented event that allows us to assess the dynamism of the technology and to meet the main actors of the field. It is an invaluable source of information for the advanced software development project.

  9. Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W.

    2016-01-01

    A description of the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit mounted camera for recording still images, video, and audio field notes.

  10. Advanced Transport Operating System (ATOPS) control display unit software description

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Parks, Mark A.; Debure, Kelly R.; Heaphy, William J.

    1992-01-01

    The software created for the Control Display Units (CDUs), used for the Advanced Transport Operating Systems (ATOPS) project, on the Transport Systems Research Vehicle (TSRV) is described. Module descriptions are presented in a standardized format which contains module purpose, calling sequence, a detailed description, and global references. The global reference section includes subroutines, functions, and common variables referenced by a particular module. The CDUs, one for the pilot and one for the copilot, are used for flight management purposes. Operations performed with the CDU affect the aircraft's guidance, navigation, and display software.

  11. Reliability, Safety and Error Recovery for Advanced Control Software

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2003-01-01

    For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.

  12. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Damevski, Kostadin

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  13. Software control of the Advanced Technology Solar Telescope enclosure PLC hardware using COTS software

    NASA Astrophysics Data System (ADS)

    Borrowman, Alastair J.; de Bilbao, Lander; Ariño, Javier; Murga, Gaizka; Goodrich, Bret; Hubbard, John R.; Greer, Alan; Mayer, Chris; Taylor, Philip

    2012-09-01

    As PLCs evolve from simple logic controllers into more capable Programmable Automation Controllers (PACs), observatories are increasingly using such devices to control complex mechanisms [1, 2]. This paper describes use of COTS software to control such hardware using the Advanced Technology Solar Telescope (ATST) Common Services Framework (CSF). We present the Enclosure Control System (ECS) under development in Spain and the UK. The paper details selection of the commercial PLC communication library PLCIO. Implemented in C and delivered with source code, the library separates the programmer from communication details through a simple API. Capable of communicating with many types of PLCs (including Allen-Bradley and Siemens) the API remains the same irrespective of PLC in use. The ECS is implemented in Java using the observatory's framework that provides common services for software components. We present a design following a connection-based approach where all components access the PLC through a single connection class. The link between Java and the PLCIO C library is provided by a thin Java Native Interface (JNI) layer. Also presented is a software simulator of the PLC based upon the PLCIO Virtual PLC. This creates a simulator operating below the library's API and thus requires no change to ECS software. It also provides enhanced software testing capabilities prior to hardware becoming available. Results are presented in the form of communication timing test data, showing that the use of CSF, JNI and PLCIO provide a control system capable of controlling enclosure tracking mechanisms, that would be equally valid for telescope mount control.
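
    The connection-based approach can be sketched independently of Java/JNI/PLCIO; below is an illustrative Python rendering in which all components share one connection object and a simulator can be swapped in below the same API, as the PLCIO Virtual PLC allows. Class and tag names are hypothetical; the real ECS is Java over PLCIO:

```python
# Single shared connection object mediating all PLC access, with a
# simulator standing in below the same API.
import threading

class PLCConnection:
    """Serializes all component access through one connection."""
    def __init__(self, transport):
        self._transport = transport           # real driver or simulator
        self._lock = threading.Lock()

    def read(self, tag):
        with self._lock:
            return self._transport.read(tag)

    def write(self, tag, value):
        with self._lock:
            self._transport.write(tag, value)

class SimulatedPLC:
    """Stands in below the API, like a Virtual PLC: no hardware required."""
    def __init__(self):
        self._tags = {"shutter_position": 0.0}

    def read(self, tag):
        return self._tags[tag]

    def write(self, tag, value):
        self._tags[tag] = value

conn = PLCConnection(SimulatedPLC())          # all components share `conn`
conn.write("shutter_position", 42.5)
print(conn.read("shutter_position"))
```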

  14. Constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael; Podolak, Ester; Mckay, Christopher

    1990-01-01

    Scientific model building can be an intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot be easily distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. In this paper, we describe a prototype for a scientific modeling software tool that serves as an aid to the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities. Our prototype has been developed in the domain of planetary atmospheric modeling, and is being used to construct models of Titan's atmosphere.

  15. Development of Advanced Multi-Modality Radiation Treatment Planning Software

    SciTech Connect

    Nigg, D W; Hartmann Siantar, C

    2002-02-19

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in development of advanced Monte-Carlo based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software product system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. SERA is at a mature level in its life cycle, it has been licensed for research use worldwide, and it has become well established as a computational tool for research. However, along with its strengths, SERA also has some limitations in its structure and computational methodologies. More specifically, it is optimized only for neutron-based applications. Although photon transport can be computed with SERA, the simplified model that is used is designed primarily for photons produced in the neutron transport process. Thus SERA is not appropriate for applications to, for example, standard external-beam photon radiotherapy, which is by far more commonly used in the clinic than neutron based therapy.

  16. The scientific modeling assistant: An advanced software tool for scientific model building

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.

    1991-01-01

    Viewgraphs on the scientific modeling assistant: an advanced software tool for scientific model building are presented. The objective is to build a specialized software tool to assist in scientific model-building.

  17. Advanced Data Format (ADF) Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Smith, Matthew; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial. Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its 1/0 software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database such defined by the SIDS. There are currently 34 user callable functions that comprise the ADF

  18. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0.

    SciTech Connect

    Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia; Pilch, Martin M.

    2009-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Agency (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

  19. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  20. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.

  1. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

  2. Final Report for "Center for Technology for Advanced Scientific Component Software"

    SciTech Connect

    Svetlana Shasharina

    2010-12-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X work in the TASCS project is to provide an outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing tools in the applications, and modifying the tools to be more usable.

  3. Building Software Development Capacity to Advance the State of Educational Technology

    ERIC Educational Resources Information Center

    Luterbach, Kenneth J.

    2013-01-01

    Educational technologists may advance the state of the field by increasing capacity to develop software tools and instructional applications. Presently, few academic programs in educational technology require even a single computer programming course. Further, the educational technologists who develop software generally work independently or in…

  4. Safety Cases for Advanced Control Software: Safety Case Patterns

    DTIC Science & Technology

    2007-10-15

    Excerpt fragments: ...safety claims about conventional software. For example, Littlewood and Strigini note in [8] that, at best, statistical testing can show a failure... Reference fragments: ERA Technology Ltd, Report 97-0365, Project 13-01-4745, 1997; [8] B. Littlewood and L. Strigini, Assessment of ultra-high dependability of...

  5. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Govindaraju, Madhusudhan

    2010-10-31

    Advanced Scientific Computing Research, Computer Science, FY 2010 Report. Center for Technology for Advanced Scientific Component Software: Distributed CCA. State University of New York, Binghamton, NY, 13902.

    Summary: The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we are re-focusing our design and development efforts on proof-of-concept implementations that have the potential to significantly impact scientific components. We worked on developing parallel implementations of non-hydrostatic code and on a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules that will be required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors.

    Research Details: We worked on the following research projects, which we are applying to CCA-based scientific applications. 1. Non-Hydrostatic Hydrodynamics: Non-hydrostatic models are significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize non-hydrostatic code and have obtained a speed-up of about 26 times maximum. Although this is significant progress, we hope to improve the performance further, such that it becomes a practical alternative to hydrostatic codes. 2. Model coupling for water-based ecosystems: Answering pressing questions about water resources requires that physical models (hydrodynamics) be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This

  6. Construction of an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.

    1992-01-01

    Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a test bed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.

  7. Construction of an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.

    1993-01-01

    Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a testbed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.

  8. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. Numerical examples from the authors' recent research tests with some of the methods are included. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
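
    As a worked example of one of the schemes named above, the traditional Scharfetter-Gummel discretization weights the electron flux between adjacent mesh points with Bernoulli functions so the scheme remains stable when drift dominates; the values below are synthetic and one common sign convention is assumed:

```python
# Scharfetter-Gummel flux between two mesh points of a 1-D drift-diffusion
# discretization. The Bernoulli weighting recovers pure diffusion when the
# potential drop vanishes and upwinds smoothly as drift grows.
import math

Q = 1.602e-19        # elementary charge, C
V_T = 0.0259         # thermal voltage at 300 K, V
D_N = 36.0           # electron diffusivity, cm^2/s
H = 1.0e-5           # mesh spacing, cm

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at x = 0."""
    return 1.0 if abs(x) < 1e-12 else x / math.expm1(x)

def sg_flux(n_i, n_ip1, psi_i, psi_ip1):
    """Electron current density between nodes i and i+1 (A/cm^2)."""
    delta = (psi_ip1 - psi_i) / V_T
    return (Q * D_N / H) * (n_ip1 * bernoulli(delta) - n_i * bernoulli(-delta))

# Drift-dominated cell (0.2 V drop) versus the zero-field limit, which
# reduces to pure diffusion, (q*D/h)*(n_{i+1} - n_i):
print(sg_flux(1e17, 1e15, 0.0, 0.2))
print(sg_flux(1e17, 1e15, 0.0, 0.0))
```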

  9. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), "entropy" variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  10. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan part 2 mappings for the ASC software quality engineering practices, version 2.0.

    SciTech Connect

    Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR001.3.2 and CPR001.3.6 and to a Department of Energy document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines". This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  11. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices, Version 2.0.

    SciTech Connect

    Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, "ASCI Software Quality Engineering: Goals, Principles, and Guidelines". This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  12. Advanced program development management software system. Software description and user's manual

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The objectives of this project were to apply emerging techniques and tools from the computer science discipline of paperless management to the activities of the Space Transportation and Exploration Office (PT01) in Marshall Space Flight Center (MSFC) Program Development, thereby enhancing the productivity of the workforce, the quality of the data products, and the collection, dissemination, and storage of information. The approach used to accomplish the objectives emphasized utilizing finished-form (off-the-shelf) software products to the greatest extent possible without impacting the performance of the end product, pursuing developments when necessary in a rapid prototyping environment to provide a mechanism for frequent feedback from the users, and providing a full range of user support functions during the development process to promote testing of the software.

  13. Constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.

    1990-01-01

    A prototype is described that can serve as a scientific-modeling software tool to facilitate the development of useful scientific models. The prototype is developed for applications to planetary modeling, and specific examples are given that relate to the atmosphere of Titan. The scientific modeling tool employs a high-level domain-specific modeling language, several data-display facilities, and a library of experimental datasets and scientific equations. The planetary modeling prototype links uncomputed physical variables to computed variables with computational transformations based on a backchaining procedure. The system, implemented in LISP with an object-oriented knowledge-representation tool, runs on a workstation that provides an interface to several models. The prototype is expected to form the basis for a sophisticated modeling tool that permits active experimentation.
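
    The backchaining procedure is the interesting mechanism here: a requested variable that has not yet been computed is derived by recursively resolving the inputs of whatever transformation produces it. A minimal sketch of that idea in Python (the actual system was in LISP; the rule table and the Titan-like numbers below are hypothetical illustrations, not the prototype's contents):

      # Each rule maps a variable to (input variables, transformation).
      RULES = {
          "pressure": (("number_density", "temperature"),
                       lambda n, T: n * 1.380649e-23 * T),   # ideal gas: P = n k T
          "number_density": (("column_density", "scale_height"),
                             lambda N, H: N / H),
      }

      def resolve(var, known, rules=RULES):
          """Backchain: compute var by recursively resolving its inputs first."""
          if var not in known:
              inputs, transform = rules[var]
              known[var] = transform(*(resolve(v, known, rules) for v in inputs))
          return known[var]

      # Hypothetical Titan-like surface values.
      observed = {"column_density": 2.3e30, "scale_height": 2.0e4, "temperature": 94.0}
      print(resolve("pressure", observed))   # derives number_density along the way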

  14. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    SciTech Connect

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in the development of advanced Monte-Carlo-based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a new project to collaborate in the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  15. Advanced methods of microscope control using μManager software

    PubMed Central

    Edelstein, Arthur D.; Tsuchida, Mark A.; Amodaj, Nenad; Pinkard, Henry; Vale, Ronald D.; Stuurman, Nico

    2014-01-01

    μManager is an open-source, cross-platform desktop application, to control a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, μManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced μManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging. PMID:25606571
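
    μManager's open programming interfaces are what make these advanced features scriptable. As one hedged illustration, using the separate pycromanager Python bridge to the MMCore device layer (an assumption here, not something described in this paper), a single camera snap can be acquired and reshaped into an image array roughly like this:

      import numpy as np
      from pycromanager import Core   # assumes a running Micro-Manager instance

      core = Core()                   # connect to the MMCore device layer
      core.set_exposure(10)           # exposure time in milliseconds
      core.snap_image()               # trigger the currently selected camera once
      tagged = core.get_tagged_image()
      # Reshape the flat pixel buffer using the image metadata tags.
      img = np.reshape(tagged.pix, (tagged.tags["Height"], tagged.tags["Width"]))
      print(img.shape, img.dtype)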

  16. Advances in Single-Photon Emission Computed Tomography Hardware and Software.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2016-02-01

    Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

  17. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support systems operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
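
    The hybrid discrete/continuous idea is easy to picture: continuous dynamics are integrated step by step, and discrete reconfiguration events change the model between steps. A minimal sketch (the tank, rates, and event times below are hypothetical, not CONFIG's actual model):

      def simulate(t_end=10.0, dt=0.01):
          """Continuous tank dynamics punctuated by discrete reconfigurations."""
          level, inflow = 50.0, 2.0            # litres, litres/s
          events = {3.0: 0.5, 7.0: 2.5}        # event time -> new inflow setting
          t, history = 0.0, []
          while t < t_end:
              for t_event in sorted(events):
                  if t >= t_event:
                      inflow = events.pop(t_event)   # discrete event fires once
              outflow = 0.05 * level               # continuous pressure-driven drain
              level += (inflow - outflow) * dt     # explicit Euler integration step
              history.append((t, level))
              t += dt
          return history

      final_t, final_level = simulate()[-1]
      print(f"t={final_t:.2f} s, level={final_level:.1f} L")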

  18. WinSRFR: Current Advances in Software for Surface Irrigation Simulation and Analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Significant advances have been made over the last decade in the development of software for surface irrigation analysis. WinSRFR is an integrated tool that combines unsteady flow simulation with tools for system evaluation/parameter estimation, system design, and for operational optimization. Ongoi...

  19. Evaluating Object-Based Image Analysis on Glacial Micromorphology

    NASA Astrophysics Data System (ADS)

    Chin, K. S.; Sjogren, D. B.

    2007-12-01

    Micromorphology has recently been applied more widely in analyzing glacial sediments at a microscopic level. It provides additional information and details that may help to explain glacial processes in areas where macro-scale observations cannot yield sufficient information. However, the process of interpreting thin sections has been very subjective, and reaching general consensus about glacial processes is difficult. Remote sensing technology is increasingly helpful in the development and advancement of many sciences; the object-cognition concepts behind the technology, used in other fields such as landscape ecology, can be applied to micromorphology. As has been done in landscape ecology, automating the interpretation of objects in glacial sediments may simplify the process and decrease its subjectivity. Definiens Professional 5 is an object-based image analysis program that imitates human cognitive methods; it is used in this study to identify objects apart from background matrices in multiple thin section images of glacial sediments. The program's initial results showed that further work is needed, but overall the software produced promising results. The method is repeatable and continues to generate consistent results without bias or ambiguity, so its application to micromorphology and similar areas will be valuable.

  20. Software.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1989

    1989-01-01

    Presented are reviews of two computer software packages for Apple II computers: "Organic Spectroscopy" and "Videodisc Display Program" for use with "The Periodic Table Videodisc." A sample spectrograph from "Organic Spectroscopy" is included. (CW)

  1. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.

  2. Advanced Transport Operating System (ATOPS) Flight Management/Flight Controls (FM/FC) software description

    NASA Technical Reports Server (NTRS)

    Wolverton, David A.; Dickson, Richard W.; Clinedinst, Winston C.; Slominski, Christopher J.

    1993-01-01

    The flight software developed for the Flight Management/Flight Controls (FM/FC) MicroVAX computer used on the Transport Systems Research Vehicle for Advanced Transport Operating Systems (ATOPS) research is described. The FM/FC software computes navigation position estimates, guidance commands, and those commands issued to the control surfaces to direct the aircraft in flight. Various modes of flight are provided for, ranging from computer assisted manual modes to fully automatic modes including automatic landing. A high-level system overview as well as a description of each software module comprising the system is provided. Digital systems diagrams are included for each major flight control component and selected flight management functions.

  3. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process, which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.
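
    The arithmetic core of any such cost-benefit estimate is a discounted comparison of costs and benefits over time. The abstract does not spell out the six stages, so the following is only a generic net-present-value sketch with hypothetical figures, not the proposed methodology itself:

      def npv(rate, cashflows):
          """Net present value; cashflows[0] occurs now, later entries yearly."""
          return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cashflows))

      # Hypothetical advanced-software investment: up-front cost, then yearly savings.
      investment = [-500_000] + [180_000] * 5
      print(f"NPV at 8%: ${npv(0.08, investment):,.0f}")  # positive -> worth funding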

  4. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data

    PubMed Central

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as a toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages. PMID:21253357

  5. Proposal for constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.

    1990-01-01

    Scientific model building can be a time intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high level, domain specific, modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.

  6. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    SciTech Connect

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  7. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Kostadin, Damevski

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  8. Perceptual Load Modulates Object-Based Attention

    ERIC Educational Resources Information Center

    Ho, Ming-Chou; Atchley, Paul

    2009-01-01

    Two experimental series are reported using both reaction time (RT) and a data-limited perceptual report to examine the effects of perceptual load on object-based attention. Perceptual load was manipulated across 3 levels by increasing the complexity of perceptual judgments. Data from the RT-based experiments showed object-based effects when the…

  9. Saccade latency indexes exogenous and endogenous object-based attention.

    PubMed

    Şentürk, Gözde; Greenberg, Adam S; Liu, Taosheng

    2016-10-01

    Classic studies of object-based attention have utilized keypress responses as the main dependent measure. However, people typically make saccades to fixate important objects. Recent work has shown that attention may act differently when it is deployed covertly versus in advance of a saccade. We further investigated the link between saccades and attention by examining whether object-based effects can be observed for saccades. We adapted the classical double-rectangle cueing paradigm of Egly, Driver, and Rafal (1994), and measured both the first saccade latency and the keypress reaction time (RT) to a target that appeared at the end of one of the two rectangles. Our results showed that saccade latencies exhibited higher sensitivity than did RTs for detecting effects of attention. We also assessed the generality of the attention effects by testing three types of cues: hybrid (predictive and peripheral), exogenous (nonpredictive and peripheral), and endogenous (predictive and central). We found that both RTs and saccade latencies exhibited effects of both space-based and object-based attentional selection. However, saccade latencies showed a more robust attentional modulation than RTs. For the exogenous cues, we observed a spatial inhibition of return along with an object-based effect, implying that object-based attention is independent of space-based attention. Overall, our results revealed an oculomotor correlate of object-based attention, suggesting that, in addition to spatial priority, object-level priority also affects saccade planning.

  10. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0.

    SciTech Connect

    Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

  11. NASA's Advanced Multimission Operations System: A Case Study in Formalizing Software Architecture Evolution

    NASA Technical Reports Server (NTRS)

    Barnes, Jeffrey M.

    2011-01-01

    All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.

  12. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  13. Roles for software technologies in advancing research and theory in educational psychology.

    PubMed

    Hadwin, Allyson F; Winne, Philip H; Nesbit, John C

    2005-03-01

    While reviews abound on theoretical topics in educational psychology, it is rare that we examine our field's instrumentation development, and what effects this has on educational psychology's evolution. To repair this gap, this paper investigates and reveals the implications of software technologies for researching and theorizing about core issues in educational psychology. From a set of approximately 1,500 articles published between 1999 and 2004, we sampled illustrative studies and organized them into four broad themes: (a) innovative ways to operationalize variables, (b) the changing nature of instructional interventions, (c) new fields of research in educational psychology, and (d) new constructs to be examined. In each area, we identify novel uses of these technologies and suggest how they may advance, and, in some instances, reshape theory and methodology. Overall, we demonstrate that software technologies hold significant potential to elaborate research in the field.

  14. Advanced Transport Operating System (ATOPS) color displays software description: MicroVAX system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Display MicroVAX computer used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery of February 27, 1991, known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes subroutines, functions, and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight Cathode Ray Tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  15. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chowdhury, J

    2014-02-01

    A Visual Basic simulation software package (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on a dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as of the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.

  16. Attentional Spreading in Object-Based Attention

    ERIC Educational Resources Information Center

    Richard, Ashleigh M.; Lee, Hyunkyu; Vecera, Shaun P.

    2008-01-01

    The authors investigated 2 effects of object-based attention: the spread of attention within an attended object and the prioritization of search across possible target locations within an attended object. Participants performed a flanker task in which the location of the task-relevant target was fixed and known to participants. A spreading…

  17. Object-Based Attention and Cognitive Tunneling

    ERIC Educational Resources Information Center

    Jarmasz, Jerzy; Herdman, Chris M.; Johannsdottir, Kamilla Run

    2005-01-01

    Simulator-based research has shown that pilots cognitively tunnel their attention on head-up displays (HUDs). Cognitive tunneling has been linked to object-based visual attention on the assumption that HUD symbology is perceptually grouped into an object that is perceived and attended separately from the external scene. The present research…

  18. Object-based representations of spatial images

    NASA Astrophysics Data System (ADS)

    Newsam, Shawn; Bhagavathy, Sitaram; Kenney, Charles; Manjunath, B. S.; Fonseca, Leila

    2001-03-01

    Object-based representations of image data enable new content-related functionalities while facilitating management of large image databases. Developing such representations for multi-date and multi-spectral images is one of the objectives of the second phase of the Alexandria Digital Library (ADL) project at UCSB. Image segmentation and image registration are two of the main issues that are to be addressed in creating localized image representations. We present in this paper some of the recent and current work by the ADL's image processing group on robust image segmentation, registration, and the use of image texture for content representation. Built upon these technologies are techniques for managing large repositories of data. A texture thesaurus assists in creating a semantic classification of image regions. An object-based representation is proposed to facilitate data storage, retrieval, analysis, and navigation.

  19. Award-Winning CARES/Life Ceramics Durability Evaluation Software Is Making Advanced Technology Accessible

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this task by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
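
    The probabilistic strength model conventionally underlying such brittle-material reliability analyses is the Weibull weakest-link distribution. The following is offered as background only, with hypothetical parameters; CARES/Life itself layers additional fracture-mechanics models on top of this basic form:

      import numpy as np

      def weibull_failure_probability(stress, sigma_0, m, volume=1.0):
          """Two-parameter Weibull weakest-link failure probability for a
          uniformly stressed volume: Pf = 1 - exp(-V * (sigma/sigma_0)**m)."""
          return 1.0 - np.exp(-volume * (stress / sigma_0) ** m)

      # Hypothetical ceramic: characteristic strength 400 MPa, Weibull modulus 10.
      for s in (200.0, 300.0, 400.0):
          pf = weibull_failure_probability(s, sigma_0=400.0, m=10)
          print(f"{s:.0f} MPa -> Pf = {pf:.4f}")

    The Weibull modulus m captures exactly the wide strength scatter the abstract mentions: the lower m is, the more variable the strength and the more a deterministic safety factor misleads.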

  20. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses; thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.
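
    For context, the classical load-based rating-life model that such stress-based fatigue modeling moves beyond is the Lundberg-Palmgren form (quoted here as standard bearing practice, not from the abstract):

      L_{10} = \left(\frac{C}{P}\right)^{p}, \qquad p = 3 \ \text{(ball bearings)}, \quad p = \tfrac{10}{3} \ \text{(roller bearings)}

    Here L_{10} is the life, in millions of revolutions, that 90% of identical bearings survive, C is the basic dynamic load rating, and P is the equivalent applied load. A stress-based model instead integrates fatigue risk over the computed three-dimensional stress field, which is what allows effects such as roller-edge stresses and shaft or housing distortion to enter the life estimate directly.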

  1. Software development for the evaluation of the ergonomic compatibility on the selection of advanced manufacturing technology.

    PubMed

    Maldonado-Macías, A; Reyes, R; Guillen, L; García, J

    2012-01-01

    Advanced Manufacturing Technology (AMT) is one of the most relevant resources that companies have to achieve competitiveness and best performance. The selection of AMT is a complex problem which involves a significant amount of information and uncertainty when multiple aspects must be taken into consideration. Existing models for the selection of AMT largely lack the Human Factors and Ergonomics perspective, which can lead to a more complete and reliable decision. This paper presents the development of software that enhances the application of an Ergonomic Compatibility Evaluation Model, which supports decision-making processes by taking into consideration the ergonomic attributes of designs. Ergonomic Compatibility is a construct used in this model, based mainly on the concept of human-artifact compatibility in human-compatible systems. An Axiomatic Design approach using the Information Axiom was also developed in a fuzzy environment to obtain the Ergonomic Incompatibility Content. The extension of this axiom to the evaluation of ergonomic compatibility requirements was the theoretical framework of this research. An incremental methodology of four stages was used to design and develop the software, which enables comparison of AMT alternatives through the evaluation of Ergonomic Compatibility Attributes.
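
    The Information Axiom at the heart of this evaluation assigns each design alternative an information content I = log2(1/p), where p is the probability that the alternative satisfies the (here, ergonomic) requirement; alternatives with lower information content are preferred. A crisp, non-fuzzy sketch of that calculation follows; the paper's actual formulation is fuzzy, and the ranges below are hypothetical:

      import math

      def information_content(system_range, design_range):
          """Information Axiom with uniform distributions:
          I = log2(system range / common range). Ranges are (low, high) tuples."""
          lo = max(system_range[0], design_range[0])
          hi = min(system_range[1], design_range[1])
          common = max(0.0, hi - lo)
          if common == 0.0:
              return math.inf          # the requirement cannot be met at all
          return math.log2((system_range[1] - system_range[0]) / common)

      # Hypothetical ergonomic requirement vs. what two AMT alternatives deliver.
      requirement = (0.0, 0.6)   # acceptable incompatibility scores
      print(information_content((0.0, 0.9), requirement))  # alternative A: ~0.58 bits
      print(information_content((0.4, 1.2), requirement))  # alternative B: 2 bits, worse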

  2. Advanced communications technology satellite high burst rate link evaluation terminal communication protocol software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters terminal to communicate with the ACTS for various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan is command transmission, telemetry reception, error detection, and error recovery procedures.

  3. Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Cotti, U.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; Del Peral, L.; Deligny, O.; Dembinski, H.; Denkiewicz, A.; di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jiraskova, S.; Kadija, K.; Kampert, K. H.; Karhan, P.; Karova, T.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lucero, A.; Ludwig, M.; Lyberis, H.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. 
O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. I.; Miller, W.; Miramonti, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Nhung, P. T.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parrisius, J.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; PeĶala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rivera, H.; Riviére, C.; Rizi, V.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salina, G.; Sánchez, F.; Santander, M.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Schmidt, T.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schroeder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Semikoz, D.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Taşcău, O.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winders, L.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Younk, P.; Yuan, G.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Ziolkowski, M.

    2011-04-01

    The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs “radio-hybrid” measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.
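
    One of the features named here, transparent and efficient handling of FFTs, comes down to moving traces between time and frequency domains for filtering and reconstruction. A toy illustration of the band-limited processing involved (the sampling rate and the 30-80 MHz band are typical of radio air-shower detection, but the trace and values here are hypothetical and this is not Offline code):

      import numpy as np

      rng = np.random.default_rng(0)
      fs = 200e6                                   # sampling rate in Hz (hypothetical)
      n = 2048
      trace = rng.normal(size=n)                   # stand-in for a measured voltage trace
      spectrum = np.fft.rfft(trace)                # time domain -> frequency domain
      freqs = np.fft.rfftfreq(n, d=1.0 / fs)
      in_band = (freqs >= 30e6) & (freqs <= 80e6)  # 30-80 MHz radio-detection band
      spectrum[~in_band] = 0.0                     # brick-wall band-pass filter
      filtered = np.fft.irfft(spectrum, n=n)       # back to the time domain
      print(filtered.shape, float(filtered.std()))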

  4. Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory

    SciTech Connect

    Abreu, P.; Aglietta, M.; Ahn, E.J.; Albuquerque, I.F.M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muniz, J.; Ambrosio, M.; et al.

    2011-01-01

    The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs 'radio-hybrid' measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.

  5. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    PubMed

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data which helps in optimization and shows the software performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values have been observed to corroborate well with the extensive experimental investigations which were found to be consistent under varying operating conditions like operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of the Willmott d-index (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
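
    The two goodness-of-fit measures quoted are simple to compute. A sketch with hypothetical predicted/observed arrays follows; the index-of-agreement formula is Willmott's standard one, while the relative-error definition below is one common choice, since the abstract does not spell out its exact variant:

      import numpy as np

      def willmott_d(pred, obs):
          """Willmott's index of agreement: d = 1 - sum((P-O)^2) /
          sum((|P - Omean| + |O - Omean|)^2); d -> 1 means perfect agreement."""
          o_mean = obs.mean()
          return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
              (np.abs(pred - o_mean) + np.abs(obs - o_mean)) ** 2)

      def mean_relative_error(pred, obs):
          return float(np.mean(np.abs(pred - obs) / np.abs(obs)))

      obs = np.array([0.82, 0.75, 0.68, 0.90, 0.71])   # hypothetical removal fractions
      pred = np.array([0.80, 0.77, 0.66, 0.88, 0.74])
      print(willmott_d(pred, obs), mean_relative_error(pred, obs))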

  6. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  7. Object-based mapping of drumlins from DTMs

    NASA Astrophysics Data System (ADS)

    Eisank, C.; Dragut, L.; Blaschke, T.

    2012-04-01

    Until recently, landforms such as drumlins have only been manually delineated due to the difficulty of integrating contextual and semantic landform information in per-cell classification approaches. Therefore, in most cases the results of per-cell classifications presented basic landform elements or broad-scale physiographic regions that were only thematically defined. In contrast, object-based analysis provides spatially configured landform objects that are generated by terrain segmentation, the process of merging DTM cells into meaningful terrain objects at multiple scales. Such terrain objects should be favoured for landform modelling for the following reasons: firstly, their outlines potentially better correspond to the spatial limits of landforms as conceptualised by geoscientists; secondly, spatially aware objects enable the integration of semantic descriptions in the classification process. We present a multi-scale object-based study on automated delineation and classification of drumlins for a small test area in Bavaria, Germany. The multi-resolution segmentation algorithm is applied to create statistically meaningful object patterns from selected DTMs, which are derived from a 5 m LiDAR DEM. For the subsequent classification of drumlins a semantics-based approach, which uses the principles of semantic modelling, is employed: initially, a geomorphological concept of the landform type drumlin is developed. The drumlin concept should ideally comprise verbal descriptions of the fundamental morphometric, morphological, hierarchical and contextual properties. Subsequently, the semantic model is built by structuring the conceptualised knowledge facts, and by associating those facts with object- and class-related features, which are available in commonly used object-based software products for the development of classification rules. For the accuracy assessment we plan an integrated approach, which combines a statistical comparison to field maps and a qualitative
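
    Terrain segmentation of this kind can be approximated outside proprietary object-based packages. The following is not the multi-resolution segmentation algorithm the study uses, but a deliberately simple residual-relief alternative that illustrates turning DTM cells into candidate landform objects (all thresholds hypothetical):

      import numpy as np
      from scipy import ndimage

      def drumlin_candidates(dem, trend_sigma=20.0, relief_min=2.0, min_cells=200):
          """Group positive residual-relief cells into connected 'objects'."""
          trend = ndimage.gaussian_filter(dem, trend_sigma)   # regional trend surface
          residual = dem - trend                              # local (residual) relief
          mask = residual > relief_min                        # raised landform cells
          labels, n = ndimage.label(mask)                     # cells -> terrain objects
          sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          keep = np.arange(1, n + 1)[sizes >= min_cells]      # drop tiny speckle objects
          return np.isin(labels, keep), labels

      # Synthetic DEM: a tilted plane with one smooth, elongated mound.
      y, x = np.mgrid[0:400, 0:400]
      dem = 0.01 * x + 8.0 * np.exp(-(((x - 200) / 60.0) ** 2 + ((y - 200) / 25.0) ** 2))
      mask, labels = drumlin_candidates(dem)
      print(labels.max(), "candidate object(s)")

    Real object-based workflows would then attach the semantic drumlin concept (size, elongation, orientation, context) as rules over these objects rather than over individual cells.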

  8. Computer-Assisted Instruction in the Context of the Advanced Instructional System: Authoring Support Software. Final Report.

    ERIC Educational Resources Information Center

    Montgomery, Ann D.; Judd, Wilson A.

    This report details the design, development, and implementation of computer software to support the cost-effective production of computer assisted instruction (CAI) within the context of the Advanced Instructional System (AIS) located at Lowry Air Force Base. The report supplements the computer managed Air Force technical training that is…

  9. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  10. Recent advances in hardware and software to improve spent fuel measurements

    SciTech Connect

    Staples, P.; Beddingfield, D. H.; Lestone, J. P.; Pelowitz, D. G.; Bytchkov, M.; Starovich, Z.; Harizanov, I.; Luna-Vellejo, J.; Lavender, C.

    2001-01-01

    Vast quantities of spent fuel are available for safeguards measurements, primarily in the Commonwealth of Independent States (CIS) of the former Soviet Union. This spent fuel, much of which consists of long-cooling-time material, is going to become less unique in the world safeguards arena as reprocessing projects or permanent repositories continue to be delayed or postponed. The long cooling time of many of the spent fuel assemblies being prepared for intermediate-term storage in the CIS countries promotes the possibility of increased accuracy in spent fuel assays. This improvement is made possible through the decay of the curium isotopes and of fission products. An important point to consider for the future, one that could advance safeguards measurements for reverification and inspection, is to determine what safeguards requirements should be imposed upon this 'new' class of spent fuel, since improvements in measurement capability will obviously affect the safeguards requirements. What most significantly enables this progress in spent fuel measurements is the improvement in computer processing power and software enhancements leading to user-friendly Graphical User Interfaces (GUIs). The software used for these projects significantly reduces the IAEA inspector's time expenditure for both learning and operating computer and data acquisition systems. At the same time, by standardizing the spent fuel measurements, it is possible to increase the reproducibility and reliability of the measurement data. Hardware systems will be described which take advantage of the increased computer control available to enable more complex measurement scenarios. A specific example of this is the active regulation of a spent fuel neutron coincidence counter's ³He tubes' high voltage, and subsequent scaling of measurement results to maintain a calibration for direct assay of the plutonium content of Fast Breeder Reactor spent fuel. The plutonium content has been successfully determined

  11. GRAPHICS MANAGER (GFXMGR): An interactive graphics software program for the Advanced Electronics Design (AED) graphics controller, Model 767

    SciTech Connect

    Faculjak, D.A.

    1988-03-01

    Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.

  12. Development of tools for safety analysis of control software in advanced reactors

    SciTech Connect

    Guarro, S.; Yau, M.; Motamed, M.

    1996-04-01

    Software based control systems have gained a pervasive presence in a wide variety of applications, including nuclear power plant control and protection systems which are within the oversight and licensing responsibility of the US Nuclear Regulatory Commission. While the cost effectiveness and flexibility of software based plant process control is widely recognized, it is very difficult to achieve and prove high levels of demonstrated dependability and safety assurance for the functions performed by process control software, due to the very flexibility and potential complexity of the software itself. The development of tools to model, analyze and test software design and implementations in the context of the system that the software is designed to control can greatly assist the task of providing higher levels of assurance than those obtainable by software testing alone. This report presents and discusses the development of the Dynamic Flowgraph Methodology (DFM) and its application in the dependability and assurance analysis of software-based control systems. The features of the methodology and full-scale examples of application to both generic process and nuclear power plant control systems are presented and discussed in detail. The features of a workstation software tool developed to assist users in the application of DFM are also described.

  13. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FPP) is described. The FPP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  14. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  15. I-SPINE: a software package for advances in image-guided and minimally invasive spine procedures

    NASA Astrophysics Data System (ADS)

    Choi, Jae Jeong; Cleary, Kevin R.; Zeng, Jianchao; Gary, Kevin A.; Freedman, Matthew T.; Watson, Vance; Lindisch, David; Mun, Seong K.

    2000-05-01

    While image guidance is now routinely used in the brain in the form of frameless stereotaxy, it is beginning to be more widely used in other clinical areas such as the spine. At Georgetown University Medical Center, we are developing a program to provide advanced visualization and image guidance for minimally invasive spine procedures. This is a collaboration between an engineering-based research group and physicians from the radiology, neurosurgery, and orthopaedics departments. A major component of this work is the ISIS Center Spine Procedures Imaging and Navigation Engine, which is a software package under development as the base platform for technical advances.

  16. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  17. Applications of custom developed object based analysis tool: Precipitation in Pacific, Tropical cyclones precipitation, Hail areas

    NASA Astrophysics Data System (ADS)

    Skok, Gregor; Rakovec, Jože; Strajnar, Benedikt; Bacmeister, Julio; Tribbia, Joe

    2014-05-01

    In the last few years an object-based analysis software tool was developed at the University of Ljubljana in collaboration with the National Center for Atmospheric Research (NCAR). The tool was originally based on ideas of the Method for Object-Based Diagnostic Evaluation (MODE) developed by NCAR but has since evolved and changed considerably and is now available as a separate free software package. The software is called the Forward in Time object analysis tool (FiT tool). The software was used to analyze numerous datasets, mainly focusing on precipitation. A climatology of satellite and model precipitation in the low- and mid-latitude Pacific Ocean was produced by identifying and tracking individual precipitation systems and estimating their lifespan, movement and size. A global climatology of tropical cyclone precipitation was compiled using satellite data, and tracking and analysis of areas with hail in Slovenia was performed using radar data. The tool will be presented along with some results of these applications.
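    A toy sketch of what such object-based identification and tracking involves follows; this is not FiT code, and the rain-rate threshold and the overlap-matching rule are assumptions for illustration (Python):

      import numpy as np
      from scipy import ndimage

      def identify(field, threshold=1.0):
          """Label contiguous areas where rain rate exceeds a threshold."""
          labels, n_objects = ndimage.label(field >= threshold)
          return labels

      def track(labels_t0, labels_t1):
          """Match objects across two time steps by spatial overlap."""
          matches = {}
          for obj in range(1, labels_t0.max() + 1):
              overlap = labels_t1[labels_t0 == obj]
              overlap = overlap[overlap > 0]
              if overlap.size:
                  matches[obj] = int(np.bincount(overlap).argmax())
          return matches

      rng = np.random.default_rng(0)
      rain_t0 = rng.gamma(0.5, 2.0, size=(50, 50))
      rain_t1 = np.roll(rain_t0, 3, axis=1)      # same systems, advected east
      print(track(identify(rain_t0), identify(rain_t1)))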

  18. ATOS: Integration of advanced technology software within distributed Spacecraft Mission Operations Systems

    NASA Technical Reports Server (NTRS)

    Jones, M.; Wheadon, J.; Omullane, W.; Whitgift, D.; Poulter, K.; Niezette, M.; Timmermans, R.; Rodriguez, Ivan; Romero, R.

    1994-01-01

    The Advanced Technology Operations System (ATOS) is a program of studies into the integration of advanced applications (including knowledge based systems (KBS)) with ground systems for the support of spacecraft mission operations.

  19. Advanced engineering software for in-space assembly and manned planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Delaquil, Donald; Mah, Robert

    1990-01-01

    Meeting the objectives of the Lunar/Mars initiative to establish safe and cost-effective extraterrestrial bases requires an integrated software/hardware approach to operational definitions and systems implementation. This paper begins this process by taking a 'software-first' approach to systems design, for implementing specific mission scenarios in the domains of in-space assembly and operations of the manned Mars spacecraft. The technological barriers facing implementation of robust operational systems within these two domains are discussed, and preliminary software requirements and architectures that resolve these barriers are provided.

  20. Advances in the design of a thermomechanical analyzer for fibers. II. Computer facilities and software

    NASA Astrophysics Data System (ADS)

    Noui, L.; Hearle, J. W. S.

    1995-06-01

    PC-based software for the full control of the flexible thermomechanical analyzer (FTMA) for yarns and fibers is described. The software permits a flexible procedure to control three essential parameters of the FTMA, namely tension, twist, and temperature. The computer program allows data acquisition at a programmable rate of up to 62.5 ksamples/s, on-line data display, and on-line data storage. Up to eight channels can be monitored. A circular buffer is used to store an unlimited amount of data. For FTMA applications, data were calibrated in terms of newtons for the tension, degrees Celsius for the temperature, and newton-meters for the torque, and can be saved in three different formats: ASCII, LOTUS, or binary. The software is user friendly, making use of a graphical user interface for motor control and data display. The software is also capable of controlling thermomechanical tests at constant force.
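    The circular (ring) buffer mentioned above is a standard technique for decoupling a fast acquisition loop from slower consumers; a minimal sketch follows, with the eight-channel layout taken from the text and everything else (class design, capacity) assumed (Python):

      import numpy as np

      class RingBuffer:
          def __init__(self, n_channels=8, capacity=65536):
              self.buf = np.zeros((capacity, n_channels))
              self.capacity, self.head, self.count = capacity, 0, 0

          def push(self, sample):
              """Store one multi-channel sample, overwriting the oldest when full."""
              self.buf[self.head] = sample
              self.head = (self.head + 1) % self.capacity
              self.count = min(self.count + 1, self.capacity)

          def drain(self):
              """Return buffered samples in arrival order and mark them consumed."""
              idx = (self.head - self.count + np.arange(self.count)) % self.capacity
              samples, self.count = self.buf[idx].copy(), 0
              return samples

      rb = RingBuffer()
      rb.push(np.ones(8))          # one sample across all eight channels
      print(rb.drain().shape)      # (1, 8)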

  1. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity by using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean values of elevation and of the standard deviation of elevation, respectively. Results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as online download. The results are embedded in a web application with functionalities of visualization and download. PMID:22485060
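    A small sketch of one plausible reading of the two-parameter partition follows (this is not the authors' code): each segmented object is assigned to a sub-domain by comparing its mean elevation and its elevation standard deviation to the respective means over all objects (Python):

      import numpy as np

      def classify_objects(mean_elev, std_elev):
          """mean_elev, std_elev: 1-D arrays, one value per segmented object."""
          t_e, t_s = mean_elev.mean(), std_elev.mean()
          high_e, high_s = mean_elev >= t_e, std_elev >= t_s
          labels = np.empty(mean_elev.size, dtype=object)
          labels[ high_e &  high_s] = "high elevation, high relief"
          labels[ high_e & ~high_s] = "high elevation, low relief"
          labels[~high_e &  high_s] = "low elevation, high relief"
          labels[~high_e & ~high_s] = "low elevation, low relief"
          return labels

      print(classify_objects(np.array([120.0, 900.0, 2300.0]),
                             np.array([5.0, 80.0, 300.0])))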

  2. Research in software allocation for advanced manned mission communications and tracking systems

    NASA Technical Reports Server (NTRS)

    Warnagiris, Tom; Wolff, Bill; Kusmanoff, Antone

    1990-01-01

    An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of the Space Station Freedom (SSF) was performed. The intent of the assessment was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment process was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted from the assessment. It was concluded that the assessment process would not be completely successful for a system with changing requirements. It was also concluded that memory and hardware requirements were being modified to fit as a consequence of the change process, and although throughput could not be quantified, potential problem areas could be identified. Finally, inherent flexibility of the system design was essential for the success of a system design with changing requirements. Recommendations resulting from the assessment included development of common software for some embedded controller functions, reduction of embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improvement in communications between software development personnel to enhance the integration process. Lastly, a critical observation was that the software integration tasks did not appear to be addressed in the design process to the degree necessary for successful satisfaction of the system requirements.

  3. The Center for Technology for Advanced Scientific Component Software (TASCS) Lawrence Livermore National Laboratory - Site Status Update

    SciTech Connect

    Epperly, T W

    2008-12-03

    This report summarizes LLNL's progress for the period April through September of 2008 for the Center for Technology for Advanced Scientific Component Software (TASCS) SciDAC. The TASCS project is organized into four major thrust areas: CCA Environment (72%), Component Technology Initiatives (16%), CCA Toolkit (8%), and User and Application Outreach & Support (4%). The percentage of LLNL's effort allocation is shown in parentheses for each thrust area. Major thrust areas are further broken down into activity areas; LLNL's effort directed to each activity is shown in Figure 1. Enhancements, Core Tools, and Usability are all part of CCA Environment, and Software Quality is part of Component Technology Initiatives. The balance of this report covers our accomplishments in each of these activity areas.

  4. Developing Open Source Software To Advance High End Computing. Report to the President.

    ERIC Educational Resources Information Center

    National Coordination Office for Information Technology Research and Development, Arlington, VA.

    This is part of a series of reports to the President and Congress developed by the President's Information Technology Advisory Committee (PITAC) on key contemporary issues in information technology. This report defines open source software, explains PITAC's interest in this model, describes the process used to investigate issues in open source…

  5. Roles for Software Technologies in Advancing Research and Theory in Educational Psychology

    ERIC Educational Resources Information Center

    Hadwin, Allyson F.; Winne, Philip H.; Nesbit, John C.

    2005-01-01

    While reviews abound on theoretical topics in educational psychology, it is rare that we examine our field's instrumentation development, and what effects this has on educational psychology's evolution. To repair this gap, this paper investigates and reveals the implications of software technologies for researching and theorizing about core issues…

  6. Investigating Advances in the Acquisition of Systems Based on Open Architecture and Open Source Software

    DTIC Science & Technology

    2011-08-01

    careers, in order to survive and evolve. Similarly, as FOSSD projects give rise to new types of requirements for community building, community... services, and creating more stimulating jobs, careers, and workforce development opportunities depend on faster, better, and cheaper software... are three highly visible astrophysics research projects whose scientific discoveries depend on processing remotely sensed data through a complex

  7. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainties associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formula can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
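    A toy example of the probability-based idea follows: expected thermal generation cost is weighted over wind-output scenarios. The two-source setup and all numbers are invented for illustration and are not Sandia's formulation (Python):

      scenarios = [(0.2, 10.0), (0.5, 40.0), (0.3, 70.0)]  # (probability, wind MW)
      demand, thermal_cost = 100.0, 35.0                   # MW, $/MWh

      # Expected cost of covering the demand left over after wind, over scenarios
      expected_cost = sum(p * thermal_cost * max(demand - wind, 0.0)
                          for p, wind in scenarios)
      print(f"expected thermal cost: ${expected_cost:.0f}/h")  # $1995/h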

  8. A software tool for advanced MRgFUS prostate therapy planning and follow up

    NASA Astrophysics Data System (ADS)

    van Straaten, Dörte; Hoogenboom, Martijn; van Amerongen, Martinus J.; Weiler, Florian; Issawi, Jumana Al; Günther, Matthias; Fütterer, Jurgen; Jenne, Jürgen W.

    2017-03-01

    US-guided HIFU/FUS ablation for the therapy of prostate cancer is a clinically established method, while MR-guided HIFU/FUS applications for the prostate have only recently entered clinical evaluation. Although MRI examination is an excellent diagnostic tool for prostate cancer, it is a time-consuming procedure and not practicable within an MRgFUS therapy session. The aim of our ongoing work is to develop software to support therapy planning and post-therapy follow-up for MRgFUS on localized prostate cancer, based on multi-parametric MR protocols. The clinical workflow of diagnosis, therapy and follow-up of MR-guided FUS on prostate cancer was analyzed in depth. Based on this, the image processing workflow was designed and all necessary components, e.g. GUI, viewer, registration tools etc., were defined and implemented. The software is based on MeVisLab, with several C++ modules implemented for the image processing tasks. The developed software, called LTC (Local Therapy Control), will automatically register and visualize all images (T1w, T2w, DWI etc.) and ADC or perfusion maps gained from the diagnostic MRI session. This maximum of diagnostic information helps to segment all necessary ROIs, e.g. the tumor, for therapy planning. Final therapy planning will be performed based on these segmentation data in the following MRgFUS therapy session. In addition, the developed software should help to evaluate the therapy success, by synchronization and display of pre-therapeutic, therapy and follow-up image data, including the therapy plan and thermal dose information. In this ongoing project, the first stand-alone prototype was completed and will be clinically evaluated.

  9. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

    [Snippet; nomenclature excerpt: ... cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, ...] GPOPS is a MATLAB-based hp-adaptive pseudospectral optimization software. GPOPS utilizes a Radau Pseudospectral Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by

  10. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected systems accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
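    A hedged illustration of the multilateration principle follows; this is not MICRODOT code, merely the underlying geometric idea: recover a point from ranges to known stations by Gauss-Newton least squares (Python):

      import numpy as np

      def multilaterate(stations, ranges, x0, iterations=10):
          x = np.asarray(x0, dtype=float)
          for _ in range(iterations):
              predicted = np.linalg.norm(stations - x, axis=1)
              jacobian = (x - stations) / predicted[:, None]   # d(range)/d(position)
              dx, *_ = np.linalg.lstsq(jacobian, ranges - predicted, rcond=None)
              x += dx
          return x

      stations = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0],
                           [0.0, 100.0, 0.0], [0.0, 0.0, 100.0]])
      truth = np.array([20.0, 30.0, 40.0])
      ranges = np.linalg.norm(stations - truth, axis=1)        # simulated ranges
      print(multilaterate(stations, ranges, x0=[1.0, 1.0, 1.0]))  # ~[20, 30, 40]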

  11. Object-based landslide detection in different geographic regions

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Hölbling, Daniel; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

    SPOT-5 images are combined with digital elevation models (DEM) for developing a consistent semi-automated landslide detection approach using eCognition (Trimble) software. Suitable image objects are generated by means of multiresolution segmentation. Expert knowledge, i.e. reported facts on features (e.g. mean object slope, mean NDVI) and thresholds that are commonly chosen by professionals for digital landslide mapping, is considered during classification. The applicability of a range of features is tested and the most promising parameters, i.e. features that produce appropriate results for both regions, are selected for landslide detection. However, minor adaptations of particular thresholds are necessary due to the distinct environmental conditions of the test sites. In order to reduce the number of required adjustments to a minimum, relational features and spectral indices are primarily used for classification. The obtained results are finally compared to manually digitized reference polygons and existing landslide inventories in order to quantify the applicability of the developed object-based landslide detection approach in different geographic regions.

  12. Application of standard and advanced open source GIS software functionality for analysis of coordinates obtained by GNSS measurements

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara

    2016-04-01

    Currently there is a wide variety of GNSS measurements used in geodetic practice. The coordinates obtained by static, kinematic or precise point positioning GNSS measurements can be analyzed using the standard functionality of any GIS software, but the open source packages give users the opportunity to build advanced functionality themselves. The coordinates obtained by measurements can be stored in a spatial geodatabase, with information on the precision and time of measurement added. The data can be visualized in different coordinate systems and projections and analyzed by applying different types of spatial analysis. The process can also be automated to a high degree. An example with test data is prepared; it includes automated loading of files with coordinates obtained by GNSS measurements, together with additional information on the precision and time of the measurements. Standard and advanced open source GIS software functionality is used to automate the analysis process. Graph theory is also applied to build time series from the data stored in the spatial geodatabase.
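    A minimal sketch of the storage idea follows, using SQLite in place of a full spatial geodatabase; the table layout and the sample values are assumptions, not the author's schema (Python):

      import sqlite3

      con = sqlite3.connect("gnss.db")
      con.execute("""CREATE TABLE IF NOT EXISTS solutions (
          station TEXT, epoch TEXT, easting REAL, northing REAL, height REAL,
          sigma_e REAL, sigma_n REAL, sigma_h REAL)""")
      con.execute("INSERT INTO solutions VALUES (?,?,?,?,?,?,?,?)",
                  ("ST01", "2016-04-01T10:00:00", 435012.314, 4730145.882,
                   291.447, 0.003, 0.004, 0.008))
      con.commit()

      # Height time series for one station, ready for further analysis
      for row in con.execute("""SELECT epoch, height, sigma_h FROM solutions
                                WHERE station = 'ST01' ORDER BY epoch"""):
          print(row)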

  13. Software Systems 2--Compiler and Operating Systems Lab--Advanced, Data Processing Technology: 8025.33.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL.

    The course outline has been prepared as a guide to help the student develop the skills and knowledge necessary to succeed in the field of data processing. By learning the purpose and principles of compiler programs and operating systems, the student will become familiar with advanced data processing procedures that are representative of computer…

  14. Inhibition of Return and Object-Based Attentional Selection

    ERIC Educational Resources Information Center

    List, Alexandra; Robertson, Lynn C.

    2007-01-01

    Visual attention research has revealed that attentional allocation can occur in space- and/or object-based coordinates. Using the direct and elegant design of R. Egly, J. Driver, and R. Rafal (1994), the present experiments tested whether space- and object-based inhibition of return (IOR) emerge under similar time courses. The experiments were…

  15. Assessment of small hydropower potential for the Olanesti River using advanced software techniques

    NASA Astrophysics Data System (ADS)

    Moldoveanu, A.; Galie, A.; Moldoveanu, M.; Popa, F.; Tica, E.; Popa, B.

    2017-01-01

    The assessment of small hydropower potential for rivers is essential for implementation of the Renewable Energy Sources Directive, in order to identify and analyze opportunities for new small hydropower developments. As the Water Framework Directive requires non-deterioration of the status of river water bodies, the consistency of the flow regime downstream of a hydropower plant with the environmental objectives is important. The paper presents a case study of the Olanesti River (Romania), using software techniques dedicated to the assessment of small hydropower potential. The hydropower potential of the river was assessed considering the heads based on the Digital Terrain Model and the mean flow. In addition, the environmental flow was considered in the installed capacity assessment, in order to cope with the Water Framework Directive requirements. The harnessed hydropower potential, the optimal distance between the water intake and the power plant location, their optimum positions along the river sector, and the installed capacity and electricity production, calculated over an average lifetime, are the outcomes of the software. The applicability of the tool might be extended to a regional scale in order to support decision-making authorities, taking into account the increasing demand for energy as well as environmental issues.
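    A worked example of the standard relation presumably underlying such assessments, P = rho * g * Q * H * eta, with the environmental flow subtracted from the mean flow first; all numbers are illustrative, not values from the Olanesti study (Python):

      RHO, G = 1000.0, 9.81        # water density (kg/m^3), gravity (m/s^2)

      def plant_power_w(mean_flow, env_flow, head, efficiency=0.85):
          """Harnessable power (W) from usable flow (m^3/s) and head (m)."""
          usable = max(mean_flow - env_flow, 0.0)
          return RHO * G * usable * head * efficiency

      power = plant_power_w(mean_flow=3.2, env_flow=0.4, head=45.0)
      print(f"{power / 1e6:.2f} MW")   # ~1.05 MW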

  16. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  17. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  18. Advances in software for faster procedure and lower radiotracer dose myocardial perfusion imaging.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2015-01-01

    The American Society of Nuclear Cardiology has recently published documents that encourage laboratories to take all appropriate steps to greatly decrease patient radiation dose, and has set the goal that by 2014, 50% of all myocardial perfusion studies be performed with an associated radiation exposure of 9 mSv. In the present work, a description of the major software techniques readily available to shorten procedure time and decrease injected activity is presented. In particular, new reconstruction methods and their ability to include means for resolution recovery and noise regularization are described. The use of these improved reconstruction algorithms results in a consistent reduction in acquisition time, injected activity and, consequently, in the radiation dose absorbed by the patient. The clinical implications of the use of these techniques are also described in terms of maintained and even improved study quality, accuracy and sensitivity for the detection of heart disease.

  19. Advanced Communications Technology Satellite high burst rate link evaluation terminal experiment control and monitor software maintenance manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.

  20. Object-based selection is contingent on attentional control settings.

    PubMed

    Taylor, J Eric T; Rajsic, Jason; Pratt, Jay

    2016-05-01

    The visual system allocates attention in object-based and location-based modes. However, the question of when attention selects objects and when it selects locations remains poorly understood. In this article, we present variations on two classic paradigms from the object-based attention literature, in which object-based effects are observed only when the object feature matches the task goal of the observer. In Experiment 1, covert orienting was influenced by task-irrelevant rectangles, but only when the target color matched the rectangle color. In Experiment 2, the region of attentional focus was adjusted to the size of task-irrelevant objects, but only when the target color matched the object color. In Experiment 3, we ruled out the possibility that contingent object-based selection is caused by color-based intratrial priming. These demonstrations of contingent object-based attention suggest that object-based selection is neither mandatory nor default, and that object-based effects are contingent on simple, top-down attentional control settings.

  1. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.
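    As a hedged example of the kind of geospatial query such a PostgreSQL/PostGIS stack supports, consider the following; the table and column names are hypothetical, not the actual OBIS schema, and it requires psycopg2 and a PostGIS-enabled database (Python):

      import psycopg2

      con = psycopg2.connect("dbname=obis user=reader")
      cur = con.cursor()
      # Ten most frequently observed species inside a bounding box (WGS84)
      cur.execute("""
          SELECT species, COUNT(*) AS n
          FROM occurrences
          WHERE ST_Within(geom, ST_MakeEnvelope(-80, 25, -60, 45, 4326))
          GROUP BY species
          ORDER BY n DESC
          LIMIT 10;
      """)
      for species, n in cur.fetchall():
          print(species, n)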

  2. Advanced communications technology satellite high burst rate link evaluation terminal power control and rain fade software test plan, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenters terminal to communicate with the ACTS for various experiments by government, university, and industry agencies. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.

  3. Random visual noise impairs object-based attention.

    PubMed

    Abrams, Richard A; Law, Mark B

    2002-02-01

    Object-based visual attention is observed when the benefit of attending to one element in a display extends to other elements that are part of the same perceptual object. Apperceptive agnosia is an object identification deficit in which spatial attention is preserved but object-based attention is impaired. Some debate exists regarding the extent to which the object-based impairment can be attributed to perceptual mechanisms that are specifically involved in grouping and segmentation of a scene, as opposed to early sensory processes. In the present paper we show that random visual noise is sufficient to eliminate the object benefit, a result inconsistent with the view that grouping mechanisms are responsible for the effect. The results have implications for an understanding of apperceptive agnosia, and for an understanding of object-based attention more generally.

  4. Object-based neglect varies with egocentric position.

    PubMed

    Karnath, Hans-Otto; Mandler, André; Clavagnier, Simon

    2011-10-01

    Different reference frames have been identified to influence neglect behavior. In particular, neglect has been demonstrated to be related to the contralesional side of the subject's body (egocentric reference frames) as well as to the contralesional side of individual objects irrespective of their position to the patient (object-based reference frame). There has been discussion whether this distinction separates neglect into body- and object-based forms. The present experiment aimed to prove possible interactions between object-based and egocentric aspects in spatial neglect. Neglect patients' eye and head movements were recorded while they explored objects at five egocentric positions along the horizontal dimension of space. The patients showed both egocentric as well as object-based behavior. Most interestingly, data analysis revealed that object-based neglect varied with egocentric position. Although the neglect of the objects' left side was strong at contralesional egocentric positions, it ameliorated at more ipsilesional egocentric positions of the objects. The patients showed steep, ramp-shaped patterns of exploration for objects located on the far contralesional side and a broadening of these patterns as the locations of the objects shifted more to the ipsilesional side. The data fitted well with the saliency curves predicted by a model of space representation, which suggests that visual input is represented in two modes simultaneously: in veridical egocentric coordinates and in within-object coordinates.

  5. Advanced communications technology satellite high burst rate link evaluation terminal experiment control and monitor software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document.

  6. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A) software assurance plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert; Smith, Claude

    1994-01-01

    This document defines the responsibilities of Software Quality Assurance (SQA) for the development of the flight software installed in EOS/AMSU-A instruments, and the ground support software used in the test and integration of the EOS/AMSU-A instruments.

  7. Multiscale quantification of urban composition from EO-1/Hyperion data using object-based spectral unmixing

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2016-05-01

    Quantification of the urban composition is important in urban planning and management. Previous research has primarily focused on unmixing medium-spatial resolution multispectral imagery using spectral mixture analysis (SMA) in order to estimate the abundance of urban components. For this study an object-based multiple endmember spectral mixture analysis (MESMA) approach was applied to unmix the 30-m Earth Observing-1 (EO-1)/Hyperion hyperspectral imagery. The abundance of two physical urban components (vegetation and impervious surface) was estimated and mapped at multiple scales and two defined geographic zones. The estimation results were validated by a reference dataset generated from fine spatial resolution aerial photography. The object-based MESMA approach was compared with its corresponding pixel-based one, and EO-1/Hyperion hyperspectral data was compared with the simulated EO-1/Advanced Land Imager (ALI) multispectral data in the unmixing modeling. The pros and cons of the object-based MESMA were evaluated. The result illustrates that the object-based MESMA is promising for unmixing the medium-spatial resolution hyperspectral imagery to quantify the urban composition, and it is an attractive alternative to the traditional pixel-based mixture analysis for various applications.
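    The linear mixing model behind (ME)SMA can be sketched as follows: an observed spectrum is modeled as a non-negative, sum-to-one combination of endmember spectra. The three-band endmember spectra below are invented for illustration and this simplified solver is not the study's MESMA implementation (Python):

      import numpy as np
      from scipy.optimize import nnls

      # Columns: vegetation, impervious surface; rows: spectral bands
      endmembers = np.array([[0.45, 0.20],
                             [0.30, 0.35],
                             [0.05, 0.25]])
      object_spectrum = np.array([0.325, 0.325, 0.150])  # mean object spectrum

      abundances, _ = nnls(endmembers, object_spectrum)  # non-negative least squares
      abundances /= abundances.sum()                     # normalize to sum to one
      print(abundances)   # ~[0.5, 0.5]: half vegetation, half impervious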

  8. SimTool - An object based approach to simulation construction

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Yazbeck, Marwan E.; Edwards, H. C.; Barnette, Randall D.

    1993-01-01

    The creation and maintenance of large complex simulations can be a difficult and error prone task. A number of interactive and automated tools have been developed to aid in simulation construction and maintenance. Many of these tools are based upon object oriented analysis and design concepts. One such tool, SimTool, is an object based integrated tool set for the development, maintenance, and operation of large, complex and long lived simulations. This paper discusses SimTool's object based approach to simulation design, construction and execution. It also discusses the services provided to various levels of SimTool users to assist them in a wide range of simulation tasks. Also, with the aid of an implemented and working simulation example, this paper discusses SimTool's key design and operational features. Finally, this paper presents a condensed discussion of SimTool's Entity-Relationship-Attribute (ERA) modeling approach.

  9. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contour particularly influences the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations having similar topographic conditions, unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in such conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective than pixel-based classification for producing a vegetation map.

  10. An object-based methodology for knowledge representation in SGML

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows for representation of knowledge from multiple and varying viewpoints. The event construct allows actual use of knowledge to be represented. The SGML implementation of the methodology facilitates usability; structured yet flexible knowledge design; and sharing and reuse of knowledge class libraries.

  11. Object-Based Epistemology at a Creationist Museum

    NASA Astrophysics Data System (ADS)

    Wendel, Paul J.

    2011-01-01

    In a regional young-earth creationist museum, objects are presented as if they speak for themselves, purportedly embodying proof that the earth is less than 10,000 years old, that humans have lived on earth throughout its history, and that dinosaurs and humans lived simultaneously. In public lectures, tours, and displays, museum associates emphasize direct observation over inference or theory. These emphases resonate closely with the "object-based epistemology" of the late nineteenth century described in Steven Conn's Museums and American Intellectual Life, 1876-1926. In Conn's description, museum objects, artfully arranged and displayed, were intended to speak for themselves, and observation and categorization were valued over experiment and theory. The regional young-earth creationist museum is observed to partly succeed and partly fail in implementing an object-based epistemology. Although object-based epistemology represents a nineteenth-century approach to knowledge and museum display, it is compatible with an inductive approach to biblical interpretation and it confers various rhetorical advantages to creationist arguments. It is concluded that a focus on the theory-laden nature of data would likely strengthen nature-of-science education efforts to increase public acceptance of evolution.

  12. Object-based Landslide Mapping: Examples, Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Eisank, Clemens; Friedl, Barbara; Chang, Kang-Tsung; Tsai, Tsai-Tsung; Birkefeldt Møller Pedersen, Gro; Betts, Harley; Cigna, Francesca; Chiang, Shou-Hao; Aubrey Robson, Benjamin; Bianchini, Silvia; Füreder, Petra; Albrecht, Florian; Spiekermann, Raphael; Weinke, Elisabeth; Blaschke, Thomas; Phillips, Chris

    2016-04-01

    Over the last decade, object-based image analysis (OBIA) has been increasingly used for mapping landslides that occur after triggering events such as heavy rainfall. The increasing availability and quality of Earth Observation (EO) data in terms of temporal, spatial and spectral resolution allows for comprehensive mapping of landslides at multiple scales. Most often very high resolution (VHR) or high resolution (HR) optical satellite images are used in combination with a digital elevation model (DEM) and its products such as slope and curvature. Semi-automated object-based mapping makes use of various characteristics of image objects that are derived through segmentation. OBIA enables numerous spectral, spatial, contextual and textural image object properties to be applied during an analysis. This is especially useful when mapping complex natural features such as landslides and constitutes an advantage over pixel-based image analysis. However, several drawbacks in the process of object-based landslide mapping have not been overcome yet. The developed classification routines are often rather complex and limited regarding their transferability across areas and sensors. There is still more research needed to further improve present approaches and to fully exploit the capabilities of OBIA for landslide mapping. In this study several examples of object-based landslide mapping from various geographical regions with different characteristics are presented. Examples from the Austrian and Italian Alps are shown, whereby one challenge lies in the detection of small-scale landslides on steep slopes while preventing the classification of false positives with similar spectral properties (construction areas, utilized land, etc.). Further examples feature landslides mapped in Iceland, where the differentiation of landslides from other landscape-altering processes in a highly dynamic volcanic landscape poses a very distinct challenge, and in Norway, which is exposed to multiple

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1988-01-01

    Reviews three computer software programs: (1) "Discovery! Experiences with Scientific Reasoning"--problem solving for grades 4-12 (Apple II); (2) "Organic Stereochemistry"--a tutorial for organic chemistry for advanced secondary/college level (Apple II); and (3) "SHOW PARTNER (2.01)"--a graphics utility tool for…

  14. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments, and the ground support software used in the test and integration of the AMSU-A instruments.

  15. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A): Software concept document

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This document presents the concept for two separate but closely related software systems. The first is the special test equipment (STE) software used at Aerojet for AMSU-A instrument testing, and the second is the workstation software used at the spacecraft integration facility to monitor the AMSU-A instrument when installed on the spacecraft. The primary use of the STE software is to monitor the data output from the AMSU-A instruments, to command the instruments, and to perform automated thermal-vacuum calibration testing. The primary use of the workstation software is to monitor the AMSU-A instrument's performance through an Ethernet link during the instrument/spacecraft integration process.

  16. The Cementitious Barriers Partnership Experimental Programs and Software Advancing DOE’s Waste Disposal/Tank Closure Efforts – 15436

    SciTech Connect

    Burns, Heather; Flach, Greg; Smith, Frank; Langton, Christine; Brown, Kevin; Mallick, Pramod

    2015-01-27

    The U.S. Department of Energy Environmental Management (DOE-EM) Office of Tank Waste Management-sponsored Cementitious Barriers Partnership (CBP) is chartered with providing the technical basis for implementing cement-based waste forms and radioactive waste containment structures for long-term disposal. DOE needs in this area include the following to support progress in final treatment and disposal of legacy waste and closure of High-Level Waste (HLW) tanks in the DOE complex: long-term performance predictions, flow sheet development and flow sheet enhancements, and conceptual designs for new disposal facilities. The DOE-EM Cementitious Barriers Partnership is producing software and experimental programs resulting in new methods and data needed by end-users involved with environmental cleanup and waste disposal. Both the modeling tools and the experimental data have already benefited the DOE sites in the area of performance assessments, by increasing confidence backed up with modeling support, leaching methods, and transport properties developed for actual DOE materials. In 2014, the CBP Partnership released the CBP Software Toolbox Version 2.0, which provides concrete degradation models for (1) sulfate attack, (2) carbonation, and (3) chloride-initiated rebar corrosion, and includes constituent leaching. These models are applicable and can be used by both DOE and the Nuclear Regulatory Commission (NRC) for service life and long-term performance evaluations and predictions of nuclear and radioactive waste containment structures across the DOE complex, including future SRS Saltstone and HLW tank performance assessments and special analyses, Hanford site HLW tank closure projects and other projects in which cementitious barriers are required, the Advanced Simulation Capability for Environmental Management (ASCEM) project, which requires source terms from cementitious containment structures as input to its flow simulations, and regulatory reviews of DOE performance
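    For illustration only, a square-root-of-time carbonation-depth model of the general kind such degradation toolboxes implement; the coefficient is a made-up placeholder, not a CBP value (Python):

      import math

      def carbonation_depth_mm(k_mm_per_sqrt_year, years):
          """Fickian-type ingress: depth grows with the square root of time."""
          return k_mm_per_sqrt_year * math.sqrt(years)

      for t in (10, 100, 1000):
          print(f"{t:5d} yr: {carbonation_depth_mm(3.0, t):7.1f} mm")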

  17. An object-based methodology for knowledge representation

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The 'blocks world,' a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.

  18. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1-km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed for the knowledge base of the classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference; the overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification and 15.7% higher than that of the traditional object-based classification method.
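    A sketch of ranking terrain factors by random-forest importance, in the spirit of the method above; the data are synthetic and the factor names are assumptions, not the study's factor set (Python):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      factors = ["relief", "slope", "elevation", "curvature"]
      X = rng.normal(size=(500, 4))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # classes driven mostly by relief

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      for name, importance in sorted(zip(factors, rf.feature_importances_),
                                     key=lambda pair: -pair[1]):
          print(f"{name:10s} {importance:.3f}")       # relief should rank highest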

  19. Object-based image analysis using multiscale connectivity.

    PubMed

    Braga-Neto, Ulisses; Goutsias, John

    2005-06-01

    This paper introduces a novel approach for image analysis based on the notion of multiscale connectivity. We use the proposed approach to design several novel tools for object-based image representation and analysis which exploit the connectivity structure of images in a multiscale fashion. More specifically, we propose a nonlinear pyramidal image representation scheme, which decomposes an image at different scales by means of multiscale grain filters. These filters gradually remove connected components from an image that fail to satisfy a given criterion. We also use the concept of multiscale connectivity to design a hierarchical data partitioning tool. We employ this tool to construct another image representation scheme, based on the concept of component trees, which organizes partitions of an image in a hierarchical multiscale fashion. In addition, we propose a geometrically-oriented hierarchical clustering algorithm which generalizes the classical single-linkage algorithm. Finally, we propose two object-based multiscale image summaries, reminiscent of the well-known (morphological) pattern spectrum, which can be useful in image analysis and image understanding applications.
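    A simplified binary-image take on the grain-filter idea described above follows; the paper's framework is far more general, so this is only a sketch of the core operation: connected components failing an area criterion are removed, at increasing scales, yielding a pyramid of progressively simpler images (Python):

      import numpy as np
      from scipy import ndimage

      def grain_filter(img, min_area):
          """Keep only connected components whose area meets the criterion."""
          labels, n = ndimage.label(img)
          areas = ndimage.sum(img, labels, index=np.arange(1, n + 1))
          kept_labels = 1 + np.flatnonzero(areas >= min_area)
          return img & np.isin(labels, kept_labels)

      img = np.zeros((32, 32), dtype=bool)
      img[2:4, 2:4] = True          # small grain, area 4
      img[10:20, 10:20] = True      # large grain, area 100
      pyramid = [grain_filter(img, scale) for scale in (1, 10, 200)]
      print([int(p.sum()) for p in pyramid])   # [104, 100, 0]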

  20. Observed bodies generate object-based spatial codes.

    PubMed

    Taylor, Alison; Flynn, Maria; Edmonds, Caroline J; Gardner, Mark R

    2016-09-01

    Contemporary studies of spatial and social cognition frequently use human figures as stimuli. The interpretation of such studies may be complicated by spatial compatibility effects that emerge when researchers employ spatial responses, and participants spontaneously code spatial relationships about an observed body. Yet, the nature of these spatial codes - whether they are location- or object-based, and coded from the perspective of the observer or the figure - has not been determined. Here, we investigated this issue by exploring spatial compatibility effects arising for objects held by a visually presented whole-bodied schematic human figure. In three experiments, participants responded to the colour of the object held in the figure's left or right hand, using left or right key presses. Left-right compatibility effects were found relative to the participant's egocentric perspective, rather than the figure's. These effects occurred even when the figure was rotated by 90° to the left or to the right, and the coloured objects were aligned with the participant's midline. These findings are consistent with spontaneous spatial coding from the participant's perspective and relative to the normal upright orientation of the body. This evidence for object-based spatial coding implies that the domain general cognitive mechanisms that result in spatial compatibility effects may contribute to certain spatial perspective-taking and social cognition phenomena.

  1. The creation of a public database of precision phantoms to facilitate the evaluation and standardization of advanced visualization and quantification software

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Saenz, Naomi J.; Siegel, Eliot L.

    2009-02-01

    In order to validate CT imaging as a biomarker, it is important to ascertain the variability and artifacts associated with various forms of advanced visualization and quantification software. The purpose of this paper is to describe the rationale behind the creation of a free, public resource that contains phantom datasets for CT designed to facilitate testing, development and standardization of advanced visualization and quantification software. For our research, three phantoms were scanned at multiple kVp and mAs settings utilizing a 64-channel MDCT scanner at a collimation of 0.75 mm. Images were reconstructed at a slice thickness of 0.75 mm and archived in DICOM format. The phantoms consisted of precision spheres, balls of different materials and sizes, and slabs of Last-A-Foam(R) at varying densities. The database of scans is stored in an archive utilizing software developed for the National Cancer Imaging Archive and is publicly available. The scans were completed successfully and the datasets are available for free and unrestricted download. The CT images can be accessed in DICOM format via HTTP or FTP or utilizing caGRID. A DICOM database of phantom data was successfully created and made available to the public. We anticipate that this database will be useful as a reference for physicists for quality control purposes, for developers of advanced visualization and quantification software, and for others who need to test the performance of their systems against a known "gold" standard. We plan to add more phantom images in the future and expand to other imaging modalities.
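
    A minimal sketch of inspecting one such DICOM slice, assuming the pydicom library; a bundled pydicom test image stands in for a slice downloaded from the phantom archive, and the rescale tags are typical for CT but depend on the dataset:

        import pydicom
        from pydicom.data import get_testdata_file

        # stand-in for a phantom slice downloaded from the archive
        ds = pydicom.dcmread(get_testdata_file("CT_small.dcm"))
        print(ds.Rows, ds.Columns, ds.get("SliceThickness"), ds.get("KVP"))

        # convert stored pixel values to Hounsfield units for density checks
        hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)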

  2. The OASE project: Object-based Analysis and Seamless prediction

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Wapler, Kathrin; Bick, Theresa; Diederich, Malte; Deneke, Hartwig; Horvath, Akos; Senf, Fabian; Simmer, Clemens; Simon, Juergen

    2013-04-01

    The research group on Object-based Analysis and SEamless prediction (OASE) is part of the Hans Ertel Centre for Weather Research (HErZ). The group consists of scientists at the Meteorological Institute, University of Bonn, the Leibniz-Institute for Tropospheric Research in Leipzig and the German Weather Service. OASE addresses seamless prediction of convective events from nowcasting to daily predictions by combining radar/satellite compositing and tracking with high-resolution model-based ensemble generation and prediction. While observation-based nowcasting provides good results for lead times between 0-1 hours, numerical weather prediction addresses lead times between 3-21 hours; especially the discontinuity between 1-3 hours needs to be addressed. A central goal of the project is therefore an unprecedented near real-time, high-resolution database. A radar and satellite remote sensing-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. Observations and microphysics are intertwined via forward operators and estimated inverse relations, which also provide uncertainties for model ensemble initialisations. The lifetime evolution of dynamics and microphysics in (severe) convective storms is analysed based on 3D scale-space tracking. An object-based analysis condenses the information contained in the dynamic 3D distributions of observables and related microphysics into descriptors, which will allow identifying governing processes leading to the formation and evolution of severe weather events. The object-based approach efficiently characterises and quantifies the process structure and life cycles of severe weather events, and facilitates nowcasting and the generation and initialisation of model prediction ensembles. Observation-based nowcasting will exploit the dual-composite based 3D feature detection and tracking to generate a set of predictions (observation

  3. Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-01-08

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation.

  4. Introducing shape constraints into object-based traveltime tomography

    NASA Astrophysics Data System (ADS)

    Gaullier, G.; Charbonnier, P.; Heitz, F.; Côte, P.

    2016-09-01

    Traveltime tomography is a difficult, ill-posed reconstruction problem due to the nonlinearity of the forward model and the limited number of measurements usually available. In such an adverse situation, pixel-based regularization methods are generally unable to provide satisfactory reconstructions. In this paper we propose a novel object-based reconstruction method that introduces prior information about the shape of the structures to be reconstructed, which yields high quality geoacoustic inversion. The proposed method approximates the forward model by a series of linear problems, leading to a sequence of minimizations during which the shape prior is introduced. The method is demonstrated on synthetic and real data, collected on a specific bench dedicated to non-destructive testing of civil engineering structures.
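
    To illustrate the idea of a sequence of regularized linear steps with a shape prior, a toy NumPy sketch; the actual method re-linearizes the nonlinear forward model at each step and uses a genuine shape projection, both of which are stubbed with placeholders here:

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 100, 40
        J = rng.random((m, n))            # one linearization of the forward model
        d = J @ np.full(n, 0.5)           # synthetic traveltime data

        x = np.full(n, 0.4)               # initial slowness model
        lam = 0.1                         # weight of the shape prior
        for _ in range(20):
            x_shape = np.full(n, 0.5)     # stub: projection of x onto admissible shapes
            # regularized linear step: data misfit plus a pull toward the shape projection
            lhs = J.T @ J + lam * np.eye(n)
            rhs = J.T @ d + lam * x_shape
            x = np.linalg.solve(lhs, rhs)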

  5. Advanced Resistive Exercise Device (ARED) Flight Software (FSW): A Unique Approach to Exercise in Long Duration Habitats

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark

    2005-01-01

    ARED flight instrumentation software is associated with an overall custom-designed resistive exercise system that will be deployed on the International Space Station (ISS). This innovative software application fuses together many diverse and new technologies into a robust and usable package. The software takes advantage of touchscreen user interface technology by providing a graphical user interface on a Windows-based tablet PC, meeting a design constraint of keyboard-less interaction with flight crewmembers. The software interacts with modified commercial data acquisition (DAQ) hardware to acquire multiple channels of sensor measurements from the ARED device. This information is recorded on the tablet PC and made available, via International Space Station (ISS) Wireless LAN (WLAN) and telemetry subsystems, to ground-based mission medics and trainers for analysis. The software includes a feature to accept electronically encoded prescriptions of exercises that guide crewmembers through a customized regimen of resistive weight training, based on personal analysis. These electronically encoded prescriptions are provided to the crew via ISS WLAN and telemetry subsystems. All personal data is securely associated with an individual crew member, based on a PIN ID mechanism.

  6. Object-based change detection: dimension of damage in residential areas of Abu Suruj, Sudan

    NASA Astrophysics Data System (ADS)

    Demharter, Timo; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

    Given the importance of change detection, especially in the field of crisis management, this paper discusses the advantages of object-based change detection. This project and the methods used provide an opportunity to coordinate relief actions strategically. The principal objective of this project was to develop an algorithm which allows rapid detection of damaged and destroyed buildings in the area of Abu Suruj. This Sudanese village is located in West Darfur and has become a victim of civil war. The software eCognition Developer was used to perform an object-based change detection on two panchromatic QuickBird 2 images from two different dates. The first image shows the area before, the second image after, the massacres in this region. Classification of the huts of the Sudanese town of Abu Suruj was achieved by first segmenting the huts and then classifying them on the basis of geometrical and brightness-related values. The huts were classified as "new", "destroyed" and "preserved" with the help of an automated algorithm. Finally, the results were presented in the form of a map which displays the different conditions of the huts. The accuracy of the project is validated by an accuracy assessment resulting in an overall classification accuracy of 90.50 percent. These change detection results allow aid organizations to provide quick and efficient help where it is needed most.

  7. Geographic object-based delineation of neighborhoods of Accra, Ghana using QuickBird satellite imagery.

    PubMed

    Stow, Douglas A; Lippitt, Christopher D; Weeks, John R

    2010-08-01

    The objective was to test GEographic Object-based Image Analysis (GEOBIA) techniques for delineating neighborhoods of Accra, Ghana using QuickBird multispectral imagery. Two approaches to aggregating census enumeration areas (EAs) based on image-derived measures of vegetation objects were tested: (1) merging adjacent EAs according to vegetation measures and (2) image segmentation. Both approaches exploit readily available functions within commercial GEOBIA software. Image-derived neighborhood maps were compared to a reference map derived by spatial clustering of slum index values (from census data), to provide a relative assessment of potential map utility. A size-constrained iterative segmentation approach to aggregation was more successful than standard image segmentation or feature merge techniques. The segmentation approaches account for size and shape characteristics, enabling more realistic neighborhood boundaries to be delineated. The percentage of vegetation patches within each EA yielded more realistic delineation of potential neighborhoods than mean vegetation patch size per EA.

  8. Writing Teachers Writing Software: Creating Our Place in the Electronic Age. Advances in Computers and Composition Studies Series.

    ERIC Educational Resources Information Center

    LeBlanc, Paul J.

    Presenting a comprehensive look at (and critical history of) computer-aided composition (CAC), this book focuses on faculty development of software for composition studies. The book describes who is building these writing tools, how they are doing so, how their work is being received, and what is likely to affect their efforts in the future.…

  9. An Object-Based Architecture for Biomedical Expert Database Systems

    PubMed Central

    Barsalou, Thierry

    1988-01-01

    Objects play a major role in both database and artificial intelligence research. In this paper, we present a novel architecture for expert database systems that introduces an object-based interface between relational databases and expert systems. We exploit a semantic model of the database structure to map relations automatically into object templates, where each template can be a complex combination of join and projection operations. Moreover, we arrange the templates into object networks that represent different views of the same database. Separate processes instantiate those templates using data from the base relations, cache the resulting instances in main memory, navigate through a given network's objects, and update the database according to changes made at the object layer. In the context of an immunologic-research application, we demonstrate the capabilities of a prototype implementation of the architecture. The resulting model provides enhanced tools for database structuring and manipulation. In addition, this architecture supports efficient bidirectional communication between database and expert systems through the shared object layer.

  10. Object-based rapid change detection for disaster management

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

    Rapid change detection is used in cases of natural hazards and disasters. This analysis quickly yields information about areas of damage. In certain cases the lack of information after catastrophic events obstructs supporting measures within disaster management. Earthquakes, tsunamis, civil war, volcanic eruptions, droughts and floods have much in common: people are directly affected, landscapes and buildings are destroyed. In every case geospatial data is necessary to gain knowledge as a basis for decision support. Where to go first? Which infrastructure is usable? How much area is affected? These are essential questions which need to be answered before appropriate, eligible help can be established. This study presents an innovative strategy to retrieve post-event information by use of an object-based change detection approach. Within a transferable framework, the developed algorithms can be implemented for a range of remote sensing data across different investigation areas. Several case studies form the basis of the retrieved results. Through a coarse division into statistical parts and segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborated normalized temporal change index (NTCI), panchromatic datasets are used to extract areas which are destroyed, areas which were not affected, and newly developing areas where rebuilding has already started. The results of the study are also feasible for monitoring urban growth.
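
    A minimal sketch of what a normalized temporal change index could look like; the study's exact NTCI definition is not given above, so the normalized-difference form and the thresholds below are hypothetical:

        import numpy as np

        def ntci(t1, t2, eps=1e-6):
            # hypothetical normalized-difference form of a temporal change index;
            # the published NTCI definition may differ
            t1, t2 = t1.astype(float), t2.astype(float)
            return (t2 - t1) / (t2 + t1 + eps)

        rng = np.random.default_rng(3)
        pre, post = rng.random((128, 128)), rng.random((128, 128))
        change = ntci(pre, post)
        destroyed = change < -0.2        # illustrative per-object thresholds
        newly_built = change > 0.2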

  11. Geographic Object-Based Image Analysis - Towards a new paradigm

    NASA Astrophysics Data System (ADS)

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.

  12. Geographic Object-Based Image Analysis - Towards a new paradigm.

    PubMed

    Blaschke, Thomas; Hay, Geoffrey J; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.

  13. [Role of connectedness in early object-based attentional selection].

    PubMed

    Takeya, Ryuji; Kasai, Tetsuko

    2014-08-01

    It has been suggested that uniform connectedness is the most fundamental factor in forming units of attentional selection, while there is evidence that attention can select a perceptual group that consists of separate elements with similar features. The present study examined the effects of connectedness and a boundary-feature similarity on early spatial-selection processes using a sustained-focal-attention paradigm of event-related potentials (ERPs). Bilateral stimuli were manipulated to have an orthogonal combination of connectedness (C-, C+) and a similarity in boundary feature (S-, S+). ERPs were recorded from 15 participants who were instructed to pay attention to the left or the right visual field and to respond to a target shape that appeared infrequently in the attended field. The ERP attention effect in the N1 latency range (125-185 ms) was decreased for stimuli with connectedness and/or boundary-feature similarity, and the effects of the two grouping factors were independent of each other. The present result suggests that multiple grouping factors, including connectedness, operate in parallel in early processes of object-based attention-spreading.

  14. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed by a linear combination of several pure spectra in an a priori library. Sparse unmixing is challenging, since it is usually transformed to an NP-hard l0 norm based optimization problem. Existing methods usually utilize a relaxation to the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing to a multi-objective optimization problem, which contains two correlative objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of multi-objective optimization, a population-based randomly flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding for the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
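
    A toy NumPy sketch of the two objectives and a population-based bit-flip search over l0 support masks; the paper's actual operators and selection scheme are not reproduced here, and the library and abundances are synthetic:

        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.random((50, 20))                 # spectral library (bands x signatures)
        x_true = np.zeros(20); x_true[[3, 7]] = [0.6, 0.4]
        y = A @ x_true                           # observed pixel spectrum

        def objectives(mask):
            # objective 1: reconstruction error; objective 2: l0 sparsity
            if not mask.any():
                return np.linalg.norm(y), 0
            coef, *_ = np.linalg.lstsq(A[:, mask], y, rcond=None)
            return np.linalg.norm(A[:, mask] @ coef - y), int(mask.sum())

        # population of binary support masks, improved by random bit flips
        pop = rng.random((30, 20)) < 0.1
        for _ in range(200):
            cand = pop ^ (rng.random(pop.shape) < 0.05)
            for i in range(len(pop)):
                e0, s0 = objectives(pop[i])
                e1, s1 = objectives(cand[i])
                if e1 <= e0 and s1 <= s0:        # candidate dominates its parent
                    pop[i] = cand[i]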

  15. Development of Object-Based Teleoperator Control for Unstructured Applications

    DTIC Science & Technology

    1996-12-01

  16. Computational complexity of object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2005-09-01

    Image compression via transform coding applied to small rectangular regions or encoding blocks appears to be approaching asymptotic rate-distortion performance. However, an emerging compression technology, called object-based compression (OBC), promises significantly improved performance via compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image, but has pixel- and featural-level differences. Semantic equivalence between the source and decompressed image facilitates fast decompression through efficient substitutions, albeit at the cost of codebook search in the compression step. Given small codebooks, OBC holds promise for information-push technologies where approximate context is sufficient, for example, transmission of surveillance images that provide the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations. Several innovative techniques have been developed for region segmentation, as discussed in a previous paper [4]. Additionally, tradeoffs between representational fidelity, computational cost, and storage requirement occur, as with the vast majority of lossy compression algorithms. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed, which can be associated theoretically with a space complexity model that we have previously published [2]. The result, when combined with measurements of

  17. Detection and object-based classification of offshore oil slicks using ENVISAT-ASAR images.

    PubMed

    Akar, Sertac; Süzen, Mehmet Lutfi; Kaymakci, Nuretdin

    2011-12-01

    The aim of this study is to propose and test a multi-level methodology for detection of oil slicks in ENVISAT Advanced Synthetic Aperture Radar (ASAR) imagery, which can be used to support the identification of hydrocarbon seeps. We selected Andrusov Ridge in the Central Black Sea as the test study area, where extensive hydrocarbon seepages were known to occur continuously. Hydrocarbon seepage of tectonic or stratigraphic origin at the sea floor causes oily gas plumes to rise to the sea surface and form thin oil films called oil slicks. Microwave sensors like synthetic aperture radar (SAR) are very suitable for ocean remote sensing as they measure the backscattered radiation from the surface and show the roughness of the terrain. Oil slicks dampen the sea waves, creating dark patches in the SAR image. The proposed methodology includes three levels: visual interpretation, image filtering and object-based oil spill detection. Level I, after data preparation with visual interpretation, includes dark spot identification and subset/scene creation, followed by categorization of the subsets/scenes into three cases based on the contrast difference of dark spots to their surroundings. Level II prepares the subsets/scenes for segmentation by image and morphological filtering. Level III includes segmentation and feature extraction, followed by object-based classification. The object-based classification is applied with fuzzy membership functions defined by extracted features of the ASAR subsets/scenes, where the parameters of the detection algorithms are tuned specifically for each case group. As a result, oil slicks are discriminated from look-alikes with an overall classification accuracy of 83% for oil slicks and 77% for look-alikes, obtained by averaging the three different cases.
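
    A minimal sketch of fuzzy-membership classification of one dark-spot object; the feature names, membership shapes and all thresholds below are hypothetical, not the paper's tuned parameters:

        import numpy as np

        def trapezoid(x, a, b, c, d):
            # standard trapezoidal fuzzy membership function
            rising = (x - a) / (b - a + 1e-9)
            falling = (d - x) / (d - c + 1e-9)
            return float(np.clip(min(rising, falling, 1.0), 0.0, 1.0))

        # hypothetical features extracted for one dark-spot object
        feat = {"damping_db": 6.5, "area_km2": 3.2, "border_index": 1.8}
        mu_oil = min(
            trapezoid(feat["damping_db"], 3, 5, 15, 20),     # backscatter damping
            trapezoid(feat["area_km2"], 0.5, 1, 50, 100),    # plausible slick size
            trapezoid(feat["border_index"], 1.0, 1.2, 2.5, 3.0),  # border smoothness
        )
        label = "oil slick" if mu_oil > 0.5 else "look-alike"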

  18. Payload software technology: Software technology development plan

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Programmatic requirements for the advancement of software technology are identified for meeting the space flight requirements in the 1980 to 1990 time period. The development items are described, and software technology item derivation worksheets are presented along with the cost/time/priority assessments.

  19. High throughput object-based image analysis of β-amyloid plaques in human and transgenic mouse brain.

    PubMed

    Samaroo, Harry D; Opsahl, Alan C; Schreiber, Jan; O'Neill, Sharon M; Marconi, Michael; Qian, Jessie; Carvajal-Gonzalez, Santos; Tate, Barbara; Milici, Anthony J; Bales, Kelly R; Stephenson, Diane T

    2012-02-15

    Advances in imaging technology have enabled automated approaches for quantitative image analysis. In this study, a high-content object-based image analysis method was developed for quantification of β-amyloid (Aβ) plaques in postmortem brains of Alzheimer's disease (AD) subjects and in transgenic mice overexpressing Aβ. Digital images acquired from immunohistochemically stained sections of the superior frontal gyrus were analyzed for Aβ plaque burden using a Definiens object-based segmentation approach. Blinded evaluation of Aβ-stained sections from AD and age-matched human subjects accurately identified AD cases with one exception. Brains from transgenic mice overexpressing Aβ (PS1APP mice) were also evaluated by our Definiens object-based image analysis approach. We observed an age-dependent increase in the amount of Aβ plaque load that we quantified in both the hippocampus and cortex. From the contralateral hemisphere, we measured the amount of Aβ in brain homogenates biochemically and observed a significant correlation between our biochemical measurements and those that we measured by our object-based Definiens system in the hippocampus. Assessment of Aβ plaque load in PS1APP mice using a manual segmentation technique (Image-Pro Plus) confirmed the results of our object-based image analysis approach. Image acquisition and analysis of 32 stained human slides and 100 mouse slides were executed in 8 h and 22 h, respectively, supporting the relatively high-throughput features of the Definiens platform. The data show that digital imaging combined with object-based image analysis is a reliable and efficient approach to quantifying Aβ plaques in human and mouse brain.

  20. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    NASA Astrophysics Data System (ADS)

    Hoffmann Barfod, Gry; Larsen, John Møller; Lichtenberger, Achim; Raja, Rubina

    2015-12-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge.

  1. Object-based change detection for landslide monitoring based on SPOT imagery

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens

    2014-05-01

    The steadily increasing availability of Earth observation (EO) data from a wide range of sensors facilitates the long-term monitoring of mass movements and retrospective analysis. Pixel-based approaches are most commonly used for detecting changes based on optical remote sensing data. However, single pixels are not suitable for depicting natural phenomena such as landslides in their full complexity and their transformation over time. By applying semi-automated object-based change detection, limitations inherent to pixel-based methods can be overcome to a certain extent. For instance, the problem of variant spectral reflectance for the same pixel location in images from different points in time can be minimized. Therefore, atmospheric and radiometric correction of input data sets - although highly recommended - appears less critical for developing a straightforward change detection approach based on object-based image analysis (OBIA). The object-based change detection approach was developed for a subset of the Baichi catchment, which is located in the Shihmen Reservoir watershed in northern Taiwan. The study area is characterized by mountainous terrain with steep slopes and is regularly affected by severe landslides and debris flows. Several optical satellite images, i.e. SPOT images from different years and seasons with a spatial resolution ranging from 2.5 to 6.25 m, have been used for monitoring the past evolution of landslides and landslide-affected areas. A digital elevation model (DEM) with 5 m spatial resolution was integrated in the analysis to support the differentiation of landslides and debris flows. The landslide changes were identified by comparing feature values of segmentation-derived image objects between two subsequent images in eCognition (Trimble) software. To increase the robustness and transferability of the approach we identified changes by using the relative difference in values of band-specific relational features, spectral
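
    A minimal sketch of the relative-difference comparison of object feature values between two dates; the feature, values and threshold are illustrative, not the study's ruleset:

        def relative_difference(v_t1, v_t2, eps=1e-9):
            # relative change of a band-specific object feature between two dates;
            # relative rather than absolute differences reduce the sensitivity
            # to uncorrected atmospheric and radiometric offsets
            return (v_t2 - v_t1) / (abs(v_t1) + eps)

        # hypothetical mean NIR reflectance of one image object at two dates
        delta = relative_difference(0.42, 0.18)
        candidate_landslide = delta < -0.3       # illustrative threshold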

  2. Payload software technology

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A software analysis was performed of known STS sortie payload elements and their associated experiments. This provided basic data for STS payload software characteristics and sizes. A set of technology drivers was identified based on a survey of future technology needs and an assessment of current software technology. The results will be used to evolve a planned approach to software technology development. The purpose of this plan is to ensure that software technology is advanced at a pace and a depth sufficient to fulfill the identified future needs.

  3. Problem-Solving Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    CBR Express software solves problems by adapting stored solutions to new problems specified by a user. It is applicable to a wide range of situations. The technology was originally developed by Inference Corporation for Johnson Space Center's Advanced Software Development Workstation. The project focused on the reuse of software designs, and Inference used CBR as part of the ACCESS prototype software. The commercial CBR Express is used as a "help desk" for customer support, enabling reuse of existing information when necessary. It has been adopted by several companies, among them American Airlines, which uses it to solve reservation system software problems.

  4. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    PubMed Central

    Hoffmann Barfod, Gry; Larsen, John Møller; Raja, Rubina

    2015-01-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge. PMID:26648504

  5. Status Report on the Development of Micro-Scheduling Software for the Advanced Outage Control Center Project

    SciTech Connect

    Germain, Shawn St.; Thomas, Kenneth; Farris, Ronald; Joe, Jeffrey

    2014-09-01

    The long-term viability of existing nuclear power plants (NPPs) in the United States (U.S.) is dependent upon a number of factors, including maintaining high capacity factors, maintaining nuclear safety, and reducing operating costs, particularly those associated with refueling outages. Refueling outages typically take 20-30 days, and for existing light water NPPs in the U.S., the reactor cannot be in operation during the outage. Furthermore, given that many NPPs generate between $1-1.5 million/day in revenue when in operation, there is considerable interest in shortening the length of refueling outages. Yet, refueling outages are highly complex operations, involving multiple concurrent and dependent activities that are difficult to coordinate. Finding ways to improve refueling outage performance while maintaining nuclear safety has proven to be difficult. The Advanced Outage Control Center project is a research and development (R&D) demonstration activity under the Light Water Reactor Sustainability (LWRS) Program. LWRS is an R&D program which works with industry R&D programs to establish technical foundations for the licensing and managing of long-term, safe, and economical operation of current NPPs. The Advanced Outage Control Center project has the goal of improving the management of commercial NPP refueling outages. To accomplish this goal, this INL R&D project is developing an advanced outage control center (OCC) that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report describes specific recent efforts to develop a capability called outage Micro-Scheduling: the ability to allocate and schedule outage support task resources on a sub-hour basis, i.e. the real-time fine-tuning of the outage schedule to react to the actual progress of the primary outage activities to ensure that support task resources are
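
    To make the sub-hour allocation idea concrete, a toy greedy scheduler; this is an illustrative sketch only, with hypothetical task and crew names, and is not the INL capability's actual algorithm:

        from dataclasses import dataclass

        @dataclass
        class SupportTask:
            name: str
            duration_min: int   # task length in minutes (sub-hour granularity)
            earliest: int       # earliest start, minutes from shift start

        def micro_schedule(tasks, crews):
            # greedy fine-grained allocation: each task goes to the crew
            # that can start it soonest
            free_at = {c: 0 for c in crews}
            plan = []
            for t in sorted(tasks, key=lambda t: t.earliest):
                crew = min(crews, key=lambda c: max(free_at[c], t.earliest))
                start = max(free_at[crew], t.earliest)
                free_at[crew] = start + t.duration_min
                plan.append((t.name, crew, start))
            return plan

        tasks = [SupportTask("scaffolding", 30, 0), SupportTask("tag-out", 15, 10)]
        print(micro_schedule(tasks, ["crew A", "crew B"]))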

  6. A novel mask proximity correction software combining accuracy and reduced writing time for the manufacturing of advanced photomasks

    NASA Astrophysics Data System (ADS)

    Schiavone, Patrick; Martin, Luc; Browning, Clyde; Farys, Vincent; Sundermann, Frank; Narukawa, Shogo; Takikawa, Tadahiko; Hayashi, Naoya

    2012-06-01

    The new generations of photomasks bring more and more challenges to the mask manufacturer. Mask shops face two conflicting requirements, namely improving pattern fidelity and reducing, or at least maintaining, acceptable writing time. These requirements are becoming more and more challenging since pattern sizes continuously shrink and data volumes continuously grow. Although the classical dose-modulation proximity effect correction is able to provide sufficient process control for mainstream products, a growing body of published and wafer data shows that the mask process is becoming a non-negligible contributor to 28nm technology yield. We show in this paper that a novel approach to mask proximity effect correction is able to meet the dual challenge of the new generation of masks. Unlike the classical approach, the technique presented in this paper is based on a concurrent optimization of the dose and geometry of the fractured shots. Adding one more parameter allows providing the best possible compromise between accuracy and writing time, since energy latitude can be taken into account as well. This solution is implemented in the Inscale software package from Aselta Nanographics. We have assessed the capability of this technology on several levels of a 28nm technology. On this set, the writing time has been reduced by up to 25% without sacrificing accuracy, which at the same time has been improved significantly compared to the existing process. The experiments presented in the paper confirm that a versatile proximity effect correction strategy, combining dose and geometry modulation, helps users to trade off between resolution/accuracy and e-beam write time.

  7. A Symphony of Software.

    ERIC Educational Resources Information Center

    Currents, 2002

    2002-01-01

    Offers a descriptive table of databases that help higher education institutions orchestrate advancement operations. Information includes vendor, contact, software, price, database engine/server platform, recommended reporting tools, record capacity, and client type. (EV)

  8. Combining TerraSAR-X and SPOT-5 data for object-based landslide detection

    NASA Astrophysics Data System (ADS)

    Friedl, B.; Hölbling, D.; Füreder, P.

    2012-04-01

    Landslide detection and classification is an essential requirement in pre- and post-disaster hazard analysis. In earlier studies, landslide detection was often achieved through time-consuming and cost-intensive field surveys and visual orthophoto interpretation. Recent studies show that Earth Observation (EO) data offer new opportunities for fast, reliable and accurate landslide detection and classification, which may contribute to effective landslide monitoring and landslide hazard management. To ensure the fast recognition and classification of landslides at a regional scale, a (semi-)automated object-based landslide detection approach is established for a study site situated in the Huaguoshan catchment, Southern Taiwan. The study site exhibits a high vulnerability to landslides and debris flows, which are predominantly typhoon-induced. Through the integration of optical satellite data (SPOT-5 with 2.5 m GSD), SAR (Synthetic Aperture Radar) data (TerraSAR-X Spotlight with 2.95 m GSD) and digital elevation information (DEM with 5 m GSD) including its derived products (e.g. slope, curvature, flow accumulation), landslides may be examined more efficiently than by relying on single data sources only. The combination of optical and SAR data in an object-based image analysis (OBIA) domain for landslide detection and classification has not been investigated so far, even though SAR imagery shows valuable properties for landslide detection which differ from those of optical data (e.g. high sensitivity to surface roughness and soil moisture). The main purpose of this study is to recognize and analyze existing landslides by applying object-based image analysis making use of eCognition software. OBIA provides a framework for examining features defined by spectral, spatial, textural, contextual as well as hierarchical properties. Objects are derived through image segmentation and serve as input for the classification process, which relies on transparent rulesets representing knowledge

  9. Biological imaging software tools.

    PubMed

    Eliceiri, Kevin W; Berthold, Michael R; Goldberg, Ilya G; Ibáñez, Luis; Manjunath, B S; Martone, Maryann E; Murphy, Robert F; Peng, Hanchuan; Plant, Anne L; Roysam, Badrinath; Stuurman, Nico; Stuurmann, Nico; Swedlow, Jason R; Tomancak, Pavel; Carpenter, Anne E

    2012-06-28

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the inherent challenges and the overall status of available software for bioimage informatics, focusing on open-source options.

  10. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  11. Object-based algorithms and methods for quantifying urban growth pattern using sequential satellite images

    NASA Astrophysics Data System (ADS)

    Yu, Bailang; Liu, Hongxing; Gao, Yige; Wu, Jianping

    2008-08-01

    Previously, urban growth patterns were described and measured by pixel-by-pixel comparison of satellite images: the geographic extent, patterns and types of urban growth are derived from satellite images separated in time. However, the pixel-by-pixel comparison approach suffers from several drawbacks. Firstly, slight errors in image geo-referencing can cause false detection of changes. Secondly, it is difficult to recognize and correct artifact changes induced by data noise and data processing errors. Thirdly, only limited information can be derived. In this paper, we present a new object-based method to describe and quantify urban growth patterns. The different types of land cover are classified from sequential satellite images as urban objects. The geometric and shape attributes of objects, and the spatial relationships between them, are employed to identify the different types of urban growth pattern. The algorithms involved in the object-based method are implemented in the C++ programming language, and the software user interface is developed using ArcObjects and VB.Net. A simulated example is given to demonstrate the utility and effectiveness of this new method.
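
    One common way to operationalize growth types from object geometry is the fraction of a new patch's boundary shared with the existing urban area. A minimal sketch assuming the Shapely library, with the type names and thresholds hypothetical rather than the paper's rules:

        from shapely.geometry import Polygon

        def growth_type(new_patch, old_urban, infill=0.5):
            # fraction of the new patch's boundary touching existing urban area;
            # high -> infill, moderate -> edge-expansion, none -> outlying growth
            shared = new_patch.boundary.intersection(old_urban).length
            frac = shared / new_patch.boundary.length
            if frac >= infill:
                return "infill"
            return "edge-expansion" if frac > 0 else "outlying"

        old_urban = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
        new_patch = Polygon([(10, 2), (14, 2), (14, 6), (10, 6)])
        print(growth_type(new_patch, old_urban))   # edge-expansion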

  12. Physics pedagogical software: The CIP software contest

    NASA Astrophysics Data System (ADS)

    Donnelly, Denis

    1999-09-01

    In 1989, a decision was made that the journal Computers in Physics (CIP) would sponsor a software contest under the auspices of its education department. The goal was to reward software authors for their efforts to advance undergraduate pedagogy and to advertise their work to the larger physics community. The contest is now in its ninth year. In the following remarks, we will first attempt some general observations and then some evaluative comments.

  13. Object-based landslide mapping on satellite images from different sensors

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

    Several studies have proven that object-based image analysis (OBIA) is a suitable approach for landslide mapping using remote sensing data. Mostly, optical satellite images are utilized in combination with digital elevation models (DEMs) for semi-automated mapping. The ability of considering spectral, spatial, morphometric and contextual features in OBIA constitutes a significant advantage over pixel-based methods, especially when analysing non-uniform natural phenomena such as landslides. However, many of the existing knowledge-based OBIA approaches for landslide mapping are rather complex and are tailored to specific data sets. These constraints lead to a lack of transferability of OBIA mapping routines. The objective of this study is to develop an object-based approach for landslide mapping that is robust against changing input data with different resolutions, i.e. optical satellite imagery from various sensors. Two study sites in Taiwan were selected for developing and testing the landslide mapping approach. One site is located around the Baolai village in the Huaguoshan catchment in the southern-central part of the island, the other one is a sub-area of the Taimali watershed in Taitung County near the south-eastern Pacific coast. Both areas are regularly affected by severe landslides and debris flows. A range of very high resolution (VHR) optical satellite images was used for the object-based mapping of landslides and for testing the transferability across different sensors and resolutions: (I) SPOT-5, (II) Formosat-2, (III) QuickBird, and (IV) WorldView-2. Additionally, a digital elevation model (DEM) with 5 m spatial resolution and its derived products (e.g. slope, plan curvature) were used for supporting the semi-automated mapping, particularly for differentiating source areas and accumulation areas according to their morphometric characteristics. A focus was put on the identification of comparatively stable parameters (e.g. relative indices), which could be

  14. Summary Describing Integration of ERM Methodology into Supervisory Control Framework with Software Package Documentation; Advanced Reactor Technology Milestone: M4AT-16PN2301052

    SciTech Connect

    Ramuhalli, Pradeep; Hirt, Evelyn H.; Dib, Gerges; Veeramany, Arun; Bonebrake, Christopher A.; Roy, Surajit

    2016-09-20

    This project involved the development of enhanced risk monitors (ERMs) for active components in Advanced Reactor (AdvRx) designs by integrating real-time information about equipment condition with risk monitors. Health monitoring techniques in combination with predictive estimates of component failure based on condition and risk monitors can serve to indicate the risk posed by continued operation in the presence of detected degradation. This combination of predictive health monitoring based on equipment condition assessment and risk monitors can also enable optimization of maintenance scheduling with respect to the economics of plant operation. This report summarizes PNNL's multi-year project on the development and evaluation of an ERM concept for active components while highlighting FY2016 accomplishments. Specifically, this report provides a status summary of the integration and demonstration of the prototypic ERM framework with the plant supervisory control algorithms being developed at Oak Ridge National Laboratory (ORNL), and describes additional case studies conducted to assess sensitivity of the technology to different quantities. Supporting documentation on the software package to be provided to ORNL is incorporated in this report.

  15. Covert orienting in the split brain: Right hemisphere specialization for object-based attention.

    PubMed

    Kingstone, Alan

    2015-12-18

    The present paper takes as its starting point Phil Bryden's long-standing interest in human attention and the role it can play in laterality effects. Past split-brain research has suggested that object-based attention is lateralized to the left hemisphere [e.g., Egly, R., Rafal, R. D., Driver, J., & Starreveld, Y. (1994). Covert orienting in the split brain reveals hemispheric specialization for object-based attention. Psychological Science, 5(6), 380-382]. The task used to isolate object-based attention in that previous work, however, has been found wanting [Vecera, S. P. (1994). Grouped locations and object-based attention: Comment on Egly, Driver, and Rafal (1994). Journal of Experimental Psychology: General, 123(3), 316-320]; and indeed, subsequent research with healthy participants using a different task has suggested that object-based attention is lateralized to the opposite right hemisphere (RH) [Valsangkar-Smyth, M. A., Donovan, C. L., Sinnett, S., Dawson, M. R., & Kingstone, A. (2004). Hemispheric performance in object-based attention. Psychonomic Bulletin & Review, 11(1), 84-91]. The present study tested the same split-brain as Egly, Rafal, et al. (1994) but used the object-based attention task introduced by Valsangkar-Smyth et al. (2004). The results confirm that object-based attention is lateralized to the RH. They also suggest that subcortical interhemispheric competition may occur and be dominated by the RH.

  16. Advanced Languages for Systems Software

    DTIC Science & Technology

    1994-01-01

  17. System For Retrieving Reusable Software

    NASA Technical Reports Server (NTRS)

    Van Warren, Lloyd; Beckman, Brian C.

    1993-01-01

    The Encyclopedia of Software Components (ESC) is an information-retrieval system, comprising computer hardware and software, that provides access to generic reusable software tools and parts. The core of ESC is the central tool base, a repository of reusable software. It receives queries and submissions from the user through the local browser subsystem and receives authorized updates from the maintenance subsystem. It sends retrievals to the local browser subsystem and the user's submissions to the maintenance subsystem. Future versions will provide for advanced media, including voice and video, and will link the system to a database-management system. Programmers will not only retrieve software, but also modify, execute, and cross-link it with other software.

  18. Exploring the relationship between object realism and object-based attention effects.

    PubMed

    Roque, Nelson; Boot, Walter R

    2015-09-01

    Visual attention prioritizes processing of locations in space, and evidence also suggests that the benefits of attention can be shaped by the presence of objects (object-based attention). However, the prevalence of object-based attention effects has been called into question recently by evidence from a large-sampled study employing classic attention paradigms (Pilz et al., 2012). We conducted two experiments to explore factors that might determine when and if object-based attention effects are observed, focusing on the degree to which the concreteness and realism of objects might contribute to these effects. We adapted the classic attention paradigm first reported by Egly, Driver, and Rafal (1994) by replacing abstract bar stimuli in some conditions with objects that were more concrete and familiar to participants: items of silverware. Furthermore, we varied the realism of these items of silverware, presenting either cartoon versions or photo-realistic versions. Contrary to predictions, increased realism did not increase the size of object-based effects. In fact, no clear object-based effects were observed in either experiment, consistent with previous failures to replicate these effects in similar paradigms. While object-based attention may exist, and may have important influences on how we parse the visual world, these and other findings suggest that the two-object paradigm typically relied upon to study object-based effects may not be the best paradigm to investigate these issues.

  19. New Combined Object-Based Technique for Efficient Urban Classification Using WORLDVIEW-2 Data

    NASA Astrophysics Data System (ADS)

    Elsharkawy, A.; Elhabiby, M.; El-Sheimy, N.

    2012-07-01

    The new advances of having an eight-band satellite mission such as WorldView-2 (WV-2) give the chance to address and solve some of the traditional problems related to low spatial and/or spectral resolution, such as the lack of detail for certain features or the inability of conventional classifiers to detect some land-cover types because of missing efficient spectrum information and analysis techniques. High-resolution imagery is particularly well suited to urban applications. The high spectral and spatial resolution of WorldView-2 data introduces challenges in detailed mapping of urban features. Water, shadows, red roofs and concrete buildings exhibit significant spectral confusion in classification, arising either from high similarity in spectral response (e.g. water and shadows) or from similarity in material type (e.g. red roofs and concrete buildings). This research study assesses the enhancement of classification accuracy and efficiency for a dataset of WorldView-2 satellite imagery using the full 8 bands, by integrating the output of a classification process using three band ratios with a second step involving an object-based technique for extracting shadows, water, vegetation, buildings, bare soil and asphalt roads. The second-generation curvelet transform is used in the second step, specifically to detect buildings' boundaries, which aids the new band-ratio classification algorithm through efficient separation of the buildings. The combined technique is tested, and the preliminary results show a great potential of the new bands in the WV-2 imagery for the separation between confusing classes such as water and shadows; the testing is extended to the separation between bare soils and asphalt roads. The integrated band ratio-curvelet transform edge detection techniques increased the percentage of building detection by more than 30%.
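
    The paper's three specific band ratios are not given above, so the ratios below are generic examples of the kind of normalized ratios the new WV-2 bands enable; the per-object reflectances are synthetic placeholders:

        import numpy as np

        def nd(a, b, eps=1e-9):
            # generic normalized-difference band ratio
            return (a - b) / (a + b + eps)

        # hypothetical per-object mean reflectances for the eight WV-2 bands
        coastal, blue, green, yellow, red, red_edge, nir1, nir2 = \
            np.random.default_rng(5).random(8)

        ndvi = nd(nir1, red)                 # vegetation
        water_vs_shadow = nd(coastal, nir2)  # water reflects coastal light but absorbs NIR2
        soil_vs_asphalt = nd(yellow, blue)   # illustrative cue for bare soil vs asphalt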

  20. Estimation of Trees Outside Forests using IRS High Resolution data by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Pujar, G. S.; Reddy, P. M.; Reddy, C. S.; Jha, C. S.; Dadhwal, V. K.

    2014-11-01

    Assessment of trees outside forests (TOF) is widely recognized as a pivotal theme in sustainable natural resource management, due to their role in offering a variety of goods, such as timber, fruits and fodder, as well as services like water, carbon and biodiversity. Forest conservation efforts involving reduction of deforestation and degradation may have to rely increasingly on the alternatives provided by TOF in catering to economic demands at forest edges. Spatial information systems involving imaging, analysis and monitoring to achieve objectives under protocols like REDD+ require the incorporation of information from areas under forest as well as trees outside forests, to aid holistic decisions. In this perspective, automation in retrieving information on the area under trees growing outside forests, using high resolution imaging, is essential so that measurement and verification of extant carbon pools are strengthened. Retrieval of this tree cover is demonstrated herewith using object-based image analysis in a forest edge of the dry deciduous forests of the Eastern Ghats, in the Khammam district of Telangana state, India. IRS high resolution panchromatic 2.5 m data (Cartosat-1, orthorectified), used in tandem with 5.8 m multispectral LISS IV data, discern tree crowns and clusters at a detailed scale, and hence a semi-automated approach is attempted to classify TOF from an image pair from a relatively crop- and cloud-free season. Object-based image analysis (OBIA), as implemented in the commercial suite eCognition (Ver 8.9), consists of segmentation at a user-defined scale followed by the application of a wide range of spectral, textural and object-geometry based parameters for classification. The software offers an innovative blend of raster and vector features that can be juxtaposed flexibly across scales, horizontally or vertically. Segmentation was carried out at multiple scales to discern first the major land covers, such as forest, water and agriculture, followed by that at a finer

  1. Reuse Metrics for Object Oriented Software

    NASA Technical Reports Server (NTRS)

    Bieman, James M.

    1998-01-01

    One way to increase the quality of software products and the productivity of software development is to reuse existing software components when building new software systems. In order to monitor improvements in reuse, the level of reuse must be measured. In this NASA-supported project we (1) derived a suite of metrics which quantify reuse attributes for object-oriented, object-based, and procedural software, (2) designed prototype tools to take these measurements in Ada, C++, Java, and C software, (3) evaluated the reuse in available software, (4) analyzed the relationship between coupling, cohesion, inheritance, and reuse, (5) collected object-oriented software systems for our empirical analyses, and (6) developed quantitative criteria and methods for restructuring software to improve reusability.
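
    The project's metric suite is not reproduced in this record; as a toy illustration of one common reuse measure, the fraction of a system's code that comes from reused components, assuming reused modules are tracked explicitly:

        # Toy reuse-level metric: reused LOC / total LOC. The 'modules'
        # mapping (name -> (lines_of_code, is_reused)) is hypothetical,
        # not the project's actual tooling.
        def reuse_level(modules):
            total = sum(loc for loc, _ in modules.values())
            reused = sum(loc for loc, is_reused in modules.values()
                         if is_reused)
            return reused / total if total else 0.0

        print(reuse_level({"parser": (1200, True), "ui": (800, False)}))  # 0.6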

  2. The influence of object similarity and orientation on object-based cueing.

    PubMed

    Hein, Elisabeth; Blaschke, Stefan; Rolke, Bettina

    2017-01-01

    Responses to targets that appear at a noncued position within the same object (invalid-same) compared to a noncued position at an equidistant different object (invalid-different) tend to be faster and more accurate. These cueing effects have been taken as evidence that visual attention can be object based (Egly, Driver, & Rafal, Journal of Experimental Psychology: General, 123, 161-177, 1994). Recent findings, however, have shown that the object-based cueing effect is influenced by object orientation, suggesting that the cueing effect might be due to a more general facilitation of attentional shifts across the horizontal meridian (Al-Janabi & Greenberg, Attention, Perception, & Psychophysics, 1-17, 2016; Pilz, Roggeveen, Creighton, Bennet, & Sekuler, PLOS ONE, 7, e30693, 2012). The aim of this study was to investigate whether the object-based cueing effect is influenced by object similarity and orientation. According to the object-based attention account, objects that are less similar to each other should elicit stronger object-based cueing effects independent of object orientation, whereas the horizontal meridian theory would not predict any effect of object similarity. We manipulated object similarity by using a color (Exp. 1, Exp. 2A) or shape change (Exp. 2B) to distinguish two rectangles in a variation of the classic two-rectangle paradigm (Egly et al., 1994). We found that the object-based cueing effects were influenced by the orientation of the rectangles and strengthened by object dissimilarity. We suggest that object-based cueing effects are strongly affected by the facilitation of attention along the horizontal meridian, but that they also have an object-based attentional component, which is revealed when the dissimilarity between the presented objects is accentuated.

  3. Object-based glacier mapping in the Hohe Tauern Mountains of Austria

    NASA Astrophysics Data System (ADS)

    Aubrey Robson, Benjamin; Hölbling, Daniel; Nuth, Christopher; Olaf Dahl, Svein

    2015-04-01

    Up-to-date and frequent glacier outlines are a necessity for many applications within glaciology. While multispectral band ratios are a comparatively robust method for automatically classifying clean ice on a pixel-based level, semi- or fully automated glacier inventories are complicated by spectral similarities between classes such as debris-covered glacier ice and the surrounding bedrock and moraines, or between clean ice and turbid pro-glacial water. Most glacier inventories therefore require a great deal of manual correction. Here, we present a glacier inventory of the Hohe Tauern Mountains in the Central Eastern Alps in Austria. Numerous glaciers, including the Pasterze Glacier, which is the longest glacier in the Eastern Alps, shape this mountainous region. The mapping of glaciers is based on object-based image analysis (OBIA) using both high resolution (HR) satellite imagery from Landsat 8 and a digital elevation model (DEM) derived from Airborne Laser Scanning (ALS) data. We automatically classify clean ice, debris-covered ice and glacial lakes. Image objects are created by applying the multiresolution segmentation algorithm implemented in the eCognition (Trimble) software. The resulting image objects are classified using a combination of various features, whereby a focus was put on the selection of robust features that are ideally applicable for mapping large areas, for example spectral indices such as the Normalized Difference Vegetation Index (NDVI), Normalized Difference Snow and Ice Index (NDSI), Normalized Difference Water Index (NDWI), Land and Water Mask (LWK) and a ratio of the SWIR and NIR spectral bands. The ability of OBIA to incorporate optical and elevation data and to individually address data-specific characteristics helps differentiate debris-covered ice from surrounding features not only by using spectral properties but also based on morphological and topographic parameters, while the inclusion of rulesets relying on contextuality, size
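
    Among the listed features, the NDSI band ratio does much of the work for clean-ice mapping. A minimal sketch of an NDSI-plus-threshold rule (the 0.4 cutoff is a commonly used starting value, not the authors' tuned ruleset):

        import numpy as np

        def ndsi(green, swir):
            # NDSI = (Green - SWIR) / (Green + SWIR); snow and ice are
            # bright in green and dark in SWIR, so ice scores high.
            return (green.astype(float) - swir) / (green + swir + 1e-9)

        def clean_ice_mask(green, swir, threshold=0.4):
            # Threshold is a common starting value, not a tuned ruleset.
            return ndsi(green, swir) > threshold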

  4. Sandia National Laboratories Advanced Simulation and Computing (ASC) : appraisal method for the implementation of the ASC software quality engineering practices: Version 1.0.

    SciTech Connect

    Turgeon, Jennifer; Minana, Molly A.

    2008-02-01

    This document provides a guide to the process of conducting software appraisals under the Sandia National Laboratories (SNL) ASC Program. The goal of this document is to describe a common methodology for planning, conducting, and reporting results of software appraisals, thereby enabling: development of an objective baseline on implementation of the software quality engineering (SQE) practices identified in the ASC Software Quality Plan across the ASC Program; feedback from project teams on SQE opportunities for improvement; identification of strengths and opportunities for improvement for individual project teams; and guidance to the ASC Program on the focus of future SQE activities. Document contents include process descriptions, templates to promote consistent conduct of appraisals, and an explanation of the relationship of this procedure to the SNL ASC software program.

  5. A comparison of pixel-based and object-based approaches for land use land cover classification in semi-arid areas, Sudan

    NASA Astrophysics Data System (ADS)

    Adam, H. E.; Csaplovics, E.; Elhaja, M. E.

    2016-06-01

    This paper compares the application of pixel-based and object-based approaches to land use/land cover classification in the semi-arid areas of Sudan; a second aim is to assess the classification accuracy of each approach. The study was conducted in the gum arabic belt in North Kordofan State, which is affected by changes in the condition and composition of vegetation cover. The study used an ASTER L1B registered radiance-at-sensor image acquired on 19 October 2010. The image was radiometrically corrected using ENVI FLAASH software, and a subset with an area of 40,880 ha was created. The image classification (pixel-based and object-based) and accuracy assessment were then conducted. A total of 47 GCPs were surveyed and used for accuracy assessment in ERDAS 9.1. Image segmentation was implemented using Definiens eCognition 7.1 software; segmentation level 4, with a scale parameter of 25, was selected for classification based on colour and form homogeneity. Land use/land cover classes were derived by classification using the nearest-neighbour classifier with fuzzy membership functions for each class. Under both classification methods, the land use/land cover distribution is 20% for forest dominated by Acacia senegal and 1.50% for residential area, while the bare and farm land, grass and bush land, and mixed woodland classes cover 6.69% and 1.63%, 18.62% and 15.16%, and 53% and 61% for the pixel-based and object-based methods, respectively. The overall accuracy and Kappa statistic were 72.92% and 0.6259 for the pixel-based classification and 54.17% and 0.3810 for the object-based classification. The pixel-based approach performed better than the object-based approach for land use/land cover classification in the semi-arid gum arabic belt.
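
    The overall accuracy and Kappa figures above come from a standard confusion-matrix assessment against the surveyed GCPs; the computation can be sketched with scikit-learn (the label arrays below are placeholders):

        from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                                     confusion_matrix)

        # Reference labels at the surveyed GCPs and the classes assigned
        # by the classification at the same locations (placeholder values).
        reference = ["forest", "grass", "bare", "forest", "mixed", "bush"]
        predicted = ["forest", "grass", "mixed", "forest", "mixed", "bush"]

        print(confusion_matrix(reference, predicted))
        print("overall accuracy:", accuracy_score(reference, predicted))
        print("kappa:", cohen_kappa_score(reference, predicted))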

  6. The modulation of spatial congruency by object-based attention: analysing the "locus" of the modulation.

    PubMed

    Luo, Chunming; Lupiáñez, Juan; Funes, María Jesús; Fu, Xiaolan

    2011-12-01

    Earlier studies have demonstrated that spatial cueing differentially reduces stimulus-stimulus congruency (e.g., spatial Stroop) interference but not stimulus-response congruency (e.g., Simon; e.g., Lupiáñez & Funes, 2005). This spatial cueing modulation over spatial Stroop seems to be entirely attributable to object-based attention (e.g., Luo, Lupiáñez, Funes, & Fu, 2010). In the present study, two experiments were conducted to further explore whether the cueing modulation of spatial Stroop is object based and/or space based and to analyse the "locus" of this modulation. In Experiment 1, we found that the cueing modulation over spatial Stroop is entirely object based, independent of stimulus-response congruency. In Experiment 2, we observed that the modulation of object-based attention over the spatial Stroop only occurred at a short cue-target interval (i.e., stimulus onset asynchrony; SOA), whereas the stimulus-response congruency effect was not modulated either by object-based or by location-based attentional cueing. The overall pattern of results suggests that the spatial cueing modulation over spatial Stroop arises from object-based attention and occurs at the perceptual stage of processing.

  7. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  8. Proprietary software

    NASA Technical Reports Server (NTRS)

    Marnock, M. J.

    1971-01-01

    The protection of intellectual property by a patent, a copyright, or trade secrets is reviewed. The present and future use of computers and software are discussed, along with the governmental uses of software. The popularity of contractual agreements for sale or lease of computer programs and software services is also summarized.

  9. Target-object integration, attention distribution, and object orientation interactively modulate object-based selection.

    PubMed

    Al-Janabi, Shahd; Greenberg, Adam S

    2016-10-01

    The representational basis of attentional selection can be object-based. Various studies have suggested, however, that object-based selection is less robust than spatial selection across experimental paradigms. We sought to examine the manner by which the following factors might explain this variation: Target-Object Integration (targets 'on' vs. part 'of' an object), Attention Distribution (narrow vs. wide), and Object Orientation (horizontal vs. vertical). In Experiment 1, participants discriminated between two targets presented 'on' an object in one session, or presented as a change 'of' an object in another session. There was no spatial cue-thus, attention was initially focused widely-and the objects were horizontal or vertical. We found evidence of object-based selection only when targets constituted a change 'of' an object. Additionally, object orientation modulated the sign of object-based selection: We observed a same-object advantage for horizontal objects, but a same-object cost for vertical objects. In Experiment 2, an informative cue preceded a single target presented 'on' an object or as a change 'of' an object (thus, attention was initially focused narrowly). Unlike in Experiment 1, we found evidence of object-based selection independent of target-object integration. We again found that the sign of selection was modulated by the objects' orientation. This result may reflect a meridian effect, which emerged due to anisotropies in the cortical representations when attention is oriented endogenously. Experiment 3 revealed that object orientation did not modulate object-based selection when attention was oriented exogenously. Our findings suggest that target-object integration, attention distribution, and object orientation modulate object-based selection, but only in combination.

  10. Using digital photographs and object-based image analysis to estimate percent ground cover in vegetation plots

    USGS Publications Warehouse

    Luscier, J.D.; Thompson, W.L.; Wilson, J.M.; Gorham, B.E.; Dragut, L.D.

    2006-01-01

    Ground vegetation influences habitat selection and provides critical resources for survival and reproduction of animals. Researchers often employ visual methods to estimate ground cover, but these approaches may be prone to observer bias. We therefore evaluated a method using digital photographs of vegetation to objectively quantify percent ground cover of grasses, forbs, shrubs, litter, and bare ground within 90 plots of 2 m². We carried out object-based image analysis, using a software program called eCognition, to divide photographs into different vegetation classes (based on similarities among neighboring pixels) to estimate percent ground cover for each category. We used the Kappa index of agreement (KIA) to quantify correctly classified, randomly selected segments of all images. Our KIA values indicated strong agreement (>80%) for all vegetation categories, with an average of 90-96% (SE = 5%) of shrub, litter, forb, and grass segments classified correctly. We also created artificial plots with known percentages of each vegetation category to evaluate the accuracy of software predictions. Observed differences between true cover and eCognition estimates for each category ranged from 1 to 4%. This technique provides a repeatable and reliable way to estimate percent ground cover that allows quantification of classification accuracy. © The Ecological Society of America.

  11. Software Master Plan. Volume 2. Background (Annexes A-G)

    DTIC Science & Technology

    1990-02-09

    Management Roles ANNEX B Existing Policies, Standards, and Guidance Regarding Software and Systems ANNEX C Current Software Research and Development...Software related responsibilities include: - Developing policy and guidance for software acquisition programs. - Validating software acquisition...Advanced Technology) (DDDRE(R&AT)) Software related responsibilities include: - Providing review, management oversight, policy guidance, and coordination

  12. The effects of visual search efficiency on object-based attention.

    PubMed

    Greenberg, Adam S; Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2015-07-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41-51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161-177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention.

  13. Detection of object-based manipulation by the statistical features of object contour.

    PubMed

    Richao, Chen; Gaobo, Yang; Ningbo, Zhu

    2014-03-01

    Object-based manipulations, such as adding or removing objects in digital video, are usually malicious forgery operations. Compared with conventional double MPEG compression or frame-based tampering, it makes more sense to detect these object-based manipulations because they can directly affect our understanding of the video content. In this paper, a passive video forensics scheme is proposed for object-based forgery operations. After extracting adjustable-width areas around each object boundary, several statistical features, such as the moments of the detail wavelet coefficients and the average gradient of each colour channel, are obtained and input into a support vector machine (SVM) as feature vectors for classifying natural and forged objects. Experimental results on several video sequences with static backgrounds show that the proposed approach achieves a correct-detection accuracy of 70% to 95%.
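
    A sketch of the feature-extraction-plus-SVM pipeline described above, using PyWavelets for the detail coefficients; the specific wavelet, moments and SVM settings are not given in this record, so the choices here are assumptions:

        import numpy as np
        import pywt
        from scipy.stats import skew
        from sklearn.svm import SVC

        def boundary_features(patch):
            # patch: H x W x 3 area of adjustable width around an object
            # boundary. Wavelet and moment choices here are assumptions.
            feats = []
            for c in range(3):
                channel = patch[:, :, c].astype(float)
                # Moments of the detail (high-frequency) coefficients.
                _, (cH, cV, cD) = pywt.dwt2(channel, "db2")
                for d in (cH, cV, cD):
                    feats += [d.mean(), d.std(), skew(d.ravel())]
                # Average gradient magnitude of the colour channel.
                gy, gx = np.gradient(channel)
                feats.append(np.hypot(gx, gy).mean())
            return np.array(feats)

        # X = np.stack([boundary_features(p) for p in patches])
        # clf = SVC(kernel="rbf").fit(X, y)   # y: 0 natural, 1 forged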

  14. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool Telluride. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility; the parallelization approach using a new portable gather/scatter library (PGSLib); and current capabilities and future plans. They also present preliminary performance results on a variety of platforms.
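
    JTpack90 itself is Fortran 90; as a language-neutral illustration of the kind of preconditioned sparse Krylov solve it provides, a SciPy sketch using conjugate gradients with a Jacobi (diagonal) preconditioner:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, cg

        # Sparse SPD test matrix (1-D Laplacian) in CSR storage.
        n = 1000
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Jacobi preconditioner: apply the inverse of the diagonal.
        inv_diag = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

        x, info = cg(A, b, M=M)
        print("converged" if info == 0 else "failed",
              np.linalg.norm(A @ x - b))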

  15. Object-based Conditional Random Fields for Road Extraction from Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Huang, Zhijian; Xu, Fanjiang; Lu, Lei; Nie, Hongshan

    2014-03-01

    To make full use of spatially contextual and topological information in object-based image analysis (OBIA), an object-based conditional random field is proposed and applied to road extraction. Objects are produced by an initial segmentation, and then their neighbourhoods are constructed. Each object is represented by three kinds of features: colour, gradient histogram, and texture. Formulating road extraction as a binary classification problem, a conditional random field model is learned and then used for inference. The experimental results demonstrate that the proposed method is effective.
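
    The record does not give the CRF's potentials or inference scheme; a rough sketch of the general idea, with unary costs from a per-object classifier, a Potts smoothness term over the object adjacency graph, and simple ICM inference (the latter is an assumption, not necessarily the authors' method):

        import numpy as np

        def icm_road_labels(unary, neighbors, beta=1.0, iters=10):
            # unary[i, l]: cost of giving object i label l
            # (0 = non-road, 1 = road), e.g. negative log-probabilities
            # from a per-object classifier.
            # neighbors[i]: indices of objects adjacent to object i.
            labels = unary.argmin(axis=1)
            for _ in range(iters):
                for i in range(len(labels)):
                    costs = unary[i].astype(float)
                    for l in range(unary.shape[1]):
                        # Potts penalty for disagreeing with neighbours.
                        costs[l] += beta * sum(labels[j] != l
                                               for j in neighbors[i])
                    labels[i] = costs.argmin()
            return labels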

  16. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude is required: consider what you do NOT want the software to do along with what you want it to do, and assume things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially eliminating complexity, can be very helpful.

  17. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    requirements. This allows the projects leeway to meet these requirements in many forms that best suit a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. This update also incorporated advances in the state of the practice of software safety from academia and private industry. It addresses some of the more common issues now facing software developers in the NASA environment, such as the use of Commercial Off-the-Shelf Software (COTS), Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update, and it has had NASA-wide internal reviews by software engineering, quality, safety, and project management, as well as expert external review. This presentation and paper will discuss the new NASA Software Safety Standard, its organization, and key features. It will start with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes. It will then give a brief overview of the NASA Software Safety Process, including the key personnel responsibilities and functions that must be performed for safety-critical software.

  18. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    NASA Astrophysics Data System (ADS)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  19. Is Object-Based Attention Mandatory? Strategic Control over Mode of Attention

    ERIC Educational Resources Information Center

    Yeari, Menahem; Goldsmith, Morris

    2010-01-01

    Is object-based attention mandatory or under strategic control? In an adapted spatial cuing paradigm, participants focused initially on a central arrow cue that was part of a perceptual group (Experiment 1) or a uniformly connected object (Experiment 2), encompassing one of the potential target locations. The cue always pointed to an opposite,…

  20. Feature selection methods for object-based classification of sub-decimeter resolution digital aerial imagery

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...

  1. Using object-based image analysis to guide the selection of field sample locations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most challenging tasks for resource management and research is designing field sampling schemes to achieve unbiased estimates of ecosystem parameters as efficiently as possible. This study focused on the potential of fine-scale image objects from object-based image analysis (OBIA) to be u...

  2. The Object-Based Simon Effect: Grasping Affordance or Relative Location of the Graspable Part?

    ERIC Educational Resources Information Center

    Cho, Dongbin; Proctor, Robert W.

    2010-01-01

    Reaction time is often shorter when the irrelevant graspable handle of an object corresponds with the location of a keypress response to the relevant attribute than when it does not. This object-based Simon effect has been attributed to an affordance for grasping the handle with the hand to the same side. Because a grasping affordance should…

  3. Software Bridge

    NASA Technical Reports Server (NTRS)

    1995-01-01

    I-Bridge is a commercial version of software developed by I-Kinetics under a NASA Small Business Innovation Research (SBIR) contract. The software allows users of Windows applications to gain quick, easy access to databases, programs and files on UNIX services. Information goes directly onto spreadsheets and other applications; users need not manually locate, transfer and convert data.

  4. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen

    1988-01-01

    Presents reviews of six computer software programs for teaching science. Provides the publisher, grade level, cost, and descriptions of software, including: (1) "Recycling Logic"; (2) "Introduction to Biochemistry"; (3) "Food for Thought"; (4) "Watts in a Home"; (5) "Geology in Action"; and (6)…

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews six software packages for the Apple II family. Programs reviewed include "Science Courseware: Earth Science Series"; "Heat and Light"; "In Search of Space: Introduction to Model Rocketry"; "Drug Education Series: Drugs--Their Effects on You'"; "Uncertainties and Measurement"; and "Software Films: Learning about Science Series," which…

  6. Software Reviews.

    ERIC Educational Resources Information Center

    Miller, Anne, Ed.; Radziemski, Cathy, Ed.

    1988-01-01

    Reviews two software packages for the Macintosh series. "Course Builder 2.0," a courseware authoring system, allows the user to create programs which stand alone and may be used independently in the classroom. "World Builder," an artificial intelligence software package, allows creative thinking, problem-solving, and…

  7. Bindings in working memory: The role of object-based attention.

    PubMed

    Gao, Zaifeng; Wu, Fan; Qiu, Fangfang; He, Kaifeng; Yang, Yue; Shen, Mowei

    2017-02-01

    Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, with the focus on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM needs more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested four new bindings that had been suggested to require no more attention than their constituent features in the WM maintenance phase: the two constituent features of a binding were stored in different WM modules (cross-module binding, Experiment 1), came from auditory and visual modalities (cross-modal binding, Experiment 2), or were temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6) separated. In the critical condition, we added a secondary object-feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance to a larger degree relative to the performance for constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

  8. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now widely regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification, and its application is now widespread. Improvement of OBIA using spatial techniques is therefore of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method: an IKONOS image of the Beijing urban area and a WorldView-2 image of the Wolong mountainous area, in China. The images were classified using the object-based MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). The new spatial weighting based on MPS was demonstrated to achieve greater classification accuracy than the alternatives and is, thus, recommended as appropriate for object-based classification.
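
    The MPS machinery itself is beyond a short example, but the fusion step, reweighting k-NN class probabilities by multiple-point probabilities learned from the training image, can be outlined (the prior array is assumed to be precomputed by the MPS step):

        import numpy as np

        def mpknn_predict(knn, X, mps_prior):
            # knn: fitted sklearn KNeighborsClassifier on object spectra.
            # mps_prior[i, c]: multiple-point probability of class c at
            # object i, assumed precomputed from the training image.
            p_spectral = knn.predict_proba(X)
            p = p_spectral * mps_prior + 1e-12   # combine the two sources
            p /= p.sum(axis=1, keepdims=True)    # renormalize per object
            return p.argmax(axis=1)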

  9. Software Smarts

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under an SBIR (Small Business Innovative Research) contract with Johnson Space Center, Knowledge Based Systems Inc. (KBSI) developed an intelligent software environment for modeling and analyzing mission planning activities, simulating behavior, and, using a unique constraint propagation mechanism, updating plans with each change in mission planning activities. KBSI developed this technology into a commercial product, PROJECTLINK, a two-way bridge between PROSIm, KBSI's process modeling and simulation software, and leading project management software like Microsoft Project and Primavera's SureTrak Project Manager.

  10. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images.

    PubMed

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images, using a six-band multispectral camera (visible and near-infrared range), with the ultimate objective of generating a weed map of an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results: the relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicates a high potential for reducing herbicide applications or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance.
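
    The third phase, turning the per-pixel crop/weed classification into a gridded infestation map, reduces to block-aggregating a binary weed mask; a sketch with scikit-image, where the <5% breakpoint follows the abstract and the cell size is a placeholder:

        import numpy as np
        from skimage.measure import block_reduce

        def weed_grid(weed_mask, cell=50):
            # weed_mask: boolean per-pixel weed map from the OBIA steps.
            # Mean over each cell x cell block = weed coverage fraction.
            cover = block_reduce(weed_mask.astype(float), (cell, cell),
                                 np.mean)
            # 0 = weed-free, 1 = low coverage (<5%), 2 = higher coverage.
            return np.digitize(cover, [1e-7, 0.05])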

  11. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images

    PubMed Central

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images, using a six-band multispectral camera (visible and near-infrared range), with the ultimate objective of generating a weed map of an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results: the relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicates a high potential for reducing herbicide applications or other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963

  12. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane

    1990-01-01

    Reviews two programs: (1) "The Weather Machine" on understanding weather and weather forecasting and (2) "The Mystery of the Hotel Victoria" on problem solving in mathematics. Presents the descriptions, advantages, and weaknesses of the software. (YP)

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Reviews seven computer software programs that can be used in science education programs. Describes courseware which deals with muscles and bones, terminology, classifying animals without backbones, molecular structures, drugs, genetics, and shaping the earth's surface. (TW)

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1988

    1988-01-01

    Presents reviews of six software packages. Includes (1) "Plain Vanilla Statistics"; (2) "MathCAD 2.0"; (3) "GrFx"; (4) "Trigonometry"; (5) "Algebra II"; (6) "Algebra Drill and Practice I, II, and III." (PK)

  15. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Provides a review of four science software programs. Includes topics such as plate tectonics, laboratory experiment simulations, the human body, and light and temperature. Contains information on ordering and reviewers' comments. (ML)

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six computer software packages including "Lunar Greenhouse,""Dyno-Quest,""How Weather Works,""Animal Trackers,""Personal Science Laboratory," and "The Skeletal and Muscular Systems." Availability, functional, and hardware requirements are discussed. (CW)

  17. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are three computer software packages including "Martin Luther King, Jr.: Instant Replay of History,""Weeds to Trees," and "The New Print Shop, School Edition." Discussed are hardware requirements, costs, grade levels, availability, emphasis, strengths, and weaknesses. (CW)

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Eugene T., Ed.

    1988-01-01

    Presents reviews by classroom teachers of software for teaching science. Includes material on the work of geologists, genetics, earth science, classification of living things, astronomy, endangered species, skeleton, drugs, and heartbeat. Provides information on availability and equipment needed. (RT)

  19. Software Reviews.

    ERIC Educational Resources Information Center

    Dwyer, Donna; And Others

    1989-01-01

    Reviewed are seven software packages for Apple and IBM computers. Included are: "Toxicology"; "Science Corner: Space Probe"; "Alcohol and Pregnancy"; "Science Tool Kit Plus"; Computer Investigations: Plant Growth"; "Climatrolls"; and "Animal Watch: Whales." (CW)

  20. Reduced object-based perception in the near-hand space.

    PubMed

    Suh, Jihyun; Abrams, Richard A

    2015-12-01

    Previous studies have shown that hand proximity changes visual perception (Abrams et al. in Cognition 107(3):1035-1047, 2008). The present study examined the effects of hand proximity on object-based perception. In three experiments, participants viewed stimuli that were either near to or far from their hands. The target stimulus appeared, after a cue, in one of two rectangular objects: either at the location that had been previously cued, at the uncued end of the cued object, or in the uncued object. We found a significantly reduced same-object benefit in reaction time for stimuli near the hands in one experiment. Interestingly, we observed a same-object cost in sensitivity for stimuli near the hands in another experiment. The results reveal that object-based perception is disrupted in the near-hand space. This is consistent with previous findings revealing altered visual processing near the hands.

  1. Object-based benthic habitat mapping in the Florida Keys from hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun; Selch, Donna; Xie, Zhixiao; Roberts, Charles; Cooper, Hannah; Chen, Ge

    2013-12-01

    Accurate mapping of benthic habitats in the Florida Keys is essential for developing effective management strategies for this unique coastal ecosystem. In this study, we evaluated the applicability of hyperspectral imagery collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) to benthic habitat mapping in the Florida Keys. Overall accuracies of 84.3% and 86.7% were achieved for a group-level (3-class) and a code-level (12-class) classification, respectively, by integrating object-based image analysis (OBIA), hyperspectral image processing methods, and machine learning algorithms. Accurate and informative object-based benthic habitat maps were produced. Three commonly used image correction procedures (atmospheric, sun-glint, and water-column corrections) proved unnecessary for small-area mapping in the Florida Keys, and inclusion of bathymetry data in the mapping procedure did not increase the classification accuracy. This study indicates that hyperspectral systems are promising for accurate benthic habitat mapping at a fine level of detail.

  2. Developmental Changes in Mental Rotation: A Dissociation Between Object-Based and Egocentric Transformations

    PubMed Central

    Kaltner, Sandra; Jansen, Petra

    2016-01-01

    The present study was conducted to investigate developmental changes in mental rotation performance. We compared children, adults, and older adults regarding their performance on object-based and egocentric transformations. Both children and older adults showed higher overall reaction times than adults. The results were interpreted against the background of impaired working memory capacity in both children and older adults. Since developmental changes in working memory are mediated by age differences in cognitive processing speed, processing speed is presumed to be the underlying factor. Regarding the two types of transformation, an advantage of egocentric over object-based human figures was found only in adults, which led us to tentatively propose that children and older adults show deficits in perspective taking compared to adults. PMID:27512525

  3. Spatial attention is necessary for object-based attention: Evidence from temporal-order judgments.

    PubMed

    Donovan, Ian; Pratt, Jay; Shomstein, Sarah

    2017-04-01

    Attentional selection is a dynamic process that relies on multiple types of representations. That object representations contribute to attentional selection has been known for decades; however, most evidence for this contribution has been gleaned from studies that have relied on various forms of spatial cueing (some endogenous and some exogenous). It has thus remained unclear whether object-based attentional selection is a direct result of spatial cueing, or whether it still emerges without any spatial marker. Here we used a novel method, the temporal-order judgment (TOJ), to examine whether object-based guidance emerges in the absence of spatial cueing. Participants were presented with two rectangles oriented either horizontally or vertically. Following a 150-ms preview time, two target stimuli were presented on the same or on different objects, and participants were asked to report which of the two stimuli had appeared first. The targets consisted of stimuli that formed a percept of a "hole" or a "hill." First, we demonstrated that the "hill" target was indeed processed faster, as evidenced by a positive point of subjective simultaneity (PSS) measure. We then demonstrated that if the two targets appeared with equal probability on the same and on different objects, the PSS values, although positive, were not modulated by the objects. In a subsequent set of experiments, we showed that objects can modulate attentional allocation, but only when they are biased by a spatial (endogenous) cue. In other words, in the absence of a spatial cue or bias, object representations do not guide attentional selection. In addition to providing new constraints for theories of object-based attentional guidance, these experiments introduce a novel paradigm for measuring object-based attentional effects.

  4. Object-based inversion of crosswell radar tomography data to monitor vegetable oil injection experiments

    USGS Publications Warehouse

    Lane, Jr., John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.

    2004-01-01

    Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions. A principal advantage of
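
    The essence of the object-based parameterization is that each travel-time change is a linear combination of a few rectangle slowness changes, giving a small, overdetermined least-squares problem. A sketch with SciPy, assuming the ray-length matrix has already been computed from the survey geometry (the non-negativity bound is also an assumption):

        import numpy as np
        from scipy.optimize import lsq_linear

        def invert_slowness_change(G, dt):
            # G[i, j]: length of raypath i inside rectangle j, precomputed
            # from the borehole/ray geometry; dt[i]: post- minus
            # pre-injection travel time for ray i. With far more rays
            # than rectangles, the system is overdetermined.
            # Oil displacing ground water slows the radar wave, so the
            # slowness changes are constrained non-negative (assumption).
            result = lsq_linear(G, dt, bounds=(0.0, np.inf))
            return result.x  # slowness change per rectangle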

  5. Effect of Pansharpened Image on Some of Pixel Based and Object Based Classification Accuracy

    NASA Astrophysics Data System (ADS)

    Karakus, P.; Karabork, H.

    2016-06-01

    Classification is the most important method for determining the type of crop present in a region for agricultural planning. There are two types of classification: pixel-based and object-based. While pixel-based classification methods are based on the information in each pixel, object-based classification is based on image objects formed by combining information from sets of similar pixels. A multispectral image has higher spectral resolution than a panchromatic image, while a panchromatic image has higher spatial resolution than a multispectral image. Pan-sharpening is the process of merging high-spatial-resolution panchromatic and high-spectral-resolution multispectral imagery to create a single high-resolution colour image. The aim of the study was to compare the classification accuracy achievable with a pan-sharpened image. In this study, a SPOT 5 image dated April 2013 was used; the 5 m panchromatic and 10 m multispectral images were pan-sharpened. Four classification methods were investigated: maximum likelihood, decision tree, and support vector machine at the pixel level, and an object-based classification method. The SPOT 5 pan-sharpened image was used to classify sunflower and corn in a study site located in the Kadirli region of Osmaniye, Turkey. The effects of the pan-sharpened image on the classification results were also examined. Accuracy assessment showed that the object-based classification produced better overall accuracy values than the others. The results indicate that these classification methods can be used for identifying sunflower and corn and estimating crop areas.
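
    The abstract does not say which fusion algorithm was used, so as an illustrative stand-in, a minimal sketch of one classical pan-sharpening scheme, the Brovey transform, applied to a bands-first multispectral array:

        import numpy as np
        from scipy.ndimage import zoom

        def brovey(ms, pan):
            # ms: bands x H x W multispectral array (e.g., 10 m SPOT 5);
            # pan: (H*r) x (W*r) panchromatic band (e.g., 5 m).
            r = pan.shape[0] // ms.shape[1]
            up = np.stack([zoom(b.astype(float), r, order=1) for b in ms])
            # Rescale each band so the band sum matches the pan intensity.
            return up * pan / (up.sum(axis=0) + 1e-9)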

  6. Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Holbling, Daniel

    2015-05-01

    This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.

  7. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  8. Remote sensing clustering analysis based on object-based interval modeling

    NASA Astrophysics Data System (ADS)

    He, Hui; Liang, Tianheng; Hu, Dan; Yu, Xianchuan

    2016-09-01

    In object-based clustering, image data are segmented into objects (groups of pixels) and then clustered based on the objects' features. This method can be used to automatically classify high-resolution remote sensing images, but it requires accurate descriptions of object features. In this paper, we establish that an interval-valued data model is appropriate for describing clustering prototype features. With this in mind, we developed an object-based interval modeling method for high-resolution, multiband remote sensing data, and designed an adaptive interval-valued fuzzy clustering method. We ran experiments utilizing images from the SPOT-5 satellite sensor of the Pearl River Delta region and Beijing. The results indicate that the proposed algorithm considers both the anisotropy of the remote sensing data and the ambiguity of objects. Additionally, we present a new dissimilarity measure for interval vectors, which better separates the interval vectors generated by features of the segmentation units (objects); this approach effectively limits classification errors caused by spectral mixing between classes. Compared with an object-based unsupervised classification method proposed earlier, the proposed algorithm improves the classification accuracy without increasing computational complexity.

  9. Information extraction with object based support vector machines and vegetation indices

    NASA Astrophysics Data System (ADS)

    Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun

    2016-07-01

    Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information, but it remains challenging because several factors affect classification accuracy: the resolution of the imagery, the number and homogeneity of land cover classes, the purity of the training data, and the characteristics of the adopted classifiers, among others. Object-based image classification has some advantages over pixel-based classification for high-resolution images, since it uses geometry and structure information in addition to spectral information. Vegetation indices are also commonly used in the classification process, since they provide additional spectral information for vegetation, forestry and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object-based support vector machines were implemented for the classification of crop types in the study area, located in the Aegean region of Turkey. The results demonstrate that incorporating NDRE increases the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreases it from 79.96% to 78.90%. Moreover, object-based classification with RapidEye data gives promising results for crop type mapping and analysis.
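
    NDVI and NDRE are simple normalized ratios of the RapidEye red, red-edge and NIR bands; a sketch of computing them and appending them as extra per-object features (the array layout is an assumption):

        import numpy as np

        def add_vegetation_indices(objects):
            # objects: N x 5 mean RapidEye spectra per image object, with
            # columns (blue, green, red, red_edge, nir).
            red, red_edge, nir = objects[:, 2], objects[:, 3], objects[:, 4]
            ndvi = (nir - red) / (nir + red + 1e-9)
            ndre = (nir - red_edge) / (nir + red_edge + 1e-9)
            # Feature matrix for the object-based SVM: spectra + indices.
            return np.column_stack([objects, ndvi, ndre])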

  10. Online decoding of object-based attention using real-time fMRI.

    PubMed

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity.

  11. Space-Based but not Object-Based Inhibition of Return is Impaired in Parkinson's Disease

    PubMed Central

    Possin, Katherine L.; Filoteo, J. Vincent; Song, David D.; Salmon, David P.

    2009-01-01

    Impairments in certain aspects of attention have frequently been reported in Parkinson's disease (PD), including reduced inhibition of return (IOR). Recent evidence suggests that IOR can occur when attention is directed at objects or locations, but previous investigations of IOR in PD have not systematically compared these two frames of reference. The present study compared the performance of 18 nondemented patients with PD and 18 normal controls on an IOR task with two conditions. In the “object-present” condition, objects surrounded the cues and targets so that attention was cued to both a spatial location and to a specific object. In the “object-absent” condition, surrounding objects were not presented so that attention was cued only to a spatial location. When participants had to rely on space-based cues, PD patients demonstrated reduced IOR compared to controls. In contrast, when objects were present in the display and participants could use object-based cues, PD patients exhibited normal IOR. These results suggest that PD patients are impaired in inhibitory aspects of space-based attention, but are able to overcome this impairment when their attention can be directed at object-based frames of reference. This dissociation supports the view that space-based and object-based components of attention involve distinct neurocognitive processes. PMID:19397864

  12. Addressing Software Security

    NASA Technical Reports Server (NTRS)

    Bailey, Brandon

    2015-01-01

    Historically, security within organizations was thought of as an IT function (web sites/servers, email, workstation patching, etc.). The threat landscape has since evolved (script kiddies, hackers, Advanced Persistent Threats (APT), nation states, etc.), and the attack surface has expanded as networks have become interconnected. Some security posture factors: the network layer (routers, firewalls, etc.); computer network defense (IPS/IDS, sensors, continuous monitoring, etc.); industrial control systems (ICS); and software security (COTS, FOSS, custom, etc.).

  13. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function, such as photosynthesis and productivity, are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes of 900-9000 m²); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Tree and Support Vector Machine); and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed the inclusion of object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest, pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy: the 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because
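
    The classifier-by-scale comparison described above can be mocked up compactly with scikit-learn; in this sketch the per-scale feature sets are synthetic stand-ins for the object statistics exported from the OBIA step:

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier

        classifiers = {
            "bayes": GaussianNB(),
            "logistic": LogisticRegression(max_iter=1000),
            "tree": DecisionTreeClassifier(),
            "svm": SVC(),
            "knn": KNeighborsClassifier(),
        }

        # Synthetic stand-ins for per-object feature tables at three
        # segmentation scales (average object sizes 900-9000 m^2).
        scales = {s: make_classification(n_samples=300, n_features=12,
                                         n_informative=6, n_classes=3,
                                         random_state=s)
                  for s in (900, 3000, 9000)}

        for scale, (X, y) in scales.items():
            for name, clf in classifiers.items():
                acc = cross_val_score(clf, X, y, cv=5).mean()
                print(scale, name, round(acc, 3))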

  14. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta vision of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  15. Software engineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan

    1993-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.

  16. Automatic segmentation software in locally advanced rectal cancer: READY (REsearch program in Auto Delineation sYstem)-RECTAL 02: prospective study.

    PubMed

    Gambacorta, Maria A; Boldrini, Luca; Valentini, Chiara; Dinapoli, Nicola; Mattiucci, Gian C; Chiloiro, Giuditta; Pasini, Danilo; Manfrida, Stefania; Caria, Nicola; Minsky, Bruce D; Valentini, Vincenzo

    2016-07-05

    To validate autocontouring software (AS) in clinical practice, including a two-step delineation quality assurance (QA) procedure. The existing delineation agreement among experts for rectal cancer and the overlap and time criteria that have to be verified to allow the use of AS were defined. Median Dice Similarity Coefficient (MDSC), Mean Slicewise Hausdorff Distance (MSHD) and Total-Time saving (TT) were analyzed. Two expert Radiation Oncologists reviewed the CT scans of 44 patients and agreed the reference CTV: the first 14 consecutive cases were used to populate the software Atlas and 30 were used as the Test set. Each expert performed a manual (group A) and an automatic delineation (group B) of 15 Test patients. The delineations were compared with the reference contours. The overlap between the manual and automatic delineations, with MDSC and MSHD, and the TT were analyzed. Three acceptance criteria were set: MDSC ≥ 0.75, MSHD ≤ 1 mm and TT sparing ≥ 50%. At least 2 criteria had to be met, one of which had to be TT saving, to validate the system. The MDSC was 0.75, MSHD 2.00 mm and the TT saving 55.5% between group A and group B. MDSC among experts was 0.84. Autosegmentation systems in rectal cancer partially met the acceptability criteria in the present version.
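
    The two overlap criteria used here, the Dice Similarity Coefficient and a slicewise Hausdorff distance, can be computed directly from binary masks. A minimal sketch follows (not the study's QA pipeline); it assumes NumPy boolean masks and isotropic 1 mm pixels, so pixel distances read as millimetres.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground pixels of one slice."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Compare a hypothetical automatic and reference mask for one CT slice.
ref = np.zeros((64, 64), bool);  ref[20:40, 20:40] = True
auto = np.zeros((64, 64), bool); auto[22:42, 21:41] = True
print(f"DSC = {dice(auto, ref):.3f}, HD = {hausdorff(auto, ref):.1f} px")
```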

  17. Automatic segmentation software in locally advanced rectal cancer: READY (REsearch program in Auto Delineation sYstem)-RECTAL 02: prospective study

    PubMed Central

    Dinapoli, Nicola; Mattiucci, Gian C.; Chiloiro, Giuditta; Pasini, Danilo; Manfrida, Stefania; Caria, Nicola; Minsky, Bruce D.

    2016-01-01

    To validate autocontouring software (AS) in clinical practice, including a two-step delineation quality assurance (QA) procedure. The existing delineation agreement among experts for rectal cancer and the overlap and time criteria that have to be verified to allow the use of AS were defined. Median Dice Similarity Coefficient (MDSC), Mean Slicewise Hausdorff Distance (MSHD) and Total-Time saving (TT) were analyzed. Two expert Radiation Oncologists reviewed the CT scans of 44 patients and agreed the reference CTV: the first 14 consecutive cases were used to populate the software Atlas and 30 were used as the Test set. Each expert performed a manual (group A) and an automatic delineation (group B) of 15 Test patients. The delineations were compared with the reference contours. The overlap between the manual and automatic delineations, with MDSC and MSHD, and the TT were analyzed. Three acceptance criteria were set: MDSC ≥ 0.75, MSHD ≤ 1 mm and TT sparing ≥ 50%. At least 2 criteria had to be met, one of which had to be TT saving, to validate the system. The MDSC was 0.75, MSHD 2.00 mm and the TT saving 55.5% between group A and group B. MDSC among experts was 0.84. Autosegmentation systems in rectal cancer partially met the acceptability criteria in the present version. PMID:27302924

  18. Comparison of Object-Based Image Analysis Approaches to Mapping New Buildings in Accra, Ghana Using Multi-Temporal QuickBird Satellite Imagery

    PubMed Central

    Tsai, Yu Hsin; Stow, Douglas; Weeks, John

    2013-01-01

    The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings from QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Two software packages were evaluated: Feature Analyst, based on a spatial contextual classifier, and ENVI Feature Extraction, which uses a true object-based image analysis approach of image segmentation and segment classification. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results than the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R² = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover and land use and demographic change. PMID:24415810
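
    Completeness and correctness, the two object-based accuracy measures used in this study, can be illustrated with a simple matching rule. The sketch below is not the authors' implementation: it represents buildings as bounding boxes and declares a match when intersection-over-union exceeds a threshold, which is an assumption made purely for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def completeness_correctness(detected, reference, thresh=0.5):
    # completeness: share of reference objects matched by some detection
    matched_ref = sum(any(iou(d, r) >= thresh for d in detected) for r in reference)
    # correctness: share of detections matching some reference object
    matched_det = sum(any(iou(d, r) >= thresh for r in reference) for d in detected)
    return matched_ref / len(reference), matched_det / len(detected)

# Hypothetical building footprints as bounding boxes.
ref = [(0, 0, 10, 10), (20, 20, 30, 30)]
det = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(completeness_correctness(det, ref))   # -> (0.5, 0.5)
```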

  19. Mapping gully-affected areas in the region of Taroudannt, Morocco based on Object-Based Image Analysis (OBIA)

    NASA Astrophysics Data System (ADS)

    d'Oleire-Oltmanns, Sebastian; Marzolff, Irene; Tiede, Dirk; Blaschke, Thomas

    2015-04-01

    The need for area-wide landform mapping approaches, especially in terms of land degradation, can be ascribed to the fact that such approaches consider the (spatial) context of erosional landforms by providing additional information on the physiography neighboring the distinct landform. This study presents an approach for the detection of gully-affected areas by applying object-based image analysis in the region of Taroudannt, Morocco, which is highly affected by gully erosion while simultaneously representing a major region of agro-industry with a high demand for arable land. Various sensors provide readily available high-resolution optical satellite data with a much better temporal resolution than 3D terrain data, which led to the development of an area-wide mapping approach to extract gully-affected areas using only optical satellite imagery. The classification rule-set was developed with a clear focus on virtual spatial independence within the software environment of eCognition Developer. This allows the incorporation of knowledge about the target objects under investigation. Only optical QuickBird-2 satellite data and freely available OpenStreetMap (OSM) vector data were used as input data. The OSM vector data were incorporated in order to mask out plantations and residential areas. Optical input data are more readily available to a broad range of users than terrain data, which is considered a major advantage. The methodology additionally incorporates expert knowledge and freely available vector data in a cyclic object-based image analysis approach. This connects the two fields of geomorphology and remote sensing. The classification results allow conclusions on the current distribution of gullies. The results of the classification were checked against manually delineated reference data incorporating expert knowledge based on several field campaigns in the area, resulting in an overall classification accuracy of 62

  20. Control Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Real-Time Innovations, Inc. (RTI) collaborated with Ames Research Center, the Jet Propulsion Laboratory and Stanford University to leverage NASA research to produce ControlShell software. RTI is the first "graduate" of Ames Research Center's Technology Commercialization Center. The ControlShell system was used extensively on a cooperative project to enhance the capabilities of a Russian-built Marsokhod rover being evaluated for eventual flight to Mars. RTI's ControlShell is complex, real-time command and control software, capable of processing information and controlling mechanical devices. One ControlShell tool is StethoScope. As a real-time data collection and display tool, StethoScope allows a user to see how a program is running without changing its execution. RTI has successfully applied its software savvy in other arenas, such as telecommunications, networking, video editing, semiconductor manufacturing, automobile systems, and medical imaging.

  1. Antiterrorist Software

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture (see the following figure), and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.

  2. Object-based attention in Chinese readers of Chinese words: beyond Gestalt principles.

    PubMed

    Li, Xingshan; Logan, Gordon D

    2008-10-01

    Most object-based attention studies use objects defined bottom-up by Gestalt principles. In the present study, we defined objects top-down, using Chinese words that were seen as objects by skilled readers of Chinese. Using a spatial cuing paradigm, we found that a target character was detected faster if it was in the same word as the cued character than if it was in a different word. Because there were no bottom-up factors that distinguished the words, these results showed that objects defined by subjects' knowledge--in this case, lexical information--can also constrain the deployment of attention.

  3. Object-based mental rotation and visual perspective-taking in typical development and Williams syndrome.

    PubMed

    Broadbent, Hannah J; Farran, Emily K; Tolmie, Andrew

    2014-01-01

    This study examined Object-based (OB) rotation and Visual Perspective-Taking (VPT) abilities in Williams syndrome (WS) compared to typically developing (TD) 5-10-year-olds. Extensive difficulties with both types of imagined rotation were observed in WS; WS performance was in line with the level of ability observed in TD 5-year-olds. However, an atypical pattern of errors on OB and VPT tasks was observed in WS compared to TD groups. Deficits in imagined rotations are consistent with known atypical cortical development in WS. Such difficulties in updating the position of the self following movement in WS may have implications for large-scale spatial navigation.

  4. The effect of input data transformations on object-based image analysis.

    PubMed

    Lippitt, Christopher D; Coulter, Lloyd L; Freeman, Mary; Lamantia-Bishop, Jeffrey; Pang, Wyson; Stow, Douglas A

    2012-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest.
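
    As an illustration of feeding a spectral transform to segmentation rather than raw bands, here is a minimal sketch (the image array is a random placeholder, not the study's data) that derives principal-component bands with scikit-learn ahead of a segmentation step.

```python
import numpy as np
from sklearn.decomposition import PCA

bands = np.random.rand(256, 256, 4)          # stand-in for a 4-band image
pixels = bands.reshape(-1, bands.shape[-1])  # (n_pixels, n_bands)

pca = PCA(n_components=3)
pc = pca.fit_transform(pixels).reshape(256, 256, 3)
print("variance explained:", pca.explained_variance_ratio_.round(3))
# `pc` would then be passed to the segmentation step in place of raw bands.
```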

  5. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  6. [Software version and medical device software supervision].

    PubMed

    Peng, Liang; Liu, Xiaoyan

    2015-01-01

    The importance of software version in medical device software supervision does not receive enough attention at present. First, the role of software version in medical device software supervision is discussed, and then the necessity of considering software version in supervision is analyzed, based on a discussion of common misunderstandings of software version. Finally, concrete suggestions on software version naming rules, software version supervision for the software in medical devices, and software version supervision schemes are proposed.

  7. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quickbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers shape, texture and content information associated with the image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters was analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with its optimal parameters of scale, shape and compactness, which were defined after an extensive trial process on the data sets. A nearest neighbour classifier was applied to the segmented images, and then accuracy assessment was performed. Results show that segmentation parameters have a direct effect on the classification accuracy, and low values of scale-shape combinations produce the highest classification accuracies. Also, the compactness parameter was found to have minimal effect on the construction of image objects; hence it can be set to a constant value in image classification.
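
    A scale-parameter sweep of the kind described can be sketched with open tools. eCognition's multi-resolution segmentation is proprietary, so the sketch below substitutes Felzenszwalb's graph-based segmenter from scikit-image as an analogous scale-driven method; the test image is a library sample, not VHR data.

```python
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb

image = astronaut()                    # placeholder for a VHR image chip
for scale in (50, 100, 200, 400):
    # Larger scale values merge more pixels into fewer, bigger objects.
    segments = felzenszwalb(image, scale=scale, sigma=0.8, min_size=30)
    print(f"scale={scale:4d} -> {segments.max() + 1} objects")
# Classification accuracy could then be assessed per parameter setting,
# as the study does for scale, shape and compactness.
```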

  8. Educational Software.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The third session of IT@EDU98 consisted of five papers on educational software and was chaired by Tran Van Hao (University of Education, Ho Chi Minh City, Vietnam). "Courseware Engineering" (Nguyen Thanh Son, Ngo Ngoc Bao Tran, Quan Thanh Tho, Nguyen Hong Lam) briefly describes the use of courseware. "Machine Discovery Theorems in Geometry: A…

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Sidwell, Joseph C.; And Others

    1988-01-01

    Gives a review of four software packages including "Science Toolkit: Module 3--Body Lab" for measuring heart rate, lung capacity, and response time; "Project Zoo: Adventures with Charts and Graphs" for developing process skills; "The Body Electric" for explaining electrical activity in the body; and "M-ss-ng…

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are computer software packages: "Where in Time Is Carmen Sandiego,""The Bio Sci Videodisc," and "Bio Sci Stacks." Included are hardware requirements, costs, emphasis, grade level, and availability. Functions of the packages are discussed including strengths and weaknesses and teaching suggestions. (CW)

  11. Software Reviews.

    ERIC Educational Resources Information Center

    History Microcomputer Review, 1988

    1988-01-01

    Reviews seven educational computer software packages covering such topics as presidential elections, the American Revolution, the Vietnam War, the construction of historical time lines, and general U.S. history. Also reviews a program designed to help tailor data entry files. Provides ordering information, price, and computer compatibility…

  12. Reviews: Software.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Reviews four computer software packages including: "The Physical Science Series: Sound" which demonstrates making waves, speed of sound, doppler effect, and human hearing; "Andromeda" depicting celestial motions in any direction; "Biology Quiz: Humans" covering chemistry, cells, viruses, and human biology; and…

  13. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1990-01-01

    Reviews two computer software programs: (1) "Conquering Ratios and Proportions" using a medieval theme for guided practice in identifying and forming ratios for grades 5-8, and (2) "Percent Word Problems" providing problems for finding a percentage of a number and a number from a percentage. (YP)

  14. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  15. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are two computer software packages: "Super Solvers Midnight Rescue!" a problem-solving program for IBM PCs; and "Interactive Physics," a simulation program for the Macintosh computer. The functions of the package are discussed including strengths and weaknesses and teaching suggestions. (CW)

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1989-01-01

    Describes three software packages: (1) "MacMendeleev"--database/graphic display for chemistry, grades 10-12, Macintosh; (2) "Geometry One: Foundations"--geometry tutorial, grades 7-12, IBM; (3) "Mathematics Exploration Toolkit"--algebra and calculus tutorial, grades 8-12, IBM. (MVL)

  17. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1988

    1988-01-01

    Reviewed are three computer software packages for Apple II series computers. Included are "The Right Job," a career counseling program; "Zoyon Patrol," a problem-solving program; and "Adventures with Charts and Graphs: Project Zoo," a graphing, mathematics, and science skills program. Each review includes strengths, weaknesses, and suggestions for use.…

  18. Reviews, Software.

    ERIC Educational Resources Information Center

    Science Teacher, 1988

    1988-01-01

    Reviews two software programs for Apple series computers. Includes "Orbital Mech," a basic planetary orbital simulation for the Macintosh, and "START: Stimulus and Response Tools for Experiments in Memory, Learning, Cognition, and Perception," a program that demonstrates basic psychological principles and experiments. (CW)

  19. Software Reviews.

    ERIC Educational Resources Information Center

    Kinnaman, Daniel E.; And Others

    1988-01-01

    Reviews four educational software packages for Apple, IBM, and Tandy computers. Includes "How the West was One + Three x Four,""Mavis Beacon Teaches Typing,""Math and Me," and "Write On." Reviews list hardware requirements, emphasis, levels, publisher, purchase agreements, and price. Discusses the strengths…

  20. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed are two computer software programs for Apple II computers on weather for upper elementary and middle school grades. "Weather" introduces the major factors (temperature, humidity, wind, and air pressure) affecting weather. "How Weather Works" uses simulation and auto-tutorial formats on sun, wind, fronts, clouds, and…

  1. Star Software.

    ERIC Educational Resources Information Center

    Kloza, Brad

    2000-01-01

    Presents a collection of computer software programs designed to spark learning enthusiasm at every grade level and across the curriculum. They include Reader Rabbit's Learn to Read, Spelling Power, Mind Twister Math, Community Construction Kit, Breaking the Code, Encarta Africana 2000, Virtual Serengeti, Operation: Frog (Deluxe), and My First…

  2. Software Reviews.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Describes computer software for use with various age groups. Topics include activities involving temperature, simulations, earth science, the circulatory system, human body, reading in science, and ecology. Provides information on equipment needed, availability, package contents, and price. Comments of reviews are presented by classroom teachers.…

  3. Software Patents.

    ERIC Educational Resources Information Center

    Burke, Edmund B.

    1994-01-01

    Outlines basic patent law information that pertains to computer software programs. Topics addressed include protection in other countries; how to obtain patents; kinds of patents; duration; classes of patentable subject matter, including machines and processes; patentability searches; experimental use prior to obtaining a patent; and patent…

  4. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1988-01-01

    Contains evaluations of two computer software packages, "Simulation Experiments 45-48 in Epstein's Laboratory Manual for Chemistry" and "Maps and Legends--the Cartographer (Ver 3.0)." Includes a brief description, applications, and the perceived strengths and weaknesses for each package. (CW)

  5. Statistical Software.

    ERIC Educational Resources Information Center

    Callamaras, Peter

    1983-01-01

    This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)

  6. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are seven computer software packages including "Frog Dissection Lab Report,""Backyard Birds,""LEGO TC Logo,""Alcohol--Four Interactive Programs,""Windows on Science--Life Science,""Climate and Weather/Our Town Database," and "Weeds to Trees." Discussed are availability, features, strengths, and weaknesses. (CW)

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Teles, Elizabeth, Ed.; And Others

    1990-01-01

    Reviewed are two computer software packages for Macintosh microcomputers including "Phase Portraits," an exploratory graphics tool for studying first-order planar systems; and "MacMath," a set of programs for exploring differential equations, linear algebra, and other mathematical topics. Features, ease of use, cost, availability, and hardware…

  8. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1989

    1989-01-01

    Reviews of seven software packages are presented including "The Environment I: Habitats and EcoSystems; II Cycles and Interactions"; "Super Sign Maker"; "The Great Knowledge Race: Substance Abuse"; "Exploring Science: Temperature"; "Fast Food Calculator and RD Aide"; "The Human Body:…

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are seven computer software packages for IBM and/or Apple Computers. Included are "Windows on Science: Volume 1--Physical Science"; "Science Probe--Physical Science"; "Wildlife Adventures--Grizzly Bears"; "Science Skills--Development Programs"; "The Clean Machine"; "Rock Doctor";…

  10. Software Review.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed is a computer software package entitled "Audubon Wildlife Adventures: Grizzly Bears" for Apple II and IBM microcomputers. Included are availability, hardware requirements, cost, and a description of the program. The murder-mystery flavor of the program is stressed in this program that focuses on illegal hunting and game…

  11. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six software packages for Apple and/or IBM computers. Included are "Autograph,""The New Game Show,""Science Probe-Earth Science,""Pollution Patrol,""Investigating Plant Growth," and "AIDS: The Investigation." Discussed are the grade level, function, availability, cost, and hardware requirements of each. (CW)

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews five software packages for use with school age children. Includes "Science Toolkit Module 2: Earthquake Lab"; "Adaptations and Identification"; "Geoworld"; "Body Systems II Series: The Blood System: A Liquid of Life," all for Apple II, and "Science Courseware: Life Science/Biology" for…

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1987

    1987-01-01

    Presented are reviews of several microcomputer software programs. Included are reviews of: (1) Microstat (Zenith); (2) MathCAD (MathSoft); (3) Discrete Mathematics (True Basic); (4) CALCULUS (True Basic); (5) Linear-Kit (John Wiley); and (6) Geometry Sensei (Broderbund). (RH)

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1990-01-01

    Reviews three computer software: (1) "Elastic Lines: The Electronic Geoboard" on elementary geometry; (2) "Wildlife Adventures: Whales" on environmental science; and (3) "What Do You Do with a Broken Calculator?" on computation and problem solving. Summarizes the descriptions, strengths and weaknesses, and…

  15. Image Mining in Remote Sensing for Coastal Wetlands Mapping: from Pixel Based to Object Based Approach

    NASA Astrophysics Data System (ADS)

    Farda, N. M.; Danoedoro, P.; Hartono; Harjoko, A.

    2016-11-01

    The availability of remote sensing image data is now enormous, and this large amount of data creates a “knowledge gap” in the extraction of selected information, especially for coastal wetlands. Coastal wetlands provide ecosystem services essential to people and the environment. The aim of this research is to extract coastal wetland information from satellite data using pixel-based and object-based image mining approaches. Landsat MSS, Landsat 5 TM, Landsat 7 ETM+, and Landsat 8 OLI images covering the Segara Anakan lagoon were selected to represent multi-temporal data. The inputs for image mining are visible and near-infrared bands, PCA bands, inverse PCA bands, mean shift segmentation bands, a bare soil index, a vegetation index, a wetness index, elevation from SRTM and ASTER GDEM, and GLCM (Haralick) or variability texture. Three methods were applied to extract coastal wetlands using image mining: pixel-based Decision Tree C4.5, pixel-based Back Propagation Neural Network, and object-based Mean Shift segmentation with Decision Tree C4.5. The results show that remote sensing image mining can be used to map coastal wetland ecosystems. Decision Tree C4.5 produced the most accurate maps (0.75 overall kappa). The availability of remote sensing image mining for mapping coastal wetlands is very important for providing a better understanding of their spatiotemporal dynamics and distribution.
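
    The pixel-based decision-tree branch of this workflow can be sketched as follows. scikit-learn's CART classifier stands in for C4.5 (a related but not identical algorithm), and all band values, indices and labels below are randomly generated placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
red, nir = rng.random(1000), rng.random(1000)
ndvi = (nir - red) / (nir + red + 1e-9)          # vegetation index
elev = rng.normal(5, 2, 1000)                    # stand-in DEM elevation
X = np.column_stack([red, nir, ndvi, elev])      # per-pixel feature stack
y = rng.integers(0, 4, 1000)                     # 4 wetland cover classes

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=8).fit(Xtr, ytr)
print("kappa:", cohen_kappa_score(yte, tree.predict(Xte)).round(2))
```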

  16. Topic Modelling for Object-Based Classification of Vhr Satellite Images Based on Multiscale Segmentations

    NASA Astrophysics Data System (ADS)

    Shen, Li; Wu, Linmei; Li, Zhipeng

    2016-06-01

    Multiscale segmentation is a key prerequisite step for object-based classification methods. However, it is often not possible to determine a sole optimal scale for the image to be classified, because in many cases different geo-objects, and even an identical geo-object, may appear at different scales in one image. In this paper, an object-based classification method based on multiscale segmentation results in the framework of topic modelling is proposed to classify VHR satellite images in an entirely unsupervised fashion. In the stage of topic modelling, grayscale histogram distributions for each geo-object class and each segment are learned in an unsupervised manner from multiscale segments. In the stage of classification, each segment is allocated a geo-object class label by similarity comparison between the grayscale histogram distributions of each segment and each geo-object class. Experimental results show that the proposed method can perform better than traditional methods based on topic modelling.

  17. Preliminary Results of Earthquake-Induced Building Damage Detection with Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Sabuncu, A.; Uca Avci, Z. D.; Sunar, F.

    2016-06-01

    Earthquakes are the most destructive natural disasters, resulting in massive loss of life, infrastructure damage and financial losses. Earthquake-induced building damage detection is a very important step after an earthquake, since building damage is one of the most critical threats to cities and countries in terms of the extent of damage, the rate of collapsed buildings, the damage grade near the epicenter, and the building damage types across all constructions. The Van-Ercis (Turkey) earthquake (Mw = 7.1) occurred on October 23rd, 2011, at 10:41 UTC (13:41 local time), centered at 38.75 N 43.36 E, which places the epicenter about 30 kilometers north of the city of Van. It is recorded that 604 people died and approximately 4000 buildings collapsed or were seriously damaged by the earthquake. In this study, high-resolution satellite images of Van-Ercis, acquired by Quickbird-2 (Digital Globe Inc.) after the earthquake, were used to detect debris areas using an object-based image classification. Two different land surfaces, having homogeneous and heterogeneous land covers, were selected as case study areas. As a first step of the object-based image processing, segmentation was applied with a convenient scale parameter and homogeneity criterion parameters. As a next step, condition-based classification was used. In the final step of this preliminary study, outputs were compared with street views/orthophotos for verification and evaluation of the classification accuracy.

  18. Visual field meridians modulate the reallocation of object-based attention.

    PubMed

    Barnas, Adam J; Greenberg, Adam S

    2016-10-01

    Object-based attention (OBA) enhances processing within the boundaries of a selected object. Larger OBA effects have been observed for horizontal compared to vertical rectangles, which were eliminated when controlling for attention shifts across the visual field meridians. We aimed to elucidate the modulatory role of the meridians on OBA. We hypothesized that the contralateral organization of visual cortex accounts for these differences in OBA prioritization. Participants viewed "L"-shaped objects and, following a peripheral cue at the object vertex, detected the presence of a target at the cued location (valid), or at a non-cued location (invalid) offset either horizontally or vertically. In Experiment 1, the single displayed object contained components crossing both meridians. In Experiment 2, one cued object and one non-cued object were displayed such that both crossed the meridians. In Experiment 3, one cued object was sequestered into one screen quadrant, with its vertex either near or far from fixation. Results from Experiments 1 and 2 revealed a horizontal shift advantage (faster RTs for horizontal shifts across the vertical meridian compared to vertical shifts across the horizontal meridian), regardless of whether shifts take place within a cued object (Experiment 1) or between objects (Experiment 2). Results from Experiment 3 revealed no difference between horizontal and vertical shifts for objects that were positioned far from fixation, although the horizontal shift advantage reappeared for objects near fixation. These findings suggest a critical modulatory role of visual field meridians in the efficiency of reorienting object-based attention.

  19. Vineyard parcel identification from Worldview-2 images using object-based classification model

    NASA Astrophysics Data System (ADS)

    Sertel, Elif; Yay, Irmak

    2014-01-01

    Accurate identification of spatial distribution and characteristics of vineyard parcels is an important task for the effective management of vineyard areas, precision viticulture, and farmer registries. This study aimed to develop rule sets to be used in object-based classification of Worldview-2 satellite images to accurately delineate the boundaries of vineyards having different plantation styles. Multilevel segmentation was applied to Worldview-2 images to create different sizes of image objects representing different land cover categories with respect to scale parameter. Texture analysis and several new spectral indices were applied to objects at different segmentation levels to accurately classify land cover classes of forest, cultivated areas, harvested areas, impervious, bareland, and vineyards. A specific attention was given to vineyard class to identify vine areas at the parcel level considering their different plantation styles. The results illustrated that the combined usage of a newly developed decision tree and image segmentation during the object-based classification process could provide highly accurate results for the identification of vineyard parcels. Linearly planted vineyards could be classified with 100% producer's accuracy due to their regular textural characteristics, whereas regular gridwise and irregular gridwise (distributed) vineyard parcels could be classified with 94.87% producer's accuracy in this research.

  20. Object-based and action-based visual perception in children with spina bifida and hydrocephalus.

    PubMed

    Dennis, Maureen; Fletcher, Jack M; Rogers, Tracey; Hetherington, Ross; Francis, David J

    2002-01-01

    Children with spina bifida and hydrocephalus (SBH) have long been known to have difficulties with visual perception. We studied how children with SBH perform 12 visual perception tasks requiring object identification, multistable representations of visual space, or visually guided overt actions. Four tasks required object-based processing (visual constancy illusions, face recognition, recognition of fragmented objects, line orientation). Four tasks required the representation of visual space in egocentric coordinates (stereopsis, visual figure-ground identification, perception of multistable figures, egocentric mental rotation). Four tasks required the coupling of visual space to overt movement (visual pursuit, figure drawing, visually guided route finding, visually guided route planning). Effect sizes, measuring the magnitude of the difference between SBH children and controls, were consistently larger for action-based than object-based visual perception tasks. Within action-based tasks, effect sizes were large and roughly comparable for tasks requiring the representation of visual space and for tasks requiring visually guided action. The results are discussed in terms of the physical and brain problems of children with SBH that limit their ability to build effective situation models of space.

  1. Sensor-independent approach to recognition: the object-based approach

    NASA Astrophysics Data System (ADS)

    Morrow, Jim C.; Hossain, Sqama

    1994-03-01

    This paper introduces a fundamentally different approach to recognition -- the object-based approach -- which is inherently knowledge-based and sensor-independent. The paper begins with a description of an object-based recognition system, contrasting it with the image-based approach. Next, the multilevel stage of the system, incorporating several sensor data sources, is described. From these sources, elements of the situation hypothesis are generated as directed by the recognition goal. Depending on the degree of correspondence between the sensor-fed elements and the object-model-fed elements, a hypothetical element is created. The hypothetical element is further employed to develop evidence for the sensor-fed element through the inclusion of secondary sensor outputs. The sensor-fed element is thus modeled in more detail, and further evidence is added to the hypothetical element. Several levels of reasoning and data integration are involved in this overall process; further, a self-adjusting correction mechanism is included through feedback from the hypothetical element to the sensors, thus defining secondary output connections to the sensor-fed element. Some preliminary work based on this approach has been carried out, and initial results show improvements over the conventional image-based approach.

  2. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.

    2015-12-01

    High-resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) a polygon neighbor analysis separates melt ponds from submerged ice based on their spatial relationship; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of the aerial photos, and their uncertainties are estimated.
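
    Step (2) of the algorithm, the random forest classifier over object features, might look like the following sketch; the six per-object features and the labels are invented stand-ins, not CHINARE data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 6))   # e.g. mean RGB, brightness, GLCM texture
y = rng.integers(0, 4, 400)     # 0=water, 1=GSI, 2=shadow, 3=ice/snow

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
# Per-class precision here corresponds to user's accuracy, and
# per-class recall to producer's accuracy, as reported in the abstract.
print(classification_report(yte, rf.predict(Xte),
                            target_names=["water", "GSI", "shadow", "ice/snow"]))
```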

  3. Automated spine and vertebrae detection in CT images using object-based image analysis.

    PubMed

    Schwier, M; Chitiboi, T; Hülnhagen, T; Hahn, H K

    2013-09-01

    Although computer assistance has become common in medical practice, some of the most challenging tasks that remain unsolved are in the area of automatic detection and recognition. The human visual perception is in general far superior to computer vision algorithms. Object-based image analysis is a relatively new approach that aims to lift image analysis from a pixel-based processing to a semantic region-based processing of images. It allows effective integration of reasoning processes and contextual concepts into the recognition method. In this paper, we present an approach that applies object-based image analysis to the task of detecting the spine in computed tomography images. A spine detection would be of great benefit in several contexts, from the automatic labeling of vertebrae to the assessment of spinal pathologies. We show with our approach how region-based features, contextual information and domain knowledge, especially concerning the typical shape and structure of the spine and its components, can be used effectively in the analysis process. The results of our approach are promising with a detection rate for vertebral bodies of 96% and a precision of 99%. We also gain a good two-dimensional segmentation of the spine along the more central slices and a coarse three-dimensional segmentation.

  4. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting-steps scheme maps integers to integers, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region-of-interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performances. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
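
    The integer-to-integer property of the lifting scheme can be demonstrated with the reversible LeGall 5/3 wavelet, a common choice in lossless coders (the paper does not specify its filters, so this choice is an assumption). One 1-D level with periodic boundary handling:

```python
import numpy as np

def lift_53_forward(x: np.ndarray):
    """One 1-D level of the reversible 5/3 transform (even-length input,
    periodic boundaries). Every step is an integer add/subtract."""
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= (s + np.roll(s, -1)) // 2            # predict: detail coefficients
    s += (d + np.roll(d, 1) + 2) // 4         # update: approximation
    return s, d

def lift_53_inverse(s: np.ndarray, d: np.ndarray):
    s = s - (d + np.roll(d, 1) + 2) // 4      # undo update
    d = d + (s + np.roll(s, -1)) // 2         # undo predict
    x = np.empty(s.size + d.size, dtype=s.dtype)
    x[0::2], x[1::2] = s, d
    return x

x = np.arange(16, dtype=np.int64) ** 2
s, d = lift_53_forward(x)
assert np.array_equal(lift_53_inverse(s, d), x)   # perfectly reversible
```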

  5. Object based change detection of Central Asian Tugai vegetation with very high spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Gärtner, Philipp; Förster, Michael; Kurban, Alishir; Kleinschmit, Birgit

    2014-09-01

    Ecological restoration of degraded riparian Tugai forests in north-western China is a key driver to combat desertification in this region. Recent restoration efforts attempt to recover the forest along with its most dominant tree species, Populus euphratica. The present research observed the response of natural vegetation using an object-based change detection method on QuickBird (2005) and WorldView-2 (2011) data. We applied the region growing approach to derived Normalized Difference Vegetation Index (NDVI) values in order to identify single P. euphratica trees, delineate tree crown areas and quantify crown diameter changes. Results were compared to 59 reference trees. The findings confirmed positive tree crown growth and suggest a crown diameter increase of 1.14 m on average. On a single-tree basis, the diameters of larger crowns were generally underestimated. Small crowns were slightly underestimated in QuickBird and overestimated in WorldView-2 images. The results of the automated tree crown delineation show a moderate relation to field reference data, with R² = 0.36 for 2005 and R² = 0.48 for 2011. The object-based image analysis (OBIA) method proved to be applicable in sparse riparian Tugai forests and showed great suitability for evaluating ecological restoration efforts in an endangered ecosystem.
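
    The NDVI region-growing idea can be sketched with scikit-image's flood fill, whose tolerance plays the role of a region-growing threshold. The image below is synthetic; the seed choice and tolerance value are assumptions made for illustration, not the authors' settings.

```python
import numpy as np
from skimage.segmentation import flood

rng = np.random.default_rng(3)
red = rng.random((100, 100)) * 0.2
nir = rng.random((100, 100)) * 0.2
nir[40:60, 40:60] += 0.6                     # a synthetic "tree crown"
ndvi = (nir - red) / (nir + red + 1e-9)

# Seed at the local NDVI maximum and grow while NDVI stays close to it.
seed = tuple(int(i) for i in np.unravel_index(np.argmax(ndvi), ndvi.shape))
crown = flood(ndvi, seed, tolerance=0.25)

area_px = int(crown.sum())
diameter_px = 2 * np.sqrt(area_px / np.pi)   # equivalent crown diameter
print(f"crown area: {area_px} px, equivalent diameter: {diameter_px:.1f} px")
```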

  6. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts with the definition of an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
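
    The initial-partition step can be reproduced with scikit-image's SLIC implementation; a minimal sketch follows, using a library sample image in place of a mosaicked panorama and parameter values chosen only for illustration.

```python
from skimage.data import astronaut
from skimage.segmentation import slic

mosaic = astronaut()                         # placeholder for the panorama
# SLIC over-segmentation yields the initial regions that seed the BPT.
superpixels = slic(mosaic, n_segments=500, compactness=10, start_label=0)
print(superpixels.max() + 1, "initial regions for the BPT")
```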

  7. An operational framework for object-based land use classification of heterogeneous rural landscapes

    NASA Astrophysics Data System (ADS)

    Watmough, Gary R.; Palm, Cheryl A.; Sullivan, Clare

    2017-02-01

    The characteristics of very high resolution (VHR) satellite data are encouraging development agencies to investigate its use in monitoring and evaluation programmes. VHR data pose challenges for land use classification of heterogeneous rural landscapes, as it is not possible to develop generalised and transferable land use classification definitions and algorithms. We present an operational framework for classifying VHR satellite data in heterogeneous rural landscapes using an object-based approach and a random forest classifier. The framework overcomes the challenges of classifying VHR data in anthropogenic landscapes by using an image stack of RGB-NIR, Normalised Difference Vegetation Index (NDVI) and textural bands in a two-phase object-based classification. The framework can be applied to data acquired by different sensors, with different view and illumination geometries, at different times of the year. Even with these complex input data the framework can produce classification results that are comparable across time. Here we describe the framework and present an example of its application using data from the QuickBird (2 images) and GeoEye (1 image) sensors.

  8. Joint source-channel coding for wireless object-based video communications utilizing data hiding.

    PubMed

    Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K

    2006-08-01

    In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection.
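
    The Lagrangian relaxation at the heart of such schemes picks, for each coding unit, the mode minimizing J = D + λR, then adjusts λ until the total rate meets the budget. A toy sketch follows (invented distortion/rate tables, and a coarse λ sweep instead of the bisection typically used in practice):

```python
# Each coding unit has several (distortion, rate) mode options.
units = [
    [(10.0, 1.0), (4.0, 3.0), (1.0, 8.0)],
    [(20.0, 2.0), (9.0, 4.0), (2.0, 9.0)],
]

def allocate(units, lam):
    """Pick, per unit, the mode minimizing J = D + lam * R."""
    choices = [min(modes, key=lambda dr: dr[0] + lam * dr[1]) for modes in units]
    total_d = sum(d for d, _ in choices)
    total_r = sum(r for _, r in choices)
    return choices, total_d, total_r

# Larger lambda penalizes rate harder, trading distortion for bits.
for lam in (0.1, 1.0, 5.0):
    _, D, R = allocate(units, lam)
    print(f"lambda={lam:4.1f} -> total D={D:5.1f}, total R={R:4.1f}")
```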

  9. Hierarchical Object-based Image Analysis approach for classification of sub-meter multispectral imagery in Tanzania

    NASA Astrophysics Data System (ADS)

    Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.

    2015-12-01

    Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches to the use of multi-scale and multi-source remote sensing data, as well as ground-based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High-resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground-based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object-Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania this is an important concern, and the traditional pixel-based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5-meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using the multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.

  10. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secured redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  11. Computer-Aided Software Engineering - An approach to real-time software development

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  12. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity is required prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the two methods showed good correlations between mean values of IEQ number (r2 = 0.91) and total islet number (r2 = 0.88), which increased to r2 = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
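
    The IEQ metric used above normalizes islet volume to that of a 150 µm diameter islet. A toy sketch of the continuous volume-ratio conversion follows; clinical counting commonly uses 50 µm diameter bins rather than this exact formula, so treat it as illustrative only:

        # Hedged sketch: islet-equivalent (IEQ) tally from measured diameters.
        # Continuous conversion (d / 150 um)**3 is an assumption; binned
        # conversion factors are often used in practice instead.
        def ieq_count(diameters_um):
            return sum((d / 150.0) ** 3 for d in diameters_um)

        print(ieq_count([60, 120, 150, 220]))  # a 150-um islet contributes exactly 1 IEQ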

  13. The EOSDIS software challenge

    NASA Astrophysics Data System (ADS)

    Jaworski, Allan

    1993-08-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds that of any previous scientific data system in its breadth and its operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many gigaflops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optics communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems which have several orders of magnitude greater complexity and performance than those built in the 1960's, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capabilities to achieve economically the types of highly interoperable, responsive, integrated, and productive environments which are needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering

  14. Using object-based geomorphometry for hydro-geomorphological analysis in a Mediterranean research catchment

    NASA Astrophysics Data System (ADS)

    Guida, Domenico; Cuomo, Albina; Palmieri, Vincenzo

    2016-08-01

    The aim of the paper is to apply an object-based geomorphometric procedure to define the runoff contribution areas and support a hydro-geomorphological analysis of a 3 km2 Mediterranean research catchment (southern Italy). Daily and sub-hourly discharge and electrical conductivity data were collected and recorded during a 3-year monitoring activity. Hydro-chemograph analyses carried out on these data revealed a strongly seasonal hydrological response in the catchment, with stormflow events that differ between wet and dry periods. This analysis enabled us to define the hydro-chemograph signatures related to increasing flood magnitude, which progressively involves various runoff components (baseflow, subsurface flow and surficial flow) and an increasing contributing area to discharge. Field surveys and water table/discharge measurements carried out during a selected storm event enabled us to identify and map specific runoff source areas with homogeneous geomorphological units previously defined as hydro-geomorphotypes (spring points, diffuse seepage along the main channel, seepage along the riparian corridors, diffuse outflow from hillslope taluses and concentrated sapping from colluvial hollows). Following the procedures previously proposed and used by the authors for object-based geomorphological mapping, a hydro-geomorphologically oriented segmentation and classification was performed with the eCognition (Trimble, Inc.) package. The best agreement with the expert-based geomorphological mapping was obtained with weighted plan curvature at different-sized windows. By combining the hydro-chemical analysis and the object-based hydro-geomorphotype map, the variability of the contribution areas was graphically modeled for the selected event, which occurred during the wet season, by using the log values of flow accumulation that better fit the contribution areas. The results allow us to identify the runoff component on hydro-chemographs for each time step
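
    A loose illustration of the final mapping step, assuming a flow accumulation raster is already available: contribution areas are obtained by thresholding log-transformed flow accumulation. The grid, cell size and threshold below are invented, not the study's:

        # Hedged sketch: contribution area from log10(flow accumulation).
        import numpy as np

        flow_acc = np.random.default_rng(1).integers(1, 10_000, size=(200, 200))
        cell_area_m2 = 5.0 * 5.0                 # assumed 5 m raster cell
        contributing = np.log10(flow_acc) > 2.5  # hypothetical event threshold
        print("contribution area (m2):", contributing.sum() * cell_area_m2)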

  15. Software Epistemology

    DTIC Science & Technology

    2016-03-01

    corpuses at scale using deep neural networks, i.e., Deep Machine Learning, on high quality features computed from canonical representations of...the application of Deep Learning on software features to support automated vulnerability identification and repair. 1.2 Overview Draper's...referenced in Table 2. Several web-based tools were maintained to show cluster processing status. Figure 10 shows a snapshot of the build inventory

  16. Know Your Software Options.

    ERIC Educational Resources Information Center

    Moxley, Doug

    1986-01-01

    Advice on how to find the best software for institutional needs is presented. Purchasing prewritten software, acquiring custom-written software, and improving ready-made software are discussed. Questions to ask before buying software are provided. (MLW)

  17. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  18. Survey of Object-Based Data Reduction Techniques in Observational Astronomy

    NASA Astrophysics Data System (ADS)

    Łukasik, Szymon; Moitinho, André; Kowalski, Piotr A.; Falcão, António; Ribeiro, Rita A.; Kulczycki, Piotr

    2016-01-01

    Dealing with astronomical observations represents one of the most challenging areas of big data analytics. Besides the huge variety of data types and the dynamics of continuous data flow from multiple sources, handling enormous volumes of data is essential. This paper provides an overview of methods aimed at reducing both the number of features/attributes and the number of data instances. It concentrates on data mining approaches not tied to specific instruments and observation tools, working instead on processed object-based data. The main goal of this article is to describe existing datasets on which algorithms are frequently tested, to characterize and classify available data reduction algorithms, and to identify promising solutions capable of addressing present and future challenges in astronomy.
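
    The two reduction directions surveyed here can be illustrated with a short sketch: fewer features via PCA and fewer instances via random subsampling. The catalog data are synthetic and the choices (10 components, 10% sample) are arbitrary:

        # Hedged sketch: feature reduction and instance reduction side by side.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        catalog = rng.random((100_000, 40))   # stand-in for an object-based catalog

        fewer_features = PCA(n_components=10).fit_transform(catalog)
        keep = rng.choice(len(catalog), size=10_000, replace=False)
        fewer_instances = catalog[keep]
        print(fewer_features.shape, fewer_instances.shape)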

  19. Object formation in visual working memory: Evidence from object-based attention.

    PubMed

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception.

  20. Reprint of: Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-11-01

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation.

  1. Evaluation of an object-based data model implemented over a proprietary, legacy data model.

    PubMed Central

    Pollard, D. L.; Hales, J. W.

    1995-01-01

    Most computerized medical information today is contained in legacy systems. As vendors slowly move to open systems, legacy systems remain in use and contain valuable information. This paper evaluates the use of an object model imposed on an existing database to improve the ease with which data can be accessed. This study demonstrates that data elements can be retrieved without specific programming knowledge of the underlying data structure. It also suggests that underlying data structures can be changed without updating application code. Programs written using the object model were easier to program but ran more than an order of magnitude slower than traditionally coded programs. In this paper, the legacy information system is introduced, the methods used to implement and evaluate the object-based data model are explained, and the results and conclusions are presented. PMID:8563303
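
    The core idea evaluated here, an object layer that hides the underlying schema from application code, can be sketched in a few lines; the table, column and class names below are invented for illustration and do not reflect the paper's actual implementation:

        # Hedged sketch: an object facade over a legacy relational table, so
        # callers read record.attribute without knowing the schema.
        import sqlite3

        class Record:
            def __init__(self, conn, table, key):
                self._conn, self._table, self._key = conn, table, key
            def __getattr__(self, name):
                row = self._conn.execute(
                    f"SELECT {name} FROM {self._table} WHERE id = ?", (self._key,)
                ).fetchone()
                return row[0]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE patients (id INTEGER, surname TEXT)")
        conn.execute("INSERT INTO patients VALUES (1, 'Doe')")
        print(Record(conn, "patients", 1).surname)  # schema hidden behind the object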

  2. Building and Programming a Smart Robotic System for Distinguishing Objects Based on their Shape and Colour

    NASA Astrophysics Data System (ADS)

    Sharari, T. M.

    2015-03-01

    This paper presents a robotic system designed for holding and placing objects based on their colour and shape. The presented robot is given a complete set of instructions of positions and orientation angles for each manipulation motion. The main feature of this paper is that the developed robot uses a combination of vision and motion systems for holding and placing the work-objects, mounted on the flat work-plane, based on their shapes and colours. This combination improves the flexibility of manipulation, which may help eliminate the use of some expensive manipulation tasks in a variety of industrial applications. The robotic system presented in this paper is designed as an educational robot that possesses the ability for holding-and-placing operations with limited load. To process the various instructions for holding and placing the work objects, a main control unit, the Manipulation Control Unit (MCU), is used along with a slave unit that performs the actual instructions from the MCU.

  3. 3D-modeling of deformed halite hopper crystals by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-12-01

    Object Based Image Analysis (OBIA) is an established method for analyzing multiscale and multidimensional imagery in a range of disciplines. In the present study this method was used for the 3D reconstruction of halite hopper crystals in a mudrock sample, based on Computed Tomography data. To quantitatively assess the reliability of the OBIA results, they were benchmarked against a corresponding "gold standard", a reference 3D model of the halite crystals that was derived by manual expert digitization of the CT images. For accuracy assessment, classical per-scene statistics were extended to per-object statistics. The strength of OBIA was its ability to recognize all objects similar to halite hopper crystals and, in particular, to eliminate cracks. Using a support vector machine (SVM) classifier on top of OBIA, unsuitable objects like halite crystal clusters, polyhalite-coated crystals and spherical halite crystals were effectively dismissed, but simultaneously the number of well-shaped halites was reduced.
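
    The post-OBIA filtering step can be sketched as follows, assuming per-object shape descriptors have already been exported; the features, labels and kernel choice are stand-ins, not the study's:

        # Hedged sketch: SVM filter that keeps well-shaped candidate objects
        # and dismisses clusters, coated or spherical crystals.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        shape_feats = rng.random((300, 4))        # e.g., volume, sphericity, ...
        is_hopper = rng.integers(0, 2, size=300)  # 1 = well-shaped hopper crystal

        svm = SVC(kernel="rbf").fit(shape_feats, is_hopper)
        kept = shape_feats[svm.predict(shape_feats) == 1]
        print(len(kept), "candidate hopper crystals kept")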

  4. A bio-inspired method and system for visual object-based attention and segmentation

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in rank order of saliency, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.

  5. Top-down facilitation of visual object recognition: object-based and context-based contributions.

    PubMed

    Fenske, Mark J; Aminoff, Elissa; Gronau, Nurit; Bar, Moshe

    2006-01-01

    The neural mechanisms subserving visual recognition are traditionally described in terms of bottom-up analysis, whereby increasingly complex aspects of the visual input are processed along a hierarchical progression of cortical regions. However, the importance of top-down facilitation in successful recognition has been emphasized in recent models and research findings. Here we consider evidence for top-down facilitation of recognition that is triggered by early information about an object, as well as by contextual associations between an object and other objects with which it typically appears. The object-based mechanism is proposed to trigger top-down facilitation of visual recognition rapidly, using a partially analyzed version of the input image (i.e., a blurred image) that is projected from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC information that is back-projected as "initial guesses" to the temporal cortex where it presensitizes the most likely interpretations of the input object. In addition to this object-based facilitation, a context-based mechanism is proposed to trigger top-down facilitation through contextual associations between objects in scenes. These contextual associations activate predictive information about which objects are likely to appear together, and can influence the "initial guesses" about an object's identity. We have shown that contextual associations are analyzed by a network that includes the parahippocampal cortex and the retrosplenial complex. The integrated proposal described here is that object- and context-based top-down influences operate together, promoting efficient recognition by framing early information about an object within the constraints provided by a lifetime of experience with contextual associations.

  6. A novel fractal monocular and stereo video codec with object-based functionality

    NASA Astrophysics Data System (ADS)

    Zhu, Shiping; Li, Liyun; Wang, Zaikuo

    2012-12-01

    Based on the classical fractal video compression method, an improved monocular fractal compression method is proposed, which includes using a more effective macroblock partition scheme instead of the classical quadtree partition scheme, using improved fast motion estimation to increase calculation speed, and using an I-frame scheme similar to that of H.264. The monocular codec uses the motion compensated prediction (MCP) structure. A stereo fractal video coding method is also proposed which matches each macroblock against two reference frames in the left and right views, increasing the compression ratio and reducing the bit rate/bandwidth needed to transmit compressed video data. The stereo codec combines MCP and disparity compensated prediction. In addition, a new method of object-based fractal video coding is proposed in which each object can be encoded and decoded independently, with a higher compression ratio and speed and less bit rate/bandwidth when transmitting compressed stereo video data. Experimental results indicate that the proposed monocular method can raise the compression ratio 3.6 to 7.5 times, speed up compression 5.3 to 22.3 times, and improve image quality by 3.81 to 9.24 dB in comparison with circular prediction mapping and non-contractive interframe mapping. The PSNR of the proposed stereo video coding is about 0.17 dB higher than that of the proposed monocular video coding, and 0.69 dB higher than that of JMVC 4.0 on average. Compared with the bit rates produced by the proposed monocular video coding and JMVC 4.0, the proposed stereo video coding achieves, on average, 2.53 and 21.14 Kbps bit rate savings, respectively. The proposed object-based fractal monocular and stereo video coding methods are simple and effective, and they make the applications of fractal monocular and stereo video coding more flexible and practicable.

  7. Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.

    2012-10-01

    The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.

  8. Object-based class modelling for multi-scale riparian forest habitat mapping

    NASA Astrophysics Data System (ADS)

    Strasser, Thomas; Lang, Stefan

    2015-05-01

    Object-based class modelling allows for mapping complex, hierarchical habitat systems. The riparian zone, including its forests, represents such a complex ecosystem. Forests within riparian zones are biologically highly productive and characterized by rich biodiversity; they are thus considered of high community interest, with an imperative to be protected and regularly monitored. Satellite earth observation (EO) provides tools for capturing the current state of forest habitats, such as forest composition including the intermixture of non-native tree species. Here we present a semi-automated object-based image analysis (OBIA) approach for the mapping of riparian forests by applying class modelling of habitats based on the European Nature Information System (EUNIS) habitat classifications and the European Habitats Directive (HabDir) Annex 1. A very high resolution (VHR) WorldView-2 satellite image provided the required spatial and spectral details for a multi-scale image segmentation and rule-base composition to generate a six-level hierarchical representation of riparian forest habitats. Habitats were thereby hierarchically represented within an image object hierarchy as forest stands, stands of homogeneous tree species and single trees represented by sunlit tree crowns. 522 EUNIS level 3 (EUNIS-3) habitat patches with a mean patch size (MPS) of 12,349.64 m2 were modelled from 938 forest stand patches (MPS = 6868.20 m2) and 43,742 tree stand patches (MPS = 140.79 m2). The delineation quality of the modelled EUNIS-3 habitats (focal level) was quantitatively assessed against an expert-based visual interpretation, showing a mean deviation of 11.71%.

  9. Software Prototyping

    PubMed Central

    Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R.

    2016-01-01

    Summary Background Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Objective To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Methods Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Results Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) the time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system. PMID:27081404
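
    Since the prototype was scored with the SUS, here is a minimal sketch of the standard SUS scoring rule (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is scaled by 2.5); the example responses are invented:

        # Minimal sketch of System Usability Scale scoring (0-100 scale).
        def sus_score(responses):  # ten 1-5 Likert responses, items 1..10
            assert len(responses) == 10
            total = sum(r - 1 if i % 2 == 0 else 5 - r
                        for i, r in enumerate(responses))
            return total * 2.5

        print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # example respondent: 85.0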

  10. Software Surrogate

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In 1994, Blackboard Technology received a NASA Phase I SBIR award entitled "A Blackboard-Based Framework for Mixed-Initiative, Crewed- Space-System Applications." This research continued in Phase II at JSC, where a generic architecture was developed in which a software surrogate serves as the operator's representative in the fast-paced realm of nearly autonomous, intelligent systems. This SBIR research effort addressed the need to support human-operator monitoring and intervention with intelligent systems such as those being developed for NASA's crewed space program.

  11. Analysis Software

    NASA Technical Reports Server (NTRS)

    1994-01-01

    General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.

  12. Simulation Software

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Various NASA Small Business Innovation Research grants from Marshall Space Flight Center, Langley Research Center and Ames Research Center were used to develop the 'kernel' of COMCO's modeling and simulation software, the PHLEX finite element code. NASA needed it to model designs of flight vehicles; one of many customized commercial applications is UNISIM, a PHLEX-based code for analyzing underground flows in oil reservoirs for Texaco, Inc. COMCO's products simulate a computational mechanics problem, estimate the solution's error and produce the optimal hp-adapted mesh for the accuracy the user chooses. The system is also used as a research or training tool in universities and in mechanical design in industrial corporations.

  13. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    NASA Astrophysics Data System (ADS)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of the spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points, and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  14. Object-based classification as an alternative approach to the traditional pixel-based classification to identify potential habitat of the grasshopper sparrow.

    PubMed

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of the spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points, and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  15. Quantification of Impervious Surfaces Along the Wasatch Front, Utah: AN Object-Based Image Analysis Approach to Identifying AN Indicator for Wetland Stress

    NASA Astrophysics Data System (ADS)

    Leydsman-McGinty, E. I.; Ramsey, R. D.; McGinty, C.

    2013-12-01

    The Remote Sensing/GIS Laboratory at Utah State University, in cooperation with the United States Environmental Protection Agency, is quantifying impervious surfaces for three watershed sub-basins in Utah. The primary objective of developing watershed-scale quantifications of impervious surfaces is to provide an indicator of potential impacts to wetlands that occur within the Wasatch Front and along the Great Salt Lake. A geospatial layer of impervious surfaces can assist state agencies involved with Utah's Wetlands Program Plan (WPP) in understanding the impacts of impervious surfaces on wetlands, as well as support them in carrying out goals and actions identified in the WPP. The three watershed sub-basins, Lower Bear-Malad, Lower Weber, and Jordan, span the highly urbanized Wasatch Front and are consistent with focal areas in need of wetland monitoring and assessment as identified in Utah's WPP. Geospatial layers of impervious surface currently exist in the form of national and regional land cover datasets; however, these datasets are too coarse to be utilized in fine-scale analyses. In addition, the pixel-based image processing techniques used to develop these coarse datasets have proven insufficient in smaller scale or detailed studies, particularly when applied to high-resolution satellite imagery or aerial photography. Therefore, object-based image analysis techniques are being implemented to develop the geospatial layer of impervious surfaces. Object-based image analysis techniques employ a combination of both geospatial and image processing methods to extract meaningful information from high-resolution imagery. Spectral, spatial, textural, and contextual information is used to group pixels into image objects and then subsequently used to develop rule sets for image classification. eCognition, an object-based image analysis software program, is being utilized in conjunction with one-meter resolution National Agriculture Imagery Program (NAIP) aerial

  16. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of the GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single-pixels to groups-of-pixels called 'objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: Hierarchical Segmentation and Supervised Classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined-object-map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and

  17. Object Based Image Analysis Combining High Spatial Resolution Imagery and Laser Point Clouds for Urban Land Cover

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    With the rapid development of sensor technology, high spatial resolution imagery and airborne Lidar point clouds can now be captured, making the classification, extraction, evaluation and analysis of a broad range of object features available. High resolution imagery, Lidar datasets and parcel maps can be widely used as information carriers for classification. Refinement of object classification is therefore made possible for urban land cover. The paper presents an approach to object-based image analysis (OBIA) combining high spatial resolution imagery and airborne Lidar point clouds. The advanced workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and the nDSM, respectively. Secondly, image objects are created via multi-resolution image segmentation, integrating the scale parameter and the colour and shape properties with a compactness criterion. The image can thus be subdivided into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a rule set in the form of a knowledge decision tree. The image objects are classified into six classes: water bodies, low vegetation/grass, trees, low buildings, high buildings and roads. Finally, in order to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area focuses on test site Vaihingen/Enz and a patch of test datasets from the benchmark of the ISPRS WG III/4 test project. The classification results show high overall accuracy for most types of urban land cover. Overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high
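
    The accuracy-assessment step described above reduces to a confusion matrix calculation; a short sketch with an invented three-class matrix (not the paper's six classes or values) follows:

        # Hedged sketch: overall accuracy and Cohen's kappa from a confusion matrix.
        import numpy as np

        cm = np.array([[50,  2,  1],   # rows: reference classes
                       [ 3, 45,  4],   # cols: classified classes
                       [ 2,  3, 40]])
        n = cm.sum()
        overall = np.trace(cm) / n
        expected = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement
        kappa = (overall - expected) / (1 - expected)
        print(f"overall={overall:.3f} kappa={kappa:.3f}")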

  18. CLARAty Functional-Layer Software

    NASA Technical Reports Server (NTRS)

    Nesnas, Issa; Volpe, Richard; Das, Hari; Mutz, Darren; Petras, Richard; Estlin, Tara

    2003-01-01

    Functional-layer software for the Coupled Layer Architecture for Robotics Autonomy (CLARAty) is being developed. [CLARAty was described in Coupled-Layer Architecture for Advanced Software for Robots (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: CLARAty was proposed to improve the modularity of robotic software while tightening the coupling between planning/execution and control subsystems. Whereas prior robotic software architectures have typically contained three levels, the CLARAty architecture contains two layers: a decision layer and a functional layer.] Just as an operating system provides abstraction from computational hardware, the CLARAty functional-layer software provides abstraction for the different robotic systems. The functional-layer software establishes interrelated, object-oriented hierarchies that contain active and passive objects representing the different levels of system abstractions and components. The functional-layer software is decomposed into a set of reusable core components and a set of extended components that adapt the reusable set to specific hardware implementations. The reusable components (a) provide behavior and interface definitions and implementations of basic functionality, (b) provide local executive capabilities, (c) manage local resources, and (d) support state and resource queries by the decision layer. Software for robotic systems can be built by use of these components.
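
    The reusable-core/extended-component split can be sketched with a small class hierarchy; the class and method names below are illustrative only and are not CLARAty's actual API:

        # Hedged sketch: a generic interface abstracts robot-specific drivers,
        # so the decision layer codes against the base class only.
        from abc import ABC, abstractmethod

        class LocomotorBase(ABC):              # reusable core component
            @abstractmethod
            def drive(self, speed_mps: float) -> None: ...

        class ExampleRoverLocomotor(LocomotorBase):  # hardware-specific adaptation
            def drive(self, speed_mps: float) -> None:
                print(f"commanding wheel controllers at {speed_mps} m/s")

        def decision_layer(motor: LocomotorBase):
            motor.drive(0.25)                  # sees only the abstraction

        decision_layer(ExampleRoverLocomotor())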

  19. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  20. Quantification and Analysis of Icebergs in a Tidewater Glacier Fjord Using an Object-Based Approach.

    PubMed

    McNabb, Robert W; Womble, Jamie N; Prakash, Anupma; Gens, Rudiger; Haselwimmer, Christian E

    2016-01-01

    Tidewater glaciers are glaciers that terminate in, and calve icebergs into, the ocean. In addition to the influence that tidewater glaciers have on physical and chemical oceanography, floating icebergs serve as habitat for marine animals such as harbor seals (Phoca vitulina richardii). The availability and spatial distribution of glacier ice in the fjords is likely a key environmental variable that influences the abundance and distribution of selected marine mammals; however, the amount of ice and the fine-scale characteristics of ice in fjords have not been systematically quantified. Given the predicted changes in glacier habitat, there is a need for the development of methods that could be broadly applied to quantify changes in available ice habitat in tidewater glacier fjords. We present a case study to describe a novel method that uses object-based image analysis (OBIA) to classify floating glacier ice in a tidewater glacier fjord from high-resolution aerial digital imagery. Our objectives were to (i) develop workflows and rule sets to classify high spatial resolution airborne imagery of floating glacier ice; (ii) quantify the amount and fine-scale characteristics of floating glacier ice; (iii) and develop processes for automating the object-based analysis of floating glacier ice for a large number of images from a representative survey day during June 2007 in Johns Hopkins Inlet (JHI), a tidewater glacier fjord in Glacier Bay National Park, southeastern Alaska. On 18 June 2007, JHI was comprised of brash ice (x¯ = 45.2%, SD = 41.5%), water (x¯ = 52.7%, SD = 42.3%), and icebergs (x¯ = 2.1%, SD = 1.4%). Average iceberg size per scene was 5.7 m2 (SD = 2.6 m2). We estimate the total area (± uncertainty) of iceberg habitat in the fjord to be 455,400 ± 123,000 m2. The method works well for classifying icebergs across scenes (classification accuracy of 75.6%); the largest classification errors occur in areas with

  1. Quantification and Analysis of Icebergs in a Tidewater Glacier Fjord Using an Object-Based Approach

    PubMed Central

    McNabb, Robert W.; Womble, Jamie N.; Prakash, Anupma; Gens, Rudiger; Haselwimmer, Christian E.

    2016-01-01

    Tidewater glaciers are glaciers that terminate in, and calve icebergs into, the ocean. In addition to the influence that tidewater glaciers have on physical and chemical oceanography, floating icebergs serve as habitat for marine animals such as harbor seals (Phoca vitulina richardii). The availability and spatial distribution of glacier ice in the fjords is likely a key environmental variable that influences the abundance and distribution of selected marine mammals; however, the amount of ice and the fine-scale characteristics of ice in fjords have not been systematically quantified. Given the predicted changes in glacier habitat, there is a need for the development of methods that could be broadly applied to quantify changes in available ice habitat in tidewater glacier fjords. We present a case study to describe a novel method that uses object-based image analysis (OBIA) to classify floating glacier ice in a tidewater glacier fjord from high-resolution aerial digital imagery. Our objectives were to (i) develop workflows and rule sets to classify high spatial resolution airborne imagery of floating glacier ice; (ii) quantify the amount and fine-scale characteristics of floating glacier ice; (iii) and develop processes for automating the object-based analysis of floating glacier ice for a large number of images from a representative survey day during June 2007 in Johns Hopkins Inlet (JHI), a tidewater glacier fjord in Glacier Bay National Park, southeastern Alaska. On 18 June 2007, JHI was comprised of brash ice (x¯ = 45.2%, SD = 41.5%), water (x¯ = 52.7%, SD = 42.3%), and icebergs (x¯ = 2.1%, SD = 1.4%). Average iceberg size per scene was 5.7 m2 (SD = 2.6 m2). We estimate the total area (± uncertainty) of iceberg habitat in the fjord to be 455,400 ± 123,000 m2. The method works well for classifying icebergs across scenes (classification accuracy of 75.6%); the largest classification errors occur in areas with densely-packed ice, low contrast between

  2. Region segmentation techniques for object-based image compression: a review

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2004-10-01

    Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirements. This paper overviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region growing techniques. Implementational discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.
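
    Of the segmentation families reviewed, region growing is the simplest to illustrate; a toy sketch with a grey-level tolerance criterion on a synthetic image follows (real OBC systems use far richer homogeneity tests):

        # Hedged sketch: seeded region growing via BFS on a toy image.
        import numpy as np
        from collections import deque

        def region_grow(img, seed, tol=10):
            h, w = img.shape
            mask = np.zeros((h, w), bool)
            q, ref = deque([seed]), float(img[seed])
            mask[seed] = True
            while q:
                r, c = q.popleft()
                for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                    if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                       and abs(float(img[nr, nc]) - ref) <= tol:
                        mask[nr, nc] = True
                        q.append((nr, nc))
            return mask

        img = np.zeros((6, 6), np.uint8); img[2:5, 2:5] = 100
        print(region_grow(img, (3, 3)).sum(), "pixels in the grown region")  # 9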

  3. Efficient reconstruction of dielectric objects based on integral equation approach with Gauss-Newton minimization.

    PubMed

    Tong, Mei Song; Yang, Kuo; Sheng, Wei Tian; Zhu, Zhen Ying

    2013-12-01

    Reconstruction of unknown objects by microwave illumination requires efficient inversion for measured electromagnetic scattering data. In the integral equation approach for reconstructing dielectric objects based on the Born iterative method or its variations, the volume integral equations are involved because the imaging domain is fully inhomogeneous. When solving the forward scattering integral equation, the Nyström method is used because the traditional method of moments may be inconvenient due to the inhomogeneity of the imaging domain. The benefits of the Nyström method include the simple implementation without using any basis and testing functions and low requirement on geometrical discretization. When solving the inverse scattering integral equation, the Gauss-Newton minimization approach with a line search method (LSM) and multiplicative regularization method (MRM) is employed. The LSM can optimize the search of step size in each iteration, whereas the MRM may reduce the number of numerical experiments for choosing the regularization parameter. Numerical examples for reconstructing typical dielectric objects under limited observation angles are presented to illustrate the inversion approach.
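
    The outer minimization named above can be sketched in isolation; the following toy Gauss-Newton loop with a backtracking line search fits a two-parameter exponential (the electromagnetic forward solver, the Nyström discretization and the multiplicative regularization are omitted):

        # Hedged sketch: Gauss-Newton with backtracking line search on a toy
        # nonlinear least-squares problem, not the paper's EM inversion.
        import numpy as np

        def gauss_newton(residual, jacobian, x0, iters=20):
            x = x0.astype(float)
            for _ in range(iters):
                r, J = residual(x), jacobian(x)
                step = np.linalg.lstsq(J, -r, rcond=None)[0]
                t = 1.0
                while t > 1e-6 and np.linalg.norm(residual(x + t * step)) >= np.linalg.norm(r):
                    t *= 0.5               # backtrack until the misfit decreases
                x = x + t * step
            return x

        tdata = np.linspace(0, 1, 20)
        ydata = 2.0 * np.exp(-3.0 * tdata)
        res = lambda p: p[0] * np.exp(-p[1] * tdata) - ydata
        jac = lambda p: np.stack([np.exp(-p[1] * tdata),
                                  -p[0] * tdata * np.exp(-p[1] * tdata)], axis=1)
        print(gauss_newton(res, jac, np.array([1.0, 1.0])))  # approx [2, 3]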

  4. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    PubMed Central

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and to evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
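
    A minimal sketch of a histogram-matching rule of the kind compared here, using histogram intersection as the similarity measure (one plausible choice; the paper's exact matching measures may differ), on synthetic per-object pixel values:

        # Hedged sketch: classify an image object by matching its digital-number
        # histogram against per-class reference histograms.
        import numpy as np

        def hist(values, bins=32, value_range=(0, 255)):
            h, _ = np.histogram(values, bins=bins, range=value_range, density=True)
            return h / h.sum()

        def classify(obj_pixels, class_hists):
            h = hist(obj_pixels)  # histogram intersection: higher = closer
            return max(class_hists, key=lambda c: np.minimum(h, class_hists[c]).sum())

        rng = np.random.default_rng(4)
        class_hists = {"veg": hist(rng.normal(60, 10, 5000)),
                       "soil": hist(rng.normal(160, 15, 5000))}
        print(classify(rng.normal(65, 10, 400), class_hists))  # -> "veg"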

  5. Fully automated objective-based method for master recession curve separation.

    PubMed

    Posavec, Kristijan; Parlov, Jelena; Nakić, Zoran

    2010-01-01

    A fully automated, objective-based method for master recession curve (MRC) separation was developed using a Microsoft Excel spreadsheet and Visual Basic for Applications (VBA) code. The core of the program code constructs an MRC using the adapted matching strip method (Posavec et al. 2006). Criteria for separating the MRC into two or three segments are determined from the flow-duration curve and are represented as the probable range of percent of flow rate duration. Successive separations are performed automatically on two and three MRCs using sets of percent of flow rate duration from selected ranges, and the optimal separation model scenario, having the highest average coefficient of determination R2, is selected as the most appropriate one. The resulting separated master recession curves are presented graphically, whereas the statistics are presented numerically, all in separate sheets. Examples of field data obtained from two springs in Istria, Croatia, are used to illustrate its application. The freely available Excel spreadsheet and VBA program ensure ease of use and applicability to larger data sets.

  6. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    PubMed

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-09-06

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores is used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study.
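
    The most-likely-state computation named above is the classic Viterbi recursion; a compact sketch follows, with two invented states and made-up probabilities rather than the paper's trained HMM:

        # Hedged sketch: Viterbi decoding of the most likely state sequence.
        import numpy as np

        def viterbi(obs, pi, A, B):
            T, N = len(obs), len(pi)
            logd = np.log(pi) + np.log(B[:, obs[0]])
            back = np.zeros((T, N), int)
            for t in range(1, T):
                scores = logd[:, None] + np.log(A)  # prev state x next state
                back[t] = scores.argmax(0)
                logd = scores.max(0) + np.log(B[:, obs[t]])
            path = [int(logd.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        pi = np.array([0.8, 0.2])                   # stable, emergency
        A = np.array([[0.9, 0.1], [0.3, 0.7]])      # state transitions
        B = np.array([[0.7, 0.3], [0.2, 0.8]])      # P(symptom level | state)
        print(viterbi([0, 0, 1, 1, 1], pi, A, B))   # -> [0, 0, 1, 1, 1]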

  7. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    NASA Astrophysics Data System (ADS)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image, and no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is then developed based on the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of the city of Sherbrooke (Canada). An overall extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.
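
    A single fuzzy rule of the kind described can be sketched with ramp-shaped memberships combined by min (fuzzy AND); the membership shapes, thresholds and the example rule are invented, not the paper's rule base:

        # Hedged sketch: one fuzzy rule over object properties, memberships in [0, 1].
        import numpy as np

        def ramp(x, lo, hi):  # 0 below lo, 1 above hi, linear between
            return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

        def building_membership(obj):
            high_ndvi = ramp(obj["ndvi"], 0.2, 0.5)
            rectangular = ramp(obj["rect_fit"], 0.6, 0.9)
            return min(1.0 - high_ndvi, rectangular)  # "not vegetated AND rectangular"

        obj = {"ndvi": 0.15, "rect_fit": 0.85}        # hypothetical object properties
        print(f"building membership: {building_membership(obj):.2f}")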

  8. A Framework for Geographic Object-Based Image Analysis (GEOBIA) based on geographic ontology

    NASA Astrophysics Data System (ADS)

    Gu, H. Y.; Li, H. T.; Yan, L.; Lu, X. J.

    2015-06-01

    GEOBIA (Geographic Object-Based Image Analysis) is not only a hot topic of current remote sensing and geographical research; it is believed to be a new paradigm in remote sensing and GIScience. The lack of a systematic approach designed to conceptualize and formalize class definitions makes GEOBIA a highly subjective method that is difficult to reproduce. This paper aims to put forward a framework for GEOBIA based on geographic ontology theory, which could faithfully realize the chain "geographic entities - image objects - geographic objects". It consists of three steps: first, geographical entities are described by a geographic ontology; second, a semantic network model is built based on OWL (Web Ontology Language); finally, geographical objects are classified with decision rules or other classifiers. A case study of a farmland ontology was conducted to illustrate the framework. The strength of this framework is that it provides interpretation strategies and a global framework for GEOBIA with objective and universal properties, which avoids the inconsistencies caused by different experts' experience and provides an objective model for image analysis.

  9. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    PubMed Central

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023

  10. Object Based Land Use and Land Cover Classification Using Sumbandilasat Imagery in Pretoria, South Africa

    NASA Astrophysics Data System (ADS)

    Mhangara, P.; Odindi, J.

    2012-04-01

    The launch of the SumbandilaSat satellite on 17 September 2009 ushered in new possibilities for land use/cover classification. SumbandilaSat has a ground sampling resolution of 6.25 m and possesses red, red-edge and NIR spectral bands. The objective of this study was to demonstrate the feasibility of classifying urban land use and land cover in Pretoria, South Africa, using SumbandilaSat imagery and an object-based ruleset that combines spectral indices, Haralick texture and the original spectral bands. High classification accuracies were achieved for water, forest, urban built-up areas, irrigated-riparian vegetation, and grass and sparse vegetation, as shown by KIA (Kappa Index of Agreement) values higher than 0.85. Bare soil achieved a relatively low classification accuracy of 0.65. This study also showcases the value of SumbandilaSat imagery and the efficacy of object-oriented techniques in land use/cover classification, as revealed by the overall accuracy of 0.93 and KIA of 0.91.
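
    Accuracy figures such as the overall accuracy of 0.93 and KIA of 0.91 are derived from a confusion matrix as sketched below; the matrix values are illustrative, not the study's actual counts.

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """cm[i, j]: count of reference class i classified as j."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)                      # kappa (KIA)

# Illustrative 3-class confusion matrix.
cm = [[50, 2, 1],
      [3, 45, 2],
      [1, 2, 44]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")
```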

  11. Persistent spatial information in the frontal eye field during object-based short-term memory.

    PubMed

    Clark, Kelsey L; Noudoost, Behrad; Moore, Tirin

    2012-08-08

    Spatial attention is known to gate entry into visual short-term memory, and some evidence suggests that spatial signals may also play a role in binding features or protecting object representations during memory maintenance. To examine the persistence of spatial signals during object short-term memory, the activity of neurons in the frontal eye field (FEF) of macaque monkeys was recorded during an object-based delayed match-to-sample task. In this task, monkeys were trained to remember an object image over a brief delay, regardless of the locations of the sample or target presentation. FEF neurons exhibited visual, delay, and target period activity, including selectivity for sample location and target location. Delay period activity represented the sample location throughout the delay, despite the irrelevance of spatial information for successful task completion. Furthermore, neurons continued to encode sample position in a variant of the task in which the matching stimulus never appeared in their response field, confirming that FEF maintains sample location independent of subsequent behavioral relevance. FEF neurons also exhibited target-position-dependent anticipatory activity immediately before target onset, suggesting that monkeys predicted target position within blocks. These results show that FEF neurons maintain spatial information during short-term memory, even when that information is irrelevant for task performance.

  12. Temporal segmentation of video objects for hierarchical object-based motion description.

    PubMed

    Fu, Yue; Ekin, Ahmet; Tekalp, A Murat; Mehrotra, Rajiv

    2002-01-01

    This paper describes a hierarchical approach for object-based motion description of video in terms of object motions and object-to-object interactions. We present a temporal hierarchy for object motion description, which consists of low-level elementary motion units (EMU) and high-level action units (AU). Likewise, object-to-object interactions are decomposed into a hierarchy of low-level elementary reaction units (ERU) and high-level interaction units (IU). We then propose an algorithm for temporal segmentation of video objects into EMUs, whose dominant motion can be described by a single representative parametric model. The algorithm also computes a representative (dominant) affine model for each EMU. We also provide algorithms for identification of ERUs and for classification of the type of ERUs. Experimental results demonstrate that segmenting the life-span of video objects into EMUs and ERUs facilitates the generation of high-level visual summaries for fast browsing and navigation. At present, the formation of high-level action and interaction units is done interactively. We also provide a set of query-by-example results for low-level EMU retrieval from a database based on similarity of the representative dominant affine models.
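
    The representative affine model of an EMU can be estimated from tracked point correspondences by linear least squares, as in the generic sketch below (synthetic data; the paper's dominant-motion estimation details are not reproduced).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine motion model dst ~ A @ src + t.

    src, dst: (N, 2) corresponding points; returns 2x2 A and 2-vector t.
    """
    X = np.hstack([src, np.ones((len(src), 1))])          # [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (3, 2) solution
    return params[:2].T, params[2]

# Synthetic check: rotation + scale + translation recovered from noisy tracks.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
dst = src @ A_true.T + np.array([5.0, -3.0]) + rng.normal(0, 0.1, (50, 2))
A, t = fit_affine(src, dst)
print(np.round(A, 2), np.round(t, 2))
```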

  13. Object-based illumination normalization for multi-temporal satellite images in urban area

    NASA Astrophysics Data System (ADS)

    Su, Nan; Zhang, Ye; Tian, Shu; Yan, Yiming

    2016-09-01

    Acquisition of multi-temporal satellite images under different illumination conditions causes radiometric differences that strongly affect image quality during remote sensing image processing. In particular, matching satellite stereo images with a large gap between acquisition dates is very difficult for high-precision DSM generation in satellite photogrammetry. Illumination normalization is therefore a key technique for eliminating radiometric differences for image matching and other image applications. In this paper, we propose a novel object-based illumination normalization method to improve the matching of multi-temporal satellite stereo images in urban areas. The proposed method includes two main steps: 1) object extraction and 2) multi-level illumination normalization. First, we propose an object extraction method that finds the same objects across the multi-temporal satellite images while preserving their structural attributes. Then, multi-level illumination normalization is performed by combining a gradient-domain method and singular value decomposition (SVD) according to the characteristic information of the relevant objects. The method substantially improves the illumination consistency of object areas, benefiting image matching in urban areas with multiple objects. Histogram similarity and matching rate are used for quantitative evaluation of illumination consistency. Experiments were conducted on satellite images of the same urban area with different acquisition dates to verify the effectiveness of the proposed method. The experimental results demonstrate good performance compared with other methods.
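
    As a rough stand-in for the SVD component of the method, the sketch below rescales the singular values of a target object patch so its dominant luminance energy matches a reference patch. This is a simplified illustration of singular-value manipulation for illumination transfer, not the paper's multi-level algorithm.

```python
import numpy as np

def svd_illumination_match(target, reference):
    """Crude illumination transfer between two grayscale object patches.

    Rescales the singular values of `target` so its leading singular value
    (dominant luminance energy) matches that of `reference`. Simplified
    illustration only; the paper also uses a gradient-domain step.
    """
    U, s, Vt = np.linalg.svd(target, full_matrices=False)
    s_ref = np.linalg.svd(reference, compute_uv=False)
    scale = s_ref[0] / s[0]                   # match dominant energy
    return np.clip(U @ np.diag(s * scale) @ Vt, 0, 255)

# Toy patches: the same gradient ramp under two illumination levels.
ramp = np.tile(np.linspace(0, 1, 64), (64, 1))
dark, bright = 60 * ramp + 20, 160 * ramp + 60
adjusted = svd_illumination_match(dark, bright)
print(dark.mean(), bright.mean(), adjusted.mean())
```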

  14. Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.

    2016-06-01

    This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, equipped with a consumer digital camera, collects dynamic videos to overcome its limited endurance. A set of 3D point clouds is then generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information beyond the reach of UAV imaging, e.g., parts of building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm using local terrain-invariant regions is introduced for the combined registration. The experimental study was conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
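
    A minimal point-to-point ICP (nearest-neighbour correspondences plus a Kabsch rotation fit) conveys the registration core; the paper's variant additionally restricts correspondences to local terrain-invariant regions, which this sketch omits. numpy and scipy are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP aligning src (N,3) to dst (M,3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbor correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)           # Kabsch: optimal rotation
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate transform
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(1)
dst = rng.uniform(0, 10, (500, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta), np.cos(theta), 0],
               [0, 0, 1]])
src = (dst - [0.5, 0.2, 0.0]) @ Rz            # perturbed copy of dst
R, t = icp(src, dst)
print(np.round(R, 3), np.round(t, 3))
```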

  15. Object-based image analysis for the impact of sewage pollution in Malad Creek, Mumbai, India.

    PubMed

    Shirke, Shivani; Pinto, Shannon M; Kushwaha, Vikash K; Mardikar, Trupti; Vijay, Ritesh

    2016-02-01

    Today, object-based image analysis provides an option for integrating spatial information beyond conventional pixel-based classifications for high-resolution imagery. Because it has rarely been applied to pollution assessment, an attempt has been made here to assess the spatial extent of sewage pollution in Malad Creek, Mumbai, India. Based on multiresolution segmentation of an IRS P6 (LISS IV) image and the Normalized Difference Turbidity Index (NDTI), the various water quality regions in the creek were classified. The existing literature implies that the reflectance of turbid water is similar to that of bare soil, which gives positive NDTI values. In contrast, negative NDTI values are observed in the present study due to the presence of organic matter, which absorbs light and imparts turbidity; this is supported by the significant correlation between NDTI and turbidity. A strong relationship is observed between turbidity and water quality parameters, implying the impact of organic matter through discharges of sewage into the creek. Based on the classified regions and the water quality parameters, the extent of pollution was ranked as high, moderate, low and least. The methodology developed in the present study was successfully applied to an IKONOS image for the same study area but a different time frame. The approach will help in impact assessment of sewage pollution and its spatial extent in other water bodies.
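
    NDTI is commonly computed from the red and green bands as (red - green) / (red + green); the sketch below applies that common definition and averages it per segmented region (the band choice is stated as convention, not verified against the paper).

```python
import numpy as np

def ndti(red, green):
    """Normalized Difference Turbidity Index, commonly (red - green) / (red + green)."""
    red, green = red.astype(float), green.astype(float)
    return (red - green) / np.maximum(red + green, 1e-9)

# Mean NDTI per segmented water-quality region (labels from segmentation).
red = np.array([[0.10, 0.12], [0.20, 0.22]])
green = np.array([[0.15, 0.16], [0.18, 0.19]])
labels = np.array([[0, 0], [1, 1]])
index = ndti(red, green)
for region in np.unique(labels):
    print(region, index[labels == region].mean())
```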

  16. Geographic Object-Based Image Analysis – Towards a new paradigm

    PubMed Central

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm. PMID:24623958

  17. Object-based approach to national land cover mapping using HJ satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Xiaosong; Yuan, Quanzhi; Liu, Yu

    2014-01-01

    To meet the need for carbon storage estimates in ecosystems for a national carbon strategy, we introduce a consistent database of China land cover. The Chinese Huan Jing (HJ) satellite has proven efficient in the cloud-free acquisition of seasonal image series in a monsoon region and in vegetation identification for mesoscale land cover mapping. Thirty-eight classes of level II land cover are generated based on the Land Cover Classification System of the United Nations Food and Agriculture Organization, which follows standard, quantitative definitions. Twenty-four layers of derivative spectral, environmental, and spatial features compose the classification database. An object-based approach characterizing additional nonspectral features is used throughout the mapping, and multiscale segmentations are applied so that object boundaries match real-world conditions. This method fully exploits spatial information, in addition to spectral characteristics, to improve classification accuracy. A hierarchical classification algorithm is employed, following step-by-step procedures that effectively control classification quality. The algorithm is divided into a dual structure of universal and local decision trees: consistent universal trees suitable for most regions are applied first, followed by local trees that depend on the specific features of nine climate stratifications. Independent validation indicates that the overall accuracy reaches 86%.
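
    The dual universal/local tree structure can be pictured as a rule cascade: universal rules label what they can, and remaining objects pass through rules specific to their climate stratification. The feature names, thresholds and zone labels below are hypothetical, chosen only to show the control flow.

```python
def classify_object(features, zone):
    """Universal decision rules first, then climate-zone-specific local rules.

    `features` is a dict of per-object layer statistics; names and thresholds
    are purely illustrative, not the study's actual rule set.
    """
    # Universal tree: classes separable in most regions.
    if features["ndvi_summer"] < 0.1 and features["ndwi"] > 0.3:
        return "water"
    if features["ndvi_summer"] > 0.6 and features["height_texture"] > 0.5:
        return "forest"
    # Local trees: refine remaining objects per climate stratification.
    if zone == "monsoon_south":
        return "paddy" if features["ndvi_spring"] > 0.4 else "cropland"
    if zone == "arid_northwest":
        return "grassland" if features["ndvi_summer"] > 0.25 else "bare_land"
    return "other"

obj = {"ndvi_summer": 0.3, "ndwi": 0.1, "height_texture": 0.2, "ndvi_spring": 0.5}
print(classify_object(obj, "monsoon_south"))
```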

  18. A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas

    2016-10-01

    In the past two decades Object-Based Image Analysis (OBIA) has established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image sources such as Airborne Laser Scanner (ALS) point clouds. ALS data are represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). To demonstrate the possibility of adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS benchmark data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy above 80%). The very high performance within the ISPRS benchmark, without any modification of the algorithm and without any adaptation of parameters, is particularly noteworthy.
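
    The core of such a workflow can be sketched as thresholding the normalized DSM (DSM minus DEM) and keeping sufficiently large elevated objects. scipy is assumed; the height and size thresholds are illustrative, and the paper's additional point-cloud statistics (e.g., for separating vegetation from buildings) are omitted.

```python
import numpy as np
from scipy import ndimage

def building_candidates(dsm, dem, min_height=2.5, min_cells=40):
    """Label elevated objects in the normalized DSM as building candidates.

    dsm, dem: 2-D height rasters on the same grid; thresholds illustrative.
    """
    ndsm = dsm - dem                          # heights above ground
    mask = ndsm > min_height                  # elevated cells (buildings, trees, ...)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = 1 + np.flatnonzero(np.asarray(sizes) >= min_cells)  # drop tiny clutter
    return np.where(np.isin(labels, keep), labels, 0)

# Toy rasters: one 10x10-cell "building" 5 m above flat ground.
dem = np.zeros((50, 50))
dsm = dem.copy()
dsm[20:30, 20:30] = 5.0
labels = building_candidates(dsm, dem)
print(int(labels.max()), "candidate object(s)")
```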

  19. Object-Based Multiple Foreground Video Co-Segmentation via Multi-State Selection Graph.

    PubMed

    Fu, Huazhu; Xu, Dong; Zhang, Bao; Lin, Stephen; Ward, Rabab Kreidieh

    2015-11-01

    We present a technique for multiple foreground video co-segmentation in a set of videos. This technique is based on category-independent object proposals. To identify the foreground objects in each frame, we examine the properties of the various regions that reflect the characteristics of foregrounds, considering the intra-video coherence of the foreground as well as the foreground consistency among the different videos in the set. Multiple foregrounds are handled via a multi-state selection graph in which a node representing a video frame can take multiple labels that correspond to different objects. In addition, our method incorporates an indicator matrix that for the first time allows accurate handling of cases with common foreground objects missing in some videos, thus preventing irrelevant regions from being misclassified as foreground objects. An iterative procedure is proposed to optimize our new objective function. As demonstrated through comprehensive experiments, this object-based multiple foreground video co-segmentation method compares well with related techniques that co-segment multiple foregrounds.

  20. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2014-09-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  1. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. Craters are also used as proxies for measuring the relative ages of planetary or satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with those of manually detected impact craters.
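
    The shape and morphometric classification step might look like the following: segmented depressions are kept as craters if they are sufficiently circular (circularity 4*pi*A/P^2) and deep. scikit-image is assumed, and the thresholds are illustrative, not the paper's ruleset.

```python
import numpy as np
from skimage.measure import label, regionprops

def crater_candidates(depression_mask, dtm, min_circularity=0.7, min_depth=2.0):
    """Filter segmented depressions by circularity and depth (thresholds illustrative).

    depression_mask: boolean raster of candidate depressions from segmentation;
    dtm: digital terrain model on the same grid (depth measured rim-to-floor).
    """
    out = []
    for region in regionprops(label(depression_mask)):
        # circularity = 4*pi*A / P^2 equals 1 for a perfect circle
        circ = 4 * np.pi * region.area / max(region.perimeter, 1) ** 2
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        depth = dtm[rr, cc].max() - dtm[rr, cc].min()
        if circ >= min_circularity and depth >= min_depth:
            out.append((region.centroid, region.equivalent_diameter))
    return out

# Toy DTM with one circular depression and its rim.
yy, xx = np.mgrid[0:100, 0:100]
dtm = np.zeros((100, 100))
dist2 = (yy - 50) ** 2 + (xx - 50) ** 2
dtm[dist2 < 15 ** 2] = -5.0
mask = dist2 < 17 ** 2        # candidate depression including its rim
print(crater_candidates(mask, dtm))
```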

  2. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  3. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are now used more and more widely to collect location information. They are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time when the moving object passes a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it includes four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are conducted on simulated data. In the experiments the performance of the index is studied, the precision and recall of the result set are defined, and the effect of query arguments on precision and recall is discussed.
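
    Between two reader nodes the object's exact movement along an edge is unobserved. Under a bounded-speed assumption its feasible positions at a query time form an interval, which a model can turn into a probability distribution; the sketch below computes that interval plus a uniform-speed point estimate (the assumptions are mine, not the paper's exact model).

```python
def position_interval(edge_len, t_in, t_out, t, v_max):
    """Possible positions (meters from the entry node) of an object on an edge.

    The object entered the edge at t_in, exited at t_out, and moves with speed
    in [0, v_max]. At query time t, the feasible interval is bounded below by
    the need to still reach the exit in time and above by the speed limit.
    A uniform-speed assumption instead gives a single point estimate.
    """
    assert t_in <= t <= t_out
    lo = max(0.0, edge_len - v_max * (t_out - t))   # must still make the exit
    hi = min(edge_len, v_max * (t - t_in))          # cannot outrun v_max
    point = edge_len * (t - t_in) / (t_out - t_in)  # uniform-speed estimate
    return lo, hi, point

# 500 m edge traversed in 60 s, queried at t = 30 s, speed limit 15 m/s.
print(position_interval(500.0, 0.0, 60.0, 30.0, 15.0))
```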

  4. An object-based assessment of the High Resolution Rapid Refresh Model over the western US

    NASA Astrophysics Data System (ADS)

    Bytheway, J. L.; Kummerow, C. D.

    2015-12-01

    The High Resolution Rapid Refresh (HRRR) model is an hourly-updated, radar-initialized, 3-km storm-resolving model run operationally in the US since September 2014, with the main goal of improving forecasts of convective precipitation. Here, we identify precipitating objects in the forecast and observed fields in a manner similar to the Method for Object-Based Diagnostic Evaluation (MODE). The identified objects are used to perform an assessment of HRRR model forecasts of warm season convective precipitation over the complex terrain of the western US (west of 105W). The constellation of satellites comprising the Global Precipitation Measurement (GPM) Mission provides observations of precipitating objects in the domain, and can be used to assess not only the quality of the rain/no-rain forecast, but also the model's representation of the microphysical and other environmental properties governing precipitation behavior (e.g. latent heating, vertically integrated liquid, etc.). Special attention will be paid to vertical profiles of liquid and ice water contents that can be inferred from the observed reflectivity profiles of the GPM Dual-frequency Precipitation Radar (DPR) and other imaging and sounding instruments. Potential relationships between these properties and the success or failure of the model forecast will be discussed.
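
    MODE-style object identification is conventionally a smoothing and thresholding step followed by connected-component labeling. The sketch below follows that convention with scipy; the smoothing radius, rain-rate threshold and minimum size are illustrative, not the study's settings.

```python
import numpy as np
from scipy import ndimage

def precip_objects(field, smooth_sigma=2.0, threshold=5.0, min_cells=10):
    """Identify precipitating objects in a rain-rate field (MODE-style sketch).

    Smooth with a Gaussian, threshold, label connected components, and drop
    small objects; returns per-object centroid and area.
    """
    smoothed = ndimage.gaussian_filter(field, smooth_sigma)
    labels, n = ndimage.label(smoothed > threshold)
    objs = []
    for i in range(1, n + 1):
        cells = labels == i
        if cells.sum() >= min_cells:
            objs.append({"centroid": ndimage.center_of_mass(cells),
                         "area_cells": int(cells.sum())})
    return objs

# Toy field with one embedded convective cell; in verification, the object
# lists from forecast and observed fields would be matched and compared.
rng = np.random.default_rng(2)
field = rng.gamma(0.2, 1.0, (100, 100))
field[40:55, 60:70] += 20.0
print(precip_objects(field))
```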

  5. Object-based forest classification to facilitate landscape-scale conservation in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Mitchell, Michael; Wilson, R. Randy; Twedt, Daniel J.; Mini, Anne E.; James, J. Dale

    2016-01-01

    The Mississippi Alluvial Valley is a floodplain along the southern extent of the Mississippi River extending from southern Missouri to the Gulf of Mexico. This area once encompassed nearly 10 million ha of floodplain forests, most of which has been converted to agriculture over the past two centuries. Conservation programs in this region revolve around protection of existing forest and reforestation of converted lands. Therefore, an accurate and up-to-date classification of forest cover is essential for conservation planning, including efforts that prioritize areas for conservation activities. We used object-based image analysis with Random Forest classification to quickly and accurately classify forest cover. We used Landsat band, band ratio, and band index statistics to identify and define similar objects as our training sets instead of selecting individual training points. This provided a single rule-set that was used to classify each of the 11 Landsat 5 Thematic Mapper scenes that encompassed the Mississippi Alluvial Valley. We classified 3,307,910±85,344 ha (32% of this region) as forest. Our overall classification accuracy was 96.9% with a Kappa statistic of 0.96. Because this method of forest classification is rapid and accurate, assessment of forest cover can be regularly updated and progress toward forest habitat goals identified in conservation plans can be periodically evaluated.
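
    As an illustration of the Random Forest step, the sketch below trains a classifier on synthetic per-object statistics standing in for the Landsat band, band-ratio and band-index features; scikit-learn is assumed, and the data are not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic per-object features standing in for Landsat band, band-ratio,
# and band-index statistics; labels: 1 = forest, 0 = non-forest.
rng = np.random.default_rng(3)
n = 2000
X = rng.normal(0, 1, (n, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"accuracy={accuracy_score(y_test, pred):.3f}, "
      f"kappa={cohen_kappa_score(y_test, pred):.3f}")
```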

  6. A Multidimensional Software Engineering Course

    ERIC Educational Resources Information Center

    Barzilay, O.; Hazzan, O.; Yehudai, A.

    2009-01-01

    Software engineering (SE) is a multidimensional field that involves activities in various areas and disciplines, such as computer science, project management, and system engineering. Though modern SE curricula include designated courses that address these various subjects, an advanced summary course that synthesizes them is still missing. Such a…

  7. A Mathematics Software Database Update.

    ERIC Educational Resources Information Center

    Cunningham, R. S.; Smith, David A.

    1987-01-01

    Contains an update of an earlier listing of software for mathematics instruction at the college level. Topics are: advanced mathematics, algebra, calculus, differential equations, discrete mathematics, equation solving, general mathematics, geometry, linear and matrix algebra, logic, statistics and probability, and trigonometry. (PK)

  8. Use of Remote Imagery and Object-based Image Methods to Count Plants in an Open-field Container Nursery

    NASA Astrophysics Data System (ADS)

    Leiva Lopez, Josue Nahun

    In general, the nursery industry lacks an automated inventory control system. Object-based image analysis (OBIA) software and aerial images could be used to count plants in nurseries. The objectives of this research were: 1) to evaluate the effect of unmanned aerial vehicle (UAV) flight altitude and plant canopy separation of container-grown plants on count accuracy using aerial images and 2) to evaluate the effect of plant canopy shape, presence of flowers, and plant status (living and dead) on counting accuracy of container-grown plants using remote sensing images. Images were analyzed using Feature Analyst (FA) and an algorithm trained using MATLAB. Total count error, false positives and unidentified plants were recorded from output images using FA; only total count error was reported for the MATLAB algorithm. For objective 1, images were taken at 6, 12 and 22 m above the ground using a UAV. Plants were placed on black fabric and gravel, and spaced as follows: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. In general, when both methods were considered, total count error was smaller [ranging from -5 (undercount) to 4 (overcount)] when plants were fully separated, with the exception of images taken at 22 m. FA showed a smaller total count error (-2) than MATLAB (-5) when plants were placed on black fabric rather than on gravel. For objective 2, the plan was to continue using the UAV; however, due to the unexpected disruption of GPS-based navigation by heightened solar flare activity in 2013, a boom lift that could provide images on a more reliable basis was used. When images obtained using a boom lift were analyzed using FA, there was no difference between measured variables when an algorithm trained with an image displaying regular or irregular plant canopy shape was applied to images displaying both plant canopy shapes, even though the canopy shape of 'Sea Green' juniper is less compact than 'Plumosa Compacta

  9. How a face may affect object-based attention: evidence from adults and 8-month-old infants

    PubMed Central

    Valenza, Eloisa; Franchin, Laura; Bulf, Hermann

    2013-01-01

    Object-based attention operates on perceptual objects, opening the possibility that the costs and benefits humans have to pay to move attention between objects might be affected by the nature of the stimuli. The current study reports two experiments with adults and 8-month-old infants investigating whether object-based attention is affected by the type of stimulus (faces vs. non-face stimuli). Using the well-known cueing task developed by Egly et al. (1994) to study the object-based component of attention, in Experiment 1 adult participants were presented with two upright, inverted or scrambled faces, and an eye-tracker measured their saccadic latencies to find a target that could appear on the same object that was just cued or on the other, uncued object. Data showed that an object-based effect (a smaller cost to shift attention within- compared to between-objects) occurred only with scrambled faces, but not with upright or inverted faces. In Experiment 2 the same task was performed with 8-month-old infants, using upright and inverted faces. Data revealed that an object-based effect emerges only for inverted faces but not for upright faces. Overall, these findings suggest that object-based attention is modulated by the type of stimulus and by the experience acquired by the viewer with different objects. PMID:24723860

  10. Geologic Sequestration Software Suite

    SciTech Connect

    Black, Gary; Bonneville, Alain; Sivaramakrishnan, Chandrika; Purohit, Sumit; White, Signe; Lansing, Carina; Gosink, Luke; Guillen, Zoe; Moeglein, William; Gorton, Ian

    2013-11-04

    GS3 is the bundling of the Geological Sequestration Software Suite domain tools with the Velo wiki user interface, rich client interface, and data store. Velo is an application-domain-independent collaborative user environment for modeling and simulation. Velo has a web-browser-based wiki interface integrated with a sophisticated content management system supporting the data and knowledge management required for large-scale scientific modeling projects. GS3 adds tools and capability specifically in the area of modeling subsurface reservoirs for the purpose of carbon sequestration. Velo is a core software framework for creating scientific domain user environments; it is not tied to a specific domain, although it provides novel capability needed by many application areas. A well-defined Velo integration layer allows custom applications such as GS3 to leverage the core Velo components to reduce development cost and time and ultimately provide a more capable software product. Compared with previous efforts such as ECCE and SALSSA, Velo is a major advancement: it has a web-browser-based interface, a more comprehensive data management architecture, and intrinsic support for collaboration through the wiki. GS3 adds specific domain tools for examining site data, developing conceptual and numerical models, building simulation input files, launching and monitoring those simulations, and viewing and interpreting simulation output.

  11. Mapping seabed sediments: Comparison of manual, geostatistical, object-based image analysis and machine learning approaches

    NASA Astrophysics Data System (ADS)

    Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton

    2014-08-01

    Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches including manual interpretation, geostatistics, object-based image analysis and machine-learning to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples with the aim to derive seabed substrate maps. Sample data were split into a training and validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen's kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the effects of factors affecting the classification performance as well as comparative studies testing the performance of different approaches need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques into hybrid approaches and multi-method ensembles.

  12. Applying object-based segmentation in the temporal domain to characterise snow seasonality

    NASA Astrophysics Data System (ADS)

    Thompson, Jeffery A.; Lees, Brian G.

    2014-11-01

    In the context of a changing climate it is important to be able to monitor and map descriptors of snow seasonality. Because of its relatively low elevation range, Australia's alpine bioregion is a marginal area for seasonal snow-cover with high inter-annual variability. It has been predicted that snow-cover will become increasingly ephemeral within the alpine bioregion as warming continues. To assist the monitoring of snow seasonality and ephemeral snow-cover, a remote sensing method is proposed. The method adapts principles of object-based image analysis that have traditionally been used in the spatial domain and applies them in the temporal domain. The method allows for a more comprehensive characterisation of snow seasonality relative to other methods. Using high-temporal resolution (daily) MODIS image time-series, remotely sensed descriptors were derived and validated using in situ observations. Overall, moderate to strong relationships were observed between the remotely sensed descriptors of the persistent snow-covered period (start r = 0.70, p < 0.001; end r = 0.88, p < 0.001 and duration r = 0.88, p < 0.001) and their in situ counterparts. Although only weak correspondence (r = 0.39, p < 0.05) was observed for the number of ephemeral events detected using remote sensing, this was thought to be related to differences in the sampling frequency of the in situ observations relative to the remotely sensed observations. For 2009, the mapped results for the number of snow-cover events suggested that snow-cover between 1400 and 1799 m was characterised by a high number of ephemeral events.
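
    Object-based segmentation in the temporal domain can be pictured as splitting each pixel's daily snow/no-snow series into runs. The sketch below derives the start, end and duration of the persistent snow period and counts short ephemeral events; the 10-day cutoff is an illustrative choice, not the paper's definition.

```python
import numpy as np

def snow_descriptors(daily_snow, ephemeral_max_days=10):
    """Describe snow seasonality from a daily boolean snow-cover series.

    Runs of consecutive snow days are the temporal 'objects'; the longest run
    is the persistent snow period, shorter runs are ephemeral events.
    """
    padded = np.concatenate([[0], daily_snow.astype(int), [0]])
    edges = np.flatnonzero(np.diff(padded))
    starts, ends = edges[::2], edges[1::2]       # run boundaries [start, end)
    if len(starts) == 0:
        return None
    lengths = ends - starts
    k = lengths.argmax()
    return {"persistent_start": int(starts[k]),
            "persistent_end": int(ends[k] - 1),
            "persistent_duration": int(lengths[k]),
            "ephemeral_events": int((lengths <= ephemeral_max_days).sum()
                                    - (lengths[k] <= ephemeral_max_days))}

# Day-of-year series: two short storms, then a 100-day persistent season.
series = np.zeros(365, dtype=bool)
series[120:123] = series[140:144] = series[160:260] = True
print(snow_descriptors(series))
```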

  13. A fuzzy rule base system for object-based feature extraction and classification

    NASA Astrophysics Data System (ADS)

    Jin, Xiaoying; Paswaters, Scott

    2007-04-01

    In this paper, we present a fuzzy rule base system for object-based feature extraction and classification of remote sensing imagery. First, object primitives are generated in the segmentation step. Object primitives are defined as individual regions with a set of attributes computed on the regions. The attributes computed include spectral, texture and shape measurements. Crisp rules are very intuitive to users. They are usually represented as "GT (greater than)", "LT (less than)" and "IB (in between)" with numerical values. Features can be manually generated by querying on the attributes using these crisp rules and monitoring the resulting selected object primitives. However, the attributes of different features usually overlap; the information is inexact and not suitable for traditional digital on/off decisions. Here a fuzzy rule base system is built to better model the uncertainty inherent in the data and vague human knowledge. Rather than representing attributes in linguistic terms like "Small", "Medium", "Large", we propose a new method for automatic fuzzification of the traditional crisp concepts "GT", "LT" and "IB". Two sets of membership functions are defined to model these concepts: one is based on piecewise linear functions, the other on S-type membership functions. A novel concept, "fuzzy tolerance", is proposed to control the degree of fuzziness of each rule. The experimental results on classification and extraction of features such as water, buildings, trees, fields and urban areas have shown that this newly designed fuzzy rule base system is intuitive and allows users to easily generate fuzzy rules.
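
    The automatic fuzzification of a crisp rule such as "x GT 50" can be sketched with the two membership families mentioned above, with the fuzzy tolerance setting the width of the transition. These are common textbook forms, not necessarily the paper's exact formulas; "IB" could similarly be built as the product of a GT and an LT membership.

```python
import numpy as np

def gt_linear(x, threshold, tolerance):
    """Piecewise-linear fuzzification of the crisp rule 'x GT threshold'.

    Membership rises linearly from 0 at threshold - tolerance to 1 at
    threshold + tolerance; tolerance controls the degree of fuzziness.
    """
    return np.clip((x - (threshold - tolerance)) / (2 * tolerance), 0.0, 1.0)

def gt_s_curve(x, threshold, tolerance):
    """S-type (smooth) fuzzification of the same rule via a logistic curve."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - threshold) / tolerance))

x = np.array([40.0, 50.0, 55.0, 60.0, 80.0])
print(gt_linear(x, threshold=50.0, tolerance=10.0))   # crisp rule was x > 50
print(gt_s_curve(x, threshold=50.0, tolerance=10.0))
```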

  14. Neural activity associated with self, other, and object-based counterfactual thinking.

    PubMed

    De Brigard, Felipe; Nathan Spreng, R; Mitchell, Jason P; Schacter, Daniel L

    2015-04-01

    Previous research has shown that autobiographical episodic counterfactual thinking-i.e., mental simulations about alternative ways in which one's life experiences could have occurred-engages the brain's default network (DN). However, it remains unknown whether or not the DN is also engaged during impersonal counterfactual thoughts, specifically those involving other people or objects. The current study compares brain activity during counterfactual simulations involving the self, others and objects. In addition, counterfactual thoughts involving others were manipulated in terms of similarity and familiarity with the simulated characters. The results indicate greater involvement of DN during person-based (i.e., self and other) as opposed to object-based counterfactual simulations. However, the involvement of different regions of the DN during other-based counterfactual simulations was modulated by how close and/or similar the simulated character was perceived to be by the participant. Simulations involving unfamiliar characters preferentially recruited dorsomedial prefrontal cortex. Simulations involving unfamiliar similar characters, characters with whom participants identified personality traits, recruited lateral temporal gyrus. Finally, our results also revealed differential coupling of right hippocampus with lateral prefrontal and temporal cortex during counterfactual simulations involving familiar similar others, but with left transverse temporal gyrus and medial frontal and inferior temporal gyri during counterfactual simulations involving either oneself or unfamiliar dissimilar others. These results suggest that different brain mechanisms are involved in the simulation of personal and impersonal counterfactual thoughts, and that the extent to which regions associated with autobiographical memory are recruited during the simulation of counterfactuals involving others depends on the perceived similarity and familiarity with the simulated individuals.

  15. Object-based gully system prediction from medium resolution imagery using Random Forests

    NASA Astrophysics Data System (ADS)

    Shruthi, Rajesh B. V.; Kerle, Norman; Jetten, Victor; Stein, Alfred

    2014-07-01

    Erosion, in particular gully erosion, is a widespread problem. Its mapping is crucial for erosion monitoring and remediation of degraded areas. In addition, mapping of areas with high potential for future gully erosion can be used to assist prevention strategies. Topographic variables that relate well to field observations are appropriate for determining areas susceptible to gullying. Image analysis of high resolution remotely sensed imagery (HRI) in combination with field verification has proven to be a good approach, although it depends on expensive imagery. Automatic and semi-automatic methods, such as object-oriented analysis (OOA), are rapid and reproducible. However, HRI data are not always available. We therefore attempted to identify gully systems using statistical modeling of image features from medium resolution imagery, here ASTER. These data were used for determining areas within gully system boundaries (GSB) using a semi-automatic method based on OOA. We assess whether the selection of useful object features can be done in an objective and transferable way, using Random Forests (RF) for prediction of gully systems at regional scale, here in the Sehoul region, near Rabat, Morocco. Moderate success was achieved using a semi-automatic object-based RF model (out-of-bag error of 18.8%). Besides compensating for the imbalance between gully and non-gully classes, the procedure followed in this study enabled us to balance the classification error rates. The user's and producer's accuracies of the data with a balanced set of classes showed an improved accuracy of the spatial estimates of gully systems when compared to the data with imbalanced classes. The model over-predicted the area within the GSB (13-27%), but its overall performance demonstrated that medium resolution satellite images contain sufficient information to identify gully systems, so that large areas can be mapped with relatively little effort and acceptable accuracy.

  16. High resolution mapping of development in the wildland-urban interface using object based image extraction

    USGS Publications Warehouse

    Caggiano, Michael D.; Tinkham, Wade T.; Hoffman, Chad; Cheng, Antony S.; Hawbaker, Todd J.

    2016-01-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Agriculture Imagery Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m2) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA

  17. Mapping Eroded Areas on Mountain Grassland with Terrestrial Photogrammetry and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens

    2016-06-01

    In the Alps, as in other mountain regions, steep grassland is frequently affected by shallow erosion. Often small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare earth surface patches within the grass-covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested with ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The methods proved to be insensitive to differences in illumination of the scenes and greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded area in the field with a high level of detail and quality. In the future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser resolution aerial orthophotos acquired at the same time.
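
    The core of the workflow, an ExG image thresholded automatically, can be sketched as follows. ExG = 2g - r - b on chromatic coordinates is the standard definition; Otsu's method stands in here for the paper's automated threshold selection, and the sketch classifies pixels rather than the paper's segments.

```python
import numpy as np
from skimage.filters import threshold_otsu

def classify_grass_eroded(rgb):
    """Classify pixels into grass vs. eroded via the Excess Green index (ExG).

    ExG = 2g - r - b on chromatic coordinates; an automatically selected
    (Otsu) threshold separates vegetated from bare-earth pixels.
    """
    rgb = rgb.astype(float)
    total = np.maximum(rgb.sum(axis=-1), 1e-9)
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b
    return exg > threshold_otsu(exg)          # True = grass, False = eroded

# Toy scene: green grass around a brown bare-earth patch.
scene = np.zeros((60, 60, 3)) + [60, 140, 50]      # grass-like RGB
scene[20:40, 20:40] = [120, 90, 60]                # eroded patch
mask = classify_grass_eroded(scene)
print("grass fraction:", mask.mean())
```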

  18. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the Earth's surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for performing a successful analysis it is desirable to take images from the same sensor, acquired in the same season, at the same time of day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis could be problematic especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search of destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.
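
    The DPC feature can be read as the share of a building's vector outline that coincides with edges detected in the post-event image. The sketch below follows that reading (an interpretation, not the authors' published formula), using a dilated edge map to tolerate small misregistration.

```python
import numpy as np
from scipy import ndimage

def detected_part_of_contour(contour_mask, edge_map, tolerance_px=2):
    """Fraction of a building's rasterized outline lying on detected edges.

    contour_mask: boolean raster of the GIS building outline; edge_map:
    boolean edge-detection result from the post-event image. Edges are
    dilated by a few pixels to absorb small misregistration. A low value
    suggests the outline is no longer visible, i.e., possible destruction.
    """
    dilated = ndimage.binary_dilation(edge_map, iterations=tolerance_px)
    n_contour = contour_mask.sum()
    return (contour_mask & dilated).sum() / max(n_contour, 1)

# Toy case: a square outline whose southern wall no longer produces edges.
contour = np.zeros((50, 50), dtype=bool)
contour[10, 10:40] = contour[40, 10:40] = True
contour[10:40, 10] = contour[10:41, 40] = True
edges = contour.copy()
edges[40, :] = False                          # collapsed wall: no edge detected
print(round(detected_part_of_contour(contour, edges), 2))
```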

  19. Modelling of cooperating robotized systems with the use of object-based approach

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Today's robotized manufacturing systems are characterized by high efficiency. The emphasis is placed mainly on the simultaneous work of machines. This manifests itself in many ways, the most spectacular being the cooperation of several robots working on the same part. Moreover, dual-arm robots that mimic the manipulative skills of human hands have recently come into use. As a result, it is often hard to deal with situations where it is necessary to maintain not only sufficient precision, but also the coordination and proper sequence of movements of the individual robots’ arms. The successful completion of such a task depends on the individual robot control systems and their respective programs, but also on well-functioning communication between robot controllers. A major problem in the case of cooperating robots is the possibility of collision between particular links of the robots’ kinematic chains. This is not a simple case, because the manufacturers of robotic systems do not disclose the details of their control algorithms, so such situations are hard to detect. Another problem with the cooperation of robots is how to inform the other units about the start or completion of part of the task, so that other robots can take further actions. This paper focuses on communication between cooperating robotic units, assuming that every robot is represented by an object-based model. This problem requires developing a form of communication protocol that the objects can use to collect information about their environment. The approach presented in the paper is not limited to robots and could be used in a wider scope, for example when modelling a complete workcell or production line.
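
    A minimal sketch of such a protocol: robot objects publish task start/completion messages on a shared bus, and other units subscribe to react. The bus design and message fields are hypothetical, chosen only to illustrate the idea.

```python
from collections import defaultdict

class MessageBus:
    """Tiny publish-subscribe hub standing in for inter-controller communication."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

class RobotUnit:
    """Object-based robot model that announces task progress on the bus."""
    def __init__(self, name, bus):
        self.name, self.bus = name, bus

    def run_task(self, task):
        self.bus.publish("status", {"robot": self.name, "task": task,
                                    "event": "started"})
        # ... motion execution would happen here ...
        self.bus.publish("status", {"robot": self.name, "task": task,
                                    "event": "completed"})

# Robot B waits for robot A to finish welding before it starts handling.
bus = MessageBus()
a, b = RobotUnit("A", bus), RobotUnit("B", bus)
bus.subscribe("status", print)                # log every message
bus.subscribe("status", lambda m: (m["robot"] == "A" and
                                   m["event"] == "completed" and
                                   b.run_task("handling")))
a.run_task("welding")
```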

  20. Coregistration refinement of hyperspectral images and DSM: An object-based approach using spectral information

    NASA Astrophysics Data System (ADS)

    Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe

    2015-02-01

    For image fusion in remote sensing applications, the georeferencing accuracy achievable from position, attitude, and camera calibration measurements can be insufficient. Thus, image processing techniques should be employed for precise coregistration of images. In this article a method for multimodal object-based image coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from HSI and DSM, respectively. We refer to urban objects as objects which are typical in urban environments, and focus on buildings by describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and in the determination of transformation parameters. Hence, a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and the quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for coregistration of HSI and DSM. The extraction of building outlines, in comparison to line cue extraction, provides a larger number of assigned lines between the images and is more robust to outliers, i.e. false matches.
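
    The transformation-parameter step can be illustrated with a weighted least-squares 2-D similarity transform estimated from matched outline points. The formulation is standard; weighting by per-match accuracy is only a stand-in for the paper's stochastic model.

```python
import numpy as np

def fit_similarity(src, dst, weights=None):
    """Weighted least-squares 2-D similarity transform dst ~ s*R(theta)@src + t.

    Solves the linear system in (a, b, tx, ty) with x' = a*x - b*y + tx and
    y' = b*x + a*y + ty, where a = s*cos(theta), b = s*sin(theta). Weights
    can encode the geometric accuracy of each outline match.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1], src[:, 0], np.zeros(n), np.ones(n)])
    rhs = dst.reshape(-1)
    W = np.repeat(np.sqrt(w), 2)
    params, *_ = np.linalg.lstsq(A * W[:, None], rhs * W, rcond=None)
    a, b, tx, ty = params
    return np.hypot(a, b), np.arctan2(b, a), np.array([tx, ty])

# Synthetic matched outline points with a known shift, scale and rotation.
rng = np.random.default_rng(4)
src = rng.uniform(0, 100, (30, 2))
th, s, t = 0.02, 1.01, np.array([3.0, -2.0])
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = s * src @ R.T + t + rng.normal(0, 0.05, (30, 2))
print(fit_similarity(src, dst))
```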

  1. An Object-Based Method for Estimation of River Discharge from Remotely-Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Burgett, D. A.; Blesius, L.; Davis, J. D.

    2010-12-01

    High resolution satellite and aerial imagery of fluvial systems contains much information about planform river channel features. However, little is known about how these forms quantitatively relate to river channel processes, specifically discharge. This research explores methods for remote image-based river discharge estimation through Object-Based Image Analysis (OBIA) and GIS techniques. Previous efforts in image-based discharge estimation have relied primarily on manual delineation of river features and the input of reach-averaged values of these features into statistically based estimation models. In addition to analyzing OBIA techniques for channel feature delineation and measurement, this approach investigates techniques for discharge estimation model design, validation, and correction along a reach, utilizing variation in "standard" channel features (e.g. water surface width) along with less tangible channel feature metrics derived from OBIA. Rather than predefine the channel unit of analysis, this work also considers the accuracy of model parameters derived from a range of channel scales, from longer reach-averaged to cross-sectional. High resolution (1 m) color infrared orthoimagery from the 2005 and 2009 National Agriculture Imagery Program (NAIP) covering 50 river reaches (ranging in discharge from approximately 13 m3s-1 to 856 m3s-1) was utilized for this analysis. These reaches, all near United States Geological Survey (USGS) river gages in California, USA, were split randomly and evenly into 25 reaches each for model design and validation. This approach allows better isolation of error resulting from user bias in channel feature measurement, and moves toward a more precise, standardized system of measurement for remotely observable channel form.
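
    Statistically based image discharge estimation commonly rests on hydraulic-geometry power laws such as Q = a*W^b. The sketch below fits that relation in log-log space on synthetic "design" reaches and applies it to a new width; the numbers are invented, and the paper's models use additional OBIA-derived metrics beyond width.

```python
import numpy as np

# Hydraulic-geometry power law Q = a * W**b, fit in log-log space.
# Widths (m) and gauged discharges (m^3/s) are synthetic stand-ins for the
# OBIA-measured reach widths and USGS gauge values.
rng = np.random.default_rng(5)
width = rng.uniform(20, 200, 25)                      # design reaches
discharge = 0.05 * width ** 1.7 * rng.lognormal(0, 0.15, 25)

b, log_a = np.polyfit(np.log(width), np.log(discharge), 1)
a = np.exp(log_a)
print(f"Q = {a:.3f} * W^{b:.2f}")

# Apply the fitted model to a 'validation' reach width measured from imagery.
w_new = 120.0
print(f"estimated discharge at W={w_new} m: {a * w_new ** b:.1f} m^3/s")
```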

  2. An object-based image analysis of pinyon and juniper woodlands treated to reduce fuels.

    PubMed

    Hulet, April; Roundy, Bruce A; Petersen, Steven L; Jensen, Ryan R; Bunting, Stephen C

    2014-03-01

    Mechanical and prescribed fire treatments are commonly used to reduce fuel loads and maintain or restore sagebrush steppe rangelands across the Great Basin where pinyon (Pinus) and juniper (Juniperus) trees are encroaching and infilling. Geospatial technologies, particularly remote sensing, could potentially be used in these ecosystems to (1) evaluate the longevity of fuel reduction treatments, (2) provide data for planning and designing future fuel-reduction treatments, and (3) assess the spatial distribution of horizontal fuel structure following fuel-reduction treatments. High-spatial resolution color-infrared imagery (0.06-m pixels) was acquired for pinyon and juniper woodland plots where fuels were reduced by either prescribed fire, tree cutting, or mastication at five sites in Oregon, California, Nevada, and Utah. Imagery was taken with a Vexcel UltraCam X digital camera in June 2009. Within each treatment plot, ground cover was measured as part of the Sagebrush Steppe Treatment Evaluation Project. Trimble eCognition Developer was used to classify land cover classes using object-based image analysis (OBIA) techniques. Differences between cover estimates using OBIA and ground-measurements were not consistently higher or lower for any land cover class and when evaluated for individual sites, were within ±5 % of each other. The overall accuracy and the K hat statistic for classified thematic maps for each treatment were: prescribed burn 85 % and 0.81; cut and fell 82 % and 0.77, and mastication 84 % and 0.80. Although cover assessments from OBIA differed somewhat from ground measurements, they are sufficiently accurate to evaluate treatment success and for supporting a broad range of management concerns.

  3. An Object-Based Image Analysis of Pinyon and Juniper Woodlands Treated to Reduce Fuels

    NASA Astrophysics Data System (ADS)

    Hulet, April; Roundy, Bruce A.; Petersen, Steven L.; Jensen, Ryan R.; Bunting, Stephen C.

    2014-03-01

    Mechanical and prescribed fire treatments are commonly used to reduce fuel loads and maintain or restore sagebrush steppe rangelands across the Great Basin where pinyon (Pinus) and juniper (Juniperus) trees are encroaching and infilling. Geospatial technologies, particularly remote sensing, could potentially be used in these ecosystems to (1) evaluate the longevity of fuel reduction treatments, (2) provide data for planning and designing future fuel-reduction treatments, and (3) assess the spatial distribution of horizontal fuel structure following fuel-reduction treatments. High-spatial-resolution color-infrared imagery (0.06-m pixels) was acquired for pinyon and juniper woodland plots where fuels were reduced by either prescribed fire, tree cutting, or mastication at five sites in Oregon, California, Nevada, and Utah. Imagery was taken with a Vexcel UltraCam X digital camera in June 2009. Within each treatment plot, ground cover was measured as part of the Sagebrush Steppe Treatment Evaluation Project. Trimble eCognition Developer was used to classify land cover classes using object-based image analysis (OBIA) techniques. Differences between cover estimates from OBIA and ground measurements were not consistently higher or lower for any land cover class and, when evaluated for individual sites, were within ±5% of each other. The overall accuracy and the Khat statistic for the classified thematic maps for each treatment were: prescribed burn, 85% and 0.81; cut-and-fell, 82% and 0.77; and mastication, 84% and 0.80. Although cover assessments from OBIA differed somewhat from ground measurements, they are sufficiently accurate to evaluate treatment success and to support a broad range of management concerns.

  4. Mapping temporal changes in connectivity using high-resolution aerial data and object based image analysis

    NASA Astrophysics Data System (ADS)

    Masselink, Rens; Anders, Niels; Keesstra, Saskia; Seeger, Manuel

    2014-05-01

    Within the field of geomorphology, mapping has always been an important tool for interpreting spatial and temporal distributions of phenomena and processes at the surface. In the field of connectivity, however, although many articles have been published throughout the past decade, only very few address the mapping of connectivity. This study aimed to develop a new, automated method for mapping connectivity within agricultural catchments. The method, a combination of Object-Based Image Analysis (OBIA) and traditional geomorphological field mapping, was applied to two agricultural catchments in Navarre, Spain, both with an area of approximately 2 km2. An unmanned aerial vehicle (UAV) was used to take aerial photographs with a resolution of 6 cm, from which a DEM with a 12 cm resolution was created using structure-from-motion photogrammetry. Connectivity was mapped within the study areas with OBIA using a top-down method, meaning that connectivity was mapped at different scale levels, starting at the largest scale. First, sub-catchments were automatically delineated, after which several characteristics and features that affect connectivity within the sub-catchments were classified, e.g. land use, landslides, rills, gullies, riparian vegetation, changes in slope, ploughing direction, etc. Photographs were taken in two consecutive years (2013-2014), and the connectivity of both catchments in both years will be compared. Future work will include a quantification of the mapped connectivity (highly connected years vs. low-connected years), the causes and consequences of these differences in connectivity, comparison to existing connectivity indices, and comparison of mapped connectivity in sub-catchments with measured discharge.
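
    A common first step toward automated sub-catchment delineation of the kind mentioned above is a D8 flow-direction grid; the following is a minimal, unoptimized sketch on a toy DEM (not the study's workflow or data):

    import numpy as np

    # D8 flow directions: each interior cell points to its steepest-descent neighbour
    dem = np.add.outer(np.linspace(10, 0, 50), np.linspace(5, 0, 50))  # toy tilted DEM
    dem += 0.1 * np.random.default_rng(6).random(dem.shape)            # small roughness

    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    flow = np.full(dem.shape, -1)
    for y in range(1, dem.shape[0] - 1):
        for x in range(1, dem.shape[1] - 1):
            drops = [(dem[y, x] - dem[y + dy, x + dx]) / np.hypot(dy, dx)
                     for dy, dx in offsets]
            if max(drops) > 0:                     # leave pits/flats unassigned (-1)
                flow[y, x] = int(np.argmax(drops)) # index into offsets
    print(np.bincount(flow[flow >= 0].ravel()))    # distribution of flow directions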

  5. A Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size, and the characteristics of the study area, is crucially important for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
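
    The LV-RoC idea can be sketched with open-source tools; this illustrative example substitutes scikit-image's Felzenszwalb segmentation for eCognition's multiresolution segmentation and computes local variance (mean within-object standard deviation) and its rate of change across candidate scales, on a stand-in image rather than the study's data:

    import numpy as np
    from skimage import data
    from skimage.segmentation import felzenszwalb

    img = data.camera()[::4, ::4]            # small stand-in single-band image
    scales = [10, 25, 50, 100, 200, 400]
    lv = []
    for s in scales:
        segs = felzenszwalb(img, scale=s, sigma=0.5, min_size=20)
        # local variance at this scale = mean per-object standard deviation
        lv.append(np.mean([img[segs == i].std() for i in np.unique(segs)]))
    roc = [100.0 * (lv[i] - lv[i - 1]) / lv[i - 1] for i in range(1, len(lv))]
    print(list(zip(scales[1:], np.round(roc, 2))))   # RoC peaks mark candidate scales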

  6. High resolution mapping of development in the wildland-urban interface using object based image extraction.

    PubMed

    Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J

    2016-10-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object-Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Agriculture Imagery Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building-footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m2) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA
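
    The buffer-distance evaluation described above can be sketched as nearest-neighbour matching between extracted building centroids and control footprints (synthetic coordinates; all values hypothetical):

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    control = rng.uniform(0, 5000, size=(200, 2))     # control centroids (m), hypothetical
    jitter = rng.normal(0, 20, size=(180, 2))
    extracted = control[:180] + jitter                # imperfect OBIA extractions

    tree = cKDTree(extracted)
    d, _ = tree.query(control, k=1)                   # distance to nearest extraction
    for buf in (0, 10, 25, 50, 100):
        # buffer 0 yields ~0 here because centroids never coincide exactly
        print(f"buffer {buf:>3} m: detection rate {np.mean(d <= buf):.2%}")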

  7. Forecast bias analysis using object-based verification of regional WRF summertime convective forecasts

    NASA Astrophysics Data System (ADS)

    Starzec, Mariusz

    Forecast verification remains a crucial component of improving model forecasts, but it is still a challenge to perform. An objective method is developed to verify simulated reflectivity against radar reflectivity at 1 km altitude utilizing the Method for Object-based Diagnostic Evaluation (MODE) tool. Comparing the reflectivity field allows for an instantaneous view of what is occurring in simulations without the averaging that may occur when analyzing fields such as accumulated precipitation. The objective method is applied to high resolution 3 km and 1 km local convective WRF summertime forecasts in the Northern Plains region. The bulk verification statistics reveal that forecasts generate too many objects, over-forecast the areal coverage of convection, and over-intensify convection. No noteworthy increase in skill is found when increasing to 1 km resolution; instead, the finer grid leads to significant over-forecasting of small cells. A sensitivity study is performed to investigate the forecast biases by varying the cloud droplet concentration, microphysical scheme, and horizontal resolution on a case day containing weakly forced convection mostly below the freezing level. Changing the cloud droplet concentration has a strong impact on the object-count and area biases. Increasing droplet counts to observed values generates a forecast that more closely resembles the observations in terms of area and object counts, but leads to insufficient rain generation. Changing the microphysical scheme produces the most pronounced effects on object counts and intensity, which is attributed to differences in autoconversion formulations. Coarsening the resolution from 3 km to 9 km decreases skill, showing that 3 km simulations are more effective for convective forecasts. Increasing the resolution to 1 km amplifies the object-count bias and is found not to be worth the additional computational expense.
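
    A minimal object-based comparison in the spirit of MODE (thresholding a reflectivity field, labelling contiguous objects, then comparing object counts and areas between forecast and observation) might look like this sketch on synthetic fields; it is not the MODE tool itself:

    import numpy as np
    from scipy import ndimage

    def fake_field(rng, n_cells):
        f = np.zeros((200, 200))
        f[rng.integers(0, 200, n_cells), rng.integers(0, 200, n_cells)] = 1.0
        f = ndimage.gaussian_filter(f, 3)
        return f / f.max() * 55.0        # scale so the strongest cell peaks near 55 dBZ

    def objects(field, thresh=30.0, min_px=5):
        mask = field >= thresh
        lab, n = ndimage.label(mask)                     # contiguous objects above threshold
        areas = ndimage.sum(mask, lab, range(1, n + 1))  # object areas in pixels
        return areas[areas >= min_px]

    rng = np.random.default_rng(1)
    obs, fcst = fake_field(rng, 12), fake_field(rng, 20)   # "forecast" seeds too many cells
    for name, f in (("obs", obs), ("fcst", fcst)):
        a = objects(f)
        print(f"{name}: {a.size} objects, total area {a.sum():.0f} px")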

  8. Sandia software guidelines: Software quality planning

    SciTech Connect

    Not Available

    1987-08-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies procedures to follow in producing a Software Quality Assurance Plan for an organization or a project, and provides an example project SQA plan. 2 figs., 4 tabs.

  9. Study on tracking technology of the moving object based on computer vision

    NASA Astrophysics Data System (ADS)

    Xu, Ranran; Xu, Zhenying; Li, Boquan

    2010-10-01

    Tracking of moving objects has been an active topic in visual tracking systems. In this paper, tracking algorithms are classified into four classes: correlation-based methods, boundary-based methods, model-based methods, and multifunctional methods. Based on an analysis of the advantages and disadvantages of these algorithms, a new tracking algorithm, integrating SSDA and an advanced CamShift algorithm, is put forward.
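
    As a sketch of the SSDA component only (the CamShift refinement is omitted), sequential similarity detection accumulates absolute differences in a randomized pixel order and abandons a candidate window as soon as the running error exceeds a threshold:

    import numpy as np

    def ssda_match(image, template, threshold):
        # Sequential Similarity Detection Algorithm: early termination makes most
        # candidate windows cheap to reject.
        H, W = image.shape
        h, w = template.shape
        flat = template.ravel()
        order = np.random.permutation(flat.size)    # randomized test order
        best, best_pos = np.inf, None
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                win = image[y:y + h, x:x + w].ravel()
                err, complete = 0.0, True
                for idx in order:
                    err += abs(float(win[idx]) - float(flat[idx]))
                    if err > threshold:             # early termination
                        complete = False
                        break
                if complete and err < best:
                    best, best_pos = err, (y, x)
        return best_pos, best

    img = np.random.default_rng(7).random((40, 40))
    print(ssda_match(img, img[10:18, 12:20].copy(), threshold=5.0))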

  10. EDS operator and control software

    SciTech Connect

    Ott, L.L.

    1985-04-01

    The Enrichment Diagnostic System (EDS) was developed at Lawrence Livermore National Laboratory (LLNL) to acquire, display and analyze large quantities of transient data for a real-time Atomic Vapor Laser Isotope Separation (AVLIS) experiment. Major topics discussed in this paper are the EDS operator interface (SHELL) program, the data acquisition and analysis scheduling software, and the graphics software. The workstation concept used in EDS, the software used to configure a user's workstation, and the ownership and management of a diagnostic are described. An EDS diagnostic is a combination of hardware and software designed to study specific aspects of the process. Overall system performance is discussed from the standpoint of scheduling techniques, evaluation tools, optimization techniques, and program-to-program communication methods. EDS is based on a data-driven design which keeps the need to modify software to a minimum. This design requires a fast and reliable data base management system. A third-party data base management product, Berkeley Software System Database, written explicitly for HP1000s, is used for all EDS data bases. All graphics are done with an in-house graphics product, Device Independent Graphics Library (DIGLIB). Examples of devices supported by DIGLIB are: Versatec printer/plotters, Raster Technologies Graphic Display Controllers, and HP terminals (HP264x and HP262x). The benefits derived from using HP hardware and software, as well as obstacles imposed by the HP environment, are presented in relation to EDS development and implementation.

  11. Online Assistants in Children's Hypermedia Software

    ERIC Educational Resources Information Center

    Garcia, Penny Ann

    2002-01-01

    The classroom teacher's comfort and familiarity with computers and software influences student-computer use in the classroom. Teachers remain mired in repetitive introduction of basic software mechanics and rarely progress with students to advanced concepts or complex applications. An Online Assistant (OLA) was developed to accompany the…

  12. Modeling and MBL: Software Tools for Science.

    ERIC Educational Resources Information Center

    Tinker, Robert F.

    Recent technological advances and new software packages put unprecedented power for experimenting and theory-building in the hands of students at all levels. Microcomputer-based laboratory (MBL) and model-solving tools illustrate the educational potential of the technology. These tools include modeling software and three MBL packages (which are…

  13. Antenna Controller Replacement Software

    NASA Technical Reports Server (NTRS)

    Chao, Roger Y.; Morgan, Scott C.; Strain, Martha M.; Rockwell, Stephen T.; Shimizu, Kenneth J.; Tehrani, Barzia J.; Kwok, Jaclyn H.; Tuazon-Wong, Michelle; Valtier, Henry; Nalbandi, Reza; Wert, Michael; Leung, Patrick

    2010-01-01

    The Antenna Controller Replacement (ACR) software accurately points and monitors the Deep Space Network (DSN) 70-m and 34-m high-efficiency (HEF) ground-based antennas that are used to track primarily spacecraft and, periodically, celestial targets. To track a spacecraft, or other targets, the antenna must be accurately pointed at the spacecraft, which can be very far away with very weak signals. ACR's conical scanning capability collects the signal in a circular pattern around the target, calculates the location of the strongest signal, and adjusts the antenna pointing to point directly at the spacecraft. A real-time, closed-loop servo control algorithm performed every 0.02 second allows accurate positioning of the antenna in order to track these distant spacecraft. Additionally, this advanced servo control algorithm provides better antenna pointing performance in windy conditions. The ACR software provides high-level commands that provide a very easy user interface for the DSN operator. The operator only needs to enter two commands to start the antenna and subreflector, and Master Equatorial tracking. The most accurate antenna pointing is accomplished by aligning the antenna to the Master Equatorial, which because of its small size and sheltered location, has the most stable pointing. The antenna has hundreds of digital and analog monitor points. The ACR software provides compact displays to summarize the status of the antenna, subreflector, and the Master Equatorial. The ACR software has two major functions. First, it performs all of the steps required to accurately point the antenna (and subreflector and Master Equatorial) at the spacecraft (or celestial target). This involves controlling the antenna/subreflector/Master-Equatorial hardware, initiating and monitoring the correct sequence of operations, calculating the position of the spacecraft relative to the antenna, executing the real-time servo control algorithm to maintain the correct position, and
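
    The conical-scan idea can be sketched numerically: sampling the received signal around the scan circle and fitting the first harmonic gives the direction and relative magnitude of the pointing error. All values below are hypothetical; this is an illustration of the principle, not the ACR algorithm itself:

    import numpy as np

    rng = np.random.default_rng(8)
    theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)   # scan-angle samples
    phi_true, depth = 0.8, 0.15          # hypothetical error direction and modulation
    s = 1.0 + depth * np.cos(theta - phi_true) + 0.01 * rng.standard_normal(theta.size)

    # least-squares fit of s(theta) = s0 + a*cos(theta) + b*sin(theta)
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    s0, a, b = np.linalg.lstsq(A, s, rcond=None)[0]
    phi_est = np.arctan2(b, a)           # direction of the pointing error
    depth_est = np.hypot(a, b) / s0      # proportional to error magnitude
    print(f"error direction {phi_est:.2f} rad, modulation depth {depth_est:.3f}")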

  14. Office Computer Software: A Comprehensive Review of Software Programs.

    ERIC Educational Resources Information Center

    Secretary, 1992

    1992-01-01

    Describes types of software including system software, application software, spreadsheets, accounting software, graphics packages, desktop publishing software, database, desktop and personal information management software, project and records management software, groupware, and shareware. (JOW)

  15. Quantifying the Relationship between Dynamical Cores and Physical Parameterizations by Object-Based Methods

    NASA Astrophysics Data System (ADS)

    Yorgun, M. S.; Rood, R. B.

    2011-12-01

    The behavior of atmospheric models is sensitive to the algorithms that are used to represent the equations of motion. Typically, comprehensive models are conceived in terms of the resolved fluid dynamics (i.e. the dynamical core) and subgrid, unresolved physics represented by parameterizations. Deterministic weather predictions are often validated with feature-by-feature comparison. Probabilistic weather forecasts and climate projections are evaluated with statistical methods. We seek to develop model evaluation strategies that identify "objects": coherent systems with an associated set of measurable parameters. This makes it possible to evaluate processes in models without needing to reproduce the time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. As a concrete example, we focus on analyzing how the choice of dynamical core impacts the representation of precipitation in the Pacific Northwest of the United States, Western Canada, and Alaska; this brings attention to the interaction of the resolved and the parameterized components of the model. Two dynamical cores are considered within the Community Atmosphere Model. These are the Spectral (Eulerian), which relies on global basis functions, and the Finite Volume (FV), which uses only local information. We introduce the concept of "meteorological realism": that is, do local representations of large-scale phenomena, for example, fronts and orographic precipitation, look like the observations? A follow-on question is, does the representation of these phenomena improve with resolution? Our approach to quantifying meteorological realism starts with identification and isolation of key features of orographic precipitation that are represented differently by the Spectral and FV models, using objective pattern recognition methods. Then we aim to quantitatively compare

  16. Identification of GCM Uncertainty of Dynamical Cores and Physical Parameterizations by Object-Based Methods

    NASA Astrophysics Data System (ADS)

    Yorgun, M. S.; Rood, R. B.

    2012-12-01

    time and location of, for example, a particular observed cloud system. Process- and object-based evaluation preserves information in the observations by avoiding the need for extensive spatial and temporal averaging. We posit that this approach will provide sound identification of model uncertainty by comparison to observations (i.e. GPCC gauge-based data), and will intrinsically link local, weather-scale phenomena to important climatological features and provide a quantitative bridge between weather and climate.

  17. Object-based Image Classification of Arctic Sea Ice and Melt Ponds through Aerial Photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.; Li, Z.; Lei, R.

    2013-12-01

    The last six years have marked the lowest Arctic summer sea ice extents in the modern era, with a new record summer minimum (3.4 million km2) set on 13 September 2012. It has been predicted that the Arctic could be free of summer ice within the next 25-30 years. The loss of Arctic summer ice could have serious consequences, such as higher water temperature due to the positive albedo feedback, more powerful and frequent storms, rising sea levels, diminished habitats for polar animals, and more pollution due to fossil fuel exploitation and/or increased traffic through the Northwest/Northeast Passage. In these processes, melt ponds play an important role in Earth's radiation balance, since they strongly absorb solar radiation rather than reflecting it as snow and ice do. Therefore, it is necessary to develop the ability to predict sea ice/melt pond extents and their space-time evolution, which is pivotal for preparing for the variation and uncertainty of the future environment and for political, economic, and military needs. Much effort has been put into Arctic sea ice modeling to simulate sea ice processes. However, these sea ice models were initiated and developed based on limited field surveys, aircraft, or satellite image data. Therefore, it is necessary to collect high resolution sea ice aerial photos in a systematic way to tune, validate, and improve models. Currently many sea ice aerial photos are available, such as those from the Chinese Arctic Exploration (CHINARE 2008, 2010, 2012), SHEBA 1998, and HOTRAX 2005. However, manually delineating sea ice and melt ponds from these images is time-consuming and labor-intensive. In this study, we use an object-based remote sensing classification scheme to extract sea ice and melt ponds efficiently from 1,727 aerial photos taken during CHINARE 2010. The algorithm includes three major steps as follows. (1) Image segmentation groups the neighboring pixels into objects according to the similarity of spectral and texture
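
    A toy version of step (1) plus a rule-based labelling of the resulting objects, using SLIC superpixels from scikit-image as a stand-in for the study's segmentation (random stand-in image; class thresholds purely illustrative):

    import numpy as np
    from skimage.segmentation import slic

    rng = np.random.default_rng(2)
    photo = rng.random((300, 300, 3))        # stand-in for an RGB aerial photo in [0, 1]

    segs = slic(photo, n_segments=400, compactness=10, start_label=1)
    classes = {}
    for i in np.unique(segs):
        r, g, b = photo[segs == i].mean(axis=0)   # mean object colour
        bright = (r + g + b) / 3.0
        if bright > 0.7:
            classes[i] = "ice/snow"
        elif bright < 0.35:
            classes[i] = "open water"
        elif b > r:
            classes[i] = "melt pond"              # ponds tend to look blue
        else:
            classes[i] = "bare ice"
    print({c: sum(v == c for v in classes.values()) for c in set(classes.values())})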

  18. Software Model Of Software-Development Process

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Synott, Debra J.; Levary, Reuven R.

    1990-01-01

    Collection of computer programs constitutes software tool for simulation of medium- to large-scale software-development projects. Necessary to include easily identifiable and more-readily quantifiable characteristics like costs, times, and numbers of errors. Mathematical model incorporating these and other factors of dynamics of software-development process implemented in the Software Life Cycle Simulator (SLICS) computer program. Simulates dynamics of software-development process. In combination with input and output expert software systems and knowledge-based management software system, develops information for use in managing large software-development project. Intended to aid managers in planning, managing, and controlling software-development processes by reducing uncertainties in budgets, required personnel, and schedules.

  19. RELAP-7 Software Verification and Validation Plan

    SciTech Connect

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    2014-09-25

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s capability and extends the analysis capability for all reactor system simulation scenarios.

  20. Advanced Satellite Hardware/Software System Study.

    DTIC Science & Technology

    1980-04-15

    Ratio mode computes each output pixel from normalized channels. Mathematically: Yk = Fk (Xk / Xi) + Bk, where i, k = 1, 2, ..., 6; Xi is the input pixel amplitude for channel i; Yk is the output pixel amplitude for channel k; Fk is a multiplicative scale factor for channel k (default 128, or 255 depending on mode); Bk is a bias scale factor for channel k (default 0); and N is an analyst-specified number. Difference/sum mode ratios the difference and sum, respectively, of a user

  1. Advanced Concept Modeling

    NASA Technical Reports Server (NTRS)

    Chaput, Armand; Johns, Zachary; Hodges, Todd; Selfridge, Justin; Bevirt, Joeben; Ahuja, Vivek

    2015-01-01

    Advanced Concepts Modeling software validation, analysis, and design. This was a National Institute of Aerospace contract with many components. Efforts ranged from software development and validation for structures and aerodynamics, through flight control development and aeropropulsive analysis, to UAV piloting services.

  2. Software to Manage the Unmanageable

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In 1995, NASA's Jet Propulsion Laboratory (JPL) contracted Redmond, Washington-based Lucidoc Corporation to design a technology infrastructure to automate the intersection between policy management and operations management with advanced software that automates document workflow, document status, and uniformity of document layout. JPL had very specific parameters for the software. It expected to store and catalog over 8,000 technical and procedural documents integrated with hundreds of processes. The project ended in 2000, but NASA still uses the resulting highly secure document management system, and Lucidoc has since helped other organizations, large and small, integrate document flow and operations management to ensure a compliance-ready culture.

  3. Computer systems and software engineering

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  4. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  5. Book and Software Review.

    ERIC Educational Resources Information Center

    Wissick, Cheryl

    2000-01-01

    This introductory column on books and software concerned with special education technology presents an article by JuHye Yook on the software design process. It discusses the rationale for developing new software for students with reading disabilities, the design and development process, and analysis of the software design. Software use by two…

  6. Other People's Software

    NASA Astrophysics Data System (ADS)

    Mandel, E.; Murray, S. S.

    Why do we continually re-invent the astronomical software wheel? Why is it so difficult to use "other people's software"? Leaving aside issues such as money, power, and control, we need to investigate practically how we can remove barriers to software sharing. This paper will offer a starting point for software cooperation, centered on the concept of "minimal software buy-in".

  7. Application of Object Based Image Analysis (OBIA) in detecting and quantifying forest loss caused by artisanal gold mining activities in Upper Mazaruni River Basin, Guyana

    NASA Astrophysics Data System (ADS)

    Mengisteab, B. S.; Blesius, L.; Hennessy, L.

    2014-12-01

    Artisanal gold mining in Guyana is mostly done in forest-covered areas, causing forest loss and land degradation. Data from the Guyana Geology and Mines Commission show there has been an increase of 2,074% between 1986 and 2009. Our analysis of Landsat data between 1986 and 2013 for a part of the Upper Mazaruni area shows an increase from 34.2 to 278.4 hectares, which amounts to more than 800%. While the frequent coverage of Landsat data is useful for multitemporal studies, its lower resolution may not be adequate for accurate detection of mining sites. Therefore, RapidEye imagery from 2011 at a resolution of 5 m was used to detect gold mining activity and to compare the results with the Landsat study. Processing was conducted in eCognition, an object-based image analysis (OBIA) software package. OBIA is an image processing technique that has proven advantageous over traditional pixel-based image processing techniques, the primary advantage being its ability to combine spatial and spectral information. The satellite image was subjected to segmentation at multiple scales and classified using fuzzy sets of membership functions. Classification explicitly incorporated the different scales in order to accommodate different sizes of real-world objects, and spatial relationships were utilized to establish connections between related objects. For example, the presence or absence of water in pits, or the existence of sediments in the river, may serve as additional indicators of mining sites besides the spectral components. Preliminary results show that the OBIA approach was able to successfully detect and quantify small-scale mining activities in the basin, and that the Landsat data gave an acceptable estimate of mining sites over time. Keywords: Object-Based Image Analysis, Gold Mining, Remote Sensing, Guyana
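
    Fuzzy membership classification of image objects, as used above, can be sketched with a trapezoidal membership function and a fuzzy AND; the feature names and thresholds below are hypothetical, not the study's rule set:

    def trapezoid(x, a, b, c, d):
        # trapezoidal membership: 0 below a, rises to 1 on [b, c], falls to 0 at d
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def mining_membership(mean_brightness, mean_ndvi):
        # hypothetical rule: a mining-pit object is bright bare soil with low NDVI
        bright_soil = trapezoid(mean_brightness, 0.30, 0.45, 0.80, 0.95)
        sparse_veg = trapezoid(-mean_ndvi, -0.30, -0.10, 1.00, 1.10)  # high when NDVI is low
        return min(bright_soil, sparse_veg)   # fuzzy AND via the minimum operator

    print(mining_membership(0.60, 0.05))      # -> 1.0 for this hypothetical object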

  8. The NASA Software Management and Assurance Program

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Wilson, W. M.

    1983-01-01

    A committee of experienced managers representing each major NASA organization guides a major development effort which seeks to ensure the acquisition of economical and reliable software for more complex future programs. The primary, near-term development objectives of the NASA Software Management and Assurance Program are: (1) to provide the best management and technical guidance available; (2) to facilitate the use of proven tools, techniques, and information; and (3) to maintain a pool of highly qualified software personnel. The software development tasks addressed involve such problems as satellite end-to-end architecture and advanced aircraft guidance and control systems.

  9. Software attribute visualization for high integrity software

    SciTech Connect

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  10. Software Vulnerability Taxonomy Consolidation

    SciTech Connect

    Polepeddi, Sriram S.

    2004-12-07

    In today's environment, computers and networks are increasingly exposed to a number of software vulnerabilities. Information about these vulnerabilities is collected and disseminated via various large publicly available databases such as BugTraq, OSVDB and ICAT. Each of these databases, individually, does not cover all aspects of a vulnerability, and they lack a standard format among them, making it difficult for end-users to easily compare various vulnerabilities. A central database of vulnerabilities has not been available until today for a number of reasons, such as the non-uniform methods by which current vulnerability database providers receive information, disagreement over which features of a particular vulnerability are important and how best to present them, and the non-utility of the information presented in many databases. The goal of this software vulnerability taxonomy consolidation project is to address the need for a universally accepted vulnerability taxonomy that classifies vulnerabilities in an unambiguous manner. A consolidated vulnerability database (CVDB) was implemented that coalesces and organizes vulnerability data from disparate data sources. Based on the work done in this paper, there is strong evidence that a consolidated taxonomy encompassing and organizing all relevant data can be achieved. However, three primary obstacles remain: lack of a common "primary key" for referencing, unstructured and free-form descriptions of necessary vulnerability data, and lack of data on all aspects of a vulnerability. This work has only considered data that can be unambiguously extracted from various data sources by straightforward parsers. It is felt that even with the use of more advanced information mining tools, which can wade through the sea of unstructured vulnerability data, this current integration methodology would still provide repeatable, unambiguous, and exhaustive results. Though the goal of coalescing all available data, which would be of use to
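
    A consolidation step can be sketched as merging per-source records under a shared key. Note that the work above identifies the lack of such a common "primary key" as a primary obstacle, so the CVE-style key and field names below are purely illustrative:

    # hypothetical per-source records sharing a CVE-style key
    bugtraq = {"CVE-2004-0001": {"title": "sample overflow", "platforms": ["linux"]}}
    osvdb = {"CVE-2004-0001": {"description": "buffer overflow in ...", "severity": "high"}}
    icat = {"CVE-2004-0001": {"cvss": 7.5}}

    def consolidate(*sources):
        cvdb = {}
        for src in sources:
            for key, record in src.items():
                cvdb.setdefault(key, {}).update(record)  # later sources fill missing fields
        return cvdb

    print(consolidate(bugtraq, osvdb, icat)["CVE-2004-0001"])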

  11. Controlling Software Piracy.

    ERIC Educational Resources Information Center

    King, Albert S.

    1992-01-01

    Explains what software manufacturers are doing to combat software piracy, recommends how managers should deal with this problem, and provides a role-playing exercise to help students understand the issues in software piracy. (SR)

  12. Report: Scientific Software.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1985-01-01

    Discusses various aspects of scientific software, including evaluation and selection of commercial software products; program exchanges, catalogs, and other information sources; major data analysis packages; statistics and chemometrics software; and artificial intelligence. (JN)

  13. Hierarchical object-based classification of ultra-high-resolution digital mapping camera (DMC) imagery for rangeland mapping and assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Ultra high resolution digital aerial photography has great potential to complement or replace ground measurements of vegetation cover for rangeland monitoring and assessment. We investigated object-based image analysis (OBIA) techniques for classifying vegetation in southwestern U.S. arid rangelands...

  14. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  15. Same Old Story: The Problem of Object-Based Thinking as a Basis for Teaching Distant Places

    ERIC Educational Resources Information Center

    Martin, Fran

    2013-01-01

    The English Geography National Curriculum encourages primary teachers to focus on similarities and differences when teaching distant places. The issues this raises are particularly acute when teaching geography in the context of the Global South. In this article I argue that comparisons based on object-based thinking can lead to views of the…

  16. GENI: Grid Hardware and Software

    SciTech Connect

    2012-01-09

    GENI Project: The 15 projects in ARPA-E’s GENI program, short for “Green Electricity Network Integration,” aim to modernize the way electricity is transmitted in the U.S. through advances in hardware and software for the electric grid. These advances will improve the efficiency and reliability of electricity transmission, increase the amount of renewable energy the grid can utilize, and provide energy suppliers and consumers with greater control over their power flows in order to better manage peak power demand and cost.

  17. Exploring the Process of Adult Computer Software Training Using Andragogy, Situated Cognition, and a Minimalist Approach

    ERIC Educational Resources Information Center

    Hurt, Andrew C.

    2007-01-01

    With technology advances, computer software becomes increasingly difficult to learn. Adults often rely on software training to keep abreast of these changes. Instructor-led software training is frequently used to teach adults new software skills; however there is limited research regarding the best practices in adult computer software training.…

  18. Software Engineering Guidebook

    NASA Technical Reports Server (NTRS)

    Connell, John; Wenneson, Greg

    1993-01-01

    The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.

  19. An object-based image analysis approach for aquaculture ponds precise mapping and monitoring: a case study of Tam Giang-Cau Hai Lagoon, Vietnam.

    PubMed

    Virdis, Salvatore Gonario Pasquale

    2014-01-01

    Monitoring and mapping shrimp farms, including their impact on land cover and land use, is critical to the sustainable management and planning of coastal zones. In this work, a methodology was proposed to set up a cost-effective and reproducible procedure that made use of satellite remote sensing, an object-based classification approach, and open-source software for mapping aquaculture areas with high planimetric and thematic accuracy between 2005 and 2008. The analysis focused on two characteristic areas of interest of the Tam Giang-Cau Hai Lagoon (in central Vietnam), which have farming systems similar to other coastal aquaculture worldwide: the first was primarily characterised by what are locally referred to as "low tide" shrimp ponds, which are partially submerged areas; the second by earthed shrimp ponds, locally referred to as "high tide" ponds, which are non-submerged areas on the lagoon coast. The approach was based on region-growing segmentation of high- and very high-resolution panchromatic images, SPOT5 and Worldview-1, and the unsupervised clustering classifier ISOSEG embedded in the non-commercial SPRING software. The results, the accuracy of which was tested with a field-based aquaculture inventory, showed that in favourable situations (high tide shrimp ponds), the classification provided high rates of accuracy (>95 %) through a fully automatic object-based classification. In unfavourable situations (low tide shrimp ponds), the performance degraded due to the low contrast between the water and the pond embankments. In these situations, the automatic results were improved by manual delineation of the embankments. Worldview-1, as expected, showed better thematic accuracy, and precise maps have been realised at a scale of up to 1:2,000. However, SPOT5 provided comparable results in terms of the number of correctly classified ponds, but less accurate results in terms of the precision of mapped features. The procedure also demonstrated high degrees of reproducibility

  20. MemAxes Visualization Software

    SciTech Connect

    2014-08-28

    Hardware advancements such as Intel's PEBS and AMD's IBS, as well as software developments such as the perf_event API in Linux have made available the acquisition of memory access samples with performance information. MemAxes is a visualization and analysis tool for memory access sample data. By mapping the samples to their associated code, variables, node topology, and application dataset, MemAxes provides intuitive views of the data.

  1. Computer Software for Displaying Map Projections and Comparing Distortions.

    ERIC Educational Resources Information Center

    Wikle, Thomas

    1991-01-01

    Discusses software that educators can use to teach about distortions associated with alternative map projections. Examines the Projection, MicroCAM, World, and Atlas-GIS software programs. Suggests using the software in either introductory or more advanced courses dealing with map design or thematic cartography. Explains how to obtain the…

  2. Proceedings, Conference on the Computing Environment for Mathematical Software

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Recent advances in software and hardware technology which make it economical to create computing environments appropriate for specialized applications are addressed. Topics included software tools, FORTRAN standards activity, and features of languages, operating systems, and hardware that are important for the development, testing, and maintenance of mathematical software.

  3. Software Configuration Management Guidebook

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.

  4. Image Processing Software

    NASA Astrophysics Data System (ADS)

    Bosio, M. A.

    1990-11-01

    A brief description of astronomical image-processing software is presented. This software was developed on a Digital MicroVAX II computer system. DATA ANALYSIS - IMAGE PROCESSING

  5. Decentralized Software Evolution

    DTIC Science & Technology

    2003-09-01

    Peyman Oreizy and Richard N. Taylor, Institute for Software Research, University of California, Irvine, Irvine, CA 92697-3425 USA. Technical reports available at www.isr.uci.edu/tech-reports.html.

  6. Agile Software Development

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2008-01-01

    Many software development firms are now adopting the agile software development method. This method involves the customer at every level of software development, thus reducing the impact of change in the requirement at a later stage. In this article, the principles of the agile method for software development are explored and there is a focus on…

  7. Complexity, Systems, and Software

    DTIC Science & Technology

    2014-08-14

    Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213. The Software Engineering Institute is a federally funded research and development center sponsored by the United States Department of Defense.

  8. Finding Helpful Software Reviews.

    ERIC Educational Resources Information Center

    Kruse, Ted, Comp.

    1987-01-01

    Provides a list of evaluation services currently producing critical reviews of educational software. Includes information about The Apple K-12 Curriculum Software Reference, The Educational Software Preview, The Educational Software Selector, MicroSIFT, and Only The Best: The Discriminating Guide for Preschool-Grade 12. (TW)

  9. Software distribution using xnetlib

    SciTech Connect

    Dongarra, J.J. |; Rowan, T.H.; Wade, R.C.

    1993-06-01

    Xnetlib is a new tool for software distribution. Whereas its predecessor netlib uses e-mail as the user interface to its large collection of public-domain mathematical software, xnetlib uses an X Window interface and socket-based communication. Xnetlib makes it easy to search through a large distributed collection of software and to retrieve requested software in seconds.

  10. Software productivity improvement through software engineering technology

    NASA Technical Reports Server (NTRS)

    Mcgarry, F. E.

    1985-01-01

    It has been estimated that NASA expends anywhere from 6 to 10 percent of its annual budget on the acquisition, implementation, and maintenance of computer software. Although researchers have produced numerous software engineering approaches over the past 5-10 years, each claiming to be more effective than the others, there is very limited quantitative information verifying the measurable impact that any of these technologies may have in a production environment. At NASA/GSFC, an extended research effort aimed at identifying and measuring software techniques that favorably impact the productivity of software development has been active over the past 8 years. Specific, measurable software development technologies have been applied and measured in a production environment. The resulting software development approaches have been shown to be effective in improving both quality and productivity in this one environment.

  11. Software Formal Inspections Standard

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.

  12. Study of key technology of ghost imaging via compressive sensing for a phase object based on phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2015-07-01

    In this article, the algorithm of compressive sensing is used to improve the imaging resolution and realize ghost imaging via compressive sensing for a phase object, based on a theoretical analysis of the lensless Fourier imaging of the ghost imaging algorithm based on phase-shifting digital holography. The algorithm of ghost imaging via compressive sensing based on phase-shifting digital holography uses a bucket detector to measure the total light intensity of the interference, and the four-step phase-shifting method is used to obtain the total light intensity of the differential interference light. An experimental platform is built based on the software simulation, and the results show that the algorithm of ghost imaging via compressive sensing based on phase-shifting digital holography can obtain a high-resolution phase distribution figure of the phase object. For the same number of samplings, the phase clarity of the phase distribution figure obtained by ghost imaging via compressive sensing based on phase-shifting digital holography is higher than that obtained by ghost imaging based on phase-shifting digital holography alone. This study further extends the application range of ghost imaging and obtains the phase distribution of the phase object.
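
    The compressive-sensing reconstruction step (not the phase-shifting holography itself) can be sketched for a real-valued sparse object: random speckle patterns play the role of the measurement matrix, and the bucket-detector totals are inverted with an L1-regularized solver. All sizes and values are hypothetical:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    n_pix, n_meas = 256, 96                  # compressive: fewer measurements than pixels
    x = np.zeros(n_pix)
    x[rng.choice(n_pix, 8, replace=False)] = rng.random(8) + 0.5   # sparse "object"

    A = rng.integers(0, 2, (n_meas, n_pix)).astype(float)  # random binary speckle patterns
    y = A @ x                                              # bucket-detector total per pattern

    x_rec = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(A, y).coef_
    print("relative reconstruction error:",
          float(np.linalg.norm(x_rec - x) / np.linalg.norm(x)))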

  13. Software component quality evaluation

    NASA Technical Reports Server (NTRS)

    Clough, A. J.

    1991-01-01

    The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.

  14. Responsibility for unreliable software

    SciTech Connect

    Wahl, N.J.

    1994-12-31

    Unreliable software exposes software developers and distributors to legal risks. Under certain circumstances, the developer and distributor of unreliable software can be sued. To avoid lawsuits, software developers should do the following: determine what the risks are, understand the extent of the risks, and identify ways of avoiding the risks and lessening their consequences. Liability issues associated with unreliable software are explored in this article.

  15. Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition

    NASA Astrophysics Data System (ADS)

    Xiao, Pengfeng; Zhang, Xueliang; Wang, Dongguang; Yuan, Min; Feng, Xuezhi; Kelly, Maggi

    2016-09-01

    This study proposed a new framework that combines pixel-level change detection and object-level recognition to detect changes of built-up land from high-spatial-resolution remote sensing images. First, an adaptive differencing method was designed to detect changes at the pixel level based on both spectral and textural features. Next, the changed pixels were subjected to a set of morphological operations to improve their completeness and to generate changed objects, achieving the transition of change detection from the pixel level to the object level. The changed objects were further recognised through the difference of the morphological building index in the two phases to indicate changed objects on built-up land. The transformation from changed pixels to changed objects makes the proposed framework distinct from both pixel-based and object-based change detection methods. Compared with pixel-based methods, the proposed framework improves change detection capability through the transformation and successive recognition of objects. Compared with object-based methods, the proposed framework avoids the issue of multitemporal segmentation and can generate changed objects directly from changed pixels. The experimental results show the effectiveness of the transformation from changed pixels to changed objects and of the successive object-based recognition in improving detection accuracy, which justifies the application potential of the proposed change detection framework.
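
    The pixel-to-object transition can be sketched with standard morphology: close and open the changed-pixel mask, label connected components, and filter by area. The mask below is synthetic, and the structuring elements and thresholds are illustrative, not the paper's:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(4)
    changed = rng.random((200, 200)) > 0.97        # noisy pixel-level change mask
    changed[60:90, 60:110] = True                  # one genuine changed patch

    # morphological cleanup: close small gaps, then remove isolated pixels
    mask = ndimage.binary_closing(changed, structure=np.ones((3, 3)))
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))

    labels, n = ndimage.label(mask)                # changed pixels -> candidate objects
    areas = ndimage.sum(mask, labels, range(1, n + 1))
    print("candidate objects:", n, "| objects passing 50-px area filter:",
          int((areas >= 50).sum()))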

  16. Support vector machines and object-based classification for obtaining land-use/cover cartography from Hyperion hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Petropoulos, George P.; Kalaitzidis, Chariton; Prasad Vadrevu, Krishna

    2012-04-01

    The Hyperion hyperspectral sensor has the highest spectral resolution, acquiring spectral information of Earth's surface objects in 242 spectral bands at a spatial resolution of 30 m. In this study, we evaluate the performance of the Hyperion sensor in conjunction with two different classification algorithms for delineating land use/cover in a typical Mediterranean setting. The algorithms are pixel-based support vector machines (SVMs) and an object-based classification algorithm. Validation of the land-use/cover maps derived from the two algorithms was performed through error matrix statistics using validation points from very high resolution QuickBird imagery. Results suggested that both classifiers are highly useful for mapping land use/cover in the study region, with the object-based approach slightly outperforming the SVM classification in overall classification accuracy and Kappa statistics. Results from statistical significance testing using McNemar's chi-square test confirmed the superiority of the object-oriented approach compared to SVM. The relative strengths and weaknesses of the two classification algorithms for land-use/cover mapping studies are highlighted. Overall, our results underline the potential of hyperspectral remote sensing data together with an object-based classification approach for mapping land use/cover in Mediterranean regions.
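
    McNemar's chi-square test compares two classifiers on the same validation points using only the discordant pairs; a minimal sketch with hypothetical counts:

    from scipy.stats import chi2

    # discordant pairs on shared validation points (hypothetical counts):
    # b = correct for the object-based classifier only, c = correct for SVM only
    b, c = 88, 52
    stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected McNemar chi-square
    p_value = chi2.sf(stat, df=1)
    print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")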

  17. Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets

    PubMed Central

    Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2010-01-01

    Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory

  18. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and achieving a degree of excellence and refinement in a project or product. Software quality is a set of attributes by which a software product's quality is described and evaluated; the set includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process used to develop a product: the process is measured to improve it, and the product is measured to increase quality throughout the software life cycle. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce it, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that had been used in other projects but were not currently being used by the SA team, and to report them to the Software Assurance team to see whether any could be implemented in their software assurance life cycle process.

  19. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph, which connects the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
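
    To make the flow-graph idea concrete, here is a minimal sketch in Python (the flight implementation is C/C++): each block runs in its own thread and the blocks communicate through OS pipes, so the operating system schedules them across however many cores exist. The three toy blocks and their byte-doubling "signal processing" are illustrative, not part of the NASA design.

```python
import os
import threading

def source(w):
    # Produce a few samples into the downstream pipe.
    with os.fdopen(w, "wb") as out:
        for i in range(5):
            out.write(bytes([i]))

def scale(r, w):
    # Stand-in for a real modulator/demodulator stage: double each sample.
    with os.fdopen(r, "rb") as inp, os.fdopen(w, "wb") as out:
        while (b := inp.read(1)):
            out.write(bytes([(b[0] * 2) & 0xFF]))

def sink(r):
    with os.fdopen(r, "rb") as inp:
        print(list(inp.read()))

# Wire the flow graph source -> scale -> sink with two pipes,
# then give each block its own thread.
r1, w1 = os.pipe()
r2, w2 = os.pipe()
blocks = [threading.Thread(target=source, args=(w1,)),
          threading.Thread(target=scale, args=(r1, w2)),
          threading.Thread(target=sink, args=(r2,))]
for t in blocks:
    t.start()
for t in blocks:
    t.join()
```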

  1. Software Engineering Program: Software Process Improvement Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this document is to provide experience-based guidance in implementing a software process improvement program in any NASA software development or maintenance community. This guidebook details how to define, operate, and implement a working software process improvement program. It describes the concept of the software process improvement program and its basic organizational components. It then describes the structure, organization, and operation of the software process improvement program, illustrating all these concepts with specific NASA examples. The information presented in the document is derived from the experiences of several NASA software organizations, including the SEL, the SEAL, and the SORCE. Their experiences reflect many of the elements of software process improvement within NASA. This guidebook presents lessons learned in a form usable by anyone considering establishing a software process improvement program within his or her own environment. This guidebook attempts to balance general and detailed information. It provides material general enough to be usable by NASA organizations whose characteristics do not directly match those of the sources of the information and models presented herein. It also keeps the ideas sufficiently close to the sources of the practical experiences that have generated the models and information.

  2. Space Station Software Recommendations

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor)

    1985-01-01

    Four panels of invited experts and NASA representatives focused on the following topics: software management, software development environment, languages, and software standards. Each panel deliberated in private, held two open sessions with audience participation, and developed recommendations for the NASA Space Station Program. The major thrusts of the recommendations were as follows: (1) The software management plan should establish policies, responsibilities, and decision points for software acquisition; (2) NASA should furnish a uniform modular software support environment and require its use for all space station software acquired (or developed); (3) The language Ada should be selected for space station software, and NASA should begin to address issues related to the effective use of Ada; and (4) The space station software standards should be selected (based upon existing standards where possible), and an organization should be identified to promulgate and enforce them. These and related recommendations are described in detail in the conference proceedings.

  3. Achieving strategic surety for high consequence software

    SciTech Connect

    Pollock, G.M.

    1996-09-01

    A strategic surety roadmap for high consequence software systems, developed under the High Integrity Software (HIS) Program at Sandia National Laboratories, guides research in identifying methodologies to improve software surety. Selected research tracks within this roadmap are identified and described, detailing current technology and outlining the advancements to be pursued over the coming decade to reach HIS goals. The tracks discussed herein focus on Correctness by Design and System Immunology™. Specific projects are discussed, with greater detail given on projects involving Correct Specification via Visualization, Synthesis, & Analysis; Visualization of Abstract Objects; and Correct Implementation of Components.

  4. An electrophysiological study of the object-based correspondence effect: is the effect triggered by an intended grasping action?

    PubMed

    Lien, Mei-Ching; Jardin, Elliott; Proctor, Robert W

    2013-11-01

    We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis's (Psychological Science 23:152-157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object's graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision-action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.'s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.

  5. Unsupervised change detection in VHR remote sensing imagery - an object-based clustering approach in a dynamic urban environment

    NASA Astrophysics Data System (ADS)

    Leichtle, Tobias; Geiß, Christian; Wurm, Michael; Lakes, Tobia; Taubenböck, Hannes

    2017-02-01

    Monitoring of changes is one of the most important inherent capabilities of remote sensing. The steadily increasing amount of available very high resolution (VHR) remote sensing imagery requires highly automatic methods and thus largely unsupervised concepts for change detection. In addition, new procedures that address this challenge should be capable of handling remote sensing data acquired by different sensors. Especially in rapidly changing, complex urban environments, the high level of detail present in VHR data calls for object-based concepts for change detection. This paper presents a novel object-based approach for unsupervised change detection with a focus on individual buildings. First, a principal component analysis, together with a unique procedure for determining the number of relevant principal components, is performed as a precursor to change detection. Second, k-means clustering is applied to discriminate changed from unchanged buildings. In this manner, several groups of object-based difference features that can be derived from multi-temporal VHR data are evaluated regarding their discriminative properties for change detection. In addition, the influence of deviating viewing geometries when using VHR data acquired by different sensors is quantified. Overall, the proposed workflow returned viable results, with κ statistics on the order of 0.8-0.9 and beyond for different groups of features, which demonstrates its suitability for unsupervised change detection in dynamic urban environments. With respect to imagery from different sensors, deviating viewing geometries were found to deteriorate the change detection result only slightly, by up to 0.04 in terms of κ statistics, which underlines the robustness of the proposed approach.
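
    The two-stage core of the workflow (dimensionality reduction, then two-class clustering) can be sketched in a few lines; the feature matrix below is synthetic, and keeping three principal components is an arbitrary placeholder for the paper's own component-selection procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # 200 building objects x 12 difference features (synthetic)

# Stage 1: project the object-based difference features onto the
# leading principal components.
Z = PCA(n_components=3).fit_transform(X)

# Stage 2: k-means with k = 2 separates changed from unchanged buildings.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))       # objects per cluster
```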

  6. The Legacy of Space Shuttle Flight Software

    NASA Technical Reports Server (NTRS)

    Hickey, Christopher J.; Loveall, James B.; Orr, James K.; Klausman, Andrew L.

    2011-01-01

    The initial goals of the Space Shuttle Program required that the avionics and software systems blaze new trails in advancing avionics system technology. Many of the requirements placed on avionics and software were accomplished for the first time on this program. Examples include comprehensive digital fly-by-wire technology, use of a digital databus for flight critical functions, fail operational/fail safe requirements, complex automated redundancy management, and the use of a high-order software language for flight software development. In order to meet the operational and safety goals of the program, the Space Shuttle software had to be extremely high quality, reliable, robust, reconfigurable and maintainable. To achieve this, the software development team evolved a software process focused on continuous process improvement and defect elimination that consistently produced highly predictable and top quality results, providing software managers the confidence needed to sign each Certificate of Flight Readiness (COFR). This process, which has been appraised at Capability Maturity Model (CMM)/Capability Maturity Model Integration (CMMI) Level 5, has resulted in one of the lowest software defect rates in the industry. This paper will present an overview of the evolution of the Primary Avionics Software System (PASS) project and processes over thirty years, an argument for strong statistical control of software processes with examples, an overview of the success story for identifying and driving out errors before flight, a case study of the few significant software issues and how they were either identified before flight or slipped through the process onto a flight vehicle, and identification of the valuable lessons learned over the life of the project.

  7. Software Engineering Improvement Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  8. Guidelines for software inspections

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Quality control inspections are software problem-finding procedures that provide defect removal as well as improvements in software functionality, maintenance, quality, and development and testing methodology. The many side benefits include education, documentation, training, and scheduling.

  9. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  10. Design software for reuse

    NASA Technical Reports Server (NTRS)

    Tracz, Will

    1990-01-01

    Viewgraphs are presented on the designing of software for reuse. Topics include terminology, software reuse maxims, the science of programming, an interface design example, a modularization example, and reuse and implementation guidelines.

  11. Methodology for automating software systems. Task 1 of the foundations for automating software systems

    NASA Technical Reports Server (NTRS)

    Moseley, Warren

    1989-01-01

    The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool (PMCT) is based on the Apple Macintosh system, employing available software including Focal Point II, Hypercard, XRefText, and MacProject. These programs are functional in themselves, but through advanced linking are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI); in the first prototype of the PMCT, however, the AI sections are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.

  12. Intelligent software for laboratory automation.

    PubMed

    Whelan, Ken E; King, Ross D

    2004-09-01

    The automation of laboratory techniques has greatly increased the number of experiments that can be carried out in the chemical and biological sciences. Until recently, this automation has focused primarily on improving hardware. Here we argue that future advances will concentrate on intelligent software to integrate physical experimentation and results analysis with hypothesis formulation and experiment planning. To illustrate our thesis, we describe the 'Robot Scientist' - the first physically implemented example of such a closed loop system. In the Robot Scientist, experimentation is performed by a laboratory robot, hypotheses concerning the results are generated by machine learning and experiments are allocated and selected by a combination of techniques derived from artificial intelligence research. The performance of the Robot Scientist has been evaluated by a rediscovery task based on yeast functional genomics. The Robot Scientist is proof that the integration of programmable laboratory hardware and intelligent software can be used to develop increasingly automated laboratories.

  13. The Software Architecture of the Upgraded ESA DRAMA Software Suite

    NASA Astrophysics Data System (ADS)

    Kebschull, Christopher; Flegel, Sven; Gelhaus, Johannes; Mockel, Marek; Braun, Vitali; Radtke, Jonas; Wiedemann, Carsten; Vorsmann, Peter; Sanchez-Ortiz, Noelia; Krag, Holger

    2013-08-01

    In the beginnings of man's space flight activities there was the belief that space is so big that everybody could use it without any repercussions. However, during the last six decades the increasing use of Earth's orbits has led to rapid growth in the space debris environment, which has a significant influence on current and future space missions. For this reason ESA issued the "Requirements on Space Debris Mitigation for ESA Projects" [1] in 2008, which apply to all ESA missions henceforth. The DRAMA (Debris Risk Assessment and Mitigation Analysis) software suite had been developed to support the planning of space missions to comply with these requirements. During the last year the DRAMA software suite has been upgraded under ESA contract by TUBS and DEIMOS to include additional tools and increase the performance of existing ones. This paper describes the overall software architecture of the ESA DRAMA software suite. Specifically, the new graphical user interface, which manages the five main tools ARES (Assessment of Risk Event Statistics), MIDAS (MASTER-based Impact Flux and Damage Assessment Software), OSCAR (Orbital Spacecraft Active Removal), CROC (Cross Section of Complex Bodies) and SARA (Re-entry Survival and Risk Analysis), is discussed. The advancements are highlighted, as well as the challenges that arise from the integration of the five tool interfaces. A framework had been developed at the ILR and was used for MASTER-2009 and PROOF-2009. The Java-based GUI framework enables cross-platform deployment, and its underlying model-view-presenter (MVP) software pattern meets strict design requirements necessary to ensure a robust and reliable method of operation in an environment where the GUI is separated from the processing back-end. While the GUI framework has evolved with each project, allowing an increasing degree of integration of services like validators for input fields, it has also increased in complexity. The paper will conclude with an outlook on
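
    The model-view-presenter separation described above can be illustrated with a minimal sketch (in Python rather than the suite's Java; all class names are invented): the presenter mediates between the GUI and the processing back-end, so the view never touches the tools directly and can be replaced by a test double.

```python
class Model:
    """Stand-in for the processing back-end (e.g., an ARES/MIDAS run)."""
    def run_tool(self, params):
        return f"results for {params}"

class View:
    """Stand-in for the GUI; knows nothing about the back-end."""
    def get_input(self):
        return "sample-mission"
    def show(self, text):
        print(text)

class Presenter:
    """Mediates: reads from the view, calls the model, updates the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_run_clicked(self):
        self.view.show(self.model.run_tool(self.view.get_input()))

Presenter(Model(), View()).on_run_clicked()
```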

  14. Software verification and testing

    NASA Technical Reports Server (NTRS)

    1985-01-01

    General procedures for software verification and validation are provided as a guide for managers, programmers, and analysts involved in software development. The verification and validation procedures described are based primarily on testing techniques. Testing refers to the execution of all or part of a software system for the purpose of detecting errors. Planning, execution, and analysis of tests are outlined in this document. Code reading and static analysis techniques for software verification are also described.

  15. Microcomputer Software Collections.

    ERIC Educational Resources Information Center

    Demas, Samuel

    1985-01-01

    Presents overview of special considerations in developing microcomputer software collections, review of standardized cataloging practices, and discussion of problems of selection and acquisition of software. Policies governing loan procedures for microcomputer software which involve four types of copy protection (patent, trade secret, contract,…

  16. Benchmarking Software Assurance Implementation

    DTIC Science & Technology

    2011-05-18

    Topics include process-focused assessment; management systems (ISO 9001, ISO 27001, ISO 20000); Capability Maturity Models (CMMI); executive leadership commitment; translating ROI into project-manager vocabulary (cost, schedule, quality); starting small and building; collaboration; and a shared vocabulary across software acquisition, information assurance, project management, systems engineering, and software engineering.

  17. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  18. Specifications for Thesaurus Software.

    ERIC Educational Resources Information Center

    Milstead, Jessica L.

    1991-01-01

    Presents specifications for software that is designed to support manual development and maintenance of information retrieval thesauri. Evaluation of existing software and design of custom software is discussed, requirements for integration with larger systems and for the user interface are described, and relationships among terms are discussed.…

  19. Java for flight software

    NASA Technical Reports Server (NTRS)

    Benowitz, E.; Niessner, A.

    2003-01-01

    This work involves developing representative mission-critical spacecraft software using the Real-Time Specification for Java (RTSJ). The work currently leverages actual flight software from NASA's Deep Space 1 (DS1) mission, which flew in 1998.

  20. Software Engineering for Portability.

    ERIC Educational Resources Information Center

    Stanchev, Ivan

    1990-01-01

    Discussion of the portability of educational software focuses on the software design and development process. Topics discussed include levels of portability; the user-computer dialog; software engineering principles; design techniques for student performance records; techniques of courseware programing; and suggestions for further research and…

  1. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  2. Artificial intelligence and expert systems in-flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  3. Numerical linear algebra algorithms and software

    NASA Astrophysics Data System (ADS)

    Dongarra, Jack J.; Eijkhout, Victor

    2000-11-01

    The increasing availability of advanced-architecture computers has a significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra - in particular, the solution of linear systems of equations - lies at the heart of most calculations in scientific computing. This paper discusses some of the recent developments in linear algebra designed to exploit these advanced-architecture computers. We discuss two broad classes of algorithms: those for dense, and those for sparse matrices.
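
    The dense/sparse split the paper draws can be illustrated with off-the-shelf solvers (LAPACK via NumPy for the dense case, SuperLU via SciPy for the sparse case); this is a generic sketch, not code from the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Dense system: a factorization-based direct solver does O(n^3) work.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(np.linalg.solve(A, b))

# Sparse tridiagonal system: the solver exploits the nonzero structure.
n = 1000
T = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
x = spsolve(T, np.ones(n))
print(x[:3])
```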

  4. a New Framework for Object-Based Image Analysis Based on Segmentation Scale Space and Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Hadavand, A.; Saadatseresht, M.; Homayouni, S.

    2015-12-01

    In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework that employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super-object is used to obtain the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and a digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the segmentation scale, and marginally improved the overall accuracy of classification from 79% to 80%.
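
    The final classification step can be sketched as follows; the per-object feature table (e.g., spectral means, NDVI, DSM-derived height) and the labels are synthetic stand-ins for the ISPRS data, and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))      # 500 image objects x 6 object features (synthetic)
y = rng.integers(0, 4, size=500)   # 4 land cover classes (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"overall accuracy: {rf.score(X_te, y_te):.2f}")
```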

  5. Object-based change detection in rapid urbanization regions with remotely sensed observations: a case study of Shenzhen, China

    NASA Astrophysics Data System (ADS)

    He, Lihuang; Dong, Guihua; Wang, Wei-Min; Yang, Lijun; Liang, Hong

    2013-10-01

    China, the most populous country on Earth, has experienced rapid urbanization, which is one of the main causes of many environmental and ecological problems. Therefore, the monitoring of rapidly urbanizing regions and their environment is of critical importance for sustainable development. In this study, object-based classification is employed to detect land cover change in Shenzhen, which is located in South China and has urbanized rapidly over the past three decades. First, four Landsat TM images, acquired in 1990, 2000, and 2010, are selected from the image database. Atmospheric corrections are conducted on these images with an improved dark-object subtraction technique and surface meteorological observations, and geometric correction is performed with ground control points derived from topographic maps. Second, region-growing multi-resolution segmentation and a soft nearest-neighbour classifier are used to complete the object-based classification. Analysis of the class fractions over the time series shows that comparison of the derived land cover classes with socio-economic statistics demonstrates a strong positive correlation between built-up classes and urban population, as well as gross GDP and GDP in the secondary and tertiary industries. Two different mechanisms of urbanization, namely new land development and redevelopment, are revealed. Consequently, we find that the districts of Shenzhen were urbanized through these different mechanisms.

  6. Object-based task-level control: A hierarchical control architecture for remote operation of space robots

    NASA Technical Reports Server (NTRS)

    Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.

    1994-01-01

    Expanding man's presence in space requires capable, dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications; these delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control (OBTLC) architecture. OBTLC removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control in which the human operator specifies high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.

  7. The regularized iteratively reweighted object-based MAD method for change detection in bi-temporal, multispectral data

    NASA Astrophysics Data System (ADS)

    Xu, Qiangqiang; Liu, Zhengjun; Li, Fangfang; Yang, Mingze; Ren, Haicheng

    2016-10-01

    As the resolution of multispectral images continuously improves, the phenomena of 'same object, different spectra' and 'different objects, same spectrum' become more apparent. As a result, pixel-based iteratively reweighted multivariate alteration detection (IR-MAD) can suffer from broken patches, many pseudo-changes, substantial noise, and a low overall detection rate. To improve on these problems, this paper transfers the pixel-based IR-MAD algorithm to the object domain. The object-based IR-MAD (OB_IRMAD) method applies distinct, meaningful combinations of features rather than the original pixels; because the features are grouped into meaningful combinations, the object-based change detection results achieve higher reliability and accuracy. To stabilize solutions to the IR-MAD problem, some regularization may be needed. A case study with ZY-3 multispectral imagery of a border port in Xinjiang province demonstrates the effectiveness and feasibility of OB_IRMAD. For comparison, pixel-level IR-MAD change detection and manual visual change detection were performed on the same data and region. Finally, various accuracy indexes were calculated from the confusion matrix and the accuracy of the two detection results was compared. The results show that in terms of overall accuracy, correct detection rate, and error rate, OB_IRMAD outperforms pixel-level IR-MAD, producing more regular change polygons and less noise.
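
    The MAD transform at the core of IR-MAD pairs the two acquisitions by canonical correlation analysis and takes the differences of the canonical variates. The sketch below shows only that core step on synthetic band data; the iterative reweighting, the object-level features, and the regularization discussed in the paper are all omitted.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))                   # time-1 samples x 4 bands (synthetic)
Y = 0.9 * X + 0.1 * rng.normal(size=(1000, 4))   # time-2, mostly unchanged

cca = CCA(n_components=4).fit(X, Y)
U, V = cca.transform(X, Y)
mad = U - V            # MAD variates; large magnitudes indicate change
print(mad.std(axis=0))
```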

  8. Tracker 300 Software

    SciTech Connect

    Wysor, R. Wes

    2006-01-12

    The Tracker300 software is downloaded to an off-the-shelf product called RCM3400/RCM3410 made by Rabbit Semiconductor. The software is a closed loop control which computes the sun's position and provides stability compensation. Using the RCM3400/RCM3410 module, the software stores and retrieves parameters from the onboard flash. The software also allows for communication with a host. It will allow the parameters to be downloaded or uploaded, it will show the status of the controller, it will provide real-time feedback, and it will send command acknowledgements. The software will capture the GPS response and ensure the internal clock is set correctly.

  9. Funding Research Software Development

    NASA Astrophysics Data System (ADS)

    Momcheva, Ivelina G.

    2017-01-01

    Astronomical software is used by each and every member of our scientific community. Purpose-built software is becoming ever more critical as we enter the regime of large datasets and simulations of increasing complexity. However, financial investment in building, maintaining, and renovating the software infrastructure has been uneven. In this talk I will summarize past and current funding sources for astronomical software development, discuss other models of funding, and introduce a new initiative for supporting community software at STScI. The purpose of this talk is to prompt discussion about how we allocate resources to this vital infrastructure.

  10. Software Trends and Trendsetters: How They're Shaping an Industry.

    ERIC Educational Resources Information Center

    McGinty, Tony; And Others

    1987-01-01

    Discusses trends in educational software and the effects of new developments on publishers and on the computer industry. Marketing prospects for software are examined, and recent advances are highlighted, including integrated learning systems, skill-based software, software tied to textbooks, networking, and freeware. (LRW)

  11. Healthcare Software Assurance

    PubMed Central

    Cooper, Jason G.; Pauley, Keith A.

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA's software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted. PMID:17238324

  12. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Mbaya, Timmy; Mengshoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft as well as UAVs, more and more depend on highly complex safety critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We will focus on the approach to develop reliable and robust health models for the combined software and sensor systems.
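
    At its smallest, the diagnostic reasoning reduces to Bayesian updating over a health node given an observed symptom. The two-node sketch below uses made-up probabilities purely to illustrate the computation, not values or structure from the paper.

```python
# P(healthy | out-of-range reading) by Bayes rule.
p_healthy = 0.99             # prior that the software/sensor component is healthy
p_bad_given_healthy = 0.02   # out-of-range reading despite a healthy component
p_bad_given_faulty = 0.90    # out-of-range reading from a faulty component

num = p_bad_given_healthy * p_healthy
den = num + p_bad_given_faulty * (1 - p_healthy)
print(f"P(healthy | bad reading) = {num / den:.3f}")
```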

  13. Simple solution to the medical instrumentation software problem

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Suzanne B.; Leif, Stephanie H.; Bingue, E.

    1995-04-01

    Medical devices now include a substantial software component, which is both difficult and expensive to produce and maintain. Medical software must be developed according to 'Good Manufacturing Practices' (GMP). Good Manufacturing Practices, as specified by the FDA and ISO, require the definition of, and compliance with, a software process that ensures quality products by specifying a detailed method of software construction. The software process should be based on accepted standards. US Department of Defense software standards and technology can both facilitate development and improve the quality of medical systems. We describe the advantages of employing Mil-Std-498, Software Development and Documentation, and the Ada programming language. Ada provides the very broad range of functionalities, from embedded real-time to management information systems, required by many medical devices. It also includes advanced facilities for object-oriented programming and software engineering.

  14. Software For Generation Of ASTER Data Products

    NASA Technical Reports Server (NTRS)

    Murray, Alexander T.; Eng, Bjorn T.; Voge, Charles C.

    1996-01-01

    Software functioning in the EOS-DIS computing environment was developed to generate data products from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). It processes high-resolution image data from visible and near-infrared (VNIR), short-wavelength infrared (SWIR), and thermal infrared (TIR) radiometric readings to generate data on the radiative and thermal properties of the atmosphere and surface of Earth.

  15. BMDP/Diamond for Windows [software review].

    ERIC Educational Resources Information Center

    Lay, Colette M.

    1996-01-01

    BMDP/Diamond for Windows was developed as a stand-alone software package for initial data exploration and screening. The program uses advanced visualization techniques to allow one to examine the data from different perspectives, or to manipulate graphics to gain greater insight. Data can be checked quickly for a variety of characteristics. (SLD)

  16. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  17. Semiautomated object-based classification of rain-induced landslides with VHR multispectral images on Madeira Island

    NASA Astrophysics Data System (ADS)

    Heleno, Sandra; Matias, Magda; Pina, Pedro; Sousa, António Jorge

    2016-04-01

    A method for semiautomated landslide detection and mapping, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a support vector machine classifier and is tested using a GeoEye-1 multispectral image, sensed 3 days after a major damaging landslide event on Madeira Island (20 February 2010), and a pre-event lidar digital terrain model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents good performance in the delineation of the overall landslide area, with commission errors below 26% and omission errors below 24%. In addition, fair results are achieved in the separation of source from run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.

  18. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application.

    PubMed

    Maxwell, Susan K

    2010-12-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application.

  19. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increasing classification accuracies over using imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification of the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.

  1. Interaction Between Object-Based Attention and Pertinence Values Shapes the Attentional Priority Map of a Multielement Display

    PubMed Central

    2016-01-01

    Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the “pertinence” or behavioral importance of the elements’ features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen’s Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the “paired response index” (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements’ features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene. PMID:26752732
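
    For context, the attentional weight that the Theory of Visual Attention assigns to an object is standardly written as below; this is the textbook formulation of Bundesen (1990), not an equation quoted from the abstract.

```latex
% w_x: attentional weight of object x
% eta(x, j): sensory evidence that object x has feature j
% pi_j: pertinence (behavioral importance) of feature j
w_x = \sum_{j \in R} \eta(x, j)\, \pi_j
```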

  2. Scientific Software for the Macintosh.

    ERIC Educational Resources Information Center

    Watkins, Douglas; Gabaldon, Diana J.

    1985-01-01

    Lists and describes representative software for Macintosh microcomputers. Included are Apple University Consortium software, word processing software, statistics packages, integrated packages, database management systems, and others. Source of software and costs (when available) are included. (JN)

  3. COTS software selection process.

    SciTech Connect

    Watkins, William M. (Strike Wire Technologies, Louisville, CO); Lin, Han Wei; McClelland, Kelly (U.S. Security Associates, Livermore, CA); Ullrich, Rebecca Ann; Khanjenoori, Soheil; Dalton, Karen; Lai, Anh Tri; Kuca, Michal; Pacheco, Sandra; Shaffer-Gant, Jessica

    2006-05-01

    Today's need for rapid software development has generated great interest in employing Commercial-Off-The-Shelf (COTS) software products as a way of managing cost, development time, and effort. With an abundance of COTS software packages to choose from, the problem now is how to systematically evaluate, rank, and select a COTS product that best meets the software project requirements and at the same time can leverage the current corporate information technology architectural environment. This paper describes a systematic process for decision support in evaluating and ranking COTS software. Performed right after the requirements analysis, this process provides the evaluators with more concise, structured, step-by-step activities for determining the best COTS software product with manageable risk. In addition, the process is presented in phases that are flexible enough to allow for customization or tailoring to meet various projects' requirements.
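
    The ranking step of such a process is often a weighted-sum decision matrix; the sketch below is a generic illustration (criteria, weights, and scores are invented), not the specific scoring scheme of this paper.

```python
criteria = {"fit to requirements": 0.4,   # weights sum to 1.0
            "architecture fit": 0.3,
            "vendor support": 0.2,
            "cost": 0.1}
scores = {  # per-product scores on a 1-5 scale (hypothetical)
    "Product A": [4, 3, 5, 2],
    "Product B": [3, 5, 3, 4],
}
for name, vals in scores.items():
    total = sum(w * v for w, v in zip(criteria.values(), vals))
    print(f"{name}: {total:.2f}")   # higher weighted score ranks higher
```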

  4. Computer-assisted qualitative data analysis software.

    PubMed

    Cope, Diane G

    2014-05-01

    Advances in technology have provided new approaches for data collection methods and analysis for researchers. Data collection is no longer limited to paper-and-pencil format, and numerous methods are now available through Internet and electronic resources. With these techniques, researchers are not burdened with entering data manually, and data analysis is facilitated by software programs. Quantitative research is supported by the use of computer software that provides ease in the management of large data sets and rapid analysis with numeric statistical methods. New technologies are emerging to support qualitative research with the availability of computer-assisted qualitative data analysis software (CAQDAS). CAQDAS will be presented with a discussion of advantages, limitations, controversial issues, and recommendations for this type of software use.

  5. Solar Asset Management Software

    SciTech Connect

    Iverson, Aaron; Zviagin, George

    2016-09-30

    Ra Power Management (RPM) has developed a cloud based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  6. Perspective on Software Reuse

    DTIC Science & Technology

    1988-09-01

    Discusses structured systems analysis [Gane & Sarson; McMenamin & Palmer], which tries to separate the "logical system" requirements from the "physical" aspects. Cited references include Lewis, T.G., "Apple Macintosh Software," Software Reviews, IEEE Software, March 1985, pp. 89-92, and McMenamin, S.M. and Palmer, J.F., Essential...

  7. 2006 XSD Scientific Software Workshop report.

    SciTech Connect

    Evans, K., Jr.; De Carlo, F.; Jemian, P.; Lang, J.; Lienert, U.; Maclean, J.; Newville, M.; Tieman, B.; Toby, B.; van Veenendaal, B.; Univ. of Chicago

    2006-01-22

    In May of 2006, a committee was formed to assess the fundamental needs and opportunities in scientific software for x-ray data reduction, analysis, modeling, and simulation. This committee held a series of discussions throughout the summer, conducted a poll of the members of the x-ray community, and held a workshop. This report details the findings and recommendations of the committee. Each experiment performed at the APS requires three crucial ingredients: the powerful x-ray source, an optimized instrument to perform measurements, and computer software to acquire, visualize, and analyze the experimental observations. While the APS has invested significant resources in the accelerator, investment in other areas such as scientific software for data analysis and visualization has lagged behind. This has led to the adoption of a wide variety of software with variable levels of usability. In order to maximize the scientific output of the APS, it is essential to support the broad development of real-time analysis and data visualization software. As scientists attack problems of increasing sophistication and deal with larger and more complex data sets, software is playing an ever more important role. Furthermore, our need for excellent and flexible scientific software can only be expected to increase, as the upgrade of the APS facility and the implementation of advanced detectors create a host of new measurement capabilities. New software analysis tools must be developed to take full advantage of these capabilities. It is critical that the APS take the lead in software development and the implementation of theory to software to ensure the continued success of this facility. The topics described in this report are relevant to the APS today and critical for the APS upgrade plan. Implementing these recommendations will have a positive impact on the scientific productivity of the APS today and will be even more critical in the future.

  8. HydroShare: Applying professional software engineering to a new NSF-funded large software project

    NASA Astrophysics Data System (ADS)

    Idaszak, R.; Tarboton, D. G.; Ames, D.; Saleem Arrigo, J. A.; Band, L. E.; Bedig, A.; Castronova, A. M.; Christopherson, L.; Coposky, J.; Couch, A.; Dash, P.; Gan, T.; Goodall, J.; Gustafson, K.; Heard, J.; Hooper, R. P.; Horsburgh, J. S.; Jackson, S.; Johnson, H.; Maidment, D. R.; Mbewe, P.; Merwade, V.; Miles, B.; Reeder, S.; Russell, T.; Song, C.; Taylor, A.; Thakur, S.; Valentine, D. W.; Whiteaker, T. L.

    2013-12-01

    HydroShare is an online, collaborative system being developed for sharing hydrologic data and models as part of the NSF's Software Infrastructure for Sustained Innovation (SI2) program (NSF collaborative award numbers 1148453 and 1148090). HydroShare involves a large software development effort requiring cooperative research and distributed software development between domain scientists, professional software engineers (here 'professional' denotes previous commercial experience in the application of modern software engineering), and university software developers. HydroShare expands upon the data sharing capabilities of the Hydrologic Information System of the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI) by broadening the classes of data accommodated, expanding capability to include the sharing of models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. With a goal of enabling better science concomitant with improved sustainable software practices, we will describe our approach, experiences, and lessons learned thus far in applying professional software engineering to a large NSF-funded software project from the project's onset.

  9. On Software Compatibility.

    ERIC Educational Resources Information Center

    Ershov, Andrei P.

    The problem of software compatibility hampers the development of computer applications. One solution lies in the standardization of languages, terms, peripherals, operating systems, and computer characteristics. (AB)

  10. Instructional Software Design Principles.

    ERIC Educational Resources Information Center

    Hazen, Margret

    1985-01-01

    Discusses learner/computer interaction, learner control, sequencing of instructional events, and graphic screen design as effective principles for the design of instructional software, including tutorials. (MBR)

  11. Classification software technique assessment

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Atkinson, R.; Dasarathy, B. V.; Lybanon, M.; Ramapryian, H. K.

    1976-01-01

    A catalog of software options is presented for the use of local user communities to obtain software for analyzing remotely sensed multispectral imagery. The resources required to utilize a particular software program are described. Descriptions of how a particular program analyzes data and the performance of that program for an application and data set provided by the user are shown. An effort is made to establish a statistical performance base for various software programs with regard to different data sets and analysis applications, to determine the status of the state-of-the-art.

  12. Software Reuse Issues

    NASA Technical Reports Server (NTRS)

    Voigt, Susan J. (Editor); Smith, Kathryn A. (Editor)

    1989-01-01

    NASA Langley Research Center sponsored a Workshop on NASA Research in Software Reuse on November 17-18, 1988 in Melbourne, Florida, hosted by Software Productivity Solutions, Inc. Participants came from four NASA centers and headquarters, eight NASA contractor companies, and three research institutes. Presentations were made on software reuse research at the four NASA centers; on Eli, the reusable software synthesis system designed and currently under development by SPS; on Space Station Freedom plans for reuse; and on other reuse research projects. This publication summarizes the presentations made and the issues discussed during the workshop.

  13. Dtest Testing Software

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Cameron, Jonathan M.; Myint, Steven

    2013-01-01

    This software runs a suite of arbitrary software tests spanning various software languages and types of tests (unit level, system level, or file comparison tests). The dtest utility can be set to automate periodic testing of large suites of software, as well as running individual tests. It supports distributing multiple tests over multiple CPU cores, if available. The dtest tool is a utility program (written in Python) that scans through a directory (and its subdirectories) and finds all directories that match a certain pattern and then executes any tests in that directory as described in simple configuration files.
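
    The scan-and-dispatch behavior described above is easy to picture in outline. The following minimal Python sketch is not dtest itself: the config filename, its one-command-per-line format, and all function names are assumptions made for illustration, and dtest's real configuration format differs.

        import os
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        CONFIG_NAME = "test.cfg"  # assumed marker file; dtest's actual pattern differs

        def find_test_dirs(root):
            """Yield every directory under root that contains a test config file."""
            for dirpath, _dirnames, filenames in os.walk(root):
                if CONFIG_NAME in filenames:
                    yield dirpath

        def run_tests_in(dirpath):
            """Run each non-comment line of the config file as a shell command."""
            results = []
            with open(os.path.join(dirpath, CONFIG_NAME)) as cfg:
                for line in cfg:
                    cmd = line.strip()
                    if not cmd or cmd.startswith("#"):
                        continue
                    proc = subprocess.run(cmd, shell=True, cwd=dirpath)
                    results.append((dirpath, cmd, proc.returncode == 0))
            return results

        if __name__ == "__main__":
            # Distribute whole test directories across CPU cores, as the
            # abstract describes dtest doing when multiple cores are available.
            with ProcessPoolExecutor() as pool:
                for batch in pool.map(run_tests_in, find_test_dirs(".")):
                    for dirpath, cmd, passed in batch:
                        print(f"{'PASS' if passed else 'FAIL'}: {cmd} ({dirpath})")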

  14. Architecture for Verifiable Software

    NASA Technical Reports Server (NTRS)

    Reinholtz, William; Dvorak, Daniel

    2005-01-01

    Verifiable MDS Architecture (VMA) is a software architecture that facilitates the construction of highly verifiable flight software for NASA's Mission Data System (MDS), especially for smaller missions subject to cost constraints. More specifically, the purpose served by VMA is to facilitate aggressive verification and validation of flight software while imposing a minimum of constraints on overall functionality. VMA exploits the state-based architecture of the MDS and partitions verification issues into elements susceptible to independent verification and validation, in such a manner that scaling issues are minimized, so that relatively large software systems can be aggressively verified in a cost-effective manner.
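
    As a generic illustration of that state-based partitioning idea (this is not actual MDS/VMA code; the class and function names are invented), each state variable can sit behind a narrow interface that a test exercises in complete isolation, which is what keeps verification from scaling with system size:

        from abc import ABC, abstractmethod

        class StateVariable(ABC):
            """One independently verifiable element: a timed value history."""
            def __init__(self):
                self.history = []  # list of (time, value) pairs

            def report(self, time, value):
                self.history.append((time, value))

            @abstractmethod
            def estimate(self, time):
                """Return the best estimate of the state at the given time."""

        class Temperature(StateVariable):
            def estimate(self, time):
                # Latest-sample estimator; trivially verifiable on its own.
                past = [(t, v) for t, v in self.history if t <= time]
                return past[-1][1] if past else None

        def test_temperature_estimate():
            # Verifying this element requires no other part of the system.
            tv = Temperature()
            tv.report(0.0, 21.5)
            tv.report(1.0, 22.0)
            assert tv.estimate(0.5) == 21.5
            assert tv.estimate(2.0) == 22.0

        test_temperature_estimate()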

  15. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960's there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board that provided vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (NASA-STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  16. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  17. Creating and Manipulating a Domain-Specific Formal Object Base to Support a Domain-Oriented Application Composition System

    DTIC Science & Technology

    1992-12-01

    and add new attributes as needed (11:129). 2.2.3.2 Feature-Oriented Domain Analysis. In their Feature-Oriented Domain Analysis (FODA) study, the... problems within the domain 3. Architecture Modeling - Create software architecture(s) to implement a solution in the problem domain. The FODA study also... dissertation, The University of Texas at Austin, Austin, Texas, 1990. 12. Kang, Kyo C. and others. Feature-Oriented Domain Analysis (FODA) Feasibility Study

  18. Independent Software Quality Assessment (ISQA) Services: Makes Dollars and Sense

    DTIC Science & Technology

    2006-05-04

    Independent Software Quality Assessment (ISQA) Services: Makes Dollars and Sense. Presented to the Systems & Software Technology Conference, Salt... What is ISQA? What is NOT ISQA? Why? Software costs, benefits of ISQA, ISQA services. When? Throughout the life cycle, proven examples. How? ...advanced technology to meet current needs. Implement SQA through our ISQA Team, a dedicated cadre of computer scientists/engineers with access to

  19. ISE advanced technology

    NASA Technical Reports Server (NTRS)

    Fox, Barry R.

    1991-01-01

    Information on Space Station Freedom scheduling problems and techniques are presented in viewgraph form. Topics covered include automated scheduling systems, user interface standards, benefits of interactive scheduling systems, incremental scheduling, software engineering, computer graphics interface, distributed resource management, and advanced applications.

  20. Teaching Social Software with Social Software

    ERIC Educational Resources Information Center

    Mejias, Ulises

    2006-01-01

    Ulises Mejias examines how social software--information and communications technologies that facilitate the collaboration and exchange of ideas--enables students to participate in distributed research, an approach to learning in which knowledge is collectively constructed and shared. During Fall 2005, Mejias taught a graduate seminar that provided…

  1. Designing Distributed Learning Environments with Intelligent Software Agents

    ERIC Educational Resources Information Center

    Lin, Fuhua, Ed.

    2005-01-01

    "Designing Distributed Learning Environments with Intelligent Software Agents" reports on the most recent advances in agent technologies for distributed learning. Chapters are devoted to the various aspects of intelligent software agents in distributed learning, including the methodological and technical issues on where and how intelligent agents…

  2. The Case for Open Source Software: The Interactional Discourse Lab

    ERIC Educational Resources Information Center

    Choi, Seongsook

    2016-01-01

    Computational techniques and software applications for the quantitative content analysis of texts are now well established, and many qualitative data software applications enable the manipulation of input variables and the visualization of complex relations between them via interactive and informative graphical interfaces. Although advances in…

  3. Supporting Source Code Comprehension during Software Evolution and Maintenance

    ERIC Educational Resources Information Center

    Alhindawi, Nouh

    2013-01-01

    This dissertation addresses the problems of program comprehension to support the evolution of large-scale software systems. The research concerns how software engineers locate features and concepts along with categorizing changes within very large bodies of source code along with their versioned histories. More specifically, advanced Information…

  4. A strategic surety roadmap for high consequence software

    SciTech Connect

    Pollock, G.M.; Dalton, L.J.

    1995-12-31

    A strategic surety roadmap for high consequence software systems developed under the High Integrity Software (HIS) Program at Sandia National Laboratories is presented. Selected research tracks are identified and described detailing current technology and outlining advancements to be pursued over the coming decade to reach HIS goals.

  5. Practical quality metrics for resolution enhancement software

    NASA Astrophysics Data System (ADS)

    Boone, Robert E.; Lucas, Kevin; Wynd, Raphael; Boatright, Mike; Thompson, Matthew A.; Reich, Alfred J.

    2003-06-01

    The past few years have seen an explosion in the application of software techniques to improve lithographic printing. Techniques such as optical proximity correction (OPC) and phase shift masks (PSM) increase resolution and CD control by distorting the mask pattern data from the original designed pattern. These software techniques are becoming increasingly complicated and non-intuitive, and the rate of complexity increase appears to be accelerating [1]. The benefits of these techniques in improving CD control and lowering cost of ownership (COO) are balanced against the effort required to implement them and the additional problems they create. One severe problem for users of immature and complex software tools and methodologies is quality control [2], as it ultimately becomes a COO problem. Software quality can be defined very simply as the ability of an application to meet detailed customer requirements. Software quality practice can be defined as the adherence to proven methods for planning, developing, testing and maintaining software. Although software quality for lithographic resolution enhancement is extremely important, the understanding and recognition of good software development practices among lithographers is generally poor. We therefore start by reviewing the essential terms and concepts of software quality that impact lithography and COO. We then propose methods by which semiconductor process and design engineers can estimate and compare the quality of the software tools and vendors they are evaluating or using. We include examples from advanced process technology resolution enhancement work that highlight the need for high-quality software practices, and show how to avoid many problems. Note that, although several authors have worked in software application development, our analysis here is essentially a black-box analysis. The black box is the software development organization of an RET software supplier. Our access to actual developers within these
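
    To make the idea of estimating and comparing tool quality concrete (purely as an illustration; the paper's actual metrics are not reproduced in this record), an engineer might combine per-criterion scores into a single weighted figure of merit. In the Python sketch below, the criteria, weights, and vendor scores are all invented placeholders:

        # All criteria, weights, and scores are hypothetical placeholders.
        CRITERIA_WEIGHTS = {
            "documented_test_coverage": 0.4,
            "defect_escape_rate": 0.3,   # scored so that higher = fewer escaped defects
            "release_process_maturity": 0.2,
            "support_responsiveness": 0.1,
        }

        def quality_score(scores):
            """Weighted sum of per-criterion scores, each normalized to [0, 1]."""
            return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

        vendors = {
            "vendor_a": {"documented_test_coverage": 0.8, "defect_escape_rate": 0.7,
                         "release_process_maturity": 0.9, "support_responsiveness": 0.6},
            "vendor_b": {"documented_test_coverage": 0.6, "defect_escape_rate": 0.9,
                         "release_process_maturity": 0.5, "support_responsiveness": 0.8},
        }

        # Rank candidate tools/vendors by the combined score, highest first.
        for name, scores in sorted(vendors.items(), key=lambda kv: -quality_score(kv[1])):
            print(f"{name}: {quality_score(scores):.2f}")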

  6. The environmental control and life support system advanced automation project

    NASA Technical Reports Server (NTRS)

    Dewberry, Brandon S.

    1991-01-01

    The objectives of the ECLSS Advanced Automation project include reducing the risk associated with integrating new, beneficial software techniques. Demonstrations of this software to baseline engineering and test personnel will show the benefits of these techniques. The advanced software will be integrated into ground testing and ground support facilities, familiarizing key personnel with its use.

  7. COTS-based OO-component approach for software inter-operability and reuse (software systems engineering methodology)

    NASA Technical Reports Server (NTRS)

    Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.

    2000-01-01

    The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology, using a Commercial-Off-The-Shelf (COTS)-based object-oriented component approach to open, interoperable software development and software reuse.

  8. Cartographic applications software

    USGS Publications Warehouse

    ,

    1992-01-01

    The Office of the Assistant Division Chief for Research, National Mapping Division, develops computer software for the solution of geometronic problems in the fields of surveying, geodesy, remote sensing, and photogrammetry. Software that has been developed using public funds is available on request for a nominal charge to recover the cost of duplication.

  9. Learning from Software Localization.

    ERIC Educational Resources Information Center

    Guo, She-Sen

    2003-01-01

    Localization is the process of adapting a product to meet the language, cultural and other requirements of a specific target environment or market. This article describes ways in which software localization impacts upon curriculum, and discusses what students will learn from software localization. (AEF)

  10. Selecting the Right Software.

    ERIC Educational Resources Information Center

    Shearn, Joseph

    1987-01-01

    Selection of administrative software requires analyzing present needs and, to meet future needs, choosing software that will function with a more powerful computer system. Other important factors to include are a professional system demonstration, maintenance and training, and financial considerations that allow leasing or renting alternatives.…

  11. Software Product Lines Essentials

    DTIC Science & Technology

    2008-07-01

    ...improvement, technology innovation, reuse. Few systems are unique; most... Focus was small-grained, opportunistic, and technology-driven. Results did not meet business goals. Reuse history... servers, storage servers, network camera and scanner servers; Bold Stroke avionics; customized solutions for transportation industries; E-COM Technology

  12. Who Owns Computer Software?

    ERIC Educational Resources Information Center

    Branscomb, Anne Wells

    1995-01-01

    Discusses the protection of intellectual property as it applies to computer software and its impact on private enterprise and the public good. Highlights include the role of patents, copyrights, and trade secrets; some court cases; and recommendations for alternatives to the existing legal framework for protecting computer software. (KRN)

  13. Software process assessments

    NASA Technical Reports Server (NTRS)

    Miller, Sharon E.; Tucker, George T.; Verducci, Anthony J., Jr.

    1992-01-01

    Software process assessments (SPAs) are part of an ongoing program of continuous quality improvement at AT&T. Software development organizations have found them very beneficial in identifying the issues facing the organization and the actions required to increase both quality and productivity.

  14. Software Marketing Considerations.

    ERIC Educational Resources Information Center

    Fuchs, Victor E.

    Seven factors that currently affect the potential for marketing and publishing computer software for education are discussed: (1) computers as an in-place technology in education, (2) marketing and distribution patterns for software, (3) consumer demand, (4) quality, (5) timeliness, (6) basic skills, and (7) the future. The proliferation of…

  15. Computer Software Reviews.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu. Office of Instructional Services.

    Intended to provide guidance in the selection of the best computer software available to support instruction and to make optimal use of schools' financial resources, this publication provides a listing of computer software programs that have been evaluated according to their currency, relevance, and value to Hawaii's educational programs. The…

  16. Software Solution Saves Dollars

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2004-01-01

    This article discusses computer software that can give classrooms and computer labs the capabilities of costly PC's at a small fraction of the cost. A growing number of cost-conscious school districts are finding budget relief in low-cost computer software known as "open source" that can do everything from manage school Web sites to equip…

  17. STARLINK Software Collection

    NASA Astrophysics Data System (ADS)

    Rankin, Steve; Lawden, Mike; Bly, Martin

    The Starlink Software Collection is a set of software which is managed and distributed by the Starlink Project. Some of the software was written by members of the Project, but some of it comes from outside the Project. This note describes the functions of the individual items in the Collection and provides an overview of the software so that readers can identify the items they need. The software is classified into four main divisions:

    * Packages -- are large collections of programs for people who want to analyse, convert, and display data. They are subdivided into eleven classes to help you find what you want.
    * Utilities -- are small programs devoted to a specific purpose. For example, they help you prepare for observations, write documents, and write programs.
    * Subroutine Libraries -- are for programmers writing astronomical software. They provide facilities such as astronomical calculations, data management and graphics.
    * Infrastructure -- are items which are mainly of interest to people writing programs within the Starlink Software Environment. They are included for completeness.

    Each item is described in sufficient detail for you to decide whether or not to investigate it further. If you want to find out more about an item, follow the document references given opposite the item name. If you are using the hypertext version of this document, the most up-to-date document references can be found by following the link from the software item name.

  18. Cactus: Software Priorities

    ERIC Educational Resources Information Center

    Hyde, Hartley

    2009-01-01

    The early eighties saw a period of rapid change in computing and teachers lost control of how they used computers in their classrooms. Software companies produced computer tools that looked so good that teachers forgot about writing their own classroom materials and happily purchased software--that offered much more than teachers needed--from…

  19. No System? No Software!

    ERIC Educational Resources Information Center

    Conley, Donna

    1989-01-01

    Noting that the computer software cataloging system at the media center of the Washington (Illinois) Center for Continuing Education is essentially a card catalog in notebook form, this article describes aspects of the development and utilization of the system. Major sections describe: (1) software cataloging system terminology; (2) steps for…

  20. Software measurement guidebook

    NASA Technical Reports Server (NTRS)

    Bassman, Mitchell J.; Mcgarry, Frank; Pajerski, Rose

    1994-01-01

    This Software Measurement Guidebook presents information on the purpose and importance of measurement. It discusses the specific procedures and activities of a measurement program and the roles of the people involved. The guidebook also clarifies the roles that measurement can and must play in the goal of continual, sustained improvement for all software production and maintenance efforts.