Science.gov

Sample records for advanced object-based software

  1. An advanced object-based software framework for complex ecosystem modeling and simulation

    SciTech Connect

    Sydelko, P. J.; Dolph, J. E.; Majerus, K. A.; Taxon, T. N.

    2000-06-29

    Military land managers and decision makers face an ever-increasing challenge to balance maximum flexibility for the mission with a diverse set of multiple land use, social, political, and economic goals. In addition, these goals encompass environmental requirements for maintaining ecosystem health and sustainability over the long term. Spatiotemporal modeling and simulation in support of adaptive ecosystem management can be best accomplished through a dynamic, integrated, and flexible approach that incorporates scientific and technological components into a comprehensive ecosystem modeling framework. The Integrated Dynamic Landscape Analysis and Modeling System (IDLAMS) integrates ecological models and decision support techniques through a geographic information system (GIS)-based backbone. Recently, an object-oriented (OO) architectural framework was developed for IDLAMS (OO-IDLAMS). This OO-IDLAMS prototype builds upon and leverages the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS is an object-based architectural framework that affords a more integrated, dynamic, and flexible approach to comprehensive ecosystem modeling than was possible with the GIS-based integration approach of the original IDLAMS. The flexibility, dynamics, and interoperability demonstrated through this case study of an object-oriented approach have the potential to provide key technology solutions for many of the military's multiple-use goals and needs for integrated natural resource planning and ecosystem management.

  2. Advanced flight software reconfiguration

    NASA Technical Reports Server (NTRS)

    Porcher, Bryan

    1991-01-01

    Information is given in viewgraph form on advanced flight software reconfiguration. Reconfiguration is defined as identifying mission and configuration specific requirements, controlling mission and configuration specific data, binding this information to the flight software code to perform specific missions, and the release and distribution of the flight software. The objectives are to develop, demonstrate, and validate advanced software reconfiguration tools and techniques; to demonstrate reconfiguration approaches on Space Station Freedom (SSF) onboard systems displays; and to interactively test onboard systems displays, flight software, and flight data.

  3. Advanced Software Development Workstation Project

    NASA Technical Reports Server (NTRS)

    Lee, Daniel

    1989-01-01

    The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, based on a cognitive theory of retrieval from very long-term memory in humans, and using an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.

  4. Advanced Software Quality Assurance

    DTIC Science & Technology

    1977-03-01

    General Research Corporation reports are subject to independent review by a staff member not connected with the project. This report...General Research Corporation / P.O. Box 3587 / Santa Barbara, CA...This is a report of the work performed by General Research Corporation during the Advanced Software Quality Assurance contract

  5. Advanced fingerprint verification software

    NASA Astrophysics Data System (ADS)

    Baradarani, A.; Taylor, J. R. B.; Severin, F.; Maev, R. Gr.

    2016-05-01

    We have developed a fingerprint software package that can be used in a wide range of applications, from law enforcement to public and private security systems, and to personal devices such as laptops, vehicles, and door locks. The software and processing units are a unique implementation of new and sophisticated algorithms that compete with the current best systems in the world. Development of the software package has been in line with the third generation of our ultrasonic fingerprinting machine. Solid and robust performance is achieved in the presence of misplaced and low-quality fingerprints.

  6. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    SciTech Connect

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: recovering design costs, improving quality through specialization, and enabling rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors adopting emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems.
However, the lack of a standard real-time distributed

  7. Advanced Hybrid Computer Systems. Software Technology.

    DTIC Science & Technology

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what...automatic patching software is available as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer...compiler software. The problem of how software would interface with the hybrid system is also presented.

  8. Advanced software algorithms

    SciTech Connect

    Berry, K.; Dayton, S.

    1996-10-28

    Citibank was using a data collection system to create a one-time-only mailing history of prospective credit card customers; the system was becoming dated in its time-to-market requirements and was in need of performance improvements. To compound problems with the existing system, assuring the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.
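
The fuzzy matching mentioned above can be sketched briefly. The abstract does not disclose the actual matching algorithm used; the sketch below substitutes the Python standard library's difflib similarity ratio, with made-up records and an assumed 0.85 match threshold.

```python
# Minimal sketch of fuzzy record matching for mailing-list deduplication.
# difflib's ratio is a stand-in similarity measure; the real system's
# algorithm and threshold are not given in the abstract.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a similarity score in [0, 1] for two normalized strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_records(candidates, existing, threshold=0.85):
    """Flag candidate records that fuzzily match an existing mailing record."""
    matches = []
    for cand in candidates:
        for prior in existing:
            if similarity(cand, prior) >= threshold:
                matches.append((cand, prior))
                break
    return matches

existing = ["John Q. Public, 12 Oak St", "Jane Doe, 9 Elm Ave"]
candidates = ["Jon Q Public, 12 Oak Street", "Alex Roe, 3 Pine Rd"]
print(match_records(candidates, existing))
```

In practice the threshold trades off false matches against missed duplicates and would be tuned against manually verified samples.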

  9. A flexible object-based software framework for modeling complex systems with interacting natural and societal processes.

    SciTech Connect

    Christiansen, J. H.

    2000-06-15

    The Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations. The DIAS infrastructure makes it feasible to build and manipulate complex simulation scenarios in which many thousands of objects can interact via dozens to hundreds of concurrent dynamic processes. The flexibility and extensibility of the DIAS software infrastructure stem mainly from (1) the abstraction of object behaviors, (2) the encapsulation and formalization of model functionality, and (3) the mutability of domain object contents. DIAS simulation objects are inherently capable of highly flexible and heterogeneous spatial realizations. Geospatial graphical representation of DIAS simulation objects is addressed via the GeoViewer, an object-based GIS toolkit application developed at ANL. DIAS simulation capabilities have been extended by inclusion of societal process models generated by the Framework for Addressing Cooperative Extended Transactions (FACET), another object-based framework developed at Argonne National Laboratory. By using FACET models to implement societal behaviors of individuals and organizations within larger DIAS-based natural systems simulations, it has become possible to conveniently address a broad range of issues involving interaction and feedback among natural and societal processes. Example DIAS application areas discussed in this paper include a dynamic virtual oceanic environment, detailed simulation of clinical, physiological, and logistical aspects of health care delivery, and studies of agricultural sustainability of urban centers under environmental stress in ancient Mesopotamia.

  10. Advances in cardiac processing software.

    PubMed

    Gordon DePuey, Ernest

    2014-07-01

    New software methods that incorporate iterative reconstruction, resolution recovery, and noise compensation now provide the ability to maintain or improve myocardial perfusion SPECT image quality with conventional sodium iodide cameras. Despite lower image counting statistics associated with significantly decreased injected radiopharmaceutical doses or shortened acquisition times or both, image quality is preserved or even improved compared with conventional processing methods. The ability to prescribe a desired myocardial count density by preselecting a SPECT acquisition time now avoids additional patient radiation exposure associated with "weight-based" dosing. More recent advancements, including temporal correlation among the gated perfusion frames and higher resolution SPECT acquisitions, hold promise to further improve image quality and diagnostic accuracy. Phase analysis of gated perfusion SPECT provides the ability to assess cardiac dyssynchrony and to select those patients who will most benefit from resynchronization therapy. In combination with the higher counting statistics afforded by the new solid-state dedicated cardiac cameras, these software advancements allow for even further decreased patient radiation doses or acquisition times or both. List-mode software allows for refinement of myocardial perfusion SPECT by interrogating particular data from selected cardiac cycles. Rejection of frames degraded by arrhythmic cardiac cycles or excessive extracardiac uptake can be excluded for reconstruction. Respiratory gating, which diminishes cardiac motion and potentially decreases diaphragmatic attenuation, has been demonstrated to improve diagnostic specificity. With high-count first-pass list-mode acquisitions at rest and during pharmacologic vasodilatation, it may be possible to measure global and regional myocardial perfusion reserve to more accurately diagnose coronary artery disease and avoid false-negative studies owing to balanced ischemia.

  11. Advanced Software Development Workstation Project, phase 3

    NASA Technical Reports Server (NTRS)

    1991-01-01

    ACCESS provides a generic capability to develop software information system applications which are explicitly intended to facilitate software reuse. In addition, it provides the capability to retrofit existing large applications with a user-friendly front end for preparation of input streams in a way that will reduce required training time, improve the productivity of even experienced users, and increase accuracy. Current and past work shows that ACCESS will be scalable to much larger object bases.

  12. Advanced cardiac life support refresher course using standardized objective-based Mega Code testing.

    PubMed

    Kaye, W; Mancini, M E; Rallis, S F

    1987-01-01

    The American Heart Association (AHA) recommends that those whose daily work requires knowledge and skills in advanced cardiac life support (ACLS) not only be trained in ACLS but also be given refresher training at least every 2 yr. However, the AHA offers no recommended course for retraining, and no systematic studies of retraining have been conducted on which to base such recommendations. In this paper we review and present our recommendation for a standardized approach to refresher training. Using the goals and objectives of the ACLS training program as evaluation criteria, we tested with the Mega Code a sample population who had previously been trained in ACLS. The results revealed deficiencies in ACLS knowledge and skills in the areas of assessment, defibrillation, drug therapy, and determining the cause of an abnormal blood gas value. We combined this information with our knowledge of other deficiencies identified during actual resuscitation attempts and other basic life support and ACLS teaching experiences. We then designed a refresher course which was consistent with the overall goals and objectives of the ACLS training program, but which placed emphasis on the deficiencies identified in the pretesting. We taught our newly designed refresher course in three sessions, which included basic life support, endotracheal intubation, arrhythmia recognition and therapeutic modalities, defibrillation, and Mega Code practice. In a fourth session, using Mega Code testing, we evaluated knowledge and skill learning immediately after training. We similarly tested retention 2 to 4 months later. Performance immediately after refresher training showed improvement in all areas where performance had been weak.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. Optimization and Reconfiguration of Advanced Manufacturing Mode Based on Object-Based Knowledge Mesh and Improved Immune Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Xue, Chaogai; Cao, Haiwang

    This paper deals with an approach to the optimization and reconfiguration of an advanced manufacturing mode based on the object-based knowledge mesh (OKM) and an improved immune genetic algorithm (IGA). To explore the optimization and reconfiguration of a new OKM from the user's function requirements, an optimization procedure that seeks the user's maximum function satisfaction is proposed. First, based on definitions of the fuzzy function-satisfaction degree relationships between users' requirements and OKM functions, and of the multiple fuzzy function-satisfaction degrees of those relationships, an optimization model of the OKM multiple set operation expression is constructed. The expression is then optimized by the improved immune genetic algorithm, with the steps of the OKM optimization presented in detail. Building on this, the optimization and reconfiguration of an advanced manufacturing mode are illustrated with an actual OKM example. The proposed approach proves to be very effective.
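
As a rough illustration of the optimization loop described above, the sketch below runs a plain genetic algorithm (without the immune operators of the paper's IGA) to maximize a toy fuzzy satisfaction score over a bit-string encoding. The target profile, encoding, and parameters are all illustrative assumptions, not the paper's OKM model.

```python
# Toy genetic algorithm maximizing a fuzzy function-satisfaction score.
# TARGET is a hypothetical "ideal" function profile; satisfaction() is a
# stand-in for the paper's fuzzy satisfaction-degree model.
import random

random.seed(42)

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def satisfaction(chromosome):
    """Fuzzy satisfaction in [0, 1]: fraction of required functions met."""
    hits = sum(1 for g, t in zip(chromosome, TARGET) if g == t)
    return hits / len(TARGET)

def evolve(pop_size=20, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=satisfaction, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]             # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=satisfaction)

best = evolve()
print(satisfaction(best))
```

An immune GA would add, on top of this loop, operators such as antibody-concentration-based selection to preserve population diversity.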

  14. The School Advanced Ventilation Engineering Software (SAVES)

    EPA Pesticide Factsheets

    The School Advanced Ventilation Engineering Software (SAVES) package is a tool to help school designers assess the potential financial payback and indoor humidity control benefits of Energy Recovery Ventilation (ERV) systems for school applications.

  15. Student Produced Advanced Mathematical Software.

    ERIC Educational Resources Information Center

    Hogben, Leslie

    The intent of this project was to develop a course for mathematics graduate students at Iowa State University. They would design and write computer programs for use by undergraduate mathematics students, and then offer the course and actually produce the software. Phase plane graphics for ordinary differential equations was selected as the topic.…

  16. Advanced Extravehicular Mobility Unit Informatics Software Design

    NASA Technical Reports Server (NTRS)

    Wright, Theodore

    2014-01-01

    This is a description of the software design for the 2013 edition of the Advanced Extravehicular Mobility Unit (AEMU) Informatics computer assembly. The Informatics system is an optional part of the space suit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and caution and warning information. In the future it will display maps with GPS position data, and video and still images captured by the astronaut.

  17. The advanced software development workstation project

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Pitman, Charles L.

    1991-01-01

    The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.

  18. Control Software for Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.

    2006-01-01

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
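
The command-handling behaviour described above can be outlined as a small state machine. The mode names and command set below are illustrative assumptions; the actual AVGS flight software command dictionary is not given in the abstract.

```python
# Hedged sketch of the AVGS mode logic: power-up self-test, a standby
# state, and command-driven transitions to operational modes, with
# command validation and a data reply to the external source.
class AVGSController:
    MODES = {"STANDBY", "ACQUISITION", "TRACKING"}  # assumed command set

    def __init__(self):
        self.mode = None

    def self_test(self):
        """Placeholder for the basic functionality tests run at power-up."""
        return True

    def power_up(self):
        if not self.self_test():
            raise RuntimeError("basic functionality test failed")
        self.mode = "STANDBY"  # transition to standby after tests pass

    def handle_command(self, command):
        """Validate a command, apply it, and return data for the sender."""
        if command not in self.MODES:
            return {"status": "REJECTED", "reason": "unknown command"}
        self.mode = command
        return {"status": "OK", "mode": self.mode}

avgs = AVGSController()
avgs.power_up()
print(avgs.handle_command("ACQUISITION"))  # {'status': 'OK', 'mode': 'ACQUISITION'}
```

In the real system this logic is split across two processors, with image-data-processing parts of each command forwarded to the second processor.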

  19. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.
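
For illustration, the classification stage of the second approach might look like the following sketch, with scikit-learn's SVC standing in for the OTB SVM implementation and hypothetical per-segment mean RGB features replacing real mean-shift segments.

```python
# Segment-level SVM classification sketch. The feature values are
# made up, not real UAV orthomosaic data, and scikit-learn is used
# here only as a stand-in for the OTB/Monteverdi SVM classifier.
from sklearn.svm import SVC

# mean (R, G, B) per segment -- illustrative values
X_train = [
    [0.70, 0.70, 0.72],  # pavement
    [0.65, 0.68, 0.70],  # pavement
    [0.20, 0.45, 0.15],  # vegetation
    [0.25, 0.50, 0.18],  # vegetation
    [0.55, 0.30, 0.25],  # roof tile
    [0.60, 0.33, 0.28],  # roof tile
]
y_train = ["pavement", "pavement", "vegetation",
           "vegetation", "roof", "roof"]

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(clf.predict([[0.22, 0.48, 0.16]])[0])  # vegetation
```

With only RGB bands available, such per-segment spectral means are weak features, which is consistent with the spectral-resolution limitation the study reports.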

  20. Automatic landslide detection from LiDAR DTM derivatives by geographic-object-based image analysis based on open-source software

    NASA Astrophysics Data System (ADS)

    Knevels, Raphael; Leopold, Philip; Petschko, Helene

    2017-04-01

    With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information on the earth's surface and to analyse its limitations. Specifically in the field of natural hazards, digital terrain models (DTM) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches are striving towards automatic detection of landslides to speed up the process of generating landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software, such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), or TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters, such as slope, surface roughness, curvature, or flow direction, (2) finding the optimal scale parameter by the use of an objective function, (3) multi-scale segmentation, (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding, (5) assessment of the classification performance using a pre-existing landslide inventory, and (6) post-processing analysis for the further use in landslide inventories. The results of the developed open-source approach demonstrated good
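
Step (4) of the pipeline, k-means thresholding of a DTM derivative, can be sketched in a few lines. The surface-roughness values below are synthetic, and the real workflow runs in R over raster layers rather than on Python lists.

```python
# Illustrative k-means thresholding (k = 2) of a single DTM derivative:
# cluster the values, then split candidate landslide cells from stable
# terrain at the midpoint between the two cluster centers.
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means; returns sorted cluster centers."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

roughness = [0.05, 0.07, 0.06, 0.08, 0.41, 0.45, 0.39, 0.44]  # synthetic band
lo, hi = kmeans_1d(roughness)
threshold = (lo + hi) / 2
landslide_cells = [v for v in roughness if v > threshold]
print(threshold, landslide_cells)
```

The full classification distinguishes landslide parts (main scarp, body, flanks), so in practice several derivatives and thresholds are combined rather than a single band.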

  1. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
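
The ROM idea above can be illustrated with a minimal surrogate: train on a few expensive CFD samples, then evaluate the surrogate cheaply inside the flowsheet. The piecewise-linear form and the (fuel flow, outlet temperature) samples below are illustrative assumptions, not APECS output.

```python
# Minimal reduced-order-model sketch: a piecewise-linear surrogate
# "trained" on hypothetical (input, output) pairs from CFD runs.
def rom_lookup(x, table):
    """Evaluate a piecewise-linear surrogate at x within the training range."""
    pts = sorted(table)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("input outside the CFD training range")

# hypothetical CFD training runs: (fuel flow, outlet temperature)
cfd_samples = [(1.0, 900.0), (2.0, 1150.0), (3.0, 1300.0)]
print(rom_lookup(1.5, cfd_samples))  # 1025.0
```

Each surrogate evaluation is effectively instantaneous, which is what lets a ROM replace a long-running CFD model during iterative flowsheet convergence.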

  2. The Advanced Software Development and Commercialization Project

    SciTech Connect

    Gallopoulos, E. (Center for Supercomputing Research and Development); Canfield, T.R.; Minkoff, M.; Mueller, C.; Plaskacz, E.; Weber, D.P.; Anderson, D.M.; Therios, I.U.; Aslam, S.; Bramley, R.; Chen, H.-C.; Cybenko, G.; Gallopoulos, E.; Gao, H.; Malony, A.; Sameh, A. (Center for Supercomputing Research)

    1990-09-01

    This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Second, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize these two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities: COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used both for nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for sequential and vector computers only. Our main goal is to port and optimize these two codes on shared-memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.

  3. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  4. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in initial development and in maintenance of deployed systems. The cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software parts composition system.
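
A parts composition system of the kind described can be sketched minimally: a catalog of parts with typed interfaces, a check that a composition is well-formed, and a generator that emits glue code from the specification. The part names, interface types, and emitted pseudo-calls below are illustrative assumptions.

```python
# Toy software-parts composition system: catalog + interface check +
# code generator, mirroring the four components named in the abstract
# (the modeling language is reduced here to a dict of typed interfaces).
CATALOG = {
    "read_csv":  {"inputs": [],        "output": "table"},
    "filter":    {"inputs": ["table"], "output": "table"},
    "summarize": {"inputs": ["table"], "output": "report"},
}

def check_pipeline(parts):
    """Verify each part's input type matches the previous part's output."""
    prev = None
    for name in parts:
        spec = CATALOG[name]
        if spec["inputs"] and spec["inputs"][0] != prev:
            raise TypeError(f"{name} expects {spec['inputs'][0]}, got {prev}")
        prev = spec["output"]
    return prev  # type produced by the whole composition

def generate(parts):
    """Emit target-language glue code for a validated composition."""
    check_pipeline(parts)
    lines = []
    for name in parts:
        call = f"{name}(result)" if CATALOG[name]["inputs"] else f"{name}()"
        lines.append(f"result = {call}")
    return "\n".join(lines)

print(generate(["read_csv", "filter", "summarize"]))
```

The interface check is what makes reuse safe: ill-typed compositions are rejected before any code is generated.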

  5. Advanced information processing system: Input/output network management software

    NASA Technical Reports Server (NTRS)

    Nagle, Gail; Alger, Linda; Kemp, Alexander

    1988-01-01

    The purpose of this document is to provide the software requirements and specifications for the Input/Output Network Management Services for the Advanced Information Processing System. This introduction and overview section is provided to briefly outline the overall architecture and software requirements of the AIPS system before discussing the details of the design requirements and specifications of the AIPS I/O Network Management software. A brief overview of the AIPS architecture is followed by a more detailed description of the network architecture.

  6. Advances in the REDCAT software package.

    PubMed

    Schmidt, Chris; Irausquin, Stephanie J; Valafar, Homayoun

    2013-10-07

    Residual Dipolar Couplings (RDCs) have emerged in the past two decades as an informative source of experimental restraints for the study of structure and dynamics of biological macromolecules and complexes. The REDCAT software package was previously introduced for the analysis of molecular structures using RDC data. Here we report additional features that have been included in this software package in order to expand the scope of its analyses. We first discuss the features that enhance REDCAT's user-friendly nature, such as the integration of a number of analyses into one single operation and enabling convenient examination of a structural ensemble in order to identify the most suitable structure. We then describe the new features which expand the scope of RDC analyses, performing exercises that utilize both synthetic and experimental data to illustrate and evaluate different features with regard to structure refinement and structure validation. We establish the seamless interaction that takes place between REDCAT, VMD, and Xplor-NIH in demonstrations that utilize our newly developed REDCAT-VMD and XplorGUI interfaces. These modules enable visualization of RDC analysis results on the molecular structure displayed in VMD and refinement of structures with Xplor-NIH, respectively. We also highlight REDCAT's Error-Analysis feature in reporting the localized fitness of a structure to RDC data, which provides a more effective means of recognizing local structural anomalies. This allows for structurally sound regions of a molecule to be identified, and for any refinement efforts to be focused solely on locally distorted regions. The newly engineered REDCAT software package, which is available for download via the WWW from http://ifestos.cse.sc.edu, has been developed in the Object Oriented C++ environment. Our most recent enhancements to REDCAT serve to provide a more complete RDC analysis suite, while also accommodating a more user-friendly experience, and will be of

  7. Byonic: Advanced Peptide and Protein Identification Software

    PubMed Central

    Bern, Marshall; Kil, Yong J.; Becker, Christopher

    2013-01-01

    Byonic™ is the name of a software package for peptide and protein identification by tandem mass spectrometry. This software, which has only recently become commercially available, facilitates a much wider range of search possibilities than previous search software such as SEQUEST and Mascot. Byonic allows the user to define an essentially unlimited number of variable modification types. Byonic also allows the user to set a separate limit on the number of occurrences of each modification type, so that a search may consider only one or two chance modifications such as oxidations and deamidations per peptide, yet allow three or four biological modifications such as phosphorylations, which tend to cluster together. Hence Byonic can search for 10s or even 100s of modification types simultaneously without a prohibitively large combinatorial explosion. Byonic’s Wildcard Search™ allows the user to search for unanticipated or even unknown modifications alongside known modifications. Finally, Byonic’s Glycopeptide Search allows the user to identify glycopeptides without prior knowledge of glycan masses or glycosylation sites. PMID:23255153

  8. Byonic: advanced peptide and protein identification software.

    PubMed

    Bern, Marshall; Kil, Yong J; Becker, Christopher

    2012-12-01

    Byonic is the name of a software package for peptide and protein identification by tandem mass spectrometry. This software, which has only recently become commercially available, facilitates a much wider range of search possibilities than previous search software such as SEQUEST and Mascot. Byonic allows the user to define an essentially unlimited number of variable modification types. Byonic also allows the user to set a separate limit on the number of occurrences of each modification type, so that a search may consider only one or two chance modifications such as oxidations and deamidations per peptide, yet allow three or four biological modifications such as phosphorylations, which tend to cluster together. Hence, Byonic can search for tens or even hundreds of modification types simultaneously without a prohibitively large combinatorial explosion. Byonic's Wildcard Search allows the user to search for unanticipated or even unknown modifications alongside known modifications. Finally, Byonic's Glycopeptide Search allows the user to identify glycopeptides without prior knowledge of glycan masses or glycosylation sites.
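
    The capped-modification search described above can be illustrated with a toy enumeration (this is not Byonic's actual algorithm, and the modification names, sites, and caps below are invented): each residue offers a set of allowed modifications, and any assignment that exceeds a per-type occurrence limit is discarded, which is what keeps the combinatorial space manageable.

```python
# Toy illustration (not Byonic's actual algorithm): enumerate candidate
# modification assignments over a peptide's sites while enforcing a
# per-modification-type occurrence cap. Site contents and caps are
# invented for the example.
from itertools import product

def assignments(sites, caps):
    """sites: one set of allowed modification types per residue
    (None means the residue is left unmodified).
    caps: maximum occurrences allowed for each modification type.
    Yields only the assignments that respect every cap."""
    for combo in product(*[[None] + sorted(s) for s in sites]):
        counts = {}
        for mod in combo:
            if mod is not None:
                counts[mod] = counts.get(mod, 0) + 1
        if all(counts.get(m, 0) <= cap for m, cap in caps.items()):
            yield combo

# Two residues that may be phosphorylated and one that may be oxidized;
# allow at most one of each modification type per peptide.
sites = [{"phospho"}, {"oxidation"}, {"phospho"}]
caps = {"phospho": 1, "oxidation": 1}
n = sum(1 for _ in assignments(sites, caps))  # 8 raw combos, 6 survive
```

    Raising a cap admits more of the raw combinations, so per-type limits are a direct lever on search-space size.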

  9. Advances in the REDCAT software package

    PubMed Central

    2013-01-01

    Background Residual Dipolar Couplings (RDCs) have emerged in the past two decades as an informative source of experimental restraints for the study of structure and dynamics of biological macromolecules and complexes. The REDCAT software package was previously introduced for the analysis of molecular structures using RDC data. Here we report additional features that have been included in this software package in order to expand the scope of its analyses. We first discuss the features that enhance REDCAT's user-friendly nature, such as the integration of a number of analyses into one single operation and enabling convenient examination of a structural ensemble in order to identify the most suitable structure. We then describe the new features which expand the scope of RDC analyses, performing exercises that utilize both synthetic and experimental data to illustrate and evaluate different features with regard to structure refinement and structure validation. Results We establish the seamless interaction that takes place between REDCAT, VMD, and Xplor-NIH in demonstrations that utilize our newly developed REDCAT-VMD and XplorGUI interfaces. These modules enable visualization of RDC analysis results on the molecular structure displayed in VMD and refinement of structures with Xplor-NIH, respectively. We also highlight REDCAT's Error-Analysis feature in reporting the localized fitness of a structure to RDC data, which provides a more effective means of recognizing local structural anomalies. This allows for structurally sound regions of a molecule to be identified, and for any refinement efforts to be focused solely on locally distorted regions. Conclusions The newly engineered REDCAT software package, which is available for download via the WWW from http://ifestos.cse.sc.edu, has been developed in the Object Oriented C++ environment. Our most recent enhancements to REDCAT serve to provide a more complete RDC analysis suite, while also accommodating a more user

  10. Advanced Software Methods for Physics Analysis

    NASA Astrophysics Data System (ADS)

    Lista, L.

    2006-01-01

    Unprecedented data analysis complexity is experienced in modern High Energy Physics experiments. The complexity arises from the growing size of recorded data samples, the large number of data analyses performed by different users in each single experiment, and the level of complexity of each single analysis. For this reason, the requirements on software for data analysis impose a very high level of reliability. We present two concrete examples: the first from the BaBar experience with the migration to a new Analysis Model, including the definition of a new model for the Event Data Store; the second, a toolkit for multivariate statistical and parametric Monte Carlo analysis developed using generic programming.

  11. Intelligent Software Tools for Advanced Computing

    SciTech Connect

    Baumgart, C.W.

    2001-04-03

    Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.

  12. Advanced Transport Operating System (ATOPS) utility library software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A.

    1993-01-01

    The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. A library of commonly used software modules was created for general uses among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described.

  13. Software modifications to the Demonstration Advanced Avionics Systems (DAAS)

    NASA Technical Reports Server (NTRS)

    Nedell, B. F.; Hardy, G. H.

    1984-01-01

    Critical information required for the design of integrated avionics suitable for general aviation is applied to software modifications for the Demonstration Advanced Avionics System (DAAS). The program emphasizes the use of data busing, distributed microprocessors, shared electronic displays and data entry devices, and improved functional capability. A demonstration advanced avionics system (DAAS) is designed, built, and flight tested in a Cessna 402, a twin-engine general aviation aircraft. Software modifications are made to DAAS at Ames concurrent with the flight test program. The changes are the result of the experience obtained with the system at Ames and the comments of the pilots who evaluated the system.

  14. Advanced transport operating system software upgrade: Flight management/flight controls software description

    NASA Technical Reports Server (NTRS)

    Clinedinst, Winston C.; Debure, Kelly R.; Dickson, Richard W.; Heaphy, William J.; Parks, Mark A.; Slominski, Christopher J.; Wolverton, David A.

    1988-01-01

    The Flight Management/Flight Controls (FM/FC) software for the Norden 2 (PDP-11/70M) computer installed on the NASA 737 aircraft is described. The software computes navigation position estimates and guidance commands, as well as the commands issued to the control surfaces to direct the aircraft in flight, based on the modes selected on the Advanced Guidance Control System (AGCS) mode panel and the flight path selected via the Navigation Control/Display Unit (NCDU).

  15. Advanced software development workstation. OOPSLA 1992 Conference. Trip report

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report summarizes the main trends observed at the Object Oriented Programming Systems, Languages, and Applications Conference held in Vancouver, British Columbia. This conference is the main object-oriented event that allows us to assess the dynamism of the technology and to meet the main actors of the field. It is an invaluable source of information for the advanced software development project.

  16. Advances in Games Technology: Software, Models, and Intelligence

    ERIC Educational Resources Information Center

    Prakash, Edmond; Brindle, Geoff; Jones, Kevin; Zhou, Suiping; Chaudhari, Narendra S.; Wong, Kok-Wai

    2009-01-01

    Games technology has undergone tremendous development. In this article, the authors report the rapid advancement that has been observed in the way games software is being developed, as well as in the development of games content using game engines. One area that has gained special attention is modeling the game environment such as terrain and…

  18. Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W.

    2016-01-01

    This paper describes the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.

  19. Object-based media and stream-based computing

    NASA Astrophysics Data System (ADS)

    Bove, V. Michael, Jr.

    1998-03-01

    Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. This paper outlines examples of object-based media algorithms and applications developed by my group, and presents new hardware architectures and software methods that we have developed to meet the computational requirements of object-based and other advanced media representations. In particular, we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks, even given heterogeneous computational resources.

  20. Reliability, Safety and Error Recovery for Advanced Control Software

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2003-01-01

    For long-duration automated operation of regenerative life support systems in space environments, there is a need for advanced integration and control systems that are significantly more reliable and safe, and that support error recovery and minimization of operational failures. This presentation outlines some challenges of hazardous space environments and complex system interactions that can lead to system accidents. It discusses approaches to hazard analysis and error recovery for control software and challenges of supporting effective intervention by safety software and the crew.

  1. Advanced Transport Operating System (ATOPS) control display unit software description

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Parks, Mark A.; Debure, Kelly R.; Heaphy, William J.

    1992-01-01

    The software created for the Control Display Units (CDUs), used for the Advanced Transport Operating Systems (ATOPS) project, on the Transport Systems Research Vehicle (TSRV) is described. Module descriptions are presented in a standardized format which contains module purpose, calling sequence, a detailed description, and global references. The global reference section includes subroutines, functions, and common variables referenced by a particular module. The CDUs, one for the pilot and one for the copilot, are used for flight management purposes. Operations performed with the CDU affect the aircraft's guidance, navigation, and display software.

  2. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Damevski, Kostadin

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  3. Software control of the Advanced Technology Solar Telescope enclosure PLC hardware using COTS software

    NASA Astrophysics Data System (ADS)

    Borrowman, Alastair J.; de Bilbao, Lander; Ariño, Javier; Murga, Gaizka; Goodrich, Bret; Hubbard, John R.; Greer, Alan; Mayer, Chris; Taylor, Philip

    2012-09-01

    As PLCs evolve from simple logic controllers into more capable Programmable Automation Controllers (PACs), observatories are increasingly using such devices to control complex mechanisms [1, 2]. This paper describes use of COTS software to control such hardware using the Advanced Technology Solar Telescope (ATST) Common Services Framework (CSF). We present the Enclosure Control System (ECS) under development in Spain and the UK. The paper details selection of the commercial PLC communication library PLCIO. Implemented in C and delivered with source code, the library separates the programmer from communication details through a simple API. Capable of communicating with many types of PLCs (including Allen-Bradley and Siemens) the API remains the same irrespective of PLC in use. The ECS is implemented in Java using the observatory's framework that provides common services for software components. We present a design following a connection-based approach where all components access the PLC through a single connection class. The link between Java and PLCIO C library is provided by a thin Java Native Interface (JNI) layer. Also presented is a software simulator of the PLC based upon the PLCIO Virtual PLC. This creates a simulator operating below the library's API and thus requires no change to ECS software. It also provides enhanced software testing capabilities prior to hardware becoming available. Results are presented in the form of communication timing test data, showing that the use of CSF, JNI and PLCIO provide a control system capable of controlling enclosure tracking mechanisms, that would be equally valid for telescope mount control.
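
    The connection-based approach described above can be sketched generically (in Python rather than the ECS's Java, and with invented class and tag names): all clients go through one connection object, and because a simulator implements the same transport interface below that API, client code runs unchanged against hardware or simulation.

```python
# Generic sketch of the connection-based design: every component uses a
# single connection object, and a simulator slots in below the same API.
# Class names and tag strings are invented; the real ECS is Java code
# calling the PLCIO C library through a JNI layer.
class PlcConnection:
    """Single point of access to the PLC; clients never talk to the
    transport directly, so the transport can be real or simulated."""
    def __init__(self, transport):
        self._transport = transport

    def read_tag(self, tag):
        return self._transport.read(tag)

    def write_tag(self, tag, value):
        self._transport.write(tag, value)

class SimulatedPlc:
    """Stand-in transport, in the spirit of the PLCIO Virtual PLC:
    it sits below the connection API, so no client code changes."""
    def __init__(self):
        self._tags = {}

    def read(self, tag):
        return self._tags.get(tag, 0)

    def write(self, tag, value):
        self._tags[tag] = value

conn = PlcConnection(SimulatedPlc())
conn.write_tag("enclosure.azimuth_demand", 180.0)
pos = conn.read_tag("enclosure.azimuth_demand")
```

    Funneling all access through one connection class is what makes the simulator swap invisible to the rest of the control system.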

  4. Constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael; Podolak, Ester; Mckay, Christopher

    1990-01-01

    Scientific model building can be an intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot be easily distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. In this paper, we describe a prototype for a scientific modeling software tool that serves as an aid to the scientist in developing and using models. This tool includes an interactive intelligent graphical interface, a high level domain specific modeling language, a library of physics equations and experimental datasets, and a suite of data display facilities. Our prototype has been developed in the domain of planetary atmospheric modeling, and is being used to construct models of Titan's atmosphere.

  5. Development of Advanced Multi-Modality Radiation Treatment Planning Software

    SciTech Connect

    Nigg, D W; Hartmann Siantar, C

    2002-02-19

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in development of advanced Monte-Carlo based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software product system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. SERA is at a mature level in its life cycle; it has been licensed for research use worldwide, and it has become well established as a computational tool for research. However, along with its strengths, SERA also has some limitations in its structure and computational methodologies. More specifically, it is optimized only for neutron-based applications. Although photon transport can be computed with SERA, the simplified model that is used is designed primarily for photons produced in the neutron transport process. Thus SERA is not appropriate for applications to, for example, standard external-beam photon radiotherapy, which is by far more commonly used in the clinic than neutron based therapy.

  6. Patient simulation software to augment an advanced pharmaceutics course.

    PubMed

    Benedict, Neal; Schonder, Kristine

    2011-03-10

    To implement and assess the effectiveness of adding a pharmaceutical care simulation program to an advanced therapeutics course. PharmaCAL (University of Pittsburgh), a software program that uses a branched-outcome decision making model, was used to create patient simulations to augment lectures given in the course. In each simulation, students were presented with a challenge, given choices, and then provided with consequences specific to their choices. A survey was administered at the end of the course and students indicated the simulations were enjoyable (92%), easy to use (90%), stimulated interest in critically ill patients (82%), and allowed for application of lecture material (91%). A 5-item presimulation and postsimulation test on the anemia simulation was administered to assess learning. Students answered significantly more questions correctly on the postsimulation test than on the presimulation test (p < 0.001). Seventy-eight percent of students answered the same 5 questions correctly on the final examination. Patient simulation software that used a branched-outcome decision model was an effective supplement to class lectures in an advanced pharmaceutics course and was well-received by pharmacy students.
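
    A branched-outcome decision model of the kind PharmaCAL is described as using can be sketched minimally as a tree of challenge nodes (the clinical content and names below are invented for illustration, not taken from PharmaCAL):

```python
# Hypothetical sketch of a branched-outcome decision model: each node
# presents a challenge and choices, and each choice leads to its own
# consequence node. The clinical content below is invented and is not
# taken from PharmaCAL.
challenge = {
    "prompt": "Patient's hemoglobin is 7.2 g/dL. Next step?",
    "choices": {
        "transfuse": {
            "prompt": "A transfusion reaction develops; manage it.",
            "choices": {},
        },
        "start_iron": {
            "prompt": "Labs suggest iron deficiency; therapy begins.",
            "choices": {},
        },
    },
}

def step(node, choice):
    """Advance the simulation: the consequence shown is specific to the
    student's choice, which is what makes the outcomes 'branched'."""
    if choice not in node["choices"]:
        raise KeyError(f"invalid choice: {choice!r}")
    return node["choices"][choice]

outcome = step(challenge, "start_iron")
```

    Because each choice carries its own consequence subtree, two students making different decisions see genuinely different scenarios rather than a shared linear script.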

  7. The scientific modeling assistant: An advanced software tool for scientific model building

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.

    1991-01-01

    Viewgraphs on the scientific modeling assistant: an advanced software tool for scientific model building are presented. The objective is to build a specialized software tool to assist in scientific model-building.

  9. Advanced Data Format (ADF) Software Library and Users Guide

    NASA Technical Reports Server (NTRS)

    Smith, Matthew; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The "CFD General Notation System" (CGNS) consists of a collection of conventions, and conforming software, for the storage and retrieval of Computational Fluid Dynamics (CFD) data. It facilitates the exchange of data between sites and applications, and helps stabilize the archiving of aerodynamic data. This effort was initiated in order to streamline the procedures in exchanging data and software between NASA and its customers, but the goal is to develop CGNS into a National Standard for the exchange of aerodynamic data. The CGNS development team is comprised of members from Boeing Commercial Airplane Group, NASA-Ames, NASA-Langley, NASA-Lewis, McDonnell-Douglas Corporation (now Boeing-St. Louis), Air Force-Wright Lab., and ICEM-CFD Engineering. The elements of CGNS address all activities associated with the storage of data on external media and its movement to and from application programs. These elements include: 1) The Advanced Data Format (ADF) Database manager, consisting of both a file format specification and its I/O software, which handles the actual reading and writing of data from and to external storage media; 2) The Standard Interface Data Structures (SIDS), which specify the intellectual content of CFD data and the conventions governing naming and terminology; 3) The SIDS-to-ADF File Mapping conventions, which specify the exact location where the CFD data defined by the SIDS is to be stored within the ADF file(s); and 4) The CGNS Mid-level Library, which provides CFD-knowledgeable routines suitable for direct installation into application codes. The ADF is a generic database manager with minimal intrinsic capability. It was written for the purpose of storing large numerical datasets in an efficient, platform independent manner. To be effective, it must be used in conjunction with external agreements on how the data will be organized within the ADF database, such as defined by the SIDS.
There are currently 34 user callable functions that comprise the ADF
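
    To make the idea of an ADF-style database manager concrete, here is a minimal, hypothetical sketch: a tree of named nodes, each optionally holding numeric data, serialized with an explicit little-endian layout so the stored bytes do not depend on the platform. The node API below is invented for illustration; the real ADF defines its own file format and user-callable functions.

```python
# Hypothetical minimal sketch of an ADF-style hierarchical database:
# named nodes form a tree, a node may carry a numeric array, and data
# is packed little-endian so the stored form is platform independent.
# The API below is invented; the real ADF defines its own functions.
import json
import struct

class Node:
    def __init__(self, name):
        self.name = name
        self.data = None       # list of floats, or None
        self.children = {}

    def create_child(self, name):
        child = Node(name)
        self.children[name] = child
        return child

def save(node):
    """Serialize the tree; doubles are packed little-endian and
    hex-encoded so the byte layout never depends on the host."""
    return {
        "name": node.name,
        "data": None if node.data is None
                else struct.pack("<%dd" % len(node.data), *node.data).hex(),
        "children": [save(c) for c in node.children.values()],
    }

def load(blob):
    """Rebuild the node tree from its serialized form."""
    node = Node(blob["name"])
    if blob["data"] is not None:
        raw = bytes.fromhex(blob["data"])
        node.data = list(struct.unpack("<%dd" % (len(raw) // 8), raw))
    for c in blob["children"]:
        node.children[c["name"]] = load(c)
    return node

root = Node("Base")
zone = root.create_child("Zone1")
zone.data = [1.0, 2.5, 4.0]
restored = load(json.loads(json.dumps(save(root))))
```

    The separation mirrors the CGNS layering: the generic tree store knows nothing about CFD, and conventions like the SIDS decide what the node names and arrays mean.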

  11. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0.

    SciTech Connect

    Turgeon, Jennifer L.; Minana, Molly A.; Hackney, Patricia; Pilch, Martin M.

    2009-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

  12. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Sussman, Alan

    2014-10-21

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  13. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.

  14. Toward a Persistent Object Base.

    DTIC Science & Technology

    1986-07-01

    toward improving the practice of software engineering. Such environments provide support for software development, management, and maintenance. There... standardization can all contribute to the openness, in actual practice, of an environment. Integration means that the components of the environment work together... In a persistent object base, data abstraction should be practiced so that logical concepts are decoupled from physical representations; richer

  15. Object-based benefits without object-based representations.

    PubMed

    Fougnie, Daryl; Cormiea, Sarah M; Alvarez, George A

    2013-08-01

    Influential theories of visual working memory have proposed that the basic units of memory are integrated object representations. Key support for this proposal is provided by the same object benefit: It is easier to remember multiple features of a single object than the same set of features distributed across multiple objects. Here, we replicate the object benefit but demonstrate that features are not stored as single, integrated representations. Specifically, participants could remember 10 features better when arranged in 5 objects compared to 10 objects, yet memory for one object feature was largely independent of memory for the other object feature. These results rule out the possibility that integrated representations drive the object benefit and require a revision of the concept of object-based memory representations. We propose that working memory is object-based in regard to the factors that enhance performance but feature-based in regard to the level of representational failure. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  16. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.
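
The iterative packages this workshop surveys are built around kernels like the one below; as a concrete reminder of what such a kernel does, here is a minimal Jacobi iteration (our own illustration, with invented tolerances and test data, not code from any of the packages discussed):

```python
# A minimal Jacobi iteration for Ax = b, the simplest of the iterative
# kernels that packages of this kind provide. Converges for diagonally
# dominant A; the matrix below is invented for the example.
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                      # diagonal entries
    R = A - np.diagflat(D)              # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D         # one Jacobi sweep
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # diagonally dominant
b = np.array([1.0, 2.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b))            # → True
```

Production packages replace this sweep with Krylov methods, preconditioning, and distributed data structures, which is precisely the software territory the workshop covers.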

  17. Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.

    ERIC Educational Resources Information Center

    Clymer, S. J.

    Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…

  18. Safety Cases for Advanced Control Software: Safety Case Patterns

    DTIC Science & Technology

    2007-10-15

    safety claims about conventional software. For example, Littlewood and Strigini note in [8] that, at best, statistical testing can show a failure...ERA Technology Ltd, Report 97-0365, Project 13-01-4745, 1997 [8] B. Littlewood and L. Strigini, Assessment of ultra-high dependability of

  19. Final Report for "Center for Technology for Advanced Scientific Component Software"

    SciTech Connect

    Svetlana Shasharina

    2010-12-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in those applications, and modifying the tools to be more usable.

  20. Building Software Development Capacity to Advance the State of Educational Technology

    ERIC Educational Resources Information Center

    Luterbach, Kenneth J.

    2013-01-01

    Educational technologists may advance the state of the field by increasing capacity to develop software tools and instructional applications. Presently, few academic programs in educational technology require even a single computer programming course. Further, the educational technologists who develop software generally work independently or in…

  2. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Govindaraju, Madhusudhan

    2010-10-31

    Advanced Scientific Computing Research Computer Science FY 2010 Report, Center for Technology for Advanced Scientific Component Software: Distributed CCA. State University of New York, Binghamton, NY, 13902. Summary: The overall objective of Binghamton's involvement is to work on enhancements of the CCA environment, motivated by the applications and research initiatives discussed in the proposal. This year we re-focused our design and development efforts on proof-of-concept implementations that have the potential to significantly impact scientific components. We worked on developing parallel implementations for non-hydrostatic code and on a model coupling interface for biogeochemical computations coded in MATLAB. We also worked on the design and implementation of modules that will be required for the emerging MapReduce model to be effective for scientific applications. Finally, we focused on optimizing the processing of scientific datasets on multi-core processors. Research Details: We worked on the following research projects, which we are applying to CCA-based scientific applications. 1. Non-Hydrostatic Hydrodynamics: Non-hydrostatic hydrodynamics are significantly more accurate at modeling internal waves that may be important in lake ecosystems. Non-hydrostatic codes, however, are significantly more computationally expensive, often prohibitively so. We have worked with Chin Wu at the University of Wisconsin to parallelize non-hydrostatic code. We have obtained a maximum speedup of about 26 times. Although this is significant progress, we hope to improve the performance further, such that it becomes a practical alternative to hydrostatic codes. 2. Model-coupling for water-based ecosystems: To answer pressing questions about water resources requires that physical models (hydrodynamics) be coupled with biological and chemical models. Most hydrodynamics codes are written in Fortran, however, while most ecologists work in MATLAB. This
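
The "about 26 times" figure in this report is a parallel speedup. As a reminder of the arithmetic behind such claims (the timings and processor count below are invented for illustration, not taken from the report):

```python
# Speedup and parallel efficiency from serial vs. parallel wall-clock
# times. All numbers here are hypothetical.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

t1, t32 = 1040.0, 40.0              # hypothetical timings in seconds
print(speedup(t1, t32))             # → 26.0
print(efficiency(t1, t32, 32))      # → 0.8125
```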

  3. Construction of an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.

    1992-01-01

    Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a test bed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.

  4. Construction of an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Friedland, Peter; Keller, Richard M.; Mckay, Christopher P.; Sims, Michael H.; Thompson, David E.

    1993-01-01

    Scientific model-building can be a time intensive and painstaking process, often involving the development of large complex computer programs. Despite the effort involved, scientific models cannot be distributed easily and shared with other scientists. In general, implemented scientific models are complicated, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing, using and sharing models. The proposed tool will include an interactive intelligent graphical interface and a high-level domain-specific modeling language. As a testbed for this research, we propose to develop a software prototype in the domain of planetary atmospheric modeling.

  6. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
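
The Scharfetter-Gummel approach mentioned above weights carrier densities with the Bernoulli function B(x) = x/(e^x - 1). A one-dimensional sketch of that flux (our own illustration; grid values are invented and sign conventions vary between formulations):

```python
# Sketch of a Scharfetter-Gummel flux for 1-D drift-diffusion transport.
# B(x) = x / (e^x - 1) is the Bernoulli function; dpsi is the normalized
# potential drop between adjacent nodes. One common sign convention is
# used here; others appear in the literature.
import math

def bernoulli(x):
    if abs(x) < 1e-12:          # limit B(0) = 1
        return 1.0
    return x / math.expm1(x)    # expm1 is accurate for small x

def sg_flux(n_left, n_right, dpsi, D=1.0, h=1.0):
    """Carrier flux between two nodes a distance h apart."""
    return (D / h) * (bernoulli(-dpsi) * n_left - bernoulli(dpsi) * n_right)

# With zero potential drop the scheme reduces to pure diffusion,
# flux proportional to (n_left - n_right):
print(sg_flux(2.0, 1.0, 0.0))   # → 1.0
```

For large |dpsi| the Bernoulli weights effectively upwind the scheme, which is why Scharfetter-Gummel is discussed alongside the upwind and artificial-dissipation methods in this survey.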

  7. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.

  8. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan part 2 mappings for the ASC software quality engineering practices, version 2.0.

    SciTech Connect

    Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR001.3.2 and CPR001.3.6 and to a Department of Energy document, ''ASCI Software Quality Engineering: Goals, Principles, and Guidelines''. This document also identifies ASC management and software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  9. Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan. Part 2, Mappings for the ASC software quality engineering practices. Version 1.0.

    SciTech Connect

    Ellis, Molly A.; Heaphy, Robert; Sturtevant, Judith E.; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, 'ASCI Software Quality Engineering: Goals, Principles, and Guidelines'. This document also identifies ASC management and software project teams responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  10. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1: ASC software quality engineering practices, Version 2.0.

    SciTech Connect

    Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Minana, Molly A.; Hackney, Patricia; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2006-09-01

    The purpose of the Sandia National Laboratories Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. The plan defines the ASC program software quality practices and provides mappings of these practices to Sandia Corporate Requirements CPR 1.3.2 and 1.3.6 and to a Department of Energy document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines. This document also identifies ASC management and software project teams responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals.

  11. Advanced program development management software system. Software description and user's manual

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The objectives of this project were to apply emerging techniques and tools from the computer science discipline of paperless management to the activities of the Space Transportation and Exploration Office (PT01) in Marshall Space Flight Center (MSFC) Program Development, thereby enhancing the productivity of the workforce, the quality of the data products, and the collection, dissemination, and storage of information. The approach used to accomplish the objectives emphasized the utilization of finished form (off-the-shelf) software products to the greatest extent possible without impacting the performance of the end product, to pursue developments when necessary in the rapid prototyping environment to provide a mechanism for frequent feedback from the users, and to provide a full range of user support functions during the development process to promote testing of the software.

  12. An object-based interviewing system

    SciTech Connect

    Tonn, B.; Goeltz, R.; Chiang, Tai-Lun

    1992-05-01

    Oak Ridge National Laboratory (ORNL) has developed an object-based interviewing system (OBIS). The key design feature is that each survey question is a function, which is treated as an independent object. The Survey Manager (SM) module maintains a stack of questions and calls each question function when directed. Each question, when called, calls the Interaction Facility (IF) to set up the appropriate screen. This modular approach to automated survey design offers maximum flexibility for system development and maintenance. The software is written in Common Lisp and currently runs on Symbolics and VAX computers. The Bureau of Labor Statistics is funding a project to use OBIS to automate the Current Population Survey (CPS), an expenditure survey, and questions about intrashousehold communication to collect data to study communication and proxy response error.
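
The OBIS design above treats each question as an independent function object, with a Survey Manager popping questions off a stack and an Interaction Facility handling the screen. A hypothetical Python rendering of that architecture (the original is Common Lisp; names and questions below are our own illustration):

```python
# Sketch of the OBIS architecture: question functions on a stack,
# dispatched by a Survey Manager, screens handled by an Interaction
# Facility. Answers are pre-supplied so the sketch runs unattended.

class InteractionFacility:
    def ask(self, prompt, answer):        # a real system would read input
        print(f"{prompt} -> {answer}")
        return answer

def q_age(iface):
    return ("age", iface.ask("How old are you?", 34))

def q_employed(iface):
    return ("employed", iface.ask("Are you employed?", True))

class SurveyManager:
    def __init__(self, questions):
        self.stack = list(reversed(questions))   # stack of question objects
        self.iface = InteractionFacility()

    def run(self):
        answers = {}
        while self.stack:
            question = self.stack.pop()          # call each question in turn
            key, value = question(self.iface)
            answers[key] = value
        return answers

print(SurveyManager([q_age, q_employed]).run())
# → {'age': 34, 'employed': True}
```

Because each question is an independent object, adding, removing, or reordering questions touches only the stack, which is the maintenance flexibility the abstract claims.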

  13. An object-based interviewing system

    SciTech Connect

    Tonn, B.; Goeltz, R.; Chiang, Tai-Lun

    1992-01-01

    Oak Ridge National Laboratory (ORNL) has developed an object-based interviewing system (OBIS). The key design feature is that each survey question is a function, which is treated as an independent object. The Survey Manager (SM) module maintains a stack of questions and calls each question function when directed. Each question, when called, calls the Interaction Facility (IF) to set up the appropriate screen. This modular approach to automated survey design offers maximum flexibility for system development and maintenance. The software is written in Common Lisp and currently runs on Symbolics and VAX computers. The Bureau of Labor Statistics is funding a project to use OBIS to automate the Current Population Survey (CPS), an expenditure survey, and questions about intrashousehold communication to collect data to study communication and proxy response error.

  14. Constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.

    1990-01-01

    A prototype is described that can serve as a scientific-modeling software tool to facilitate the development of useful scientific models. The prototype is developed for applications to planetary modeling, and specific examples are given that relate to the atmosphere of Titan. The scientific modeling tool employs a high-level domain-specific modeling language, several data-display facilities, and a library of experimental datasets and scientific equations. The planetary modeling prototype links uncomputed physical variables to computed variables with computational transformations based on a backchaining procedure. The system - implemented in LISP with an object-oriented knowledge-representation tool - is run on a workstation that provides interface with several models. The prototype is expected to form the basis for a sophisticated modeling tool that can permit active experimentation.
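
The backchaining procedure described above links each uncomputed variable to known ones through transformations, recursing until only known values remain. A small sketch of that idea (variable names, the transformation table, and the constants are invented for illustration):

```python
# Backchaining over computational transformations: to obtain a requested
# variable, chain back through transformations until reaching values
# that are already known. The table and physics here are illustrative.

transforms = {
    # target: (required inputs, transformation)
    "pressure": (("density", "temperature"), lambda d, t: d * 287.0 * t),
    "density":  (("mass", "volume"),         lambda m, v: m / v),
}

def resolve(var, known):
    """Backchain from `var` to known quantities."""
    if var in known:
        return known[var]
    inputs, fn = transforms[var]                 # chain back one step
    return fn(*(resolve(i, known) for i in inputs))

known = {"mass": 2.0, "volume": 1.0, "temperature": 300.0}
print(resolve("pressure", known))   # → 172200.0
```

Here `pressure` is not known directly, so the system first backchains to `density`, computes it from `mass` and `volume`, and only then applies the pressure transformation.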

  15. Advanced methods of microscope control using μManager software

    PubMed Central

    Edelstein, Arthur D.; Tsuchida, Mark A.; Amodaj, Nenad; Pinkard, Henry; Vale, Ronald D.; Stuurman, Nico

    2014-01-01

    μManager is an open-source, cross-platform desktop application to control a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, μManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced μManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging. PMID:25606571

  16. Advanced methods of microscope control using μManager software.

    PubMed

    Edelstein, Arthur D; Tsuchida, Mark A; Amodaj, Nenad; Pinkard, Henry; Vale, Ronald D; Stuurman, Nico

    μManager is an open-source, cross-platform desktop application to control a wide variety of motorized microscopes, scientific cameras, stages, illuminators, and other microscope accessories. Since its inception in 2005, μManager has grown to support a wide range of microscopy hardware and is now used by thousands of researchers around the world. The application provides a mature graphical user interface and offers open programming interfaces to facilitate plugins and scripts. Here, we present a guide to using some of the recently added advanced μManager features, including hardware synchronization, simultaneous use of multiple cameras, projection of patterned light onto a specimen, live slide mapping, imaging with multi-well plates, particle localization and tracking, and high-speed imaging.

  17. Development of Advanced Multi-Modality Radiation Treatment Planning Software for Neutron Radiotherapy and Beyond

    SciTech Connect

    Nigg, D; Wessol, D; Wemple, C; Harkin, G; Hartmann-Siantar, C

    2002-08-20

    The Idaho National Engineering and Environmental Laboratory (INEEL) has long been active in development of advanced Monte-Carlo based computational dosimetry and treatment planning methods and software for advanced radiotherapy, with a particular focus on Neutron Capture Therapy (NCT) and, to a somewhat lesser extent, Fast-Neutron Therapy. The most recent INEEL software system of this type is known as SERA, Simulation Environment for Radiotherapy Applications. As a logical next step in the development of modern radiotherapy planning tools to support the most advanced research, INEEL and Lawrence Livermore National Laboratory (LLNL), the developers of the PEREGRINE computational engine for radiotherapy treatment planning applications, have recently launched a new project to collaborate in the development of a "next-generation" multi-modality treatment planning software system that will be useful for all modern forms of radiotherapy.

  18. Advances in Single-Photon Emission Computed Tomography Hardware and Software.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2016-02-01

    Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

  19. Evaluation of the Red Blood Cell Advanced Software Application on the CellaVision DM96.

    PubMed

    Criel, M; Godefroid, M; Deckers, B; Devos, H; Cauwelier, B; Emmerechts, J

    2016-08-01

    The CellaVision Advanced Red Blood Cell (RBC) Software Application is a new software for advanced morphological analysis of RBCs on a digital microscopy system. Upon automated precharacterization into 21 categories, the software offers the possibility of reclassification of RBCs by the operator. We aimed to define the optimal cut-off to detect morphological RBC abnormalities and to evaluate the precharacterization performance of this software. Thirty-eight blood samples of healthy donors and sixty-eight samples of hospitalized patients were analyzed. Different methodologies to define a cut-off between negativity and positivity were used. Sensitivity and specificity were calculated according to these different cut-offs using the manual microscopic method as the gold standard. Imprecision was assessed by measuring analytical within-run and between-run variability and by measuring between-observer variability. By optimizing the cut-off between negativity and positivity, sensitivities exceeded 80% for 'critical' RBC categories (target cells, tear drop cells, spherocytes, sickle cells, and parasites), while specificities exceeded 80% for the other RBC morphological categories. Results of within-run, between-run, and between-observer variabilities were all clinically acceptable. The CellaVision Advanced RBC Software Application is an easy-to-use software that helps to detect most RBC morphological abnormalities in a sensitive and specific way without increasing work load, provided the proper cut-offs are chosen. However, evaluation of the images by an experienced observer remains necessary. © 2016 John Wiley & Sons Ltd.
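
The cut-off optimization in this study trades sensitivity against specificity with the manual microscopic method as the gold standard. A minimal sketch of that calculation (scores, labels, and the cut-off below are invented, not data from the study):

```python
# Sensitivity and specificity for a chosen cut-off between negativity
# and positivity. `truth` plays the role of the gold-standard manual
# classification; all values are invented.

def sens_spec(scores, truth, cutoff):
    tp = sum(s >= cutoff and t for s, t in zip(scores, truth))
    fn = sum(s < cutoff and t for s, t in zip(scores, truth))
    tn = sum(s < cutoff and not t for s, t in zip(scores, truth))
    fp = sum(s >= cutoff and not t for s, t in zip(scores, truth))
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.7, 0.2, 0.1, 0.6]       # per-sample abnormality scores
truth  = [True, True, False, False, False, True]
sensitivity, specificity = sens_spec(scores, truth, cutoff=0.5)
print(sensitivity, specificity)   # → 1.0 0.6666666666666666
```

Sweeping `cutoff` over the score range and picking the point where both measures exceed the desired threshold is the "optimal cut-off" exercise the abstract describes.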

  20. Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv

    2002-01-01

    A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support system operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center.
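
The hybrid style described here, with continuous flows punctuated by discrete reconfigurations, can be sketched in a few lines (the tank, valve, and rates below are invented; this is not CONFIG code):

```python
# Minimal hybrid discrete/continuous simulation: a tank level evolves
# continuously (Euler integration) while a discrete event closes the
# inflow valve mid-run. All quantities are illustrative.

def simulate(steps=100, dt=0.1):
    level, inflow, outflow = 0.0, 2.0, 1.0
    valve_open = True
    close_at_step = 50                       # discrete reconfiguration event
    for step in range(steps):
        if step == close_at_step:
            valve_open = False               # event fires: valve closes
        rate = (inflow if valve_open else 0.0) - outflow
        level = max(0.0, level + rate * dt)  # continuous state update
    return level

# Fills for 5 s at net +1 unit/s, then drains at -1 unit/s for 5 s:
print(round(simulate(), 6))   # → 0.0
```

A real tool like CONFIG adds event queues, component models, and hooks for external control software, but the interleaving of continuous integration steps with discrete state changes is the same.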

  1. WinSRFR: Current Advances in Software for Surface Irrigation Simulation and Analysis

    USDA-ARS?s Scientific Manuscript database

    Significant advances have been made over the last decade in the development of software for surface irrigation analysis. WinSRFR is an integrated tool that combines unsteady flow simulation with tools for system evaluation/parameter estimation, system design, and for operational optimization. Ongoi...

  2. Evaluating Object-Based Image Analysis on Glacial Micromorphology

    NASA Astrophysics Data System (ADS)

    Chin, K. S.; Sjogren, D. B.

    2007-12-01

    Micromorphology has recently been applied more widely to the analysis of glacial sediments at the microscopic level. It provides additional information and details that may help to explain glacial processes in areas where macro-scale observations cannot yield sufficient information. However, the process of interpreting thin sections has been very subjective, and reaching general consensus about glacial processes is difficult. Remote sensing technology is increasingly helpful in the development and advancement of many sciences; the concepts of object cognition that underlie the technology in other fields, such as landscape ecology, can be applied to micromorphology. As in landscape ecology, automating the interpretation of objects in glacial sediments may simplify the process and decrease its subjectivity. Definiens Professional 5 is an object-based image analysis program that imitates human cognitive methods; it is used in this study to identify objects apart from background matrices in multiple thin-section images of glacial sediments. The program's initial results showed that more work is needed, but overall the software produced promising results. The method is repeatable and continues to generate consistent results without bias or ambiguity, so its application to micromorphology and similar areas will be valuable.
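
At its simplest, separating objects from a background matrix amounts to thresholding and grouping connected pixels. A pure-Python sketch of that first step (the tiny "thin section" grid and threshold are invented; commercial tools like Definiens add multi-scale segmentation and rule-based classification on top):

```python
# Threshold a grayscale grid and count connected objects with a
# flood fill (4-connectivity). Pixel values are illustrative.

def count_objects(image, threshold):
    rows, cols = len(image), len(image[0])
    mask = [[v > threshold for v in row] for row in image]
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                count += 1                     # new object found
                stack = [(r, c)]
                while stack:                   # flood-fill this object
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if not mask[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [7, 0, 0, 0],
         [7, 7, 0, 0]]
print(count_objects(image, threshold=5))   # → 2
```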

  3. Software.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1989

    1989-01-01

    Presented are reviews of two computer software packages for Apple II computers; "Organic Spectroscopy," and "Videodisc Display Program" for use with "The Periodic Table Videodisc." A sample spectrograph from "Organic Spectroscopy" is included. (CW)

  4. Cost benefits of advanced software: A review of methodology used at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Joglekar, Prafulla N.

    1993-01-01

    To assist rational investments in advanced software, a formal, explicit, and multi-perspective cost-benefit analysis methodology is proposed. The methodology can be implemented through a six-stage process which is described and explained. The current practice of cost-benefit analysis at KSC is reviewed in the light of this methodology. The review finds that there is a vicious circle operating. Unsound methods lead to unreliable cost-benefit estimates. Unreliable estimates convince management that cost-benefit studies should not be taken seriously. Then, given external demands for cost-benefit estimates, management encourages software engineers to somehow come up with the numbers for their projects. Lacking the expertise needed to do a proper study, courageous software engineers with vested interests use ad hoc and unsound methods to generate some estimates. In turn, these estimates are unreliable, and the cycle continues. The proposed methodology should help KSC to break out of this cycle.
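
One standard building block of any sound cost-benefit study is discounting future benefits to present value. A minimal sketch (the cash flows and discount rate are invented; the report's six-stage methodology covers much more than this single calculation):

```python
# Net present value of a software investment: an up-front cost followed
# by a stream of yearly benefits, discounted at a fixed rate. All
# figures are hypothetical.

def npv(rate, cashflows):
    """cashflows[0] is the up-front cost (negative), then yearly benefits."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]  # hypothetical project
print(round(npv(0.10, flows), 2))   # → 26794.62
```

A positive NPV at the chosen discount rate is the usual go/no-go signal; unsound estimates of the `flows` inputs are exactly the failure mode the review describes.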

  5. Advanced Transport Operating System (ATOPS) Flight Management/Flight Controls (FM/FC) software description

    NASA Technical Reports Server (NTRS)

    Wolverton, David A.; Dickson, Richard W.; Clinedinst, Winston C.; Slominski, Christopher J.

    1993-01-01

    The flight software developed for the Flight Management/Flight Controls (FM/FC) MicroVAX computer used on the Transport Systems Research Vehicle for Advanced Transport Operating Systems (ATOPS) research is described. The FM/FC software computes navigation position estimates, guidance commands, and those commands issued to the control surfaces to direct the aircraft in flight. Various modes of flight are provided for, ranging from computer assisted manual modes to fully automatic modes including automatic landing. A high-level system overview as well as a description of each software module comprising the system is provided. Digital systems diagrams are included for each major flight control component and selected flight management functions.

  6. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data.

    PubMed

    Oostenveld, Robert; Fries, Pascal; Maris, Eric; Schoffelen, Jan-Mathijs

    2011-01-01

    This paper describes FieldTrip, an open source software package that we developed for the analysis of MEG, EEG, and other electrophysiological data. The software is implemented as a MATLAB toolbox and includes a complete set of consistent and user-friendly high-level functions that allow experimental neuroscientists to analyze experimental data. It includes algorithms for simple and advanced analysis, such as time-frequency analysis using multitapers, source reconstruction using dipoles, distributed sources and beamformers, connectivity analysis, and nonparametric statistical permutation tests at the channel and source level. The implementation as toolbox allows the user to perform elaborate and structured analyses of large data sets using the MATLAB command line and batch scripting. Furthermore, users and developers can easily extend the functionality and implement new algorithms. The modular design facilitates the reuse in other software packages.
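    As a rough illustration of one technique the toolbox offers, multitaper spectral estimation averages periodograms computed with orthogonal Slepian (DPSS) tapers to trade frequency resolution for variance reduction. The sketch below uses plain NumPy/SciPy on a synthetic 40 Hz signal; it is not FieldTrip's MATLAB API:

```python
import numpy as np
from scipy.signal.windows import dpss

fs = 1000.0                               # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.normal(size=t.size)

NW = 4                                    # time-bandwidth product
tapers = dpss(x.size, NW, Kmax=2 * NW - 1)  # 7 orthogonal Slepian tapers
spectra = [np.abs(np.fft.rfft(x * w)) ** 2 for w in tapers]
psd = np.mean(spectra, axis=0)            # multitaper estimate: average
freqs = np.fft.rfftfreq(x.size, 1 / fs)
peak = freqs[np.argmax(psd)]              # strongest spectral component
```

Averaging over tapers smooths the estimate over roughly a 2·NW/T bandwidth, which is why multitaper methods are favored for noisy single-trial electrophysiological data.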

  8. Proposal for constructing an advanced software tool for planetary atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Sims, Michael H.; Podolak, Esther; Mckay, Christopher P.; Thompson, David E.

    1990-01-01

    Scientific model building can be a time intensive and painstaking process, often involving the development of large and complex computer programs. Despite the effort involved, scientific models cannot easily be distributed and shared with other scientists. In general, implemented scientific models are complex, idiosyncratic, and difficult for anyone but the original scientist/programmer to understand. We believe that advanced software techniques can facilitate both the model building and model sharing process. We propose to construct a scientific modeling software tool that serves as an aid to the scientist in developing and using models. The proposed tool will include an interactive intelligent graphical interface and a high level, domain specific, modeling language. As a testbed for this research, we propose development of a software prototype in the domain of planetary atmospheric modeling.

  9. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    SciTech Connect

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  10. Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect

    Kostadin, Damevski

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  11. Perceptual Load Modulates Object-Based Attention

    ERIC Educational Resources Information Center

    Ho, Ming-Chou; Atchley, Paul

    2009-01-01

    Two experimental series are reported using both reaction time (RT) and a data-limited perceptual report to examine the effects of perceptual load on object-based attention. Perceptual load was manipulated across 3 levels by increasing the complexity of perceptual judgments. Data from the RT-based experiments showed object-based effects when the…

  12. Saccade latency indexes exogenous and endogenous object-based attention.

    PubMed

    Şentürk, Gözde; Greenberg, Adam S; Liu, Taosheng

    2016-10-01

    Classic studies of object-based attention have utilized keypress responses as the main dependent measure. However, people typically make saccades to fixate important objects. Recent work has shown that attention may act differently when it is deployed covertly versus in advance of a saccade. We further investigated the link between saccades and attention by examining whether object-based effects can be observed for saccades. We adapted the classical double-rectangle cueing paradigm of Egly, Driver, and Rafal (1994), and measured both the first saccade latency and the keypress reaction time (RT) to a target that appeared at the end of one of the two rectangles. Our results showed that saccade latencies exhibited higher sensitivity than did RTs for detecting effects of attention. We also assessed the generality of the attention effects by testing three types of cues: hybrid (predictive and peripheral), exogenous (nonpredictive and peripheral), and endogenous (predictive and central). We found that both RTs and saccade latencies exhibited effects of both space-based and object-based attentional selection. However, saccade latencies showed a more robust attentional modulation than RTs. For the exogenous cues, we observed a spatial inhibition of return along with an object-based effect, implying that object-based attention is independent of space-based attention. Overall, our results revealed an oculomotor correlate of object-based attention, suggesting that, in addition to spatial priority, object-level priority also affects saccade planning.
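    In the double-rectangle paradigm, the two attention effects are simple contrasts between condition means: the space-based effect compares valid versus invalid locations within the cued object, and the object-based effect compares equidistant invalid locations within versus between objects. With hypothetical latencies, assuming the usual definitions:

```python
# Mean saccade latencies (ms) per cue condition; hypothetical values.
valid = 180.0          # target at the cued location
invalid_same = 205.0   # uncued location, same rectangle as the cue
invalid_diff = 215.0   # uncued location, other rectangle (equidistant)

space_based = invalid_same - valid           # cost of shifting location
object_based = invalid_diff - invalid_same   # extra cost of shifting objects
```

A positive object-based contrast indicates that shifting attention between objects costs more than shifting the same distance within an object.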

  13. Advances in software development for intelligent interfaces for alarm and emergency management consoles

    SciTech Connect

    Moseley, M.R.; Olson, C.E.

    1986-01-01

    Recent advances in technology allow features like voice synthesis, voice and speech recognition, image understanding, and intelligent database management to be incorporated in computer-driven alarm and emergency management information systems. New software development environments make it possible to do rapid prototyping of custom applications. Three examples using these technologies are discussed. (1) Maximum use is made of high-speed graphics and voice synthesis to implement a state-of-the-art alarm processing and display system with features that make the operator-machine interface efficient and accurate. Although very functional, this system is not portable or flexible; the software would have to be substantially rewritten for other applications. (2) An application generator has the capability of "building" a specific alarm processing and display application in a matter of a few hours, using the site definition developed in the security planning phase to produce the custom application. This package is based on a standardized choice of hardware, within which it is capable of building a system to order, automatically constructing graphics, data tables, alarm prioritization rules, and interfaces to peripherals. (3) A software tool, the User Interface Management System (UIMS), permits rapid prototyping of human-machine interfaces for a variety of applications including emergency management, alarm display, and process information display. The object-oriented software of the UIMS achieves rapid prototyping of a new interface by standardizing to a class library of software objects instead of hardware objects.

  14. Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan. Part 1 : ASC software quality engineering practices version 1.0.

    SciTech Connect

    Minana, Molly A.; Sturtevant, Judith E.; Heaphy, Robert; Hodges, Ann Louise; Boucheron, Edward A.; Drake, Richard Roy; Forsythe, Christi A.; Schofield, Joseph Richard, Jr.; Pavlakos, Constantine James; Williamson, Charles Michael; Edwards, Harold Carter

    2005-01-01

    The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in DOE/AL Quality Criteria (QC-1) as conformance to customer requirements and expectations. This quality plan defines the ASC program software quality practices and provides mappings of these practices to the SNL Corporate Process Requirements (CPR 1.3.2 and CPR 1.3.6) and the Department of Energy (DOE) document, ASCI Software Quality Engineering: Goals, Principles, and Guidelines (GP&G). This quality plan identifies ASC management's and software project teams' responsibilities for cost-effective software engineering quality practices. The SNL ASC Software Quality Plan establishes the signatories' commitment to improving software products by applying cost-effective software engineering quality practices. This document explains the project teams' opportunities for tailoring and implementing the practices; enumerates the practices that compose the development of SNL ASC's software products; and includes a sample assessment checklist that was developed based upon the practices in this document.

  15. How Prevalent Is Object-Based Attention?

    PubMed Central

    Pilz, Karin S.; Roggeveen, Alexa B.; Creighton, Sarah E.; Bennett, Patrick J.; Sekuler, Allison B.

    2012-01-01

    Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments we address the possibility that the variability in object-based attention effects across studies reflects a low incidence of such effects at the level of individual subjects. Experiment 1 measured space-based and object-based cueing effects for horizontal and vertical rectangles in 60 subjects, comparing commonly used target detection and discrimination tasks. In Experiment 2 we ran another 120 subjects in a target discrimination task in which rectangle orientation varied between subjects. Using parametric statistical methods, we found object-based effects only for horizontal rectangles. Bootstrapping methods were used to measure effects in individual subjects. Significant space-based cueing effects were found in nearly all subjects in both experiments, across tasks and rectangle orientations. However, only a small number of subjects exhibited significant object-based cueing effects. Experiment 3 measured only object-based attention effects using another common paradigm, and again, using bootstrapping, we found only a small number of subjects with significant object-based cueing effects. Our results show that object-based effects are more prevalent for horizontal rectangles, which accords with the theory that attention may be allocated more easily along the horizontal meridian. The fact that so few individuals exhibit a significant object-based cueing effect is presumably why previous studies of this effect have yielded inconsistent results. The results from the current study highlight the importance of considering individual subject data in addition to commonly…
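    The per-subject bootstrap the authors describe can be sketched as resampling one subject's trials with replacement and checking whether the confidence interval for the cueing effect excludes zero. The values below are hypothetical, and this is a generic sketch rather than the authors' exact procedure:

```python
import numpy as np

def bootstrap_effect(valid_rt, invalid_rt, n_boot=2000, seed=0):
    """95% bootstrap CI for one subject's cueing effect (invalid - valid RT)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        v = rng.choice(valid_rt, size=valid_rt.size, replace=True)
        iv = rng.choice(invalid_rt, size=invalid_rt.size, replace=True)
        diffs[i] = iv.mean() - v.mean()
    return np.percentile(diffs, [2.5, 97.5])

rng = np.random.default_rng(42)
valid = rng.normal(450.0, 40.0, 80)    # hypothetical valid-cue RTs (ms)
invalid = rng.normal(520.0, 40.0, 80)  # hypothetical invalid-cue RTs (ms)
lo, hi = bootstrap_effect(valid, invalid)
```

The subject shows a significant effect when the interval (lo, hi) lies entirely above zero; counting how many subjects pass this test gives the prevalence estimates the paper reports.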

  16. NASA's Advanced Multimission Operations System: A Case Study in Formalizing Software Architecture Evolution

    NASA Technical Reports Server (NTRS)

    Barnes, Jeffrey M.

    2011-01-01

    All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.

  18. The Utility of Free Software for Gravity and Magnetic Advanced Data Processing

    NASA Astrophysics Data System (ADS)

    Grandis, Hendra; Dahrin, Darharta

    2017-04-01

    The lack of computational tools, i.e. software, often hinders the proper teaching and application of geophysical data processing in academic institutions in Indonesia. Although there are academic licensing options for commercial software, such options are still well beyond the financial capability of some academic institutions. Academic community members (both lecturers and students) must therefore be creative and resourceful to overcome this situation, so the ability to write computer programs or codes is a necessity. However, many computer programs, and even full software packages, are freely available on the internet. Generally, the utility of this freely distributed software is limited to demonstration, or to visualizing and exchanging data. This paper discusses the utility of Geosoft's Oasis Montaj Viewer along with the USGS GX programs, which are available for free. Useful gravity and magnetic advanced data processing (e.g., gradient calculation and spectral analysis) can be performed "correctly", without approximations that sometimes lead to dubious results and interpretations.
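    One of the operations mentioned, gradient (vertical derivative) calculation, is commonly done in the wavenumber domain, where for potential-field data the first vertical derivative corresponds to multiplying the spectrum by |k|. A minimal sketch for a 1-D profile, under flat-Earth Fourier conventions:

```python
import numpy as np

def vertical_derivative(profile, dx):
    """First vertical derivative of a gravity profile via the FFT:
    multiply the spectrum by |k| (1-D flat-Earth wavenumber operator)."""
    k = 2 * np.pi * np.fft.fftfreq(profile.size, d=dx)
    return np.real(np.fft.ifft(np.fft.fft(profile) * np.abs(k)))

# Analytic check: for g(x) = cos(k0*x), the operator returns k0*cos(k0*x).
n, dx = 1024, 0.1
x = np.arange(n) * dx
k0 = 2 * np.pi * 8 / (n * dx)      # wavenumber aligned with an FFT bin
dg = vertical_derivative(np.cos(k0 * x), dx)
```

Because the operator is exact in the wavenumber domain, the FFT route avoids the finite-difference approximations the abstract warns against.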

  19. Roles for software technologies in advancing research and theory in educational psychology.

    PubMed

    Hadwin, Allyson F; Winne, Philip H; Nesbit, John C

    2005-03-01

    While reviews abound on theoretical topics in educational psychology, it is rare that we examine our field's instrumentation development, and what effects this has on educational psychology's evolution. To repair this gap, this paper investigates and reveals the implications of software technologies for researching and theorizing about core issues in educational psychology. From a set of approximately 1,500 articles published between 1999 and 2004, we sampled illustrative studies and organized them into four broad themes: (a) innovative ways to operationalize variables, (b) the changing nature of instructional interventions, (c) new fields of research in educational psychology, and (d) new constructs to be examined. In each area, we identify novel uses of these technologies and suggest how they may advance, and, in some instances, reshape theory and methodology. Overall, we demonstrate that software technologies hold significant potential to elaborate research in the field.

  1. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system' is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  2. Advanced Transport Operating System (ATOPS) color displays software description: MicroVAX system

    NASA Technical Reports Server (NTRS)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Display MicroVAX computer used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery of February 27, 1991, known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes subroutines, functions, and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight Cathode Ray Tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  3. Earth Observing System (EOS)/ Advanced Microwave Sounding Unit-A (AMSU-A): Special Test Equipment. Software Requirements

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1995-01-01

    This document defines the functional, performance, and interface requirements for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) Special Test Equipment (STE) software used in the test and integration of the instruments.

  4. ACTS (Advanced Communications Technology Satellite) Propagation Experiment: Preprocessing Software User's Manual

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.; Wang, Xuhe; Westenhaver, David

    1996-01-01

    The preprocessing software manual describes the Actspp program, originally developed to observe and diagnose Advanced Communications Technology Satellite (ACTS) propagation terminal/receiver problems. However, it has been quite useful for automating the preprocessing functions needed to convert the terminal output to useful attenuation estimates. Before the data are acceptable for archival, the individual receiver system must be calibrated and the power level shifts caused by ranging tone modulation must be removed. Actspp provides three output files: the daylog, the diurnal coefficient file, and the file that contains calibration information.

  5. Recent advances in software for beamline design, accelerator operations and personnel training

    NASA Astrophysics Data System (ADS)

    Gillespie, George H.; Hill, Barrey W.; Martono, Hendy; Moore, John M.

    2000-03-01

    Accelerators are finding new applications in research, industry, and medicine, as well as other fields, and there is a growing need for new tools to improve the productivity of scientists and engineers involved with these emerging accelerator applications. Several advances in computer software have been made that focus on meeting those needs. This paper summarizes recent work in the development of a unique software framework designed specifically to support the accelerator community: the Multi-Platform Shell for Particle Accelerator Related Codes (SPARC MP). SPARC MP includes a sophisticated beamline object model and an extensive library of GUI components, and supports a variety of particle optics codes and add-on tools. This framework has been used to create the Particle Beam Optics Laboratory (PBO Lab™) family of software applications. PBO Lab has been used for beamline design, as a computer aid for teaching particle optics, and to support accelerator operations. Several popular charged-particle optics programs, including MARYLIE, TRANSPORT, TURTLE, and TRACE 3-D, have been integrated with a new version of PBO Lab. The modeling and simulation capabilities of these codes allow PBO Lab to support a wide spectrum of accelerator types. New external data interface tools are available to import beamline parameters from other sources, for example, to utilize magnet strengths generated by a control system. An overview of the new version of PBO Lab is presented.
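    The optics codes named (TRANSPORT, TRACE 3-D, and the like) are built on transfer-matrix beam dynamics: each element maps a ray's (position, slope) vector through a 2x2 matrix, and a beamline is the product of its element matrices. A minimal single-plane sketch of that idea, illustrative only and not any of these codes' actual input format:

```python
import numpy as np

def drift(L):
    """Transfer matrix for a field-free drift of length L (one plane)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole with focal length f (focusing if f > 0)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray starting on-axis with a 1 mrad slope, traced through
# drift -> quadrupole -> drift (matrices compose right-to-left).
ray = np.array([0.0, 1e-3])                   # (position m, slope rad)
M = drift(2.0) @ thin_quad(2.0) @ drift(2.0)
out = M @ ray
```

Here the quadrupole sits one focal length downstream of the source, so the outgoing ray is parallel to the axis, which is the kind of first-order result these codes compute before adding higher-order terms.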

  6. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chowdhury, J

    2014-02-01

    A Visual Basic simulation software package (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly and menu-driven software is based on a dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results agree closely with the experimental findings, as indicated by an overall correlation coefficient on the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits the performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as of the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.
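    The validation statistic quoted, an overall correlation coefficient near 0.99 between predicted and measured values, is the ordinary Pearson coefficient. With made-up numbers (not the paper's data):

```python
import numpy as np

predicted = np.array([12.1, 15.3, 18.0, 22.4, 27.9])  # hypothetical model output
measured = np.array([12.0, 15.6, 17.7, 22.9, 28.3])   # hypothetical plant data

r = np.corrcoef(predicted, measured)[0, 1]  # Pearson correlation coefficient
```

A coefficient that close to 1 indicates the model tracks the measurements almost linearly, though it says nothing about systematic bias, which is why residual checks usually accompany it.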

  7. Attentional Spreading in Object-Based Attention

    ERIC Educational Resources Information Center

    Richard, Ashleigh M.; Lee, Hyunkyu; Vecera, Shaun P.

    2008-01-01

    The authors investigated 2 effects of object-based attention: the spread of attention within an attended object and the prioritization of search across possible target locations within an attended object. Participants performed a flanker task in which the location of the task-relevant target was fixed and known to participants. A spreading…

  8. Object-based neglect in number processing.

    PubMed

    Klein, Elise; Moeller, Korbinian; Zinsberger, Daniela; Zauner, Harald; Wood, Guilherme; Willmes, Klaus; Haider, Christine; Gassner, Alfred; Nuerk, Hans-Christoph

    2013-01-23

    Recent evidence suggests that neglect patients have particular problems representing relatively smaller numbers, which correspond to the left part of the mental number line. However, while this indicates space-based neglect for representational number space, little is known about whether and, if so, how object-based neglect influences number processing. To evaluate influences of object-based neglect in numerical cognition, a group of neglect patients and two control groups had to compare two-digit numbers to an internally represented standard. Conceptualizing two-digit numbers as objects of which the left part (i.e., the tens digit) should be specifically neglected, we were able to evaluate object-based neglect for number magnitude processing. Object-based neglect was indicated by a larger unit-decade compatibility effect, actually reflecting impaired processing of the leftward tens digits. Additionally, faster processing of within-decade than between-decade items provided further evidence of particular difficulties in integrating tens and units into the place-value structure of the Arabic number system. In summary, the present study indicates that, in addition to the spatial representation of number magnitude, the processing of place-value information of multi-digit numbers seems specifically impaired in neglect patients.
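    The unit-decade compatibility effect can be stated precisely: a pair such as 42 vs. 57 is compatible (4 < 5 and 2 < 7 point the same way), while 47 vs. 62 is incompatible (4 < 6 but 7 > 2). A small illustrative classifier, using hypothetical stimuli:

```python
def compatible(a, b):
    """Unit-decade compatibility for a two-digit comparison: the pair is
    compatible when the decade and unit comparisons point the same way."""
    (da, ua), (db, ub) = divmod(a, 10), divmod(b, 10)
    return (da > db) == (ua > ub)

pairs = [(42, 57), (47, 62)]                    # compatible, incompatible
labels = [compatible(a, b) for a, b in pairs]
```

A larger response-time penalty on incompatible pairs signals that the (leftward) tens digit is contributing less to the comparison, which is how the study indexes object-based neglect.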

  9. Object-Based Attention and Cognitive Tunneling

    ERIC Educational Resources Information Center

    Jarmasz, Jerzy; Herdman, Chris M.; Johannsdottir, Kamilla Run

    2005-01-01

    Simulator-based research has shown that pilots cognitively tunnel their attention on head-up displays (HUDs). Cognitive tunneling has been linked to object-based visual attention on the assumption that HUD symbology is perceptually grouped into an object that is perceived and attended separately from the external scene. The present research…

  11. Advance reservation access control using software-defined networking and tokens

    DOE PAGES

    Chung, Joaquin; Jung, Eun-Sung; Kettimuthu, Rajkumar; ...

    2017-03-09

    Advance reservation systems allow users to reserve dedicated bandwidth connection resources from advanced high-speed networks. A common use case for such systems is data transfers in distributed science environments in which a user wants exclusive access to the reservation. However, current advance network reservation methods cannot ensure exclusive access of a network reservation to the specific flow for which the user made the reservation. We present in this paper a novel network architecture that addresses this limitation and ensures that a reservation is used only by the intended flow. We achieve this by leveraging software-defined networking (SDN) and token-based authorization. We use SDN to orchestrate and automate the reservation of networking resources, end-to-end and across multiple administrative domains, and tokens to create a strong binding between the user or application that requested the reservation and the flows provisioned by SDN. Finally, we conducted experiments on the ESNet 100G SDN testbed, and demonstrated that our system effectively protects authorized flows from competing traffic in the network.
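    The "strong binding" between a user, a flow, and a reservation can be illustrated with a keyed hash: the controller issues a token over the (user, flow, reservation) tuple and later admits traffic only if the presented token verifies. This is a hypothetical scheme for illustration, not the token format the paper actually uses:

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)  # key held by the reservation service

def issue_token(user_id: str, flow: str, reservation_id: str) -> str:
    """Bind a reservation to one user and one flow via a keyed hash."""
    msg = f"{user_id}|{flow}|{reservation_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorize(token: str, user_id: str, flow: str, reservation_id: str) -> bool:
    """Admit traffic only if the token matches this exact binding."""
    return hmac.compare_digest(token, issue_token(user_id, flow, reservation_id))

flow = "10.0.0.1:5000->10.0.0.2:6000/tcp"     # hypothetical flow 5-tuple
token = issue_token("alice", flow, "resv-42")
```

Because the token is bound to the flow identifier, a different user or flow presenting the same token fails verification, which is the property that keeps competing traffic off the reserved path.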

  13. Award-Winning CARES/Life Ceramics Durability Evaluation Software Is Making Advanced Technology Accessible

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this task by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.
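
    The probabilistic strength treatment CARES/Life builds on can be illustrated with a two-parameter Weibull model of brittle failure, including the classic size effect (a larger stressed volume is more likely to contain a critical flaw). The Weibull modulus and characteristic strength below are made-up illustrative values, not CARES/Life material data.

```python
import math

def failure_probability(stress_mpa, volume_mm3, m=10.0, sigma0_mpa=500.0):
    """Two-parameter Weibull failure probability for a uniformly stressed
    brittle volume: P_f = 1 - exp(-V * (sigma/sigma0)^m).
    m (Weibull modulus) and sigma0 are illustrative, not real material data."""
    return 1.0 - math.exp(-volume_mm3 * (stress_mpa / sigma0_mpa) ** m)

p_small = failure_probability(300.0, 1.0)    # small component
p_large = failure_probability(300.0, 10.0)   # same stress, 10x the volume
```

    At the same stress, the larger volume has the higher failure probability, which is the size effect that makes deterministic strength values unreliable for ceramics.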

  14. Object-based representations of spatial images

    NASA Astrophysics Data System (ADS)

    Newsam, Shawn; Bhagavathy, Sitaram; Kenney, Charles; Manjunath, B. S.; Fonseca, Leila

    2001-03-01

    Object-based representations of image data enable new content-related functionalities while facilitating management of large image databases. Developing such representations for multi-date and multi-spectral images is one of the objectives of the second phase of the Alexandria Digital Library (ADL) project at UCSB. Image segmentation and image registration are two of the main issues to be addressed in creating localized image representations. We present in this paper some of the recent and current work by the ADL's image processing group on robust image segmentation, registration, and the use of image texture for content representation. Built upon these technologies are techniques for managing large repositories of data. A texture thesaurus assists in creating a semantic classification of image regions. An object-based representation is proposed to facilitate data storage, retrieval, analysis, and navigation.
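
    A texture-thesaurus lookup of the kind described reduces, at its core, to ranking stored region descriptors by distance to a query descriptor. The region names and 3-element descriptors below are hypothetical stand-ins for the Gabor-texture feature vectors used in such systems.

```python
import math

def distance(a, b):
    """Euclidean distance between two texture descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_regions(query, regions):
    """Return region ids ordered by descriptor distance to the query."""
    return sorted(regions, key=lambda rid: distance(regions[rid], query))

regions = {                      # hypothetical region -> texture descriptor
    "water":  [0.1, 0.2, 0.1],
    "urban":  [0.9, 0.8, 0.7],
    "forest": [0.4, 0.6, 0.5],
}
best = rank_regions([0.85, 0.8, 0.75], regions)[0]   # closest stored region
```

    Attaching semantic labels ("urban", "water") to descriptor clusters is what turns such a nearest-neighbour index into a thesaurus for semantic classification.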

  15. Software development for the evaluation of the ergonomic compatibility on the selection of advanced manufacturing technology.

    PubMed

    Maldonado-Macías, A; Reyes, R; Guillen, L; García, J

    2012-01-01

    Advanced Manufacturing Technology (AMT) is one of the most relevant resources that companies have to achieve competitiveness and best performance. The selection of AMT is a complex problem which involves a significant amount of information and uncertainty when multiple aspects must be taken into consideration. Current models for the selection of AMT largely lack the Human Factors and Ergonomics perspective, which can lead to a more complete and reliable decision. This paper presents the development of software that enhances the application of an Ergonomic Compatibility Evaluation Model supporting decision-making processes that take into consideration the ergonomic attributes of designs. Ergonomic Compatibility is a construct used in this model, based mainly on the concept of human-artifact compatibility in human-compatible systems. In addition, an Axiomatic Design approach using the Information Axiom was developed under a fuzzy environment to obtain the Ergonomic Incompatibility Content. The extension of this axiom to the evaluation of ergonomic compatibility requirements was the theoretical framework of this research. An incremental methodology of four stages was used to design and develop the software, which enables comparison of AMT alternatives through the evaluation of Ergonomic Compatibility Attributes.
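
    The Information Axiom computation at the core of such a model can be sketched for crisp (non-fuzzy) intervals: an alternative's information content is log2(system range / common range), and the alternative with the lowest total information content best satisfies the requirement. The fuzzy version in the paper replaces these crisp intervals with membership functions; the numbers below are illustrative only.

```python
import math

def information_content(system_range, design_range):
    """Axiomatic Design Information Axiom for crisp ranges:
    I = log2(system range / common range). Infinite I means the
    alternative cannot satisfy the requirement at all."""
    s_lo, s_hi = system_range
    d_lo, d_hi = design_range
    common = max(0.0, min(s_hi, d_hi) - max(s_lo, d_lo))
    if common == 0.0:
        return float("inf")
    return math.log2((s_hi - s_lo) / common)

# Hypothetical ergonomic attribute: design requires a value in [4, 10].
i_alt_a = information_content((2.0, 6.0), (0.0, 10.0))  # fully covered -> I = 0
i_alt_b = information_content((2.0, 6.0), (4.0, 10.0))  # half covered  -> I = 1
```

    Summing I over all ergonomic attributes gives the Ergonomic Incompatibility Content used to rank AMT alternatives: lower is better.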

  16. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses; thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.
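
    For contrast with the stress-based fatigue models described above, the classical load-based model they improve on is the ISO 281 basic rating life: L10 = (C/P)^p million revolutions, with exponent 3 for ball bearings and 10/3 for roller bearings. A minimal sketch (not COBRA-AHS's method):

```python
def basic_rating_life_mrevs(c_newtons, p_newtons, roller=False):
    """Classical load-based L10 life (ISO 281 basic rating life),
    in millions of revolutions. C = basic dynamic load rating,
    P = equivalent dynamic load."""
    exponent = 10.0 / 3.0 if roller else 3.0
    return (c_newtons / p_newtons) ** exponent

def life_hours(l10_mrevs, rpm):
    """Convert an L10 life in millions of revolutions to hours at a given speed."""
    return l10_mrevs * 1.0e6 / (rpm * 60.0)

l10 = basic_rating_life_mrevs(20000.0, 10000.0)   # C = 2P -> 8 million revs
hours = life_hours(l10, 3000.0)
```

    Stress-based models replace the single load ratio C/P with the computed three-dimensional stress field, which is why they can capture effects (e.g., roller-edge stresses) that the load-based formula averages away.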

  17. Advanced communications technology satellite high burst rate link evaluation terminal communication protocol software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Communication Protocol Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal for communicating with the ACTS in various experiments by government, university, and industry agencies. The Communication Protocol Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Communication Protocol Software allows users to control and configure the Intermediate Frequency Switch Matrix (IFSM) on board the ACTS to yield a desired path through the spacecraft payload. Besides IFSM control, the C&PM Software System is also responsible for instrument control during HBR-LET experiments, uplink power control of the HBR-LET to demonstrate power augmentation during signal fade events, and data display. The Communication Protocol Software User's Guide, Version 1.0 (NASA CR-189162) outlines the commands and procedures to install and operate the Communication Protocol Software. Configuration files used to control the IFSM, operator commands, and error recovery procedures are discussed. The Communication Protocol Software Maintenance Manual, Version 1.0 (NASA CR-189163, to be published) is a programmer's guide to the Communication Protocol Software. This manual details the current implementation of the software from a technical perspective. Included is an overview of the Communication Protocol Software, computer algorithms, format representations, and computer hardware configuration. The Communication Protocol Software Test Plan (NASA CR-189164, to be published) provides a step-by-step procedure to verify the operation of the software. Included in the Test Plan are command transmission, telemetry reception, error detection, and error recovery procedures.

  18. Advanced functionality for radio analysis in the Offline software framework of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; ... (The Pierre Auger Collaboration)

    2011-04-01

    The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs “radio-hybrid” measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.
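
    The "transparent and efficient handling of FFTs" for radio traces can be illustrated with a minimal frequency-domain bandpass. The 30-80 MHz band is typical for air-shower radio detection, but the trace, sampling rate, and filter below are a synthetic sketch, not the Offline implementation.

```python
import numpy as np

def bandpass(trace, fs_hz, f_lo, f_hi):
    """Filter a sampled voltage trace by zeroing Fourier bins
    outside [f_lo, f_hi], then transforming back to the time domain."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs_hz)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=trace.size)

# Synthetic trace sampled at 1 GS/s: an in-band 50 MHz tone plus
# out-of-band 10 MHz interference.
fs = 1.0e9
t = np.arange(1000) / fs
tone = np.sin(2 * np.pi * 50e6 * t)
trace = tone + np.sin(2 * np.pi * 10e6 * t)
filtered = bandpass(trace, fs, 30e6, 80e6)   # interference removed, tone kept
```

    A framework-level FFT layer lets analysis modules request either domain while caching transforms, so user code never juggles raw spectra by hand.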

  20. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    PubMed

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and presents the results visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values were observed to corroborate well with extensive experimental investigations and were found to be consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. A low relative error (RE = 0.09) and a high Willmott d-index (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
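
    The two agreement statistics quoted above are straightforward to compute from observed and model-predicted series; a minimal sketch (toy data, not the paper's measurements):

```python
def willmott_d(obs, pred):
    """Willmott index of agreement: 1 for perfect prediction,
    approaching 0 for complete disagreement."""
    o_bar = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - o_bar) + abs(o - o_bar)) ** 2 for p, o in zip(pred, obs))
    return 1.0 - num / den

def mean_relative_error(obs, pred):
    """Mean |prediction - observation| / observation."""
    return sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)

observed  = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.8]
d_index = willmott_d(observed, predicted)
re = mean_relative_error(observed, predicted)
```

    Unlike a correlation coefficient, the Willmott d-index penalizes systematic bias as well as scatter, which is why it is favoured for model-validation studies like this one.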

  1. Three-dimensional analysis of pulmonary nodules by MSCT with Advanced Lung Analysis (ALA1) software.

    PubMed

    Volterrani, L; Mazzei, M A; Scialpi, M; Carcano, M; Carbone, S F; Ricci, V; Guazzi, G; Lupattelli, L

    2006-04-01

    The purpose of this study was to test the reproducibility of the three-dimensional (3D) Advanced Lung Analysis software (3D-ALA, GE Healthcare) in the estimation of pulmonary nodule volume. We retrospectively reviewed the unenhanced multislice CT scans (LightSpeed Pro 16, GE) of 77 patients with a solitary pulmonary nodule (n=71) or metastatic pulmonary disease (n=6). A total of 103 pulmonary nodules (19 well-circumscribed, 45 juxtavascular and 39 juxtapleural) were analysed, grouped into five classes based on diameter: <5 mm, 10 nodules (9.7%); ≥5 to <10 mm, 25 nodules (24.2%); ≥10 to <15 mm, 41 nodules (39.8%); ≥15 to <18 mm, 14 nodules (13.6%); ≥18 to <30 mm, 13 nodules (12.6%). The following acquisition parameters were used: slice thickness 0.625 mm, reconstruction interval 0.4 mm, pitch 0.562:1, 140 kV, 300 mAs, field of view 13 cm, bone kernel. For each of the 103 nodules, three 3D volume measurements were obtained with the 3D-ALA software. The reproducibility of nodule segmentation was evaluated according to a visual score (1=optimal, ≥95%; 2=fair, 90-95%; 3=poor, <90%). The software allowed segmentation in all nodules (type 1 segmentation n=43, type 2 n=35, type 3 n=25). ALA-1 provided an identical 3D volume measurement in 62 nodules: 16 out of 19 well-circumscribed (84.2%), 31 out of 45 juxtavascular (68.8%), 15 out of 39 juxtapleural (38.4%). Repeatability of the 3D volume measurement was not achieved in 41 out of 103 nodules: 3 out of 19 (15.7%) well-circumscribed, 14 out of 45 (31.1%) juxtavascular, 24 out of 39 (61.5%) juxtapleural. Among the 41 nodules with non-repeatable 3D volume measurement, segmentation was scored as 1 in 2 out of 41 (4.8%), as 2 in 15 out of 41 (36.5%) and as 3 in 24 out of 41 (58.5%). The…
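
    The 3D volume measurement whose repeatability is being tested ultimately reduces to counting segmented voxels and multiplying by the single-voxel volume. The voxel spacing below is illustrative, not the study's exact reconstruction geometry.

```python
def nodule_volume_mm3(mask, spacing_mm=(0.625, 0.25, 0.25)):
    """Volume of a segmented nodule = voxel count x single-voxel volume.
    mask is a nested list [slice][row][col] of 0/1; spacing_mm is
    (slice thickness, row spacing, column spacing) - illustrative values."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    count = sum(cell for plane in mask for row in plane for cell in row)
    return count * voxel_mm3

def equivalent_diameter_mm(volume_mm3):
    """Diameter of the sphere having the same volume."""
    return (6.0 * volume_mm3 / 3.141592653589793) ** (1.0 / 3.0)

cube = [[[1] * 10 for _ in range(10)] for _ in range(10)]   # toy 10x10x10 mask
vol = nodule_volume_mm3(cube)
```

    Because juxtavascular and juxtapleural nodules force the segmentation boundary to be decided against adjacent tissue, small boundary changes between runs shift this voxel count, which is exactly the repeatability problem the study measures.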

  2. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into the development of advanced technologies for Computer Aided Software Engineering (CASE).

  3. Computer-Assisted Instruction in the Context of the Advanced Instructional System: Authoring Support Software. Final Report.

    ERIC Educational Resources Information Center

    Montgomery, Ann D.; Judd, Wilson A.

    This report details the design, development, and implementation of computer software to support the cost-effective production of computer-assisted instruction (CAI) within the context of the Advanced Instructional System (AIS) located at Lowry Air Force Base. The report supplements the computer-managed Air Force technical training that is…

  4. Object-based mapping of drumlins from DTMs

    NASA Astrophysics Data System (ADS)

    Eisank, C.; Dragut, L.; Blaschke, T.

    2012-04-01

    Until recently, landforms such as drumlins have only been manually delineated due to the difficulty of integrating contextual and semantic landform information in per-cell classification approaches. Therefore, in most cases the results of per-cell classifications presented basic landform elements or broad-scale physiographic regions that were only thematically defined. In contrast, object-based analysis provides spatially configured landform objects that are generated by terrain segmentation, the process of merging DTM cells into meaningful terrain objects at multiple scales. Such terrain objects should be favoured for landform modelling for the following reasons: firstly, their outlines potentially correspond better to the spatial limits of landforms as conceptualised by geoscientists; secondly, spatially aware objects enable the integration of semantic descriptions in the classification process. We present a multi-scale object-based study on automated delineation and classification of drumlins for a small test area in Bavaria, Germany. The multi-resolution segmentation algorithm is applied to create statistically meaningful object patterns from selected DTMs, which are derived from a 5 m LiDAR DEM. For the subsequent classification of drumlins a semantics-based approach, which uses the principles of semantic modelling, is employed: initially, a geomorphological concept of the landform type drumlin is developed. The drumlin concept should ideally comprise verbal descriptions of the fundamental morphometric, morphological, hierarchical and contextual properties. Subsequently, the semantic model is built by structuring the conceptualised knowledge facts, and by associating those facts with object- and class-related features, which are available in commonly used object-based software products for the development of classification rules. For the accuracy assessment we plan an integrated approach, which combines a statistical comparison to field maps and a qualitative…
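
    Terrain segmentation merges DTM cells into spatially contiguous objects. As a much-simplified stand-in, the sketch below labels 4-connected components of "drumlin candidate" cells in a binary grid; real multi-resolution segmentation additionally weighs morphometric homogeneity and scale parameters.

```python
from collections import deque

def label_objects(mask):
    """Label 4-connected components of truthy cells in a 2D grid.
    Returns (labels grid, number of objects) - a crude stand-in for
    terrain segmentation of candidate DTM cells."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    n_objects = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                n_objects += 1
                queue = deque([(r, c)])
                labels[r][c] = n_objects
                while queue:               # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = n_objects
                            queue.append((ny, nx))
    return labels, n_objects

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, count = label_objects(grid)   # two spatially distinct objects
```

    Each labelled object then carries the spatial outline and attribute statistics against which the semantic drumlin model (elongation, relative relief, context) can be evaluated.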

  5. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process so that costly mistakes during the development phase can be eliminated.

  6. Recent advances in hardware and software to improve spent fuel measurements

    SciTech Connect

    Staples, P.; Beddingfield, D. H.; Lestone, J. P.; Pelowitz, D. G.; Bytchkov, M.; Starovich, Z.; Harizanov, I.; Luna-Vellejo, J.; Lavender, C.

    2001-01-01

    Vast quantities of spent fuel are available for safeguards measurements, primarily in the Commonwealth of Independent States (CIS) of the former Soviet Union. This spent fuel, much of which consists of long-cooling-time material, is going to become less unique in the world safeguards arena as reprocessing projects or permanent repositories continue to be delayed or postponed. The long cooling time of many of the spent fuel assemblies being prepared for intermediate-term storage in the CIS countries promotes the possibility of increased accuracy in spent fuel assays. This improvement is made possible through the decay of the curium isotopes and of fission products. An important point to consider for the future, one that could advance safeguards measurements for reverification and inspection, is what safeguards requirements should be imposed upon this "new" class of spent fuel; improvements in measurement capability will obviously affect the safeguards requirements. What most significantly enables this progress in spent fuel measurements is the improvement in computer processing power and software enhancements leading to user-friendly graphical user interfaces (GUIs). The software used for these projects significantly reduces the IAEA inspector's time expenditure for both learning and operating computer and data acquisition systems. At the same time, by standardizing the spent fuel measurements, it is possible to increase the reproducibility and reliability of the measurement data. Hardware systems will be described which take advantage of the increased computer control available to enable more complex measurement scenarios. A specific example of this is the active regulation of the high voltage of a spent fuel neutron coincidence counter's ³He tubes, and the subsequent scaling of measurement results to maintain a calibration for direct assay of the plutonium content of fast breeder reactor spent fuel. The plutonium content has been successfully determined…
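
    The cooling-time effect mentioned above is dominated by the decay of the curium isotopes, chiefly ²⁴⁴Cm (half-life about 18.1 years), which drives the passive neutron signal of spent fuel. A minimal decay correction, ignoring ²⁴²Cm and fission-product contributions, can be sketched as:

```python
CM244_HALF_LIFE_Y = 18.1  # dominant spontaneous-fission neutron source in spent fuel

def neutron_rate_at(t_years, rate_at_discharge):
    """Cm-244-driven neutron emission rate after a given cooling time,
    assuming simple exponential decay (a simplification: real assays
    must also account for Cm-242, Pu isotopes, and (alpha,n) sources)."""
    return rate_at_discharge * 0.5 ** (t_years / CM244_HALF_LIFE_Y)

rate_20y = neutron_rate_at(20.0, 100.0)   # roughly half the discharge rate
```

    After several half-lives the curium term fades, which is why long-cooling-time assemblies permit more accurate direct assay of the plutonium content itself.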

  7. Neural mechanisms of object-based attention.

    PubMed

    Cohen, Elias H; Tong, Frank

    2015-04-01

    What neural mechanisms underlie the ability to attend to a complex object in the presence of competing overlapping stimuli? We evaluated whether object-based attention might involve pattern-specific feedback to early visual areas to selectively enhance the set of low-level features corresponding to the attended object. Using fMRI and multivariate pattern analysis, we found that activity patterns in early visual areas (V1-V4) are strongly biased in favor of the attended object. Activity patterns evoked by single faces and single houses reliably predicted which of the 2 overlapping stimulus types was being attended with high accuracy (80-90% correct). Superior knowledge of upright objects led to improved attentional selection in early areas. Across individual blocks, the strength of the attentional bias signal in early visual areas was highly predictive of the modulations found in high-level object areas, implying that pattern-specific attentional filtering at early sites can determine the quality of object-specific signals that reach higher level visual areas. Through computational modeling, we show how feedback of an average template to V1-like units can improve discrimination of exemplars belonging to the attended category. Our findings provide a mechanistic account of how feedback to early visual areas can contribute to the attentional selection of complex objects. © The Author 2013. Published by Oxford University Press. All rights reserved.
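
    The decoding step can be caricatured with a nearest-centroid classifier on simulated voxel patterns: build a mean pattern per attended category from training blocks, then assign each test block to the closest mean. The data below are synthetic, and the paper's multivariate analysis is more sophisticated than this sketch.

```python
import random

random.seed(1)
DIM = 40  # number of simulated voxels

def noisy(template, n, noise=0.4):
    """n simulated single-block voxel patterns around a category template."""
    return [[v + random.gauss(0.0, noise) for v in template] for _ in range(n)]

def centroid(patterns):
    """Mean pattern across blocks."""
    return [sum(col) / len(patterns) for col in zip(*patterns)]

def classify(pattern, centroids):
    """Nearest-centroid decoding: index of the closest category mean."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(pattern, centroids[i]))

face_template = [random.gauss(0.0, 1.0) for _ in range(DIM)]
house_template = [random.gauss(0.0, 1.0) for _ in range(DIM)]
centroids = [centroid(noisy(face_template, 20)), centroid(noisy(house_template, 20))]
test_blocks = noisy(face_template, 10)          # attended category: face (label 0)
accuracy = sum(classify(p, centroids) == 0 for p in test_blocks) / 10
```

    Above-chance accuracy on held-out blocks is the evidence that early visual activity patterns carry information about which overlapping object was attended.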

  8. GRAPHICS MANAGER (GFXMGR): An interactive graphics software program for the Advanced Electronics Design (AED) graphics controller, Model 767

    SciTech Connect

    Faculjak, D.A.

    1988-03-01

    Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.

  9. Development of tools for safety analysis of control software in advanced reactors

    SciTech Connect

    Guarro, S.; Yau, M.; Motamed, M.

    1996-04-01

    Software based control systems have gained a pervasive presence in a wide variety of applications, including nuclear power plant control and protection systems which are within the oversight and licensing responsibility of the US Nuclear Regulatory Commission. While the cost effectiveness and flexibility of software based plant process control is widely recognized, it is very difficult to achieve and prove high levels of demonstrated dependability and safety assurance for the functions performed by process control software, due to the very flexibility and potential complexity of the software itself. The development of tools to model, analyze and test software design and implementations in the context of the system that the software is designed to control can greatly assist the task of providing higher levels of assurance than those obtainable by software testing alone. This report presents and discusses the development of the Dynamic Flowgraph Methodology (DFM) and its application in the dependability and assurance analysis of software-based control systems. The features of the methodology and full-scale examples of application to both generic process and nuclear power plant control systems are presented and discussed in detail. The features of a workstation software tool developed to assist users in the application of DFM are also described.
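
    DFM deductively traces system-level failure states back through a model of the control software and the controlled process. The sketch below is a much-simplified, untimed back-trace over a hypothetical causal graph (event names invented for illustration); real DFM works with timed, multi-valued decision tables rather than a plain cause list.

```python
def root_causes(event, causes):
    """Recursively trace an undesired event back to basic events
    (events with no listed upstream causes)."""
    upstream = causes.get(event)
    if not upstream:
        return {event}               # basic event: nothing further upstream
    found = set()
    for cause in upstream:
        found |= root_causes(cause, causes)
    return found

# Hypothetical fault-propagation graph for a software-controlled valve.
causes = {
    "overpressure": ["valve_fails_closed", "controller_no_open_cmd"],
    "controller_no_open_cmd": ["sensor_reads_low", "software_logic_fault"],
}
basics = root_causes("overpressure", causes)
```

    Tracing the software fault and the sensor fault to the same top event is the kind of software-in-context insight that testing the software alone would miss.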

  10. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FPP) is described. The FPP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, the Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process, so that costly mistakes during the development phase can be eliminated.

  11. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  12. I-SPINE: a software package for advances in image-guided and minimally invasive spine procedures

    NASA Astrophysics Data System (ADS)

    Choi, Jae Jeong; Cleary, Kevin R.; Zeng, Jianchao; Gary, Kevin A.; Freedman, Matthew T.; Watson, Vance; Lindisch, David; Mun, Seong K.

    2000-05-01

    While image guidance is now routinely used in the brain in the form of frameless stereotaxy, it is beginning to be more widely used in other clinical areas such as the spine. At Georgetown University Medical Center, we are developing a program to provide advanced visualization and image guidance for minimally invasive spine procedures. This is a collaboration between an engineering-based research group and physicians from the radiology, neurosurgery, and orthopaedics departments. A major component of this work is the ISIS Center Spine Procedures Imaging and Navigation Engine, which is a software package under development as the base platform for technical advances.

  13. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  14. ATOS: Integration of advanced technology software within distributed Spacecraft Mission Operations Systems

    NASA Technical Reports Server (NTRS)

    Jones, M.; Wheadon, J.; Omullane, W.; Whitgift, D.; Poulter, K.; Niezette, M.; Timmermans, R.; Rodriguez, Ivan; Romero, R.

    1994-01-01

    The Advanced Technology Operations System (ATOS) is a program of studies into the integration of advanced applications (including knowledge based systems (KBS)) with ground systems for the support of spacecraft mission operations.

  15. Applications of custom developed object based analysis tool: Precipitation in Pacific, Tropical cyclones precipitation, Hail areas

    NASA Astrophysics Data System (ADS)

    Skok, Gregor; Rakovec, Jože; Strajnar, Benedikt; Bacmeister, Julio; Tribbia, Joe

    2014-05-01

    In the last few years an object-based analysis software tool was developed at the University of Ljubljana in collaboration with the National Center for Atmospheric Research (NCAR). The tool was originally based on the ideas of the Method for Object-Based Diagnostic Evaluation (MODE) developed by NCAR, but has since evolved and changed considerably and is now available as a separate free software package, the Forward in Time object analysis tool (FiT tool). The software has been used to analyze numerous datasets, mainly focusing on precipitation. A climatology of satellite and model precipitation in the low- and mid-latitude Pacific Ocean was produced by identifying and tracking individual precipitation systems and estimating their lifespan, movement, and size. A global climatology of tropical cyclone precipitation was performed using satellite data, and areas with hail in Slovenia were tracked and analyzed using radar data. The tool will be presented along with some results of these applications.
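
    The object-identification and tracking idea behind tools like MODE and the FiT tool can be sketched in a few lines of Python. The grids, threshold, and simple overlap rule below are invented illustrative choices, not the FiT tool's actual algorithm.

```python
# Sketch: threshold a precipitation field, label connected areas,
# then link objects across two time steps by spatial overlap.

def label_objects(grid, threshold):
    """4-connected flood fill; returns a dict label -> set of cells."""
    rows, cols = len(grid), len(grid[0])
    labels, objects, next_label = {}, {}, 1
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and (r, c) not in labels:
                stack, cells = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in labels or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if grid[y][x] < threshold:
                        continue
                    labels[(y, x)] = next_label
                    cells.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                objects[next_label] = cells
                next_label += 1
    return objects

def track(objects_t0, objects_t1):
    """Link objects between time steps by any shared cell."""
    return {a: [b for b, cb in objects_t1.items() if ca & cb]
            for a, ca in objects_t0.items()}

t0 = [[0, 5, 5, 0], [0, 5, 0, 0], [0, 0, 0, 8]]
t1 = [[0, 0, 5, 5], [0, 0, 5, 0], [0, 0, 0, 0]]
o0, o1 = label_objects(t0, 1), label_objects(t1, 1)
print(track(o0, o1))  # {1: [1], 2: []} - object 1 persists, object 2 dies
```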

  16. Advanced engineering software for in-space assembly and manned planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Delaquil, Donald; Mah, Robert

    1990-01-01

    Meeting the objectives of the Lunar/Mars initiative to establish safe and cost-effective extraterrestrial bases requires an integrated software/hardware approach to operational definitions and systems implementation. This paper begins this process by taking a 'software-first' approach to systems design, for implementing specific mission scenarios in the domains of in-space assembly and operations of the manned Mars spacecraft. The technological barriers facing implementation of robust operational systems within these two domains are discussed, and preliminary software requirements and architectures that resolve these barriers are provided.

  17. Advances in the design of a thermomechanical analyzer for fibers. II. Computer facilities and software

    NASA Astrophysics Data System (ADS)

    Noui, L.; Hearle, J. W. S.

    1995-06-01

    PC-based software for full control of the flexible thermomechanical analyzer (FTMA) for yarns and fibers is described. The software provides flexible control of three essential FTMA parameters: tension, twist, and temperature. The program allows data acquisition at a programmable rate of up to 62.5 ksamples/s, on-line data display, and on-line data storage, with up to eight channels monitored. A circular buffer is used so that an effectively unlimited amount of data can be stored. For FTMA applications, data are calibrated in newtons for tension, degrees Celsius for temperature, and newton-meters for torque, and can be saved in three formats: ASCII, LOTUS, or binary. The software is user friendly, providing a graphical user interface for motor control and data display, and it is also capable of controlling thermomechanical tests at constant force.
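
    The circular-buffer acquisition scheme mentioned above can be sketched as follows; the capacity and sample values are made-up illustrative numbers, not the FTMA software's actual parameters.

```python
# Illustrative circular (ring) buffer of the kind used for continuous
# data acquisition: writes never block, and once the buffer is full
# the oldest samples are silently overwritten.

class RingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0       # next write position
        self.count = 0      # number of valid samples stored

    def write(self, sample):
        self.buf[self.head] = sample
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def read_all(self):
        """Return stored samples, oldest first."""
        start = (self.head - self.count) % self.capacity
        return [self.buf[(start + i) % self.capacity]
                for i in range(self.count)]

rb = RingBuffer(4)
for s in range(6):       # write 6 samples into a 4-slot buffer
    rb.write(s)
print(rb.read_all())     # [2, 3, 4, 5] - samples 0 and 1 were overwritten
```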

  18. Legacy Waveforms on Software Defined Radios: Benefits of Advanced Digital Signal Processing

    DTIC Science & Technology

    2010-09-01

    Parts of the signal processing are realized as software programs on programmable and/or reconfigurable hardware. Legacy waveforms considered include Link-11/-16/-22, GSM, UMTS, and TETRA, alongside legacy radios such as the SEM-80/90/93 and MAHRS, and an SDR hardware platform with its operating environment. The encoder and decoder exploit reliabilities using soft-input soft-output (SISO) techniques; new is the iterative feedback of extrinsic information at the physical layer.

  19. Advanced Software Techniques for Data Management Systems. Volume 2: Space Shuttle Flight Executive System: Functional Design

    NASA Technical Reports Server (NTRS)

    Pepe, J. T.

    1972-01-01

    A functional design of a software executive system for the space shuttle avionics computer is presented. Three primary functions of the executive are emphasized in the design: task management, I/O management, and configuration management. The executive system organization is based on the applications software and configuration requirements established during the Phase B definition of the Space Shuttle program. Although the primary features of the executive system architecture were derived from Phase B requirements, it was specified for implementation on the IBM 4 Pi EP aerospace computer and is expected to be incorporated into a breadboard data management computer system at the NASA Manned Spacecraft Center's Information Systems Division. The executive system was structured for internal operation on the IBM 4 Pi EP system, with its external configuration and applications software assumed to be characteristic of the centralized quad-redundant avionics systems defined in Phase B.

  20. Research in software allocation for advanced manned mission communications and tracking systems

    NASA Technical Reports Server (NTRS)

    Warnagiris, Tom; Wolff, Bill; Kusmanoff, Antone

    1990-01-01

    An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of Space Station Freedom (SSF) was performed. The intent was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted. It was concluded that the assessment process cannot be completely successful for a system with changing requirements, and that memory and hardware requirements were being modified to fit as a consequence of the change process; although throughput could not be quantified, potential problem areas could be identified. Finally, the inherent flexibility of the system design was essential to the success of a design subject to changing requirements. Recommendations included developing common software for some embedded controller functions, reducing embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improving communications between software development personnel to enhance the integration process. Lastly, a critical observation was that the software integration tasks did not appear to be addressed in the design process to the degree necessary to satisfy the system requirements.

  1. VIPER: an advanced software package to support high-throughput LC-MS peptide identification

    SciTech Connect

    Monroe, Matthew E.; Tolic, Nikola; Jaitly, Navdeep; Shaw, Jason L.; Adkins, Joshua N.; Smith, Richard D.

    2007-06-01

    High-throughput liquid chromatography-mass spectrometry (LC-MS) based proteomics analyses have necessitated the development of software to manipulate large volumes of detailed data and produce confident peptide/protein identifications. VIPER unites the important data processing steps in a single software package that can be used to visualize peptide mass and LC elution (i.e. retention) time “feature” relationships from individual analyses, match these LC-MS features to accurate mass and time (AMT) tags of peptides previously identified in LC-MS/MS analyses, and identify and quantify pairs of isotopically labeled peptides.
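
    The core AMT-tag matching step can be illustrated with a minimal sketch: each detected feature (monoisotopic mass, normalized elution time) is compared against a database of previously identified peptides within a mass tolerance in ppm and an elution-time tolerance. The peptide entries, masses, and tolerances below are invented examples, not VIPER's actual data structures or defaults.

```python
# Hypothetical AMT-tag database: (peptide, monoisotopic mass, NET).
amt_tags = [
    ("PEPTIDER", 955.481, 0.32),
    ("SAMPLEK",  761.407, 0.55),
]

def match_feature(mass, net, tags, ppm_tol=10.0, net_tol=0.02):
    """Return peptides whose mass is within ppm_tol and whose
    normalized elution time (NET) is within net_tol of the feature."""
    hits = []
    for name, tag_mass, tag_net in tags:
        ppm_err = abs(mass - tag_mass) / tag_mass * 1e6
        if ppm_err <= ppm_tol and abs(net - tag_net) <= net_tol:
            hits.append(name)
    return hits

print(match_feature(955.485, 0.31, amt_tags))  # ['PEPTIDER']
print(match_feature(761.500, 0.55, amt_tags))  # [] - mass off by >10 ppm
```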

  2. The Center for Technology for Advanced Scientific Component Software (TASCS) Lawrence Livermore National Laboratory - Site Status Update

    SciTech Connect

    Epperly, T W

    2008-12-03

    This report summarizes LLNL's progress for the period April through September 2008 for the Center for Technology for Advanced Scientific Component Software (TASCS) SciDAC. The TASCS project is organized into four major thrust areas: CCA Environment (72%), Component Technology Initiatives (16%), CCA Toolkit (8%), and User and Application Outreach & Support (4%); the percentage of LLNL's effort allocated to each thrust area is shown in parentheses. Major thrust areas are further broken down into activity areas, and LLNL's effort directed to each activity is shown in Figure 1. Enhancements, Core Tools, and Usability are all part of CCA Environment, and Software Quality is part of Component Technology Initiatives. The balance of this report covers our accomplishments in each of these activity areas.

  3. Investigating Advances in the Acquisition of Systems Based on Open Architecture and Open Source Software

    DTIC Science & Technology

    2011-08-01

    careers, in order to survive and evolve. Similarly, as FOSSD projects give rise to new types of requirements for community building, community…services, and creating more stimulating jobs, careers, and workforce development opportunities depend on faster, better, and cheaper software…are three highly visible astrophysics research projects whose scientific discoveries depend on processing remotely sensed data through a complex

  4. Roles for Software Technologies in Advancing Research and Theory in Educational Psychology

    ERIC Educational Resources Information Center

    Hadwin, Allyson F.; Winne, Philip H.; Nesbit, John C.

    2005-01-01

    While reviews abound on theoretical topics in educational psychology, it is rare that we examine our field's instrumentation development, and what effects this has on educational psychology's evolution. To repair this gap, this paper investigates and reveals the implications of software technologies for researching and theorizing about core issues…

  5. Developing Open Source Software To Advance High End Computing. Report to the President.

    ERIC Educational Resources Information Center

    National Coordination Office for Information Technology Research and Development, Arlington, VA.

    This is part of a series of reports to the President and Congress developed by the President's Information Technology Advisory Committee (PITAC) on key contemporary issues in information technology. This report defines open source software, explains PITAC's interest in this model, describes the process used to investigate issues in open source…

  7. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean value and the standard deviation of elevation, respectively. The results reasonably resemble the patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing, and both segmentation and classification rely on only two parameters: elevation and standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionality. PMID:22485060
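
    The two-parameter partitioning described above can be sketched as follows. The thresholds, class names, and elevation samples are illustrative assumptions for this sketch, not the paper's calibrated values.

```python
import statistics

# Toy version of a two-parameter classification: each segmented object
# is reduced to its mean elevation and standard deviation of elevation,
# then partitioned by simple thresholds (both invented here).

def classify(cells, relief_threshold=50.0, elevation_threshold=1000.0):
    mean_z = statistics.mean(cells)
    std_z = statistics.pstdev(cells)      # population std dev of the object
    relief = "high relief" if std_z > relief_threshold else "low relief"
    level = "mountains" if mean_z > elevation_threshold else "lowlands"
    return f"{level}, {relief}"

plain = [120, 125, 130, 118, 122]          # elevations (m) of one object
alpine = [1800, 2100, 1650, 2400, 1950]
print(classify(plain))    # lowlands, low relief
print(classify(alpine))   # mountains, high relief
```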

  8. Probability-Based Software for Grid Optimization: Improved Power System Operations Using Advanced Stochastic Optimization

    SciTech Connect

    2012-02-24

    GENI Project: Sandia National Laboratories is working with several commercial and university partners to develop software for market management systems (MMSs) that enable greater use of renewable energy sources throughout the grid. MMSs are used to securely and optimally determine which energy resources should be used to service energy demand across the country. Contributions of electricity to the grid from renewable energy sources such as wind and solar are intermittent, introducing complications for MMSs, which have trouble accommodating the multiple sources of price and supply uncertainty associated with bringing these new types of energy into the grid. Sandia’s software will bring a new, probability-based formulation to account for these uncertainties. By factoring in various probability scenarios for electricity production from renewable energy sources in real time, Sandia’s formulation can reduce the risk of inefficient electricity transmission, save ratepayers money, conserve power, and support the future use of renewable energy.
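
    The probability-based idea can be illustrated with a toy two-stage dispatch problem: commit a conventional generation level before the wind output is known, so that the expected cost over weighted wind scenarios is minimized. Demand, costs, and scenarios below are invented numbers; a real MMS formulation would be far richer.

```python
demand = 100.0                                       # MW to serve
scenarios = [(0.3, 40.0), (0.5, 25.0), (0.2, 5.0)]   # (probability, wind MW)
GEN_COST = 30.0      # $/MWh for conventional generation committed ahead
SPOT_COST = 120.0    # $/MWh for last-minute balancing power

def expected_cost(committed):
    """First-stage commitment cost plus probability-weighted cost of
    covering any shortfall in each wind scenario."""
    cost = committed * GEN_COST
    for prob, wind in scenarios:
        shortfall = max(0.0, demand - wind - committed)
        cost += prob * shortfall * SPOT_COST
    return cost

# Brute-force the first-stage decision in 1 MW steps.
best = min(range(0, 101), key=expected_cost)
print(best, round(expected_cost(best), 1))  # 75 2730.0
```

Note how the optimum hedges against the low-wind scenarios rather than planning for the expected wind alone; that is the benefit the scenario-based formulation captures.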

  9. A software tool for advanced MRgFUS prostate therapy planning and follow up

    NASA Astrophysics Data System (ADS)

    van Straaten, Dörte; Hoogenboom, Martijn; van Amerongen, Martinus J.; Weiler, Florian; Issawi, Jumana Al; Günther, Matthias; Fütterer, Jurgen; Jenne, Jürgen W.

    2017-03-01

    US-guided HIFU/FUS ablation for the therapy of prostate cancer is a clinically established method, while MR-guided HIFU/FUS applications for the prostate have recently started clinical evaluation. Even though MRI examination is an excellent diagnostic tool for prostate cancer, it is a time-consuming procedure and not practicable within an MRgFUS therapy session. The aim of our ongoing work is to develop software to support therapy planning and post-therapy follow-up for MRgFUS of localized prostate cancer, based on multi-parametric MR protocols. The clinical workflow of diagnosis, therapy, and follow-up of MR-guided FUS for prostate cancer was analyzed in depth. Based on this, the image processing workflow was designed and all necessary components, e.g. GUI, viewer, and registration tools, were defined and implemented. The software is based on MeVisLab, with several C++ modules implemented for the image processing tasks. The developed software, called LTC (Local Therapy Control), automatically registers and visualizes all images (T1w, T2w, DWI, etc.) and ADC or perfusion maps gained from the diagnostic MRI session. This maximum of diagnostic information helps to segment all necessary ROIs, e.g. the tumor, for therapy planning. Final therapy planning is performed on the basis of these segmentation data in the following MRgFUS therapy session. In addition, the software should help to evaluate therapy success through synchronization and display of pre-therapeutic, therapy, and follow-up image data, including the therapy plan and thermal dose information. In this ongoing project, the first stand-alone prototype was completed and will be clinically evaluated.

  10. Advanced multilateration theory, software development, and data processing: The MICRODOT system

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.

    1976-01-01

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected system accuracy is discussed from a general point of view, and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed for both hypothetical and real-world vehicle/station configurations such as those used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
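
    The core geometric idea of multilateration can be sketched for the 2-D case: squaring the range equation for each station and subtracting the first station's equation removes the quadratic terms, leaving a small linear system. The station layout and target position below are made-up values, and this sketch ignores the measurement-error modeling that MICRODOT handles.

```python
import math

# Three known stations and the ranges to an unknown 2-D position.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = (30.0, 70.0)                      # ground truth for this demo
ranges = [math.dist(target, s) for s in stations]

(x1, y1), (x2, y2), (x3, y3) = stations
r1, r2, r3 = ranges
# Differencing squared-range equations gives A @ [x, y] = b.
a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
det = a11 * a22 - a12 * a21                # Cramer's rule for the 2x2 system
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print(round(x, 6), round(y, 6))  # 30.0 70.0
```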

  11. Advanced software techniques for data management systems. Volume 1: Study of software aspects of the phase B space shuttle avionics system

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1972-01-01

    An overview of the executive system design task is presented. The flight software executive system, software verification, phase B baseline avionics system review, higher order languages and compilers, and computer hardware features are also discussed.

  12. Parallel supercomputing: Advanced methods, algorithms and software for large-scale problems. Final report, August 1, 1987--July 31, 1994

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1994-12-31

    The focus of this DOE-sponsored research concerns parallel methods, algorithms, and software for complex applications such as coupled fluid flow and heat transfer. The research has been directed principally toward the solution of large-scale PDE problems using iterative solvers for finite differences and finite elements on advanced computer architectures. This work embraces parallel domain decomposition, element-by-element, spectral, and multilevel schemes with adaptive parameter determination, rational iteration, and related issues. In addition to the fundamental questions related to developing new methods and mapping them to parallel computers, there are important software issues, and the group has played a significant role in the development of software both for iterative solvers and for finite element codes. The research in computational fluid dynamics (CFD) led to sustained multi-Gigaflop performance rates for parallel-vector computations of realistic large-scale applications (not computational kernels alone); the main application areas for these performance studies have been two-dimensional problems in CFD. Over the course of this DOE-sponsored research, significant progress has been made. A report on the progression of the research is given, and the report concludes with a list of related publications and presentations over the entire grant period.

  13. Application of standard and advanced open source GIS software functionality for analysis of coordinates obtained by GNSS measurements

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara

    2016-04-01

    A wide variety of GNSS measurements is currently used in geodetic practice. Coordinates obtained by static, kinematic, or precise point positioning GNSS measurements can be analyzed using the standard functionality of any GIS software, but open source packages additionally allow users to build advanced functionality themselves. Coordinates obtained from measurements can be stored in a spatial geodatabase, with information on the precision and time of each measurement added. The data can be visualized in different coordinate systems and projections and analyzed by applying different types of spatial analysis, and the process can be automated to a high degree. An example with test data is prepared. It includes automated loading of files with coordinates obtained by GNSS measurements, together with additional information on the precision and time of the measurements. Standard and advanced open source GIS software functionality is used to automate the analysis process. In addition, graph theory is applied to build time series from the data stored in the spatial geodatabase.
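
    A minimal stand-in for the described workflow, using Python's built-in sqlite3 in place of a true spatial geodatabase (a real setup would use a spatial extension such as SpatiaLite or PostGIS); the table layout, point IDs, and coordinate values are illustrative assumptions.

```python
import sqlite3

# Store GNSS-derived coordinates together with their precision and
# observation time, then query a per-point time series.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE gnss_point (
    point_id TEXT, epoch TEXT, easting REAL, northing REAL,
    height REAL, sigma_mm REAL)""")
rows = [
    ("BM01", "2016-01-10T09:00", 331200.412, 4735100.233, 512.401, 4.1),
    ("BM01", "2016-02-10T09:00", 331200.415, 4735100.236, 512.398, 3.8),
    ("BM01", "2016-03-10T09:00", 331200.419, 4735100.240, 512.395, 4.4),
]
con.executemany("INSERT INTO gnss_point VALUES (?,?,?,?,?,?)", rows)

# Time series of ellipsoidal heights for one benchmark, epoch order.
series = con.execute(
    "SELECT epoch, height FROM gnss_point "
    "WHERE point_id = ? ORDER BY epoch", ("BM01",)).fetchall()
print(series)
```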

  14. Object-based landslide detection in different geographic regions

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Hölbling, Daniel; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

    SPOT-5 images are combined with digital elevation models (DEM) for developing a consistent semi-automated landslide detection approach using eCognition (Trimble) software. Suitable image objects are generated by means of multiresolution segmentation. Expert knowledge, i.e. reported facts on features (e.g. mean object slope, mean NDVI) and thresholds that are commonly chosen by professionals for digital landslide mapping, is considered during classification. The applicability of a range of features is tested and the most promising parameters, i.e. features that produce appropriate results for both regions, are selected for landslide detection. However, minor adaptations of particular thresholds are necessary due to the distinct environmental conditions of the test sites. In order to reduce the number of required adjustments to a minimum, relational features and spectral indices are primarily used for classification. The obtained results are finally compared to manually digitized reference polygons and existing landslide inventories in order to quantify the applicability of the developed object-based landslide detection approach in different geographic regions.

  15. Software Systems 2--Compiler and Operating Systems Lab--Advanced, Data Processing Technology: 8025.33.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL.

    The course outline has been prepared as a guide to help the student develop the skills and knowledge necessary to succeed in the field of data processing. By learning the purpose and principles of compiler programs and operating systems, the student will become familiar with advanced data processing procedures that are representative of computer…

  16. CalVR: an advanced open source virtual reality software framework

    NASA Astrophysics Data System (ADS)

    Schulze, Jürgen P.; Prudhomme, Andrew; Weber, Philip; DeFanti, Thomas A.

    2013-03-01

    We developed CalVR because none of the existing virtual reality software frameworks offered everything we needed, such as cluster-awareness, multi-GPU capability, Linux compatibility, multi-user support, collaborative session support, or custom menu widgets. CalVR combines features from multiple existing VR frameworks into an open-source system, which we use in our laboratory on a daily basis, and for which dozens of VR applications have already been written at UCSD as well as at other research laboratories worldwide. In this paper, we describe the philosophy behind CalVR, its standard and unique features and functions, its programming interface, and its inner workings.

  17. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    SciTech Connect

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.
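
    The flavor of the iterative schemes being parallelized can be shown with Jacobi relaxation for a 1-D Poisson problem: every update in a sweep depends only on the previous iterate, which is what makes such sweeps easy to partition across processors. The problem size and tolerance below are illustrative, and this sequential sketch stands in for the distributed-memory implementations the abstract describes.

```python
# Jacobi relaxation for -u'' = f on (0, 1) with u(0) = u(1) = 0,
# discretized on n interior grid points.

n = 31
h = 1.0 / (n + 1)
f = [1.0] * (n + 2)      # constant forcing f(x) = 1
u = [0.0] * (n + 2)      # grid values; boundary entries stay 0

for sweep in range(20000):
    u_new = u[:]
    for i in range(1, n + 1):        # each update reads only the old
        u_new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    diff = max(abs(a - b) for a, b in zip(u, u_new))
    u = u_new
    if diff < 1e-10:
        break

# Exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125;
# grid point i = 16 sits at x = 16/32 = 0.5.
print(round(u[16], 6))  # 0.125
```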

  18. Assessment of small hydropower potential for the Olanesti River using advanced software techniques

    NASA Astrophysics Data System (ADS)

    Moldoveanu, A.; Galie, A.; Moldoveanu, M.; Popa, F.; Tica, E.; Popa, B.

    2017-01-01

    The assessment of the small hydropower potential of rivers is essential for implementation of the Renewable Energy Sources Directive, in order to identify and analyze opportunities for new small hydropower developments. As the Water Framework Directive requires non-deterioration of the status of river water bodies, the consistency of the flow regime downstream of a hydropower plant with the environmental objectives is important. The paper presents a case study of the Olanesti River (Romania), using software techniques dedicated to the assessment of small hydropower potential. The hydropower potential of the river was assessed considering the heads based on the Digital Terrain Model and the mean flow. In addition, the environmental flow was considered in the installed capacity assessment, in order to cope with the Water Framework Directive requirements. The outcomes of the software are the harnessed hydropower potential, the optimal distance between the water intake and the power plant, their optimum positions along the river sector, the installed capacity, and the electricity production calculated over an average lifetime. The applicability of the tool might be extended to a regional scale to support decision-making authorities, taking into account the increasing demand for energy as well as environmental issues.

  19. Inhibition of Return and Object-Based Attentional Selection

    ERIC Educational Resources Information Center

    List, Alexandra; Robertson, Lynn C.

    2007-01-01

    Visual attention research has revealed that attentional allocation can occur in space- and/or object-based coordinates. Using the direct and elegant design of R. Egly, J. Driver, and R. Rafal (1994), the present experiments tested whether space- and object-based inhibition of return (IOR) emerge under similar time courses. The experiments were…

  1. Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    2004-01-01

    Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.

  2. Advanced software development workstation. Comparison of two object-oriented development methodologies

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report is an attempt to clarify some of the concerns raised about the OMT method, specifically that OMT is weaker than the Booch method in a few key areas. This interim report specifically addresses the following issues: (1) is OMT object-oriented or only data-driven?; (2) can OMT be used as a front-end to implementation in C++?; (3) the inheritance concept in OMT is in contradiction with the 'pure and real' inheritance concept found in object-oriented (OO) design; (4) low support for software life-cycle issues, for project and risk management; (5) uselessness of functional modeling for the ROSE project; and (6) problems with event-driven and simulation systems. The conclusion of this report is that both Booch's method and Rumbaugh's method are good OO methods, each with strengths and weaknesses in different areas of the development process.

  3. Advances in software for faster procedure and lower radiotracer dose myocardial perfusion imaging.

    PubMed

    Piccinelli, Marina; Garcia, Ernest V

    2015-01-01

    The American Society of Nuclear Cardiology has recently published documents that encourage laboratories to take all appropriate steps to greatly decrease patient radiation dose, and has set the goal that by 2014, 50% of all myocardial perfusion studies be performed with an associated radiation exposure of 9 mSv. In the present work, the major software techniques readily available to shorten procedure time and decrease injected activity are described. In particular, new reconstruction methods and their ability to incorporate resolution recovery and noise regularization are described. The use of these improved reconstruction algorithms results in a consistent reduction in acquisition time, injected activity, and consequently the radiation dose absorbed by the patient. The clinical implications of these techniques are also described in terms of maintained and even improved study quality, accuracy, and sensitivity for the detection of heart disease.

  4. Advanced Communications Technology Satellite high burst rate link evaluation terminal experiment control and monitor software maintenance manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document. The EC&M Software Maintenance Manual, Version 1.0 (NASA-CR-189161) is a programmer's guide that describes current implementation of the EC&M software from a technical perspective. An overview of the EC&M software, computer algorithms, format representation, and computer hardware configuration are included in the manual.

  5. Advancing global marine biogeography research with open-source GIS software and cloud-computing

    USGS Publications Warehouse

    Fujioka, Ei; Vanden Berghe, Edward; Donnelly, Ben; Castillo, Julio; Cleary, Jesse; Holmes, Chris; McKnight, Sean; Halpin, Patrick

    2012-01-01

    Across many scientific domains, the ability to aggregate disparate datasets enables more meaningful global analyses. Within marine biology, the Census of Marine Life served as the catalyst for such a global data aggregation effort. Under the Census framework, the Ocean Biogeographic Information System was established to coordinate an unprecedented aggregation of global marine biogeography data. The OBIS data system now contains 31.3 million observations, freely accessible through a geospatial portal. The challenges of storing, querying, disseminating, and mapping a global data collection of this complexity and magnitude are significant. In the face of declining performance and expanding feature requests, a redevelopment of the OBIS data system was undertaken. Following an Open Source philosophy, the OBIS technology stack was rebuilt using PostgreSQL, PostGIS, GeoServer and OpenLayers. This approach has markedly improved the performance and online user experience while maintaining a standards-compliant and interoperable framework. Due to the distributed nature of the project and increasing needs for storage, scalability and deployment flexibility, the entire hardware and software stack was built on a Cloud Computing environment. The flexibility of the platform, combined with the power of the application stack, enabled rapid re-development of the OBIS infrastructure, and ensured complete standards-compliance.

  6. An advanced software suite for the processing and analysis of silicon luminescence images

    NASA Astrophysics Data System (ADS)

    Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.

    2017-06-01

    Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly in the field of photovoltaics, where photoluminescence and electroluminescence imaging are routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data into more meaningful results. In this work, a custom-built, MATLAB-based software suite is presented that aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration, and iron impurity concentration mapping.
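
    The point-spread-function deconvolution step mentioned in this record is commonly performed with Richardson-Lucy iteration. The following is a minimal 1-D sketch of that algorithm, not the MATLAB suite itself; the signal, PSF, and iteration count are illustrative.

```python
# Illustrative sketch only (not the suite described above): a minimal 1-D
# Richardson-Lucy deconvolution, the kind of point-spread-function (PSF)
# correction step such luminescence image processing typically performs.

def convolve_same(signal, kernel):
    """Discrete 1-D convolution, 'same' length, zero-padded borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively sharpen an observed signal blurred by a known PSF."""
    psf_flipped = psf[::-1]
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve_same(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve_same(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A sharp peak blurred by a simple 3-point PSF, then restored.
psf = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0]
blurred = convolve_same(truth, psf)
restored = richardson_lucy(blurred, psf, iterations=100)
```

    The multiplicative update preserves non-negativity, which is why the method is favoured for photon-count images such as luminescence data.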

  7. Object-based selection is contingent on attentional control settings.

    PubMed

    Taylor, J Eric T; Rajsic, Jason; Pratt, Jay

    2016-05-01

    The visual system allocates attention in object-based and location-based modes. However, the question of when attention selects objects and when it selects locations remains poorly understood. In this article, we present variations on two classic paradigms from the object-based attention literature, in which object-based effects are observed only when the object feature matches the task goal of the observer. In Experiment 1, covert orienting was influenced by task-irrelevant rectangles, but only when the target color matched the rectangle color. In Experiment 2, the region of attentional focus was adjusted to the size of task-irrelevant objects, but only when the target color matched the object color. In Experiment 3, we ruled out the possibility that contingent object-based selection is caused by color-based intratrial priming. These demonstrations of contingent object-based attention suggest that object-based selection is neither mandatory nor default, and that object-based effects are contingent on simple, top-down attentional control settings.

  8. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Advanced communications technology satellite high burst rate link evaluation terminal power control and rain fade software test plan, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1993-01-01

    The Power Control and Rain Fade Software was developed at the NASA Lewis Research Center to support the Advanced Communications Technology Satellite High Burst Rate Link Evaluation Terminal (ACTS HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various experiments by government agencies, universities, and industry. The Power Control and Rain Fade Software is one segment of the Control and Performance Monitor (C&PM) Software system of the HBR-LET. The Power Control and Rain Fade Software automatically controls the LET uplink power to compensate for signal fades. Besides power augmentation, the C&PM Software system is also responsible for instrument control during HBR-LET experiments, control of the Intermediate Frequency Switch Matrix on board the ACTS to yield a desired path through the spacecraft payload, and data display. The Power Control and Rain Fade Software User's Guide, Version 1.0 outlines the commands and procedures to install and operate the Power Control and Rain Fade Software. The Power Control and Rain Fade Software Maintenance Manual, Version 1.0 is a programmer's guide to the Power Control and Rain Fade Software. This manual details the current implementation of the software from a technical perspective. Included are an overview of the Power Control and Rain Fade Software, computer algorithms, format representations, and the computer hardware configuration. The Power Control and Rain Fade Test Plan provides a step-by-step procedure to verify the operation of the software using a predetermined signal fade event. The Test Plan also provides a means to demonstrate the capability of the software.
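
    Uplink power control against a fade, as this record describes, is essentially a closed control loop driven by measured signal level. The sketch below is hypothetical, not the NASA C&PM code: the link model, gain, and power limits are all assumptions chosen to mimic a predetermined fade event like the one in the Test Plan.

```python
# Hypothetical sketch (not the NASA C&PM code): a simple closed-loop uplink
# power controller that raises transmit power to offset a measured signal
# fade, clamped to the transmitter's limits.

def control_step(current_power_dbm, measured_level_db, target_level_db,
                 gain=0.8, min_power_dbm=10.0, max_power_dbm=40.0):
    """One control iteration: adjust power by a fraction of the error."""
    error_db = target_level_db - measured_level_db
    new_power = current_power_dbm + gain * error_db
    return min(max(new_power, min_power_dbm), max_power_dbm)

def run_fade_event(fade_profile_db, start_power_dbm=20.0, target_db=-50.0):
    """Simulate a predetermined fade event: the received level equals a
    toy link budget (-70 dB path at 20 dBm, an assumption) minus fade."""
    power = start_power_dbm
    history = []
    for fade in fade_profile_db:
        measured = -70.0 + power - fade
        power = control_step(power, measured, target_db)
        history.append(power)
    return history

# Deepening then clearing fade, test-plan style: power rises then recovers.
history = run_fade_event([0, 2, 5, 8, 5, 2, 0])
```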

  10. An open software framework for advancement of x-ray optics simulation and modeling

    NASA Astrophysics Data System (ADS)

    Bruhwiler, David L.; Chubar, Oleg; Nagler, Robert; Krzywinski, Jacek; Boehnlein, Amber

    2014-09-01

    Accurate physical-optics based simulation of emission, transport and use in experiments of fully- and partially-coherent X-ray radiation is essential for both designers and users of experiments at state-of-the-art light sources: low-emittance storage rings, energy-recovery linacs and free-electron lasers. To be useful for different applications, the simulations must include accurate physical models for the processes of emission, for the structures of X-ray optical elements, interaction of the radiation with samples, and propagation of scattered X-rays to a detector. Based on the "Synchrotron Radiation Workshop" (SRW) open source computer code, we are developing a simulation framework, including a graphical user interface, web interface for client-server simulations, data format for wave-optics based representation of partially-coherent X-ray radiation, and a dictionary for universal description of optical elements. Also, we are evaluating formats for sample and experimental data representation for different types of experiments and processing. The simulation framework will facilitate start-to-end simulations by different computer codes complementary to SRW, for example GENESIS and FAST codes for simulating self-amplified spontaneous emission, SHADOW and McXtrace geometrical ray-tracing codes, as well as codes for simulation of interaction of radiation with matter and data processing in experiments exploiting coherence of radiation. The development of the new framework is building on components developed for the Python-based RadTrack software, which is designed for loose coupling of multiple electron and radiation codes to enable sophisticated workflows. We are exploring opportunities for collaboration with teams pursuing similar developments at European Synchrotron Radiation Facility and the European XFEL.

  11. Random visual noise impairs object-based attention.

    PubMed

    Abrams, Richard A; Law, Mark B

    2002-02-01

    Object-based visual attention is observed when the benefit of attending to one element in a display extends to other elements that are part of the same perceptual object. Apperceptive agnosia is an object identification deficit in which spatial attention is preserved but object-based attention is impaired. Some debate exists regarding the extent to which the object-based impairment can be attributed to perceptual mechanisms that are specifically involved in grouping and segmentation of a scene, as opposed to early sensory processes. In the present paper we show that random visual noise is sufficient to eliminate the object benefit, a result inconsistent with the view that grouping mechanisms are responsible for the effect. The results have implications for an understanding of apperceptive agnosia, and for an understanding of object-based attention more generally.

  12. MeltDB 2.0–advances of the metabolomics software system

    PubMed Central

    Kessler, Nikolas; Neuweger, Heiko; Bonte, Anja; Langenkämper, Georg; Niehaus, Karsten; Nattkemper, Tim W.; Goesmann, Alexander

    2013-01-01

    Motivation: The research area of metabolomics has achieved tremendous popularity and development in the last few years. Owing to its unique interdisciplinarity, it requires combining knowledge from various scientific disciplines. Advances in high-throughput technology and the consequently growing quality and quantity of data put new demands on applied analytical and computational methods. Exploration of the resulting datasets furthermore relies on powerful tools for data mining and visualization. Results: To cover and keep up with these requirements, we have created MeltDB 2.0, a next-generation web application addressing the storage, sharing, standardization, integration, and analysis of metabolomics experiments. New features improve both the efficiency and effectiveness of the entire processing pipeline for chromatographic raw data, from pre-processing to the derivation of new biological knowledge. First, the generation of high-quality metabolic datasets has been vastly simplified. Second, the new statistics toolbox allows these datasets to be investigated according to a wide spectrum of scientific and exploratory questions. Availability: The system is publicly available at https://meltdb.cebitec.uni-bielefeld.de. A login is required but freely available. Contact: nkessler@cebitec.uni-bielefeld.de PMID:23918246

  13. Upside-down: Perceived space affects object-based attention.

    PubMed

    Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus

    2017-07-01

    Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Mental rotation with egocentric and object-based transformations.

    PubMed

    Voyer, Daniel; Jansen, Petra; Kaltner, Sandra

    2017-11-01

    Two experiments were conducted to address methodological issues with past studies investigating the influence of egocentric and object-based transformations on performance and sex differences in mental rotation. In previous work, the egocentric and object-based mental rotation tasks confounded the stimulus type (embodied vs. non-embodied) and transformation task (egocentric vs. object-based). In both experiments presented here, the same stimuli were used regardless of the type of transformation but task instructions were modified to induce either egocentric (left-right judgment) or object-based (same-different judgment) processing. Experiment 1 used pairs of letters whereas Experiment 2 presented pairs of line-drawings of human hands. For both experiments, it was hypothesized that the mental rotation slope for response time would be steeper for object-based than for egocentric transformations. This hypothesis was verified in both experiments. Furthermore, Experiment 2 showed a reduced male advantage for egocentric compared to object-based rotations, whereas this pattern was reversed for Experiment 1. In conclusion, the present study showed that the influence of the type of transformation involved in mental rotation can be examined with the same set of stimuli simply by modifying task instructions.

  15. Object-based neglect varies with egocentric position.

    PubMed

    Karnath, Hans-Otto; Mandler, André; Clavagnier, Simon

    2011-10-01

    Different reference frames have been identified to influence neglect behavior. In particular, neglect has been demonstrated to be related to the contralesional side of the subject's body (egocentric reference frames) as well as to the contralesional side of individual objects irrespective of their position to the patient (object-based reference frame). There has been discussion whether this distinction separates neglect into body- and object-based forms. The present experiment aimed to prove possible interactions between object-based and egocentric aspects in spatial neglect. Neglect patients' eye and head movements were recorded while they explored objects at five egocentric positions along the horizontal dimension of space. The patients showed both egocentric as well as object-based behavior. Most interestingly, data analysis revealed that object-based neglect varied with egocentric position. Although the neglect of the objects' left side was strong at contralesional egocentric positions, it ameliorated at more ipsilesional egocentric positions of the objects. The patients showed steep, ramp-shaped patterns of exploration for objects located on the far contralesional side and a broadening of these patterns as the locations of the objects shifted more to the ipsilesional side. The data fitted well with the saliency curves predicted by a model of space representation, which suggests that visual input is represented in two modes simultaneously: in veridical egocentric coordinates and in within-object coordinates.

  16. A review of supervised object-based land-cover image classification

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial
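
    The workflow this review surveys (segment the image into objects, compute per-object features, classify each object) can be sketched in miniature as follows. This is a toy illustration: a nearest-centroid rule stands in for the Random Forest the review found to perform best, to keep the example dependency-free, and the image, threshold, and class centroids are invented.

```python
# Toy sketch of the object-based classification workflow: segmentation,
# per-object feature extraction, then per-object classification.

def segment(image, threshold):
    """4-connected components of above-threshold pixels (flood fill)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] > threshold and labels[sy][sx] == 0:
                next_label += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and labels[y][x] == 0 \
                            and image[y][x] > threshold:
                        labels[y][x] = next_label
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return labels, next_label

def object_features(image, labels, n_objects):
    """Mean intensity per segmented object (a typical object feature)."""
    sums = [0.0] * (n_objects + 1)
    counts = [0] * (n_objects + 1)
    for row_img, row_lab in zip(image, labels):
        for v, lab in zip(row_img, row_lab):
            if lab:
                sums[lab] += v
                counts[lab] += 1
    return [sums[i] / counts[i] for i in range(1, n_objects + 1)]

def classify(feature, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: abs(centroids[c] - feature))

image = [
    [0, 9, 9, 0, 0, 3],
    [0, 9, 9, 0, 0, 3],
    [0, 0, 0, 0, 0, 0],
]
labels, n = segment(image, threshold=1)
means = object_features(image, labels, n)
classes = [classify(m, {"built-up": 9.0, "vegetation": 3.0}) for m in means]
```

    Classifying whole objects rather than pixels is what makes the segmentation scale so influential on accuracy, as the meta-analysis reports.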

  17. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A) software assurance plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert; Smith, Claude

    1994-01-01

    This document defines the responsibilities of Software Quality Assurance (SQA) for the development of the flight software installed in EOS/AMSU-A instruments, and the ground support software used in the test and integration of the EOS/AMSU-A instruments.

  18. Advanced communications technology satellite high burst rate link evaluation terminal experiment control and monitor software user's guide, version 1.0

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.

    1992-01-01

    The Experiment Control and Monitor (EC&M) software was developed at NASA Lewis Research Center to support the Advanced Communications Technology Satellite (ACTS) High Burst Rate Link Evaluation Terminal (HBR-LET). The HBR-LET is an experimenter's terminal to communicate with the ACTS for various investigations by government agencies, universities, and industry. The EC&M software is one segment of the Control and Performance Monitoring (C&PM) software system of the HBR-LET. The EC&M software allows users to initialize, control, and monitor the instrumentation within the HBR-LET using a predefined sequence of commands. Besides instrument control, the C&PM software system is also responsible for computer communication between the HBR-LET and the ACTS NASA Ground Station and for uplink power control of the HBR-LET to demonstrate power augmentation during rain fade events. The EC&M Software User's Guide, Version 1.0 (NASA-CR-189160) outlines the commands required to install and operate the EC&M software. Input and output file descriptions, operator commands, and error recovery procedures are discussed in the document.

  19. Distinct mechanisms subserve location- and object-based visual attention.

    PubMed

    Chou, Wei-Lun; Yeh, Su-Ling; Chen, Chien-Chung

    2014-01-01

    Visual attention can be allocated to either a location or an object, named location- or object-based attention, respectively. Despite the burgeoning evidence in support of the existence of two kinds of attention, little is known about their underlying mechanisms in terms of whether they are achieved by enhancing signal strength or excluding external noises. We adopted the noise-masking paradigm in conjunction with the double-rectangle method to probe the mechanisms of location-based attention and object-based attention. Two rectangles were shown, and one end of one rectangle was cued, followed by the target appearing at (a) the cued location; (b) the uncued end of the cued rectangle; and (c) the equal-distant end of the uncued rectangle. Observers were required to detect the target that was superimposed at different levels of noise contrast. We explored how attention affects performance by assessing the threshold versus external noise contrast (TvC) functions and fitted them with a divisive inhibition model. Results show that location-based attention - lower threshold at cued location than at uncued location - was observed at all noise levels, a signature of signal enhancement. However, object-based attention - lower threshold at the uncued end of the cued than at the uncued rectangle - was found only in high-noise conditions, a signature of noise exclusion. Findings here shed a new insight into the current theories of object-based attention.

  1. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contour particularly influences the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations with similar topographic conditions unless natural disturbances such as landslides and forest fires, or artificial disturbances such as deforestation and man-made plantations, bring about changes in those conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results showed that object-based classification is more effective than pixel-based classification for producing a vegetation map.

  2. SimTool - An object based approach to simulation construction

    NASA Technical Reports Server (NTRS)

    Crues, Edwin Z.; Yazbeck, Marwan E.; Edwards, H. C.; Barnette, Randall D.

    1993-01-01

    The creation and maintenance of large complex simulations can be a difficult and error prone task. A number of interactive and automated tools have been developed to aid in simulation construction and maintenance. Many of these tools are based upon object oriented analysis and design concepts. One such tool, SimTool, is an object based integrated tool set for the development, maintenance, and operation of large, complex and long lived simulations. This paper discusses SimTool's object based approach to simulation design, construction and execution. It also discusses the services provided to various levels of SimTool users to assist them in a wide range of simulation tasks. Also, with the aid of an implemented and working simulation example, this paper discusses SimTool's key design and operational features. Finally, this paper presents a condensed discussion of SimTool's Entity-Relationship-Attribute (ERA) modeling approach.
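
    The Entity-Relationship-Attribute (ERA) modeling approach this record mentions can be sketched in a few lines. This is a generic illustration of the ERA style, not SimTool's actual API; all names here are invented.

```python
# Minimal sketch of Entity-Relationship-Attribute (ERA) simulation
# modeling: entities carry named attributes and named relationships
# to other entities. Names are illustrative, not SimTool's API.

class Entity:
    """A simulated object holding named attributes and relationships."""
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)
        self.relationships = {}   # relation name -> list of Entity

    def relate(self, relation, other):
        """Record a directed, named relationship to another entity."""
        self.relationships.setdefault(relation, []).append(other)

    def related(self, relation):
        """All entities linked under the given relation name."""
        return self.relationships.get(relation, [])

# Build a tiny model: a vehicle entity attached to a dynamics-model entity.
vehicle = Entity("orbiter", mass_kg=78000.0)
dynamics = Entity("six_dof_dynamics", rate_hz=25.0)
vehicle.relate("simulated_by", dynamics)
```

    Keeping relationships as named, first-class data is what lets a tool assemble and check a large simulation's structure before any code runs.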

  3. An object-based methodology for knowledge representation in SGML

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows for representation of knowledge from multiple and varying viewpoints. The event construct allows actual use of knowledge to be represented. The SGML implementation of the methodology facilitates usability; structured yet flexible knowledge design; and the sharing and reuse of knowledge class libraries.

  4. Object-Based Epistemology at a Creationist Museum

    NASA Astrophysics Data System (ADS)

    Wendel, Paul J.

    2011-01-01

    In a regional young-earth creationist museum, objects are presented as if they speak for themselves, purportedly embodying proof that the earth is less than 10,000 years old, that humans have lived on earth throughout its history, and that dinosaurs and humans lived simultaneously. In public lectures, tours, and displays, museum associates emphasize direct observation over inference or theory. These emphases resonate closely with the "object-based epistemology" of the late nineteenth century described in Steven Conn's Museums and American Intellectual Life, 1876-1926. In Conn's description, museum objects, artfully arranged and displayed, were intended to speak for themselves, and observation and categorization were valued over experiment and theory. The regional young-earth creationist museum is observed to partly succeed and partly fail in implementing an object-based epistemology. Although object-based epistemology represents a nineteenth-century approach to knowledge and museum display, it is compatible with an inductive approach to biblical interpretation and it confers various rhetorical advantages to creationist arguments. It is concluded that a focus on the theory-laden nature of data would likely strengthen nature-of-science education efforts to increase public acceptance of evolution.

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1988-01-01

    Reviews three computer software programs: (1) "Discovery! Experiences with Scientific Reasoning"--problem solving for grades 4-12 (Apple II); (2) "Organic Stereochemistry"--a tutorial for organic chemistry for advanced secondary/college level (Apple II); and (3) "SHOW PARTNER (2.01)"--a graphics utility tool for…

  6. Object-based Landslide Mapping: Examples, Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Eisank, Clemens; Friedl, Barbara; Chang, Kang-Tsung; Tsai, Tsai-Tsung; Birkefeldt Møller Pedersen, Gro; Betts, Harley; Cigna, Francesca; Chiang, Shou-Hao; Aubrey Robson, Benjamin; Bianchini, Silvia; Füreder, Petra; Albrecht, Florian; Spiekermann, Raphael; Weinke, Elisabeth; Blaschke, Thomas; Phillips, Chris

    2016-04-01

    Over the last decade, object-based image analysis (OBIA) has been increasingly used for mapping landslides that occur after triggering events such as heavy rainfall. The increasing availability and quality of Earth Observation (EO) data in terms of temporal, spatial and spectral resolution allows for comprehensive mapping of landslides at multiple scales. Most often very high resolution (VHR) or high resolution (HR) optical satellite images are used in combination with a digital elevation model (DEM) and its products such as slope and curvature. Semi-automated object-based mapping makes use of various characteristics of image objects that are derived through segmentation. OBIA enables numerous spectral, spatial, contextual and textural image object properties to be applied during an analysis. This is especially useful when mapping complex natural features such as landslides and constitutes an advantage over pixel-based image analysis. However, several drawbacks in the process of object-based landslide mapping have not been overcome yet. The developed classification routines are often rather complex and limited regarding their transferability across areas and sensors. There is still more research needed to further improve present approaches and to fully exploit the capabilities of OBIA for landslide mapping. In this study several examples of object-based landslide mapping from various geographical regions with different characteristics are presented. Examples from the Austrian and Italian Alps are shown, whereby one challenge lies in the detection of small-scale landslides on steep slopes while preventing the classification of false positives with similar spectral properties (construction areas, utilized land, etc.). Further examples feature landslides mapped in Iceland, where the differentiation of landslides from other landscape-altering processes in a highly dynamic volcanic landscape poses a very distinct challenge, and in Norway, which is exposed to multiple

  7. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A) software management plan

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This document defines the responsibilities for the management of the life-cycle development of the flight software installed in the AMSU-A instruments, and the ground support software used in the test and integration of the AMSU-A instruments.

  8. Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A): Software concept document

    NASA Technical Reports Server (NTRS)

    Schwantje, Robert

    1994-01-01

    This document presents the concept for two separate but closely related software systems. The first is the special test equipment (STE) software used at Aerojet for AMSU-A instrument testing, and the second is the workstation software used at the spacecraft integration facility to monitor the AMSU-A instrument when installed on the spacecraft. The primary use of the STE software is to monitor the data output from the AMSU-A instruments, to command the instruments, and to perform automated thermal-vacuum calibration testing. The primary use of the workstation software is to monitor the AMSU-A instrument's performance through an Ethernet link during the instrument/spacecraft integration process.

  9. The Cementitious Barriers Partnership Experimental Programs and Software Advancing DOE’s Waste Disposal/Tank Closure Efforts – 15436

    SciTech Connect

    Burns, Heather; Flach, Greg; Smith, Frank; Langton, Christine; Brown, Kevin; Mallick, Pramod

    2015-01-27

    The U.S. Department of Energy Environmental Management (DOE-EM) Office of Tank Waste Management-sponsored Cementitious Barriers Partnership (CBP) is chartered with providing the technical basis for implementing cement-based waste forms and radioactive waste containment structures for long-term disposal. DOE needs in this area include long-term performance predictions, flow sheet development and enhancements, and conceptual designs for new disposal facilities, all in support of progress in the final treatment and disposal of legacy waste and closure of High-Level Waste (HLW) tanks in the DOE complex. The CBP is producing software and experimental programs resulting in new methods and data needed by end-users involved in environmental cleanup and waste disposal. Both the modeling tools and the experimental data have already benefited the DOE sites in the area of performance assessments by increasing confidence backed by modeling support, leaching methods, and transport properties developed for actual DOE materials. In 2014, the CBP released the CBP Software Toolbox "Version 2.0," which provides concrete degradation models for (1) sulfate attack, (2) carbonation, and (3) chloride-initiated rebar corrosion, and includes constituent leaching. These models can be used by both DOE and the Nuclear Regulatory Commission (NRC) for service-life and long-term performance evaluations and predictions of nuclear and radioactive waste containment structures across the DOE complex, including future SRS Saltstone and HLW tank performance assessments and special analyses, Hanford site HLW tank closure projects and other projects in which cementitious barriers are required, the Advanced Simulation Capability for Environmental Management (ASCEM) project, which requires source terms from cementitious containment structures as input to its flow simulations, regulatory reviews of DOE performance

  10. Integrated Power, Avionics, and Software (iPAS) Space Telecommunications Radio System (STRS) Radio User's Guide -- Advanced Exploration Systems (AES)

    NASA Technical Reports Server (NTRS)

    Roche, Rigoberto; Shalkhauser, Mary Jo Windmille

    2017-01-01

    The Integrated Power, Avionics and Software (iPAS) software-defined radio (SDR) was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RAICS) platform for radio development at NASA Johnson Space Center. Software and hardware description language (HDL) code were delivered by NASA Glenn Research Center for use in the iPAS test bed and for development of their own Space Telecommunications Radio System (STRS) waveforms on the RAICS platform. The purpose of this document is to describe how to set up and operate the iPAS STRS radio platform with its delivered test waveform.

  11. Demonstrating Change with Astronaut Photography Using Object Based Image Analysis

    NASA Technical Reports Server (NTRS)

    Hollier, Andi; Jagge, Amy

    2017-01-01

    Every day, hundreds of images of Earth flood the Crew Earth Observations database as astronauts use handheld digital cameras to capture spectacular frames from the International Space Station. The variety of resolutions and perspectives provides a template for assessing land cover change over decades. We will focus on urban growth in the second-fastest-growing city in the nation, Houston, TX, using Object-Based Image Analysis. This research will contribute to the land change science community, integrated resource planning, and monitoring of the rapid rate of urban sprawl.

  12. An object-based methodology for knowledge representation

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.
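
    As a rough sketch of the blocks-world illustration, and emphatically not the paper's actual notation, knowledge about a block can be stored under multiple perspectives, with events recording example uses of that knowledge. All names and values here are invented.

```python
# Knowledge about 'block_a' viewed from two perspectives.
knowledge = {
    'block_a': {
        'physical':   {'colour': 'red', 'shape': 'cube'},
        'positional': {'on': 'table', 'under': 'block_b'},
    }
}

def view(obj, perspective):
    """Return the knowledge about `obj` from one perspective."""
    return knowledge[obj][perspective]

events = []

def record_event(action, obj):
    """Record an example use of the knowledge alongside the knowledge itself."""
    events.append((action, obj, dict(view(obj, 'positional'))))

record_event('stack_query', 'block_a')
```

    Separating perspectives lets a reasoner consult only the viewpoint relevant to its current line of reasoning, while the event log keeps worked examples next to the represented knowledge.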

  13. Advancing the discussion about systematic classroom behavioral observation, a product review of Tenny, J. (2010). eCOVE observation software. Pacific City, OR: eCOVE Software, LLC.

    PubMed

    Froiland, John Mark; Smith, Liana

    2014-05-01

    Applied child psychologists and behavioral consultants often use systematic behavioral observations to inform the psychological assessment and intervention development process for children referred for attention and hyperactivity problems. This article provides a review of the 2010 version of the eCOVE classroom observation software in terms of its utility in tracking the progress of children with attention and hyperactive behaviors and its use in evaluating teacher behaviors that may impede or promote children's attention and positive behavior. The eCOVE shows promise as an efficient tool for psychologists and behavioral consultants who want to evaluate the effects of interventions for children with symptoms of ADHD, ODD, mood disorders and learning disorders; however, some research-based improvements for future models are suggested. The reviewers also share their firsthand experience in using eCOVE to evaluate teacher and student behavior exhibited on a television show about teaching urban high school students and during a movie about an eccentric new kindergarten teacher. Rich examples are provided of using strategic behavioral observations to reveal how to improve the classroom environment so as to facilitate attention, motivation and positive behavior among youth. Broader implications for enhancing the use of systematic behavioral observations in the assessment of children and adolescents with attention disorders and related behavioral problems are discussed. Key issues are examined such as the use of behavioral observations during psychological consultation to prevent the previously found gender bias in referrals for ADHD. Using behavioral observations to enhance differential diagnosis is also discussed.

  14. Object-based image analysis using multiscale connectivity.

    PubMed

    Braga-Neto, Ulisses; Goutsias, John

    2005-06-01

    This paper introduces a novel approach for image analysis based on the notion of multiscale connectivity. We use the proposed approach to design several novel tools for object-based image representation and analysis which exploit the connectivity structure of images in a multiscale fashion. More specifically, we propose a nonlinear pyramidal image representation scheme, which decomposes an image at different scales by means of multiscale grain filters. These filters gradually remove connected components from an image that fail to satisfy a given criterion. We also use the concept of multiscale connectivity to design a hierarchical data partitioning tool. We employ this tool to construct another image representation scheme, based on the concept of component trees, which organizes partitions of an image in a hierarchical multiscale fashion. In addition, we propose a geometrically-oriented hierarchical clustering algorithm which generalizes the classical single-linkage algorithm. Finally, we propose two object-based multiscale image summaries, reminiscent of the well-known (morphological) pattern spectrum, which can be useful in image analysis and image understanding applications.
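
    A grain filter of the kind described, which removes connected components failing a criterion (here, a minimum area) at successively coarser scales, can be sketched as follows. This is a minimal stand-in for the paper's morphological operators, using 4-connectivity on a binary toy image.

```python
from collections import deque

def components(img):
    """4-connected components of the foreground (value 1)."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def grain_filter(img, min_area):
    """Keep only connected components of at least `min_area` pixels."""
    out = [[0] * len(img[0]) for _ in img]
    for comp in components(img):
        if len(comp) >= min_area:
            for y, x in comp:
                out[y][x] = 1
    return out

# A one-pixel grain and a four-pixel grain; filter at increasing scales.
img = [
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
pyramid = [grain_filter(img, area) for area in (1, 2, 5)]
```

    Applying the filter at increasing area thresholds yields a pyramid in which progressively larger grains are removed, which is the essence of the nonlinear pyramidal representation described above.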

  15. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed for the knowledge base of classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference. The overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
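
    The gray-level co-occurrence matrix at the core of the texture knowledge base can be illustrated directly. This is a generic GLCM sketch with a single pixel offset and a simple contrast statistic; it is not the authors' implementation, and the tiny two-level image is invented.

```python
def glcm(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset."""
    dy, dx = offset
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                m[img[y][x]][img[ny][nx]] += 1
    return m

def contrast(m):
    """GLCM contrast: (i - j)^2 weighted by co-occurrence frequency."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v
               for i, row in enumerate(m)
               for j, v in enumerate(row)) / total

# Two-level toy image; count horizontal neighbour pairs.
img = [[0, 0, 1],
       [0, 1, 1]]
m = glcm(img, levels=2)
```

    Statistics derived from the matrix (contrast, homogeneity, entropy, and so on) become per-object texture features that a rule base or classifier can consume.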

  16. Observed bodies generate object-based spatial codes.

    PubMed

    Taylor, Alison; Flynn, Maria; Edmonds, Caroline J; Gardner, Mark R

    2016-09-01

    Contemporary studies of spatial and social cognition frequently use human figures as stimuli. The interpretation of such studies may be complicated by spatial compatibility effects that emerge when researchers employ spatial responses, and participants spontaneously code spatial relationships about an observed body. Yet, the nature of these spatial codes - whether they are location- or object-based, and coded from the perspective of the observer or the figure - has not been determined. Here, we investigated this issue by exploring spatial compatibility effects arising for objects held by a visually presented whole-bodied schematic human figure. In three experiments, participants responded to the colour of the object held in the figure's left or right hand, using left or right key presses. Left-right compatibility effects were found relative to the participant's egocentric perspective, rather than the figure's. These effects occurred even when the figure was rotated by 90° to the left or to the right, and the coloured objects were aligned with the participant's midline. These findings are consistent with spontaneous spatial coding from the participant's perspective and relative to the normal upright orientation of the body. This evidence for object-based spatial coding implies that the domain general cognitive mechanisms that result in spatial compatibility effects may contribute to certain spatial perspective-taking and social cognition phenomena.

  17. The creation of a public database of precision phantoms to facilitate the evaluation and standardization of advanced visualization and quantification software

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Saenz, Naomi J.; Siegel, Eliot L.

    2009-02-01

    In order to validate CT imaging as a biomarker, it is important to ascertain the variability and artifacts associated with various forms of advanced visualization and quantification software. The purpose of the paper is to describe the rationale behind the creation of a free, public resource that contains phantom datasets for CT designed to facilitate testing, development, and standardization of advanced visualization and quantification software. For our research, three phantoms were scanned at multiple kVp and mAs settings utilizing a 64-channel MDCT scanner at a collimation of 0.75 mm. Images were reconstructed at a slice thickness of 0.75 mm and archived in DICOM format. The phantoms consisted of precision spheres, balls of different materials and sizes, and slabs of Last-A-Foam(R) at varying densities. The database of scans is stored in an archive utilizing software developed for the National Cancer Imaging Archive and is publicly available. The scans were completed successfully and the datasets are available for free and unrestricted download. The CT images can be accessed in DICOM format via HTTP or FTP, or by utilizing caGRID. A DICOM database of phantom data was successfully created and made available to the public. We anticipate that this database will be useful as a reference for physicists for quality control purposes, for developers of advanced visualization and quantification software, and for others who need to test the performance of their systems against a known "gold" standard. We plan to add more phantom images in the future and expand to other imaging modalities.

  18. The OASE project: Object-based Analysis and Seamless prediction

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Wapler, Kathrin; Bick, Theresa; Diederich, Malte; Deneke, Hartwig; Horvath, Akos; Senf, Fabian; Simmer, Clemens; Simon, Juergen

    2013-04-01

    The research group on Object-based Analysis and SEamless prediction (OASE) is part of the Hans Ertel Centre for Weather Research (HErZ). The group consists of scientists at the Meteorological Institute, University of Bonn, the Leibniz-Institute for Tropospheric Research in Leipzig, and the German Weather Service. OASE addresses seamless prediction of convective events from nowcasting to daily predictions by combining radar/satellite compositing and tracking with high-resolution model-based ensemble generation and prediction. While observation-based nowcasting provides good results for lead times of 0-1 hours, numerical weather prediction addresses lead times of 3-21 hours; the discontinuity between 1-3 hours especially needs to be addressed. A central goal of the project is therefore an unprecedented near-real-time, high-resolution database. A radar and satellite remote-sensing-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. Observations and microphysics are intertwined via forward operators and estimated inverse relations, which also provide uncertainties for model ensemble initialisations. The lifetime evolution of dynamics and microphysics in (severe) convective storms is analysed based on 3D scale-space tracking. An object-based analysis condenses the information contained in the dynamic 3D distributions of observables and related microphysics into descriptors, which will allow identifying governing processes leading to the formation and evolution of severe weather events. The object-based approach efficiently characterises and quantifies the process structure and life cycles of severe weather events, and facilitates nowcasting and the generation and initialisation of model prediction ensembles. Observation-based nowcasting will exploit the dual-composite based 3D feature detection and tracking to generate a set of predictions (observation

  19. Advanced Resistive Exercise Device (ARED) Flight Software (FSW): A Unique Approach to Exercise in Long Duration Habitats

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark

    2005-01-01

    ARED flight instrumentation software is associated with an overall custom-designed resistive exercise system that will be deployed on the International Space Station (ISS). This innovative software application fuses many diverse and new technologies into a robust and usable package. The software takes advantage of touchscreen user interface technology by providing a graphical user interface on a Windows-based tablet PC, meeting a design constraint of keyboard-less interaction with flight crewmembers. The software interacts with modified commercial data acquisition (DAQ) hardware to acquire multiple channels of sensor measurements from the ARED device. This information is recorded on the tablet PC and made available, via the ISS Wireless LAN (WLAN) and telemetry subsystems, to ground-based mission medics and trainers for analysis. The software includes a feature to accept electronically encoded prescriptions of exercises that guide crewmembers through a customized regimen of resistive weight training, based on personal analysis. These electronically encoded prescriptions are provided to the crew via the ISS WLAN and telemetry subsystems. All personal data is securely associated with an individual crewmember, based on a PIN ID mechanism.

  20. Attentional rhythm: a temporal analogue of object-based attention.

    PubMed

    De Freitas, Julian; Liverence, Brandon M; Scholl, Brian J

    2014-02-01

    The underlying units of attention are often discrete visual objects. Perhaps the clearest form of evidence for this is the same-object advantage: Following a spatial cue, responses are faster to probes occurring on the same object than they are to probes occurring on other objects, while equating brute distance. Is this a fundamentally spatial effect, or can same-object advantages also occur in time? We explored this question using independently normed rhythmic temporal sequences, structured into phrases and presented either visually or auditorily. Detection was speeded when cues and probes both lay within the same rhythmic phrase, compared to when they spanned a phrase boundary, while equating brute duration. This same-phrase advantage suggests that object-based attention is a more general phenomenon than has been previously suspected: Perceptual structure constrains attention, in both space and time, and in both vision and audition.

  1. Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-01-08

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation.

  2. Introducing shape constraints into object-based traveltime tomography

    NASA Astrophysics Data System (ADS)

    Gaullier, G.; Charbonnier, P.; Heitz, F.; Côte, P.

    2016-09-01

    Traveltime tomography is a difficult, ill-posed reconstruction problem due to the nonlinearity of the forward model and the limited number of measurements usually available. In such an adverse situation, pixel-based regularization methods are generally unable to provide satisfactory reconstructions. In this paper we propose a novel object-based reconstruction method that introduces prior information about the shape of the structures to be reconstructed, which yields high-quality geoacoustic inversion. The proposed method approximates the forward model by a series of linear problems, leading to a sequence of minimizations during which the shape prior is introduced. The method is demonstrated on synthetic and real data, collected on a specific bench dedicated to non-destructive testing of civil engineering structures.
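
    The scheme of linearizing the forward model and injecting prior information during a sequence of minimizations can be caricatured by a single regularized linear step. This is a minimal sketch assuming a quadratic penalty toward a prior model as a stand-in for the paper's shape constraint; the matrix, data, and regularization weight are invented.

```python
def regularized_lsq(A, b, x_prior, lam, iters=5000, lr=0.01):
    """Minimise ||A x - b||^2 + lam * ||x - x_prior||^2 by gradient descent.
    The quadratic prior term is a crude stand-in for a shape constraint."""
    m, n = len(A), len(A[0])
    x = list(x_prior)
    for _ in range(iters):
        # Residual of the (linearized) forward model: r = A x - b.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # Gradient of the data term plus the prior term.
        grads = [2 * sum(A[i][j] * r[i] for i in range(m))
                 + 2 * lam * (x[j] - x_prior[j]) for j in range(n)]
        x = [x[j] - lr * grads[j] for j in range(n)]
    return x

# Overdetermined toy system consistent with x = (1, 2).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = regularized_lsq(A, b, x_prior=[0.0, 0.0], lam=1e-3)
```

    In the actual method, each such minimization would be followed by re-linearizing the nonlinear traveltime operator around the new estimate, and the prior would encode object shape rather than a simple pull toward a reference model.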

  3. Object-based attention and visual area LO.

    PubMed

    de-Wit, Lee H; Kentridge, Robert W; Milner, A David

    2009-05-01

    We investigated the neural basis of so-called "object-based attention" by examining patient D.F., who has visual form agnosia caused by bilateral damage to the lateral occipital (LO) area of the ventral visual stream. We tested D.F.'s object-based attention in two ways. In the first experiment, we used a spatial cueing procedure to compare the costs associated with shifting attention within versus between two separate outline figures. D.F. did not show the normal advantage of within-object over between-object attention shifts. In the second experiment, we used a complementary paradigm in which two separate stimuli, presented either on the same or on different objects, have to be identified as the same or different. We found no evidence for the normal pattern of superior performance for within- versus between-figure comparisons. In a third experiment, we checked that D.F. showed normal shift costs for invalid as opposed to valid cueing in a standard Posner spatial attention task. In a final experiment, we compared horizontal versus vertical attention shifting in a group of healthy controls without the presence of outline rectangles, and found that their pattern of shift costs was indistinguishable from that seen in D.F. when the rectangles were present (Experiment 1). We conclude that whilst D.F. has a normal spatial orienting system, this is completely uninfluenced by object structure. We suggest that area LO may mediate form processing precisely at the stage where visual representations normally influence the spread of attention.

  4. Calculation Software

    NASA Technical Reports Server (NTRS)

    1994-01-01

    MathSoft Plus 5.0 is a calculation software package for electrical engineers and computer scientists who need advanced math functionality. It incorporates SmartMath, an expert system that determines a strategy for solving difficult mathematical problems. SmartMath was the result of the integration into Mathcad of CLIPS, a NASA-developed shell for creating expert systems. By using CLIPS, MathSoft, Inc. was able to save the time and money involved in writing the original program.

  5. Writing Teachers Writing Software: Creating Our Place in the Electronic Age. Advances in Computers and Composition on Studies Series.

    ERIC Educational Resources Information Center

    LeBlanc, Paul J.

    Presenting a comprehensive look at (and critical history of) computer-aided composition (CAC), this book focuses on faculty development of software for composition studies. The book describes who is building these writing tools, how they are doing so, how their work is being received, and what is likely to affect their efforts in the future.…

  6. Object-based change detection: dimension of damage in residential areas of Abu Suruj, Sudan

    NASA Astrophysics Data System (ADS)

    Demharter, Timo; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

    Given the importance of Change Detection, especially in the field of crisis management, this paper discusses the advantage of object-based Change Detection. This project and the methods used give an opportunity to coordinate relief actions strategically. The principal objective of this project was to develop an algorithm that allows rapid detection of damaged and destroyed buildings in the area of Abu Suruj. This Sudanese village is located in West-Darfur and has become the victim of civil war. The software eCognition Developer was used to perform an object-based Change Detection on two panchromatic Quickbird 2 images from two different acquisition dates. The first image shows the area before, the second image shows the area after the massacres in this region. Classification of the huts of the Sudanese town Abu Suruj was achieved by first segmenting the huts and then classifying them on the basis of geometrical and brightness-related values. The huts were classified as "new", "destroyed", and "preserved" with the help of an automated algorithm. Finally, the results were presented in the form of a map which displays the different conditions of the huts. The accuracy of the project is validated by an accuracy assessment resulting in an Overall Classification Accuracy of 90.50 percent. These change detection results allow aid organizations to provide quick and efficient help where it is needed the most.
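
    The per-object classification step, comparing each segmented hut's brightness between the two acquisition dates, can be sketched as below. This is a toy reduction of the eCognition workflow; the `drop_threshold`, the omission of the "new" class, and all pixel values are hypothetical.

```python
def mean_brightness(img, cells):
    """Mean pixel value over a set of (row, col) cells."""
    return sum(img[y][x] for y, x in cells) / len(cells)

def classify_huts(before, after, huts, drop_threshold=0.3):
    """Label each hut object by the relative change in mean brightness.
    `huts` maps a hut id to its (row, col) cells from a prior segmentation."""
    status = {}
    for hut, cells in huts.items():
        b = mean_brightness(before, cells)
        a = mean_brightness(after, cells)
        # A large brightness drop is taken as a proxy for destruction.
        status[hut] = 'destroyed' if a < b * (1 - drop_threshold) else 'preserved'
    return status

# Toy panchromatic grids: hut_1 darkens sharply, hut_2 is unchanged.
before = [[200, 200], [180, 50]]
after  = [[ 60,  70], [175, 50]]
huts = {'hut_1': [(0, 0), (0, 1)], 'hut_2': [(1, 0)]}
status = classify_huts(before, after, huts)
```

    Working per object rather than per pixel is what keeps isolated pixel noise from being reported as damage.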

  7. Geographic object-based delineation of neighborhoods of Accra, Ghana using QuickBird satellite imagery.

    PubMed

    Stow, Douglas A; Lippitt, Christopher D; Weeks, John R

    2010-08-01

    The objective was to test GEographic Object-based Image Analysis (GEOBIA) techniques for delineating neighborhoods of Accra, Ghana using QuickBird multispectral imagery. Two approaches to aggregating census enumeration areas (EAs) based on image-derived measures of vegetation objects were tested: (1) merging adjacent EAs according to vegetation measures and (2) image segmentation. Both approaches exploit readily available functions within commercial GEOBIA software. Image-derived neighborhood maps were compared to a reference map derived by spatial clustering of slum index values (from census data), to provide a relative assessment of potential map utility. A size-constrained iterative segmentation approach to aggregation was more successful than standard image segmentation or feature merge techniques. The segmentation approaches account for size and shape characteristics, enabling more realistic neighborhood boundaries to be delineated. The percentage of vegetation patches within each EA yielded more realistic delineation of potential neighborhoods than mean vegetation patch size per EA.
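
    The size-constrained iterative aggregation of enumeration areas can be approximated by a greedy union-find merge over an adjacency list. This sketch is an assumption-laden simplification (a population cap as the size constraint, unweighted averaging of vegetation fractions), not the GEOBIA procedure used in the paper.

```python
def merge_eas(eas, adjacency, max_size, veg_tol):
    """Greedily join adjacent enumeration areas with similar vegetation
    fractions, stopping when a group would exceed the size cap."""
    parent = {ea: ea for ea in eas}

    def find(ea):
        while parent[ea] != ea:
            ea = parent[ea]
        return ea

    size = {ea: eas[ea]['pop'] for ea in eas}
    veg = {ea: eas[ea]['veg'] for ea in eas}
    for a, b in adjacency:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        if (abs(veg[ra] - veg[rb]) <= veg_tol
                and size[ra] + size[rb] <= max_size):
            parent[rb] = ra
            size[ra] += size[rb]
            veg[ra] = (veg[ra] + veg[rb]) / 2  # crude unweighted update
    return {ea: find(ea) for ea in eas}

# Hypothetical EAs: A and B are vegetation-similar, C is not.
eas = {'A': {'pop': 500, 'veg': 0.10},
       'B': {'pop': 600, 'veg': 0.12},
       'C': {'pop': 550, 'veg': 0.55}}
groups = merge_eas(eas, [('A', 'B'), ('B', 'C')], max_size=2000, veg_tol=0.1)
```

    The size constraint is what prevents the greedy merge from collapsing an entire city into one "neighborhood", mirroring the size-constrained iterative segmentation the abstract found most successful.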

  8. Geographic Object-Based Image Analysis - Towards a new paradigm

    NASA Astrophysics Data System (ADS)

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature, we conclude that GEOBIA is a new and evolving paradigm.

  9. Geographic Object-Based Image Analysis - Towards a new paradigm.

    PubMed

    Blaschke, Thomas; Hay, Geoffrey J; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature, we conclude that GEOBIA is a new and evolving paradigm.

  10. [Role of connectedness in early object-based attentional selection].

    PubMed

    Takeya, Ryuji; Kasai, Tetsuko

    2014-08-01

    It has been suggested that uniform connectedness is the most fundamental factor in forming units of attentional selection, while there is evidence that attention can select a perceptual group that consists of separate elements with similar features. The present study examined the effects of connectedness and boundary-feature similarity on early spatial-selection processes using a sustained-focal-attention paradigm of event-related potentials (ERPs). Bilateral stimuli were manipulated to have an orthogonal combination of connectedness (C-, C+) and similarity in boundary feature (S-, S+). ERPs were recorded from 15 participants who were instructed to pay attention to the left or the right visual field and to respond to a target shape that appeared infrequently in the attended field. The ERP attention effect in the N1 latency range (125-185 ms) was decreased for stimuli with connectedness and/or boundary-feature similarity, and the effects of the two grouping factors were independent of each other. The present result suggests that multiple grouping factors, including connectedness, operate in parallel in early processes of object-based attention-spreading.

  11. An Object-Based Architecture for Biomedical Expert Database Systems

    PubMed Central

    Barsalou, Thierry

    1988-01-01

    Objects play a major role in both database and artificial intelligence research. In this paper, we present a novel architecture for expert database systems that introduces an object-based interface between relational databases and expert systems. We exploit a semantic model of the database structure to map relations automatically into object templates, where each template can be a complex combination of join and projection operations. Moreover, we arrange the templates into object networks that represent different views of the same database. Separate processes instantiate those templates using data from the base relations, cache the resulting instances in main memory, navigate through a given network's objects, and update the database according to changes made at the object layer. In the context of an immunologic-research application, we demonstrate the capabilities of a prototype implementation of the architecture. The resulting model provides enhanced tools for database structuring and manipulation. In addition, this architecture supports efficient bidirectional communication between database and expert systems through the shared object layer.

  12. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

    Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies showing that objects predict eye fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601

  13. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed by a linear combination of several pure spectra in an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0 norm based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
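    The two conflicting objectives named in this record (reconstruction error vs. l0 sparsity) and the population-based flipping strategy can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' algorithm: the abundance nonnegativity constraint is omitted, and the flip probability is an assumption.

    ```python
    import numpy as np

    def unmix_error(A, y, z):
        """Reconstruction error of pixel y using only the library columns
        selected by the binary support z (nonnegativity omitted for brevity)."""
        if z.sum() == 0:
            return float(np.linalg.norm(y))
        x, *_ = np.linalg.lstsq(A[:, z.astype(bool)], y, rcond=None)
        return float(np.linalg.norm(A[:, z.astype(bool)] @ x - y))

    def moea_unmix(A, y, pop=30, iters=50, seed=0):
        """Population-based random bit flipping over binary supports; records
        the lowest reconstruction error seen per sparsity level (a crude
        stand-in for a Pareto front over the two objectives)."""
        rng = np.random.default_rng(seed)
        m = A.shape[1]
        P = rng.integers(0, 2, size=(pop, m))
        best = {}  # sparsity level -> lowest error observed
        for _ in range(iters):
            for z in P:
                e, k = unmix_error(A, y, z), int(z.sum())
                if e < best.get(k, np.inf):
                    best[k] = e
            flips = rng.random(P.shape) < 1.0 / m  # flip each bit with prob. 1/m
            P = np.where(flips, 1 - P, P)
        return best
    ```

    On a pixel mixed from two library spectra, any support containing those two columns reaches near-zero error, so the error-versus-sparsity trade-off flattens out at sparsity two.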

  14. Object-based rapid change detection for disaster management

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

    Rapid change detection is used in cases of natural hazards and disasters. This analysis leads quickly to information about areas of damage. In certain cases, the lack of information after catastrophic events obstructs supporting measures within disaster management. Earthquakes, tsunamis, civil war, volcanic eruptions, droughts and floods have much in common: people are directly affected, and landscapes and buildings are destroyed. In every case, geospatial data are necessary to gain knowledge as a basis for decision support. Where to go first? Which infrastructure is usable? How much area is affected? These are essential questions which need to be answered before appropriate, eligible help can be established. This study presents an innovative strategy to retrieve post-event information by use of an object-based change detection approach. Within a transferable framework, the developed algorithms can be implemented for a set of remote sensing data among different investigation areas. Several case studies form the basis for the retrieved results. With a coarse division into statistical parts and segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborate normalized temporal change index (NTCI), panchromatic datasets are used to extract areas which are destroyed, areas which were not affected, and areas which are newly developing in cases where rebuilding has already started. The results of the study are also applicable to monitoring urban growth.
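    The abstract does not give the NTCI formula. The sketch below uses a generic normalized-difference form and hypothetical thresholds to illustrate how two co-registered panchromatic rasters could be turned into destroyed / unaffected / newly-developing classes:

    ```python
    import numpy as np

    def temporal_change_index(img_t1, img_t2, eps=1e-9):
        """Normalized difference of two co-registered panchromatic images,
        bounded in [-1, 1]. The published NTCI may be defined differently."""
        a, b = img_t1.astype(float), img_t2.astype(float)
        return (b - a) / (a + b + eps)

    def label_change(index, thresh=0.2):
        """Hypothetical thresholding: strongly darker pixels are labelled
        destroyed, strongly brighter ones newly developing ("new")."""
        labels = np.full(index.shape, "unaffected", dtype=object)
        labels[index < -thresh] = "destroyed"
        labels[index > thresh] = "new"
        return labels
    ```

    In an object-based workflow these per-pixel labels would be aggregated over segmentation-derived objects rather than used directly.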

  15. Development of Object-Based Teleoperator Control for Unstructured Applications

    DTIC Science & Technology

    1996-12-01

    [Extraction fragments from the report's tables and references] Table 5.1: Module Sampling Rates of Test Set #5 in Appendix C; Table A.1: PUMA 560 D-H parameters; Table A.2: ROBOTICA Input. References: 33. Schneider, D. L., EENG 540 Class Notes, 1994; 34. Nethery, John, Robotica: User's Guide and Reference Manual, University of Illinois. Excerpt: in the case of the PUMA robot, the overall forward kinematics were first computed using the ROBOTICA mathematical software [34], then some of the joints are set to be

  16. Payload software technology: Software technology development plan

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Programmatic requirements for the advancement of software technology are identified for meeting the space flight requirements in the 1980 to 1990 time period. The development items are described, and software technology item derivation worksheets are presented along with the cost/time/priority assessments.

  17. Computational complexity of object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2005-09-01

    Image compression via transform coding applied to small rectangular regions or encoding blocks appears to be approaching asymptotic rate-distortion performance. However, an emerging compression technology, called object-based compression (OBC) promises significantly improved performance via compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image, but has pixel- and featural-level differences. Semantic equivalence between the source and decompressed image facilitates fast decompression through efficient substitutions, albeit at the cost of codebook search in the compression step. Given small codebooks, OBC holds promise for information-push technologies where approximate context is sufficient, for example, transmission of surveillance images that provide the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations. Several innovative techniques have been developed for region segmentation, as discussed in a previous paper [4]. Additionally, tradeoffs between representational fidelity, computational cost, and storage requirement occur, as with the vast majority of lossy compression algorithms. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed, which can be associated theoretically with a space complexity model that we have previously published [2]. The result, when combined with measurements of

  18. Object-based detection of vehicles in airborne data

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Bulatov, Dimitri; Middelmann, Wolfgang

    2015-10-01

    Robust detection of vehicles in airborne data is a challenging task, since high variation in the object signatures - depending on data resolution - and often small contrast between objects and background lead to high false classification rates and missed detections. Despite these facts, many applications require reliable results which can be obtained in a short time. In this paper, an object-based approach for vehicle detection in airborne laser scans (ALS) and photogrammetrically reconstructed 2.5D data is described. The focus of this paper lies on a robust object segmentation algorithm as well as the identification of features for a reliable separation between vehicles and background (all non-vehicle objects) in different scenes. The described method is based on three consecutive steps, namely, object segmentation, feature extraction and supervised classification. In the first step, the 2.5D data is segmented and possible targets are identified. The segmentation process is based on morphological top-hat filtering, which leaves areas that are smaller than a given filter size and higher (brighter) than their surroundings. This approach is chosen due to the low computational effort of the filter, which allows fast computation even for large areas. The next step is feature extraction. Based on the initial segmentation, features for every identified object are extracted. In addition to frequently used features like height above ground, object area, or point distribution, more complex features like object planarity, entropy in the intensity image, and lineness measures are used. The last step is the classification of each object. For this purpose, a random forest classifier (RF) using the normalized features extracted in the previous step is chosen. RFs are suitable for high-dimensional and nonlinear problems. In contrast to other approaches (e.g. the maximum likelihood classifier), RFs achieve good results even with relatively small training samples.
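    The segmentation step described in this record (top-hat filtering of a 2.5D raster followed by per-object feature extraction) can be sketched with plain NumPy. The filter size and height threshold are illustrative, and the random-forest classification stage is omitted:

    ```python
    import numpy as np
    from collections import deque

    def _min_filter(img, size, pad_val):
        """Square-window minimum filter (grey erosion) via sliding windows."""
        r = size // 2
        padded = np.pad(img, r, constant_values=pad_val)
        win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
        return win.min(axis=(2, 3))

    def white_tophat(img, size=5):
        """Grey top-hat = image - opening; keeps peaks narrower than `size`."""
        erosion = _min_filter(img, size, pad_val=np.inf)
        opening = -_min_filter(-erosion, size, pad_val=np.inf)  # dilation = -erosion(-x)
        return img - opening

    def segment_objects(dsm, size=5, min_height=1.0):
        """Label 4-connected candidate-object pixels in a height raster (DSM)
        and report simple per-object features (area, maximum height)."""
        mask = white_tophat(dsm, size) > min_height
        labels = np.zeros(dsm.shape, dtype=int)
        objects, next_id = [], 0
        for start in zip(*np.nonzero(mask)):
            if labels[start]:
                continue
            next_id += 1
            labels[start] = next_id
            queue, pixels = deque([start]), []
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = next_id
                        queue.append((ny, nx))
            objects.append({"area": len(pixels),
                            "height": float(max(dsm[p] for p in pixels))})
        return objects
    ```

    A small raised blob (narrower than the structuring element) survives the top-hat, while a large plateau such as a building roof is suppressed, which is exactly why the filter isolates vehicle-sized objects.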

  19. Detection and object-based classification of offshore oil slicks using ENVISAT-ASAR images.

    PubMed

    Akar, Sertac; Süzen, Mehmet Lutfi; Kaymakci, Nuretdin

    2011-12-01

    The aim of this study is to propose and test a multi-level methodology for the detection of oil slicks in ENVISAT Advanced Synthetic Aperture Radar (ASAR) imagery, which can be used to support the identification of hydrocarbon seeps. We selected Andrusov Ridge in the Central Black Sea as the test study area, where extensive hydrocarbon seepages were known to occur continuously. Hydrocarbon seepage of tectonic or stratigraphic origin at the sea floor causes oily gas plumes to rise to the sea surface and form thin oil films called oil slicks. Microwave sensors like synthetic aperture radar (SAR) are very suitable for ocean remote sensing, as they measure the backscattered radiation from the surface and show the roughness of the terrain. Oil slicks dampen the sea waves, creating dark patches in the SAR image. The proposed methodology includes three levels: visual interpretation, image filtering and object-based oil spill detection. Level I, after data preparation, includes dark-spot identification by visual interpretation and subset/scene creation. The procedure then continues with categorization of the subsets/scenes into three cases based on the contrast of the dark spots against their surroundings. Level II includes preparation of the subsets/scenes for segmentation by image and morphological filtering. Level III includes segmentation and feature extraction, followed by object-based classification. The object-based classification is applied with fuzzy membership functions defined on the extracted features of the ASAR subsets/scenes, where the parameters of the detection algorithms are tuned specifically for each case group. As a result, oil slicks are discriminated from look-alikes with overall classification accuracies of 83% for oil slicks and 77% for look-alikes, obtained by averaging the three different cases.
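    The fuzzy-membership classification described in this record can be illustrated with trapezoidal membership functions over a few dark-spot features. The feature names and all thresholds below are hypothetical, not the tuned parameters of the study:

    ```python
    def trapezoid(x, a, b, c, d):
        """Trapezoidal fuzzy membership: 0 below a, ramp up on [a, b],
        1 on [b, c], ramp down on [c, d], 0 above d."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    def oil_slick_score(contrast_db, area_km2, border_complexity):
        """Combine per-feature memberships with min (fuzzy AND).
        All feature ranges are illustrative assumptions."""
        m_contrast = trapezoid(contrast_db, 1.5, 3.0, 12.0, 15.0)
        m_area = trapezoid(area_km2, 0.05, 0.2, 50.0, 100.0)
        m_shape = trapezoid(border_complexity, 0.0, 0.0, 2.5, 4.0)
        return min(m_contrast, m_area, m_shape)
    ```

    Tuning the breakpoints per contrast case, as the study does for its three case groups, amounts to swapping in different (a, b, c, d) tuples for each feature.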

  20. Prism adaptation does not alter object-based attention in healthy participants

    PubMed Central

    Bultitude, Janet H.

    2013-01-01

    Hemispatial neglect (‘neglect’) is a disabling condition that can follow damage to the right side of the brain, in which patients show difficulty in responding to or orienting towards objects and events that occur on the left side of space. Symptoms of neglect can manifest in both space- and object-based frames of reference. Although patients can show a combination of these two forms of neglect, they are considered separable and have distinct neurological bases. In recent years considerable evidence has emerged to demonstrate that spatial symptoms of neglect can be reduced by an intervention called prism adaptation. Patients point towards objects viewed through prismatic lenses that shift the visual image to the right. Approximately five minutes of repeated pointing results in a leftward recalibration of pointing and improved performance on standard clinical tests for neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Here we examined the effect of prism adaptation on the performance of healthy participants who completed a computerised test of space- and object-based attention. Participants underwent adaptation to leftward- or rightward-shifting prisms, or performed neutral pointing according to a between-groups design. Significant pointing after-effects were found for both prism groups, indicating successful adaptation. In addition, the results of the computerised test revealed larger reaction-time costs associated with shifts of attention between two objects compared to shifts of attention within the same object, replicating previous work. However there were no differences in the performance of the three groups, indicating that prism adaptation did not influence space- or object-based attention for this task. When combined with existing literature, the results are consistent with the proposal that prism

  1. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    NASA Astrophysics Data System (ADS)

    Hoffmann Barfod, Gry; Larsen, John Møller; Lichtenberger, Achim; Raja, Rubina

    2015-12-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge.

  2. Computer Software.

    ERIC Educational Resources Information Center

    Kay, Alan

    1984-01-01

    Discusses the nature and development of computer software. Programing, programing languages, types of software (including dynamic spreadsheets), and software of the future are among the topics considered. (JN)

  3. Object-based change detection for landslide monitoring based on SPOT imagery

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens

    2014-05-01

    The steadily increasing availability of Earth observation (EO) data from a wide range of sensors facilitates the long-term monitoring of mass movements and retrospective analysis. Pixel-based approaches are most commonly used for detecting changes in optical remote sensing data. However, single pixels are not suitable for depicting natural phenomena such as landslides in their full complexity and their transformation over time. By applying semi-automated object-based change detection, limitations inherent to pixel-based methods can be overcome to a certain extent. For instance, the problem of variant spectral reflectance for the same pixel location in images from different points in time can be minimized. Therefore, atmospheric and radiometric correction of input data sets - although highly recommended - appears less critical for developing a straightforward change detection approach based on object-based image analysis (OBIA). The object-based change detection approach was developed for a subset of the Baichi catchment, which is located in the Shihmen Reservoir watershed in northern Taiwan. The study area is characterized by mountainous terrain with steep slopes and is regularly affected by severe landslides and debris flows. Several optical satellite images, i.e. SPOT images from different years and seasons with a spatial resolution ranging from 2.5 to 6.25 m, have been used for monitoring the past evolution of landslides and landslide-affected areas. A digital elevation model (DEM) with 5 m spatial resolution was integrated in the analysis to support the differentiation of landslides and debris flows. The landslide changes were identified by comparing feature values of segmentation-derived image objects between two subsequent images in eCognition (Trimble) software. To increase the robustness and transferability of the approach, we identified changes by using the relative difference in values of band-specific relational features, spectral
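    The comparison of object feature values between two image dates can be sketched as a relative difference. The feature name, the object representation and the threshold below are assumptions for illustration only:

    ```python
    def relative_difference(before, after, eps=1e-9):
        """Relative change of an object feature between two dates, in [-1, 1];
        insensitive to a common scaling of the two values."""
        return (after - before) / (after + before + eps)

    def flag_landslide_objects(objects, feature="brightness", thresh=0.15):
        """Flag image objects whose feature increased markedly between dates
        (fresh landslide scars are typically brighter than vegetation)."""
        return [o["id"] for o in objects
                if relative_difference(o[f"{feature}_t1"],
                                       o[f"{feature}_t2"]) > thresh]
    ```

    Using relative rather than absolute differences is one way such an approach can tolerate images that were not atmospherically corrected, since a multiplicative illumination change largely cancels out.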

  4. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    PubMed Central

    Hoffmann Barfod, Gry; Larsen, John Møller; Raja, Rubina

    2015-01-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge. PMID:26648504

  5. Payload software technology

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A software analysis was performed of known STS sortie payload elements and their associated experiments. This provided basic data for STS payload software characteristics and sizes. A set of technology drivers was identified based on a survey of future technology needs and an assessment of current software technology. The results will be used to evolve a planned approach to software technology development. The purpose of this plan is to ensure that software technology is advanced at a pace and a depth sufficient to fulfill the identified future needs.

  6. Problem-Solving Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    CBR Express software solves problems by adapting stored solutions to new problems specified by a user. It is applicable to a wide range of situations. The technology was originally developed by Inference Corporation for Johnson Space Center's Advanced Software Development Workstation. The project focused on the reuse of software designs, and Inference used CBR as part of the ACCESS prototype software. The commercial CBR Express is used as a "help desk" for customer support, enabling reuse of existing information when necessary. It has been adopted by several companies, among them American Airlines, which uses it to solve reservation system software problems.

  7. Status Report on the Development of Micro-Scheduling Software for the Advanced Outage Control Center Project

    SciTech Connect

    Germain, Shawn St.; Thomas, Kenneth; Farris, Ronald; Joe, Jeffrey

    2014-09-01

    The long-term viability of existing nuclear power plants (NPPs) in the United States (U.S.) is dependent upon a number of factors, including maintaining high capacity factors, maintaining nuclear safety, and reducing operating costs, particularly those associated with refueling outages. Refueling outages typically take 20-30 days, and for existing light water NPPs in the U.S., the reactor cannot be in operation during the outage. Furthermore, given that many NPPs generate between $1-1.5 million/day in revenue when in operation, there is considerable interest in shortening the length of refueling outages. Yet refueling outages are highly complex operations, involving multiple concurrent and dependent activities that are difficult to coordinate. Finding ways to improve refueling outage performance while maintaining nuclear safety has proven to be difficult. The Advanced Outage Control Center project is a research and development (R&D) demonstration activity under the Light Water Reactor Sustainability (LWRS) Program. LWRS is an R&D program which works with industry R&D programs to establish technical foundations for the licensing and managing of long-term, safe, and economical operation of current NPPs. The Advanced Outage Control Center project has the goal of improving the management of commercial NPP refueling outages. To accomplish this goal, this INL R&D project is developing an advanced outage control center (OCC) that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report describes specific recent efforts to develop a capability called outage Micro-Scheduling. Micro-Scheduling is the ability to allocate and schedule outage support task resources on a sub-hour basis. Micro-Scheduling is the real-time fine-tuning of the outage schedule to react to the actual progress of the primary outage activities to ensure that support task resources are
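    Micro-Scheduling, as described in this record, allocates support-task resources on a sub-hour basis. A minimal greedy sketch, not the INL tool's actual algorithm, might look like this, with 15-minute slots as an assumed granularity and tasks assumed pre-sorted by priority:

    ```python
    from datetime import datetime, timedelta

    def micro_schedule(tasks, crews, slot_minutes=15):
        """Greedy sub-hour allocation: assign each support task (name,
        duration in minutes) to whichever crew frees up earliest, rounding
        durations up to whole scheduling slots."""
        free_at = {c: datetime(2014, 9, 1, 8, 0) for c in crews}  # assumed shift start
        schedule = []
        for name, minutes in tasks:
            crew = min(free_at, key=free_at.get)     # earliest-available crew
            start = free_at[crew]
            slots = -(-minutes // slot_minutes)      # ceiling division
            end = start + timedelta(minutes=slots * slot_minutes)
            free_at[crew] = end
            schedule.append((name, crew, start, end))
        return schedule
    ```

    The real-time fine-tuning the report describes would amount to re-running such an allocation whenever primary outage activities report actual progress.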

  8. A novel mask proximity correction software combining accuracy and reduced writing time for the manufacturing of advanced photomasks

    NASA Astrophysics Data System (ADS)

    Schiavone, Patrick; Martin, Luc; Browning, Clyde; Farys, Vincent; Sundermann, Frank; Narukawa, Shogo; Takikawa, Tadahiko; Hayashi, Naoya

    2012-06-01

    The new generations of photomasks bring more and more challenges to the mask manufacturer. Mask shops face two conflicting requirements, namely improving pattern fidelity and reducing, or at least maintaining, acceptable writing time. These requirements are getting more and more challenging as pattern size continuously shrinks and data volumes continuously grow. Although classical dose-modulation proximity effect correction is able to provide sufficient process control for mainstream products, an increasing number of published and wafer data show that the mask process is becoming a non-negligible contributor to 28nm technology yield. We show in this paper that a novel approach to mask proximity effect correction is able to meet the dual challenge of the new generation of masks. Unlike the classical approach, the technique presented in this paper is based on a concurrent optimization of the dose and geometry of the fractured shots. Adding one more parameter makes it possible to provide the best compromise between accuracy and writing time, since energy latitude can be taken into account as well. This solution is implemented in the Inscale software package from Aselta Nanographics. We have assessed the capability of this technology on several levels of a 28nm technology. On this set, the writing time has been reduced by up to 25% without sacrificing accuracy, which at the same time has been improved significantly compared to the existing process. The experiments presented in the paper confirm that a versatile proximity effect correction strategy, combining dose and geometry modulation, helps users to trade off between resolution/accuracy and e-beam write time.

  9. A Symphony of Software.

    ERIC Educational Resources Information Center

    Currents, 2002

    2002-01-01

    Offers a descriptive table of databases that help higher education institutions orchestrate advancement operations. Information includes vendor, contact, software, price, database engine/server platform, recommended reporting tools, record capacity, and client type. (EV)

  10. A Symphony of Software.

    ERIC Educational Resources Information Center

    Currents, 2002

    2002-01-01

    Offers a descriptive table of databases that help higher education institutions orchestrate advancement operations. Information includes vendor, contact, software, price, database engine/server platform, recommended reporting tools, record capacity, and client type. (EV)

  11. Biological imaging software tools.

    PubMed

    Eliceiri, Kevin W; Berthold, Michael R; Goldberg, Ilya G; Ibáñez, Luis; Manjunath, B S; Martone, Maryann E; Murphy, Robert F; Peng, Hanchuan; Plant, Anne L; Roysam, Badrinath; Stuurman, Nico; Stuurmann, Nico; Swedlow, Jason R; Tomancak, Pavel; Carpenter, Anne E

    2012-06-28

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the inherent challenges and the overall status of available software for bioimage informatics, focusing on open-source options.

  12. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  13. Combining TerraSAR-X and SPOT-5 data for object-based landslide detection

    NASA Astrophysics Data System (ADS)

    Friedl, B.; Hölbling, D.; Füreder, P.

    2012-04-01

    Landslide detection and classification is an essential requirement in pre- and post-disaster hazard analysis. In earlier studies landslide detection was often achieved through time-consuming and cost-intensive field surveys and visual orthophoto interpretation. Recent studies show that Earth Observation (EO) data offer new opportunities for fast, reliable and accurate landslide detection and classification, which may contribute to effective landslide monitoring and landslide hazard management. To ensure the fast recognition and classification of landslides at a regional scale, a (semi-)automated object-based landslide detection approach is established for a study site situated in the Huaguoshan catchment, Southern Taiwan. The study site exhibits a high vulnerability to landslides and debris flows, which are predominantly typhoon-induced. Through the integration of optical satellite data (SPOT-5 with 2.5 m GSD), SAR (Synthetic Aperture Radar) data (TerraSAR-X Spotlight with 2.95 m GSD) and digital elevation information (DEM with 5 m GSD), including its derived products (e.g. slope, curvature, flow accumulation), landslides may be examined more efficiently than by relying on single data sources only. The combination of optical and SAR data in an object-based image analysis (OBIA) domain for landslide detection and classification has not been investigated so far, even though SAR imagery shows properties valuable for landslide detection that differ from those of optical data (e.g. high sensitivity to surface roughness and soil moisture). The main purpose of this study is to recognize and analyze existing landslides by applying object-based image analysis making use of eCognition software. OBIA provides a framework for examining features defined by spectral, spatial, textural, contextual as well as hierarchical properties. Objects are derived through image segmentation and serve as input for the classification process, which relies on transparent rulesets, representing knowledge
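
The segmentation step that produces the image objects can be illustrated with a minimal stand-in: threshold a brightness grid and grow 4-connected regions into labelled objects. Real OBIA software such as eCognition uses far richer multiresolution segmentation; the grid and threshold below are invented for illustration.

```python
from collections import deque

def segment(grid, threshold):
    # Label 4-connected regions of cells at or above the threshold.
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and labels[r][c] == 0:
                next_label += 1
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:                      # breadth-first region growing
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

brightness = [
    [9, 9, 0, 0, 5],
    [9, 0, 0, 5, 5],
    [0, 0, 0, 0, 5],
]
labels, n = segment(brightness, threshold=5)   # two separate bright objects
```

The resulting label grid is the kind of object layer that a ruleset would then classify using spectral, textural and contextual properties.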

  14. Object-based algorithms and methods for quantifying urban growth pattern using sequential satellite images

    NASA Astrophysics Data System (ADS)

    Yu, Bailang; Liu, Hongxing; Gao, Yige; Wu, Jianping

    2008-08-01

    Previously, urban growth patterns have been described and measured by pixel-by-pixel comparison of satellite images: the geographic extent, patterns and types of urban growth are derived from satellite images separated in time. However, the pixel-by-pixel comparison approach suffers from several drawbacks. First, slight errors in image geo-referencing can cause false detection of changes. Second, it is difficult to recognize and correct artifact changes induced by data noise and data-processing errors. Third, only limited information can be derived. In this paper, we present a new object-based method to describe and quantify urban growth patterns. The different types of land cover are classified from sequential satellite images as urban objects. The geometric and shape attributes of the objects and the spatial relationships between them are employed to identify the different types of urban growth pattern. The algorithms involved in the object-based method are implemented in the C++ programming language, and the software user interface is developed using ArcObjects and VB.Net. A simulated example is given to demonstrate the utility and effectiveness of this new method.
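
The geometric attributes such a method computes per object can be sketched as follows: area, bounding box, and a fill ratio (area divided by bounding-box area) as a crude compactness descriptor. The label grid is a toy stand-in for classified urban objects, not the paper's C++ implementation.

```python
def object_attributes(labels):
    # Accumulate per-label area and bounding box from a 2-D label grid.
    stats = {}
    for r, row in enumerate(labels):
        for c, lab in enumerate(row):
            if lab == 0:
                continue
            s = stats.setdefault(lab, {"area": 0, "rmin": r, "rmax": r,
                                       "cmin": c, "cmax": c})
            s["area"] += 1
            s["rmin"] = min(s["rmin"], r); s["rmax"] = max(s["rmax"], r)
            s["cmin"] = min(s["cmin"], c); s["cmax"] = max(s["cmax"], c)
    for s in stats.values():
        bbox = (s["rmax"] - s["rmin"] + 1) * (s["cmax"] - s["cmin"] + 1)
        s["fill_ratio"] = s["area"] / bbox    # 1.0 = compact block
    return stats

labels = [
    [1, 1, 0, 2],
    [1, 1, 0, 2],
    [0, 0, 2, 2],
]
stats = object_attributes(labels)
# object 1 is a compact 2x2 block; object 2 is an L-shape with a lower fill ratio
```

Attributes like these, compared between image dates, distinguish infill growth (compact) from edge or corridor expansion (elongated).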

  15. Physics pedagogical software: The CIP software contest

    NASA Astrophysics Data System (ADS)

    Donnelly, Denis

    1999-09-01

    In 1989, a decision was made that the journal Computers in Physics (CIP) would sponsor a software contest under the auspices of its education department. The goal was to reward software authors for their efforts to advance undergraduate pedagogy and to advertise their work to the larger physics community. The contest is now in its ninth year. In the following remarks, we will first attempt some general observations and then offer some evaluative comments.

  16. Summary Describing Integration of ERM Methodology into Supervisory Control Framework with Software Package Documentation; Advanced Reactor Technology Milestone: M4AT-16PN2301052

    SciTech Connect

    Ramuhalli, Pradeep; Hirt, Evelyn H.; Dib, Gerges; Veeramany, Arun; Bonebrake, Christopher A.; Roy, Surajit

    2016-09-20

    This project involved the development of enhanced risk monitors (ERMs) for active components in Advanced Reactor (AdvRx) designs by integrating real-time information about equipment condition with risk monitors. Health monitoring techniques in combination with predictive estimates of component failure based on condition and risk monitors can serve to indicate the risk posed by continued operation in the presence of detected degradation. This combination of predictive health monitoring based on equipment condition assessment and risk monitors can also enable optimization of maintenance scheduling with respect to the economics of plant operation. This report summarizes PNNL’s multi-year project on the development and evaluation of an ERM concept for active components while highlighting FY2016 accomplishments. Specifically, this report provides a status summary of the integration and demonstration of the prototypic ERM framework with the plant supervisory control algorithms being developed at Oak Ridge National Laboratory (ORNL), and describes additional case studies conducted to assess the sensitivity of the technology to different quantities. Supporting documentation on the software package to be provided to ORNL is incorporated in this report.
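
The core idea of an enhanced risk monitor can be sketched very roughly (this is an invented illustration, not PNNL's ERM framework): map a measured degradation level of a component to a failure probability, then weight it by a consequence measure so the risk figure updates as condition data arrive. The exponential hazard model and all numbers below are hypothetical.

```python
import math

def failure_probability(degradation, scale=3.0):
    # Degradation in [0, 1]; probability rises nonlinearly toward 1.
    # The exponential form and scale are assumptions for the sketch.
    return 1.0 - math.exp(-scale * degradation)

def risk(degradation, consequence):
    # Risk = probability of failure x consequence of failure.
    return failure_probability(degradation) * consequence

healthy = risk(0.05, consequence=100.0)    # lightly degraded component
degraded = risk(0.60, consequence=100.0)   # heavily degraded component
```

The monitored quantity (here `degradation`) would in practice come from health-monitoring sensors, so the risk estimate tracks actual equipment condition rather than a static failure rate.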

  17. Object-based landslide mapping on satellite images from different sensors

    NASA Astrophysics Data System (ADS)

    Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens; Blaschke, Thomas

    2015-04-01

    Several studies have proven that object-based image analysis (OBIA) is a suitable approach for landslide mapping using remote sensing data. Mostly, optical satellite images are utilized in combination with digital elevation models (DEMs) for semi-automated mapping. The ability of considering spectral, spatial, morphometric and contextual features in OBIA constitutes a significant advantage over pixel-based methods, especially when analysing non-uniform natural phenomena such as landslides. However, many of the existing knowledge-based OBIA approaches for landslide mapping are rather complex and are tailored to specific data sets. These restraints lead to a lack of transferability of OBIA mapping routines. The objective of this study is to develop an object-based approach for landslide mapping that is robust against changing input data with different resolutions, i.e. optical satellite imagery from various sensors. Two study sites in Taiwan were selected for developing and testing the landslide mapping approach. One site is located around the Baolai village in the Huaguoshan catchment in the southern-central part of the island, the other one is a sub-area of the Taimali watershed in Taitung County near the south-eastern Pacific coast. Both areas are regularly affected by severe landslides and debris flows. A range of very high resolution (VHR) optical satellite images was used for the object-based mapping of landslides and for testing the transferability across different sensors and resolutions: (I) SPOT-5, (II) Formosat-2, (III) QuickBird, and (IV) WorldView-2. Additionally, a digital elevation model (DEM) with 5 m spatial resolution and its derived products (e.g. slope, plan curvature) were used for supporting the semi-automated mapping, particularly for differentiating source areas and accumulation areas according to their morphometric characteristics. A focus was put on the identification of comparatively stable parameters (e.g. 
relative indices), which could be

  18. Sustaining Software-Intensive Systems

    DTIC Science & Technology

    2006-05-01

    sustainment personnel), help desk, COTS product management, and technology refresh. Successful software sustainment consists of more than modifying and...being released at least every two years. In addition, the technology on which COTS software relies is constantly advancing, which requires the program...Address NDS maintenance and technology refresh in post deployment software support planning − Address unavailability of NDS when needed especially

  19. Advanced Tools for Software Maintenance.

    DTIC Science & Technology

    1982-12-01

    Old Applications ...... 118 11.3.3 Training People to Use New Tools ......... 119 Appendix A. Ada Style Guidelines . . . . . . . . . . . . . 121...and application -specific programming techniques and methods. - The Intelligent Editor provides facilities for manipulating programs at several...are applicable today or in the near future. In identifying tools and techniques, this study focused on one aspect of the maintenance problem

  20. Advanced Languages for Systems Software

    DTIC Science & Technology

    1994-01-01

    these are too numerous to list here. Edoardo Biagioni . Post-doctoral researcher. System networking and kernel design and imple- mentation. Kenneth Cline...John Backus, John H. Williams, and Edward L. Wimmers. The programming language FL. In Turner [131], pages 219-247. [12] Edoardo Biagioni , Nicholas

  1. Support Vector Machine Classification of Object-Based Data for Crop Mapping, Using Multi-Temporal Landsat Imagery

    NASA Astrophysics Data System (ADS)

    Devadas, R.; Denham, R. J.; Pringle, M.

    2012-07-01

    Crop mapping and time series analysis of agronomic cycles are critical for monitoring land use and land management practices, and for analysing the issues of agro-environmental impacts and climate change. Multi-temporal Landsat data can be used to analyse decadal changes in cropping patterns at field level, owing to its medium spatial resolution and historical availability. This study attempts to develop robust remote sensing techniques, applicable across a large geographic extent, for state-wide mapping of cropping history in Queensland, Australia. In this context, traditional pixel-based classification was analysed in comparison with image object-based classification using advanced supervised machine-learning algorithms such as the Support Vector Machine (SVM). For the Darling Downs region of southern Queensland we gathered a set of Landsat TM images from the 2010-2011 cropping season. The Landsat data, along with vegetation index images, were subjected to multiresolution segmentation to obtain polygon objects. Object-based methods enabled the analysis of aggregated sets of pixels, and exploited shape-related and textural variation as well as spectral characteristics. SVM models were chosen after examining three shape-based parameters, twenty-three textural parameters and ten spectral parameters of the objects. We found that the object-based methods were superior to the pixel-based methods for classifying four major land use/land cover classes, considering the complexities of within-field spectral heterogeneity and spectral mixing. Comparative analysis clearly revealed that higher overall classification accuracy (95%) was observed with the object-based SVM compared with traditional pixel-based classification (89%) using the maximum likelihood classifier (MLC). Object-based classification also resulted in speckle-free images. Further, object-based SVM models were used to classify different broadacre crop types for summer and winter seasons. The influence of different shape
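
The object-based SVM step can be sketched with a toy stand-in: a minimal linear SVM trained by sub-gradient descent on hinge loss, separating image objects described by two synthetic features (say, a mean vegetation index and a texture measure). A real workflow would use a mature SVM library and the paper's thirty-six object parameters; everything here is invented for illustration.

```python
import numpy as np

# Two synthetic, well-separated object classes in feature space.
rng = np.random.default_rng(0)
crop = rng.normal([0.7, 0.2], 0.05, size=(50, 2))   # high index, smooth texture
bare = rng.normal([0.2, 0.6], 0.05, size=(50, 2))   # low index, rough texture
X = np.vstack([crop, bare])
y = np.hstack([np.ones(50), -np.ones(50)])

# Sub-gradient descent on the regularized hinge loss of a linear SVM.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    miss = margins < 1                    # points violating the margin
    if miss.any():
        grad_w = lam * w - (y[miss, None] * X[miss]).mean(axis=0)
        grad_b = -y[miss].mean()
    else:
        grad_w, grad_b = lam * w, 0.0
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```

With clearly separated clusters the learned hyperplane classifies every object correctly, mirroring the high per-object accuracies the study reports.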

  2. Covert orienting in the split brain: Right hemisphere specialization for object-based attention.

    PubMed

    Kingstone, Alan

    2015-12-18

    The present paper takes as its starting point Phil Bryden's long-standing interest in human attention and the role it can play in laterality effects. Past split-brain research has suggested that object-based attention is lateralized to the left hemisphere [e.g., Egly, R., Rafal, R. D., Driver, J., & Starreveld, Y. (1994). Covert orienting in the split brain reveals hemispheric specialization for object-based attention. Psychological Science, 5(6), 380-382]. The task used to isolate object-based attention in that previous work, however, has been found wanting [Vecera, S. P. (1994). Grouped locations and object-based attention: Comment on Egly, Driver, and Rafal (1994). Journal of Experimental Psychology: General, 123(3), 316-320]; and indeed, subsequent research with healthy participants using a different task has suggested that object-based attention is lateralized to the opposite right hemisphere (RH) [Valsangkar-Smyth, M. A., Donovan, C. L., Sinnett, S., Dawson, M. R., & Kingstone, A. (2004). Hemispheric performance in object-based attention. Psychonomic Bulletin & Review, 11(1), 84-91]. The present study tested the same split-brain as Egly, Rafal, et al. (1994) but used the object-based attention task introduced by Valsangkar-Smyth et al. (2004). The results confirm that object-based attention is lateralized to the RH. They also suggest that subcortical interhemispheric competition may occur and be dominated by the RH.

  3. Comparing Pixel and Object-Based Approaches to Map an Understorey Invasive Shrub in Tropical Mixed Forests

    PubMed Central

    Niphadkar, Madhura; Nagendra, Harini; Tarantino, Cristina; Adamo, Maria; Blonda, Palma

    2017-01-01

    The establishment of invasive alien species in varied habitats across the world is now recognized as a genuine threat to the preservation of biodiversity. Specifically, plant invasions in understory tropical forests are detrimental to the persistence of healthy ecosystems. Monitoring such invasions using Very High Resolution (VHR) satellite remote sensing has been shown to be valuable in designing management interventions for conservation of native habitats. Object-based classification methods are very helpful in identifying invasive plants in various habitats, by their inherent nature of imitating the ability of the human brain in pattern recognition. However, these methods have not been tested adequately in dense tropical mixed forests where invasion occurs in the understorey. This study compares a pixel-based and object-based classification method for mapping the understorey invasive shrub Lantana camara (Lantana) in a tropical mixed forest habitat in the Western Ghats biodiversity hotspot in India. Overall, a hierarchical approach of mapping the top canopy first, and then further processing for the understorey shrub, using measures such as texture and vegetation indices, proved effective in separating Lantana from other cover types. In the first method, we implement a simple parametric supervised classification for mapping cover types, and then process within these types for Lantana delineation. In the second method, we use an object-based segmentation algorithm to map cover types, and then perform further processing for separating Lantana. The improved ability of the object-based approach to delineate structurally distinct objects with distinctive spectral and spatial characteristics of their own, as well as with reference to their surroundings, allows for much greater flexibility in identifying invasive understorey shrubs among the complex vegetation of the tropical forest than the parametric classifier provides. Conservation practices in tropical mixed

  4. Comparing Pixel and Object-Based Approaches to Map an Understorey Invasive Shrub in Tropical Mixed Forests.

    PubMed

    Niphadkar, Madhura; Nagendra, Harini; Tarantino, Cristina; Adamo, Maria; Blonda, Palma

    2017-01-01

    The establishment of invasive alien species in varied habitats across the world is now recognized as a genuine threat to the preservation of biodiversity. Specifically, plant invasions in understory tropical forests are detrimental to the persistence of healthy ecosystems. Monitoring such invasions using Very High Resolution (VHR) satellite remote sensing has been shown to be valuable in designing management interventions for conservation of native habitats. Object-based classification methods are very helpful in identifying invasive plants in various habitats, by their inherent nature of imitating the ability of the human brain in pattern recognition. However, these methods have not been tested adequately in dense tropical mixed forests where invasion occurs in the understorey. This study compares a pixel-based and object-based classification method for mapping the understorey invasive shrub Lantana camara (Lantana) in a tropical mixed forest habitat in the Western Ghats biodiversity hotspot in India. Overall, a hierarchical approach of mapping the top canopy first, and then further processing for the understorey shrub, using measures such as texture and vegetation indices, proved effective in separating Lantana from other cover types. In the first method, we implement a simple parametric supervised classification for mapping cover types, and then process within these types for Lantana delineation. In the second method, we use an object-based segmentation algorithm to map cover types, and then perform further processing for separating Lantana. The improved ability of the object-based approach to delineate structurally distinct objects with distinctive spectral and spatial characteristics of their own, as well as with reference to their surroundings, allows for much greater flexibility in identifying invasive understorey shrubs among the complex vegetation of the tropical forest than the parametric classifier provides. Conservation practices in tropical mixed

  5. System For Retrieving Reusable Software

    NASA Technical Reports Server (NTRS)

    Van Warren, Lloyd; Beckman, Brian C.

    1993-01-01

    Encyclopedia of Software Components (ESC) is information-retrieval system of computer hardware and software providing access to generic reusable software tools and parts. Core of ESC is central tool base, which is repository of reusable software. It receives queries and submissions from user through local browser subsystem and receives authorized updates from maintenance subsystem. Sends retrievals to local browser subsystem and user's submissions to maintenance subsystem. Future versions will provide for advanced media, including voice and video, and will link system to database-management system. Programmers will not only retrieve software, but also modify, execute, and cross-link with other software.

  6. Exploring the relationship between object realism and object-based attention effects.

    PubMed

    Roque, Nelson; Boot, Walter R

    2015-09-01

    Visual attention prioritizes processing of locations in space, and evidence also suggests that the benefits of attention can be shaped by the presence of objects (object-based attention). However, the prevalence of object-based attention effects has been called into question recently by evidence from a large-sample study employing classic attention paradigms (Pilz et al., 2012). We conducted two experiments to explore factors that might determine when and if object-based attention effects are observed, focusing on the degree to which the concreteness and realism of objects might contribute to these effects. We adapted the classic attention paradigm first reported by Egly, Driver, and Rafal (1994) by replacing the abstract bar stimuli in some conditions with objects that were more concrete and familiar to participants: items of silverware. Furthermore, we varied the realism of these items of silverware, presenting either cartoon versions or photo-realistic versions. Contrary to predictions, increased realism did not increase the size of object-based effects. In fact, no clear object-based effects were observed in either experiment, consistent with previous failures to replicate these effects in similar paradigms. While object-based attention may exist, and may have important influences on how we parse the visual world, these and other findings suggest that the two-object paradigm typically relied upon to study object-based effects may not be the best paradigm to investigate these issues.

  7. Reuse Metrics for Object Oriented Software

    NASA Technical Reports Server (NTRS)

    Bieman, James M.

    1998-01-01

    One way to increase the quality of software products and the productivity of software development is to reuse existing software components when building new software systems. In order to monitor improvements in reuse, the level of reuse must be measured. In this NASA supported project we (1) derived a suite of metrics which quantify reuse attributes for object oriented, object based, and procedural software, (2) designed prototype tools to take these measurements in Ada, C++, Java, and C software, (3) evaluated the reuse in available software, (4) analyzed the relationship between coupling, cohesion, inheritance, and reuse, (5) collected object oriented software systems for our empirical analyses, and (6) developed quantitative criteria and methods for restructuring software to improve reusability.
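
The simplest of the reuse measures such a metric suite includes can be illustrated directly (the component data and the `reuse_level` helper are invented for this example, not the project's published metric definitions): reuse level as the fraction of a system's components that come from outside the system.

```python
def reuse_level(components):
    # components: list of (name, origin) pairs, origin "new" or "reused".
    reused = sum(1 for _, origin in components if origin == "reused")
    return reused / len(components)

system = [
    ("parser", "reused"),
    ("scheduler", "new"),
    ("logger", "reused"),
    ("ui", "new"),
]
level = reuse_level(system)   # 2 of 4 components reused -> 0.5
```

Tracking this ratio over successive releases is one way to monitor whether reuse is actually improving, which is the motivation the abstract gives for measuring it at all.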

  8. New Combined Object-Based Technique for Efficient Urban Classification Using WORLDVIEW-2 Data

    NASA Astrophysics Data System (ADS)

    Elsharkawy, A.; Elhabiby, M.; El-Sheimy, N.

    2012-07-01

    The advent of eight-band satellite missions such as WorldView-2 (WV-2) offers the chance to address and solve some of the traditional problems related to low spatial and/or spectral resolution, such as the lack of detail for certain features or the inability of conventional classifiers to detect some land-cover types because of missing spectral information and analysis techniques. High-resolution imagery is particularly well suited to urban applications. At the same time, the high spectral and spatial resolution of WorldView-2 data introduces challenges in the detailed mapping of urban features. Water, shadows, red roofs and concrete buildings exhibit significant spectral confusion, arising either from high similarity in spectral response (e.g. water and shadows) or from similarity in material type (e.g. red roofs and concrete buildings). This research study assesses the enhancement of classification accuracy and efficiency for a WorldView-2 data set using the full 8 bands, by integrating the output of a classification process using three band ratios with a second step involving an object-based technique for extracting shadows, water, vegetation, buildings, bare soil and asphalt roads. The second-generation curvelet transform is used in the second step, specifically to detect building boundaries, which aids the new band-ratio classification algorithm through efficient separation of the buildings. The combined technique is tested, and the preliminary results show great potential of the new bands in WV-2 imagery for separating confusing classes such as water and shadows; the testing is extended to the separation between bare soils and asphalt roads. The integrated band-ratio and curvelet-transform edge detection techniques increased the percentage of building detection by more than 30%.
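
Band-ratio classification of the kind described above can be sketched on a toy 8-band pixel array in WorldView-2 band order (coastal, blue, green, yellow, red, red-edge, NIR1, NIR2). The two ratios and the 0.3 thresholds below are common textbook indices chosen for illustration, not the paper's three published ratios.

```python
import numpy as np

def ndvi(img):
    # Normalized Difference Vegetation Index: (NIR1 - red) / (NIR1 + red).
    nir, red = img[..., 6], img[..., 4]
    return (nir - red) / (nir + red + 1e-9)

def ndwi(img):
    # Normalized Difference Water Index: (green - NIR1) / (green + NIR1).
    green, nir = img[..., 2], img[..., 6]
    return (green - nir) / (green + nir + 1e-9)

# One vegetation pixel and one water pixel (invented reflectances).
img = np.array([[[0.05, 0.06, 0.08, 0.07, 0.05, 0.20, 0.45, 0.45],
                 [0.12, 0.10, 0.08, 0.06, 0.04, 0.03, 0.02, 0.01]]])
veg_mask = ndvi(img) > 0.3
water_mask = ndwi(img) > 0.3
```

Ratio images like these feed the rule-based step: each mask isolates one class, and the remaining ambiguous classes (e.g. water vs. shadow) are passed to the object-based stage.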

  9. Estimation of Trees Outside Forests using IRS High Resolution data by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Pujar, G. S.; Reddy, P. M.; Reddy, C. S.; Jha, C. S.; Dadhwal, V. K.

    2014-11-01

    Assessment of trees outside forests (TOF) is widely recognized as a pivotal theme in sustainable natural resource management, due to their role in offering a variety of goods, such as timber, fruits and fodder, as well as services like water, carbon and biodiversity. Forest conservation efforts involving reduction of deforestation and degradation may have to rely increasingly on the alternatives provided by TOF in catering to economic demands at forest edges. Spatial information systems involving imaging, analysis and monitoring to achieve objectives under protocols like REDD+ require incorporation of information from areas under forest as well as trees outside forests, to aid holistic decisions. From this perspective, automation in retrieving information on the area under trees growing outside forests, using high resolution imaging, is essential so that the measurement and verification of extant carbon pools are strengthened. Retrieval of this tree cover is demonstrated herewith, using object-based image analysis, at a forest edge of the dry deciduous forests of the Eastern Ghats, in Khammam district of Telangana state, India. IRS high resolution panchromatic 2.5 m data (Cartosat-1, orthorectified), used in tandem with 5.8 m multispectral LISS IV data, discerns tree crowns and clusters at a detailed scale, and hence a semi-automated approach is attempted to classify TOF from a pair of images from a relatively crop- and cloud-free season. The object-based image analysis (OBIA) approach, as implemented in the commercial suite eCognition (Ver 8.9), consists of segmentation at a user-defined scale followed by application of a wide range of spectral, textural and object-geometry parameters for classification. The software offers an innovative blend of raster and vector features that can be juxtaposed flexibly across scales, horizontally or vertically. Segmentation was carried out at multiple scales to discern first the major land covers, such as forest, water and agriculture, followed by that at a finer

  10. Sandia National Laboratories Advanced Simulation and Computing (ASC) : appraisal method for the implementation of the ASC software quality engineering practices: Version 1.0.

    SciTech Connect

    Turgeon, Jennifer; Minana, Molly A.

    2008-02-01

    This document provides a guide to the process of conducting software appraisals under the Sandia National Laboratories (SNL) ASC Program. The goal of this document is to describe a common methodology for planning, conducting, and reporting the results of software appraisals, thereby enabling: development of an objective baseline on implementation of the software quality engineering (SQE) practices identified in the ASC Software Quality Plan across the ASC Program; feedback from project teams on SQE opportunities for improvement; identification of strengths and opportunities for improvement for individual project teams; and guidance to the ASC Program on the focus of future SQE activities. Document contents include process descriptions, templates to promote consistent conduct of appraisals, and an explanation of the relationship of this procedure to the SNL ASC software program.

  11. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  12. Proprietary software

    NASA Technical Reports Server (NTRS)

    Marnock, M. J.

    1971-01-01

    The protection of intellectual property by a patent, a copyright, or trade secrets is reviewed. The present and future use of computers and software are discussed, along with the governmental uses of software. The popularity of contractual agreements for sale or lease of computer programs and software services is also summarized.

  13. The influence of object similarity and orientation on object-based cueing.

    PubMed

    Hein, Elisabeth; Blaschke, Stefan; Rolke, Bettina

    2017-01-01

    Responses to targets that appear at a noncued position within the same object (invalid-same) compared to a noncued position at an equidistant different object (invalid-different) tend to be faster and more accurate. These cueing effects have been taken as evidence that visual attention can be object based (Egly, Driver, & Rafal, Journal of Experimental Psychology: General, 123, 161-177, 1994). Recent findings, however, have shown that the object-based cueing effect is influenced by object orientation, suggesting that the cueing effect might be due to a more general facilitation of attentional shifts across the horizontal meridian (Al-Janabi & Greenberg, Attention, Perception, & Psychophysics, 1-17, 2016; Pilz, Roggeveen, Creighton, Bennet, & Sekuler, PLOS ONE, 7, e30693, 2012). The aim of this study was to investigate whether the object-based cueing effect is influenced by object similarity and orientation. According to the object-based attention account, objects that are less similar to each other should elicit stronger object-based cueing effects independent of object orientation, whereas the horizontal meridian theory would not predict any effect of object similarity. We manipulated object similarity by using a color (Exp. 1, Exp. 2A) or shape change (Exp. 2B) to distinguish two rectangles in a variation of the classic two-rectangle paradigm (Egly et al., 1994). We found that the object-based cueing effects were influenced by the orientation of the rectangles and strengthened by object dissimilarity. We suggest that object-based cueing effects are strongly affected by the facilitation of attention along the horizontal meridian, but that they also have an object-based attentional component, which is revealed when the dissimilarity between the presented objects is accentuated.
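
The object-based cueing effect described above is simple arithmetic on reaction times: mean RT at an invalidly cued location on a *different* object minus mean RT at an equidistant invalidly cued location on the *same* object. The RT values (in ms) below are invented for illustration.

```python
from statistics import mean

# Reaction times for the two invalid-cue conditions (hypothetical data).
invalid_same = [412, 405, 420, 398, 415]        # target on the cued object
invalid_different = [431, 428, 440, 425, 436]   # target on the other object

# A positive effect indicates faster within-object than between-object shifts.
object_based_effect = mean(invalid_different) - mean(invalid_same)
```

In the two-rectangle paradigm, a reliably positive value of this difference is what is taken as evidence for object-based attention; the studies discussed here disagree on how reliably it appears.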

  14. Object-based glacier mapping in the Hohe Tauern Mountains of Austria

    NASA Astrophysics Data System (ADS)

    Aubrey Robson, Benjamin; Hölbling, Daniel; Nuth, Christopher; Olaf Dahl, Svein

    2015-04-01

    Up-to-date and frequent glacier outlines are a necessity for many applications within glaciology. While multispectral band ratios are a comparatively robust method for automatically classifying clean ice on a pixel-based level, semi- or fully automated glacier inventories are complicated by spectral similarities between classes such as debris-covered glacier ice and the surrounding bedrock and moraines, or between clean ice and turbid pro-glacial water. Most glacier inventories therefore require a great deal of manual correction. Here, we present a glacier inventory of the Hohe Tauern Mountains in the Central Eastern Alps in Austria. Numerous glaciers, including the Pasterze Glacier, which is the longest glacier in the Eastern Alps, shape this mountainous region. The mapping of glaciers is based on object-based image analysis (OBIA) using both high resolution (HR) satellite imagery from Landsat 8 and a digital elevation model (DEM) derived from Airborne Laser Scanning (ALS) data. We automatically classify clean ice, debris-covered ice and glacial lakes. Image objects are created by applying the multiresolution segmentation algorithm implemented in the eCognition (Trimble) software. The resulting image objects are classified using a combination of various features, whereby a focus was put on the selection of robust features that are ideally applicable for mapping large areas, for example spectral indices such as the Normalized Differenced Vegetation Index (NDVI), Normalized Difference Snow and Ice Index (NDSI), Normalised Difference Water Index (NDWI), Land and Water Mask (LWK) and a ratio of the SWIR and NIR spectral bands. 
The ability of OBIA to incorporate optical and elevation data and to individually address data-specific characteristics helps differentiate debris-covered ice from surrounding features not only by using spectral properties but also based on morphological and topographic parameters, while the inclusion of rulesets relying on contextuality, size
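The spectral indices named above are simple per-pixel band arithmetic. A minimal sketch of the NDSI and SWIR/NIR ratio follows; the reflectance values and the 0.4 threshold are illustrative assumptions, not values from the study:

```python
import numpy as np

# Illustrative reflectance values for two pixels per band; real inputs
# would be Landsat 8 surface-reflectance rasters.
green = np.array([0.55, 0.20])
nir   = np.array([0.40, 0.30])
swir  = np.array([0.08, 0.25])

def ndsi(green, swir):
    """Normalized Difference Snow and Ice Index."""
    return (green - swir) / (green + swir)

def swir_nir_ratio(swir, nir):
    """SWIR/NIR band ratio; low values suggest clean ice."""
    return swir / nir

# Threshold-based clean-ice mask (0.4 is an illustrative cutoff).
ice_mask = ndsi(green, swir) > 0.4
```

In an OBIA workflow these per-pixel values would be averaged per image object before thresholding, rather than masked pixel by pixel.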

  15. A comparison of pixel-based and object-based approaches for land use land cover classification in semi-arid areas, Sudan

    NASA Astrophysics Data System (ADS)

    Adam, H. E.; Csaplovics, E.; Elhaja, M. E.

    2016-06-01

    This paper compares the application of pixel-based and object-based approaches to land use land cover classification in semi-arid areas in Sudan. The second aim is to assess the accuracy of classification for each approach. The study was conducted in the gum arabic belt in North Kordofan State, which is affected by modifications in the conditions and composition of vegetation cover trends. The study used an ASTER L1B registered-radiance-at-the-sensor image acquired on 19.10.2010. The image was radiometrically corrected using ENVI-FLAASH software. A subset with an area of 40,880 ha was created. The image classification (pixel-based and object-based) and accuracy assessment were conducted. A total of 47 GCPs was surveyed and used in accuracy assessment using ERDAS 9.1. Image segmentation was implemented using Definiens eCognition 7.1 software. Segmentation level 4 with a scale parameter of 25 was selected for classification based on colour and form homogeneity. Land use land cover classes were derived by classification using the nearest neighbour classifier with membership functions (fuzzy logic) for each class. The land use land cover distribution in the area is 20% for forest dominated by Acacia senegal and 1.50% for residential area under both classification methods, while the bare and farm land, grass and bush land, and mixed woodland classes cover 6.69% and 1.63%, 18.62% and 15.16%, and 53% and 61% for the pixel-based and object-based methods, respectively. The overall accuracy and Kappa statistic of the classifications produced by the pixel-based and object-based approaches were 72.92% and 54.17%, and 0.6259 and 0.3810, respectively. The pixel-based approach performed better than the object-based approach for land use land cover classification in the semi-arid gum arabic belt.
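The overall accuracy and Kappa statistic reported above are both derived from a confusion (error) matrix. A hedged sketch of the two computations; the matrix values are made up for illustration:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (diagonal / total)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2-class error matrix (rows: reference, columns: classified).
cm = np.array([[30, 5],
               [10, 55]])
acc, k = overall_accuracy(cm), kappa(cm)
```

Kappa is lower than overall accuracy whenever some agreement would be expected by chance alone, which is why the paper reports both.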

  16. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    PubMed

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.

  17. The modulation of spatial congruency by object-based attention: analysing the "locus" of the modulation.

    PubMed

    Luo, Chunming; Lupiáñez, Juan; Funes, María Jesús; Fu, Xiaolan

    2011-12-01

    Earlier studies have demonstrated that spatial cueing differentially reduces stimulus-stimulus congruency (e.g., spatial Stroop) interference but not stimulus-response congruency (e.g., Simon; e.g., Lupiáñez & Funes, 2005). This spatial cueing modulation over spatial Stroop seems to be entirely attributable to object-based attention (e.g., Luo, Lupiáñez, Funes, & Fu, 2010). In the present study, two experiments were conducted to further explore whether the cueing modulation of spatial Stroop is object based and/or space based and to analyse the "locus" of this modulation. In Experiment 1, we found that the cueing modulation over spatial Stroop is entirely object based, independent of stimulus-response congruency. In Experiment 2, we observed that the modulation of object-based attention over the spatial Stroop only occurred at a short cue-target interval (i.e., stimulus onset asynchrony; SOA), whereas the stimulus-response congruency effect was not modulated either by object-based or by location-based attentional cueing. The overall pattern of results suggests that the spatial cueing modulation over spatial Stroop arises from object-based attention and occurs at the perceptual stage of processing.

  18. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  19. Target-object integration, attention distribution, and object orientation interactively modulate object-based selection.

    PubMed

    Al-Janabi, Shahd; Greenberg, Adam S

    2016-10-01

    The representational basis of attentional selection can be object-based. Various studies have suggested, however, that object-based selection is less robust than spatial selection across experimental paradigms. We sought to examine the manner by which the following factors might explain this variation: Target-Object Integration (targets 'on' vs. part 'of' an object), Attention Distribution (narrow vs. wide), and Object Orientation (horizontal vs. vertical). In Experiment 1, participants discriminated between two targets presented 'on' an object in one session, or presented as a change 'of' an object in another session. There was no spatial cue-thus, attention was initially focused widely-and the objects were horizontal or vertical. We found evidence of object-based selection only when targets constituted a change 'of' an object. Additionally, object orientation modulated the sign of object-based selection: We observed a same-object advantage for horizontal objects, but a same-object cost for vertical objects. In Experiment 2, an informative cue preceded a single target presented 'on' an object or as a change 'of' an object (thus, attention was initially focused narrowly). Unlike in Experiment 1, we found evidence of object-based selection independent of target-object integration. We again found that the sign of selection was modulated by the objects' orientation. This result may reflect a meridian effect, which emerged due to anisotropies in the cortical representations when attention is oriented endogenously. Experiment 3 revealed that object orientation did not modulate object-based selection when attention was oriented exogenously. Our findings suggest that target-object integration, attention distribution, and object orientation modulate object-based selection, but only in combination.

  20. Using digital photographs and object-based image analysis to estimate percent ground cover in vegetation plots

    USGS Publications Warehouse

    Luscier, J.D.; Thompson, W.L.; Wilson, J.M.; Gorham, B.E.; Dragut, L.D.

    2006-01-01

    Ground vegetation influences habitat selection and provides critical resources for survival and reproduction of animals. Researchers often employ visual methods to estimate ground cover, but these approaches may be prone to observer bias. We therefore evaluated a method using digital photographs of vegetation to objectively quantify percent ground cover of grasses, forbs, shrubs, litter, and bare ground within 90 plots of 2 m². We carried out object-based image analysis, using a software program called eCognition, to divide photographs into different vegetation classes (based on similarities among neighboring pixels) to estimate percent ground cover for each category. We used the Kappa index of agreement (KIA) to quantify correctly classified, randomly selected segments of all images. Our KIA values indicated strong agreement (> 80%) of all vegetation categories, with an average of 90-96% (SE = 5%) of shrub, litter, forb, and grass segments classified correctly. We also created artificial plots with known percentages of each vegetation category to evaluate the accuracy of software predictions. Observed differences between true cover and eCognition estimates for each category ranged from 1 to 4%. This technique provides a repeatable and reliable way to estimate percent ground cover that allows quantification of classification accuracy. © The Ecological Society of America.
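Once a photograph has been segmented and classified, the percent-cover estimate itself is a pixel count per class. A minimal sketch; the class labels and the tiny label raster are hypothetical, not the study's data:

```python
import numpy as np

labels = {0: "grass", 1: "forb", 2: "shrub", 3: "litter", 4: "bare"}

# Hypothetical classified plot: one integer class code per pixel.
classified = np.array([[0, 0, 1, 3],
                       [0, 2, 2, 3],
                       [4, 0, 0, 1]])

def percent_cover(classified, labels):
    """Percent of plot pixels assigned to each ground-cover class."""
    total = classified.size
    return {name: 100.0 * np.count_nonzero(classified == code) / total
            for code, name in labels.items()}

cover = percent_cover(classified, labels)
```

Because every pixel carries exactly one label, the per-class percentages sum to 100, which is what makes the estimate directly comparable to visual cover calls.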

  1. The effects of visual search efficiency on object-based attention.

    PubMed

    Greenberg, Adam S; Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2015-07-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41-51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161-177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention.

  2. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for post-event imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high-resolution, post-event imagery.

  3. Software Sleuth

    NASA Technical Reports Server (NTRS)

    2000-01-01

    NASA's need to trace mistakes to their source in order to eliminate them in the future has resulted in software known as Root Cause Analysis (RoCA). Fair, Isaac & Co., Inc. has applied RoCA software, originally developed under an SBIR contract with Kennedy, to its predictive software technology. RoCA can generate graphic reports to make analysis of problems easier and more efficient.

  4. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not, for the present, solve the safety problem. A new attitude is required: looking at what you do NOT want the software to do along with what you want it to do, and assuming that things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially eliminating complexity, can be very helpful.

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Beezer, Robert A.; And Others

    1988-01-01

    Reviews for three software packages are given. Those packages are: Linear Algebra Computer Companion; Probability and Statistics Demonstrations and Tutorials; and Math Utilities: CURVES, SURFS, AND DIFFS. (PK)

  6. Detection of object-based manipulation by the statistical features of object contour.

    PubMed

    Richao, Chen; Gaobo, Yang; Ningbo, Zhu

    2014-03-01

    Object-based manipulations, such as adding or removing objects in digital video, are usually malicious forgery operations. Compared with conventional double MPEG compression or frame-based tampering, it makes more sense to detect these object-based manipulations because they might directly affect our understanding of the video content. In this paper, a passive video forensics scheme is proposed for object-based forgery operations. After extracting adjustable-width areas around the object boundary, several statistical features, such as the moment features of detailed wavelet coefficients and the average gradient of each colour channel, are obtained and input into a support vector machine (SVM) as feature vectors for the classification of natural objects and forged ones. Experimental results on several video sequences with static backgrounds show that the proposed approach can achieve a correct-detection accuracy of 70% to 95%.
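As a rough illustration of the final classification step, the sketch below substitutes a nearest-centroid rule for the paper's SVM and uses synthetic stand-ins for the boundary-area feature vectors; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for boundary-area feature vectors (the paper uses
# wavelet-coefficient moments and per-channel average gradients).
natural = rng.normal(0.0, 0.5, size=(40, 6))
forged  = rng.normal(2.0, 0.5, size=(40, 6))

# Nearest-centroid rule as a minimal stand-in for the SVM classifier.
c_nat  = natural.mean(axis=0)
c_forg = forged.mean(axis=0)

def classify(x):
    """Label a feature vector by its nearer class centroid."""
    if np.linalg.norm(x - c_forg) < np.linalg.norm(x - c_nat):
        return "forged"
    return "natural"
```

A real reimplementation would train an SVM on labeled boundary regions; the point here is only the shape of the pipeline: features in, binary natural/forged label out.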

  7. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    requirements. This allows the projects leeway to meet these requirements in many forms that best suit a particular project's needs and safety risk. In other words, it tells the project what to do, not how to do it. This update also incorporated advances in the state of the practice of software safety from academia and private industry. It addresses some of the more common issues now facing software developers in the NASA environment such as the use of Commercial-Off-the-Shelf Software (COTS), Modified OTS (MOTS), Government OTS (GOTS), and reused software. A team from across NASA developed the update and it has had both NASA-wide internal reviews by software engineering, quality, safety, and project management. It has also had expert external review. This presentation and paper will discuss the new NASA Software Safety Standard, its organization, and key features. It will start with a brief discussion of some NASA mission failures and incidents that had software as one of their root causes. It will then give a brief overview of the NASA Software Safety Process. This will include an overview of the key personnel responsibilities and functions that must be performed for safety-critical software.

  9. Object-based Conditional Random Fields for Road Extraction from Remote Sensing Image

    NASA Astrophysics Data System (ADS)

    Huang, Zhijian; Xu, Fanjiang; Lu, Lei; Nie, Hongshan

    2014-03-01

    To make full use of spatially contextual information and topological information in the procedure of Object-based Image Analysis (OBIA), an object-based conditional random field is proposed and used for road extraction. Objects are produced with an initial segmentation, and then their neighbourhoods are constructed. Each object is represented by three kinds of features: the colour, the gradient histogram, and the texture. Formulating road extraction as a binary classification problem, a conditional random field model is learned and used for inference. The experimental results demonstrate that the proposed method is effective.
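Before any CRF learning, each segment must be reduced to one feature vector. A hypothetical sketch of that concatenation step; the descriptors below are simple stand-ins for the paper's colour, gradient-histogram, and texture features:

```python
import numpy as np

# Hypothetical segment: 50 pixels of an RGB image, values in [0, 1].
segment = np.random.default_rng(1).uniform(0.0, 1.0, size=(50, 3))

def object_features(segment):
    """Concatenate colour, gradient-histogram and texture descriptors."""
    colour = segment.mean(axis=0)                       # mean R, G, B
    grads = np.abs(np.diff(segment, axis=0))            # crude gradients
    grad_hist, _ = np.histogram(grads, bins=4, range=(0.0, 1.0))
    texture = np.array([segment.std()])                 # variance-style cue
    return np.concatenate([colour, grad_hist / grads.size, texture])

fv = object_features(segment)
```

The CRF then scores each object's vector jointly with its neighbours' labels, which is where the contextual and topological information enters.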

  10. JTpack90: A parallel, object-based, Fortran 90 linear algebra package

    SciTech Connect

    Turner, J.A.; Kothe, D.B.; Ferrell, R.C.

    1997-03-01

    The authors have developed an object-based linear algebra package, currently with emphasis on sparse Krylov methods, driven primarily by the needs of Telluride, the Los Alamos National Laboratory parallel unstructured-mesh casting simulation tool. Support for a number of sparse storage formats, methods, and preconditioners has been implemented, driven primarily by application needs. They describe the object-based Fortran 90 approach, which enhances maintainability, performance, and extensibility; the parallelization approach, which uses a new portable gather/scatter library (PGSLib); and current capabilities and future plans, and they present preliminary performance results on a variety of platforms.
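JTpack90 itself is Fortran 90; as a language-neutral illustration of the Krylov-method family it emphasizes, here is a minimal unpreconditioned conjugate-gradient solver (a sketch only, not JTpack90's API, and dense rather than sparse for brevity):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

A package like the one described would wrap the matrix, storage format, and preconditioner behind derived types so that the iteration above never touches storage details directly.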

  11. The Analysis of Object-Based Change Detection in Mining Area: a Case Study with Pingshuo Coal Mine

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zhou, W.; Li, Y.

    2017-09-01

    Accurate information on mining land use and land cover change is crucial for monitoring and environmental change studies. In this paper, a RapidEye remote sensing image (2012) and a SPOT7 remote sensing image (2015) of the Pingshuo mining area are selected to monitor changes by combining object-based classification with the change vector analysis method; we also used R for mining land classification from the high-resolution remote sensing imagery, demonstrating the feasibility and flexibility of open-source software. The results show that (1) the classification of reclaimed mining land has higher precision: the overall accuracy and kappa coefficient of the classification of the change-region map were 86.67% and 89.44%. Object-based classification and change vector analysis, which can considerably improve monitoring accuracy, can therefore be used to monitor mining land, especially reclaimed mining land; (2) the vegetation area changed from 46% to 40% of the total area from 2012 to 2015, and most of the lost vegetation was transformed into arable land. The sum of arable land and vegetation area increased from 51% to 70%; meanwhile, built-up land increased to a certain degree, and part of the water area was transformed into arable land, although neither change is large. These results illustrate the transformation of the reclaimed mining area; at the same time, some land is still being converted to mining land, showing that the mine is still operating and that mining land use and land cover are a dynamic process.
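Change vector analysis reduces to differencing each pixel's band vector between the two dates and thresholding the magnitude. A minimal sketch; the band values and the 0.1 threshold are illustrative assumptions:

```python
import numpy as np

# Hypothetical per-pixel band values (e.g. red, NIR) at two dates.
t2012 = np.array([[0.10, 0.40],   # pixel 1
                  [0.30, 0.35]])  # pixel 2
t2015 = np.array([[0.12, 0.42],
                  [0.05, 0.10]])

diff = t2015 - t2012
magnitude = np.linalg.norm(diff, axis=1)

# Pixels whose change vector exceeds a threshold are flagged as changed
# (0.1 is an illustrative threshold, not the paper's).
changed = magnitude > 0.1
```

In the object-based variant described above, the differencing would be applied to per-object mean band values rather than to individual pixels.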

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen

    1988-01-01

    Presents reviews of six computer software programs for teaching science. Provides the publisher, grade level, cost, and descriptions of software, including: (1) "Recycling Logic"; (2) "Introduction to Biochemistry"; (3) "Food for Thought"; (4) "Watts in a Home"; (5) "Geology in Action"; and (6)…

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Miller, Anne, Ed.; Radziemski, Cathy, Ed.

    1988-01-01

    Reviews two software packages for the Macintosh series. "Course Builder 2.0," a courseware authoring system, allows the user to create programs which stand alone and may be used independently in the classroom. "World Builder," an artificial intelligence software package, allows creative thinking, problem-solving, and…

  16. Software Bridge

    NASA Technical Reports Server (NTRS)

    1995-01-01

    I-Bridge is a commercial version of software developed by I-Kinetics under a NASA Small Business Innovation Research (SBIR) contract. The software allows users of Windows applications to gain quick, easy access to databases, programs and files on UNIX services. Information goes directly onto spreadsheets and other applications; users need not manually locate, transfer and convert data.

  17. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews six software packages for the Apple II family. Programs reviewed include "Science Courseware: Earth Science Series"; "Heat and Light"; "In Search of Space: Introduction to Model Rocketry"; "Drug Education Series: Drugs--Their Effects on You"; "Uncertainties and Measurement"; and "Software Films: Learning about Science Series," which…

  18. Software Repository

    NASA Technical Reports Server (NTRS)

    Merwarth, P. D.

    1983-01-01

    The Common Software Module Repository (CSMR) is a computerized library system with high product and service visibility to potential users. The online capabilities of the system allow both the librarian and the user to interact with the library. The librarian is responsible for maintaining the information in the CSMR library. The user searches the library to locate software modules that meet his or her current needs.

  19. Is Object-Based Attention Mandatory? Strategic Control over Mode of Attention

    ERIC Educational Resources Information Center

    Yeari, Menahem; Goldsmith, Morris

    2010-01-01

    Is object-based attention mandatory or under strategic control? In an adapted spatial cuing paradigm, participants focused initially on a central arrow cue that was part of a perceptual group (Experiment 1) or a uniformly connected object (Experiment 2), encompassing one of the potential target locations. The cue always pointed to an opposite,…

  20. The Object-Based Simon Effect: Grasping Affordance or Relative Location of the Graspable Part?

    ERIC Educational Resources Information Center

    Cho, Dongbin; Proctor, Robert W.

    2010-01-01

    Reaction time is often shorter when the irrelevant graspable handle of an object corresponds with the location of a keypress response to the relevant attribute than when it does not. This object-based Simon effect has been attributed to an affordance for grasping the handle with the hand to the same side. Because a grasping affordance should…

  1. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    NASA Astrophysics Data System (ADS)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  2. Feature selection methods for object-based classification of sub-decimeter resolution digital aerial imagery

    USDA-ARS?s Scientific Manuscript database

    Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...

  3. Using object-based image analysis to guide the selection of field sample locations

    USDA-ARS?s Scientific Manuscript database

    One of the most challenging tasks for resource management and research is designing field sampling schemes to achieve unbiased estimates of ecosystem parameters as efficiently as possible. This study focused on the potential of fine-scale image objects from object-based image analysis (OBIA) to be u...

  6. Twenty-third water reactor safety information meeting: Volume 2, Human factors research; Advanced I and C hardware and software; Severe accident research; Probabilistic risk assessment topics; Individual plant examination: Proceedings

    SciTech Connect

    Monteleone, S.

    1996-03-01

    This three-volume report contains papers presented at the Twenty-Third Water Reactor Safety Information Meeting held at the Bethesda Marriott Hotel, Bethesda, Maryland, October 23-25, 1995. The papers are printed in the order of their presentation in each session and describe progress and results of programs in nuclear safety research conducted in this country and abroad. Foreign participation in the meeting included papers presented by researchers from France, Italy, Japan, Norway, Russia, Sweden, and Switzerland. This document, Volume 2, presents topics in human factors research; advanced instrumentation and control hardware and software; severe accident research; probabilistic risk assessment; and individual plant examination. Individual papers have been cataloged separately.

  7. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
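The baseline that MPk-NN extends is an ordinary k-NN majority vote over object feature vectors. A minimal sketch of that baseline; the multiple-point probability weighting itself is not reproduced here, and the training samples are invented:

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    """Plain k-NN vote; MPk-NN additionally weights these votes by
    multiple-point probabilities drawn from a training image."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Hypothetical object feature vectors and land cover labels.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = ["water", "water", "urban", "urban"]

label = knn_classify(X_train, y_train, np.array([0.95, 1.0]), k=3)
```

Replacing the uniform vote with spatially informed weights is what lets the method borrow class-transition structure from the pre-classified training image.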

  8. Bindings in working memory: The role of object-based attention.

    PubMed

    Gao, Zaifeng; Wu, Fan; Qiu, Fangfang; He, Kaifeng; Yang, Yue; Shen, Mowei

    2017-02-01

    Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, with the focus on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM requires more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi:10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested 4 new bindings that had been suggested to require no more attention than their constituent features in the WM maintenance phase: the two constituent features of a binding were stored in different WM modules (cross-module binding, Experiment 1), drawn from auditory and visual modalities (cross-modal binding, Experiment 2), or temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6) separated. In the critical condition, we added a secondary object feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance to a larger degree than performance on the constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.

  9. Software Smarts

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under an SBIR (Small Business Innovative Research) contract with Johnson Space Center, Knowledge Based Systems Inc. (KBSI) developed an intelligent software environment for modeling and analyzing mission planning activities, simulating behavior, and, using a unique constraint propagation mechanism, updating plans with each change in mission planning activities. KBSI developed this technology into a commercial product, PROJECTLINK, a two-way bridge between PROSIm, KBSI's process modeling and simulation software and leading project management software like Microsoft Project and Primavera's SureTrak Project Manager.

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six computer software packages including "Lunar Greenhouse,""Dyno-Quest,""How Weather Works,""Animal Trackers,""Personal Science Laboratory," and "The Skeletal and Muscular Systems." Availability, functional, and hardware requirements are discussed. (CW)

  11. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Eugene T., Ed.

    1988-01-01

    Presents reviews by classroom teachers of software for teaching science. Includes material on the work of geologists, genetics, earth science, classification of living things, astronomy, endangered species, skeleton, drugs, and heartbeat. Provides information on availability and equipment needed. (RT)

  12. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane

    1990-01-01

    Reviews two programs: (1) "The Weather Machine" on understanding weather and weather forecasting and (2) "The Mystery of the Hotel Victoria" on problem solving in mathematics. Presents the descriptions, advantages, and weaknesses of the software. (YP)

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Provides a review of four science software programs. Includes topics such as plate tectonics, laboratory experiment simulations, the human body, and light and temperature. Contains information on ordering and reviewers' comments. (ML)

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are three computer software packages including "Martin Luther King, Jr.: Instant Replay of History,""Weeds to Trees," and "The New Print Shop, School Edition." Discussed are hardware requirements, costs, grade levels, availability, emphasis, strengths, and weaknesses. (CW)

  15. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Reviews seven computer software programs that can be used in science education programs. Describes courseware which deals with muscles and bones, terminology, classifying animals without backbones, molecular structures, drugs, genetics, and shaping the earth's surface. (TW)

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1988

    1988-01-01

    Presents reviews of six software packages. Includes (1) "Plain Vanilla Statistics"; (2) "MathCAD 2.0"; (3) "GrFx"; (4) "Trigonometry"; (5) "Algebra II"; (6) "Algebra Drill and Practice I, II, and III." (PK)

  20. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Provides reviews of six computer software programs designed for use in elementary science education programs. Provides the title, publisher, grade level, and descriptions of courseware on ant farms, drugs, genetics, beachcombing, matter, and test generation. (TW)

  3. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1988

    1988-01-01

    Provides reviews of four educational software packages which deal with problem solving, mathematics, history, and reading comprehension. Includes information about appropriate hardware, grade level, the publisher and purchasing. Contains comments about the strengths and weaknesses of each program. (TW)

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Dwyer, Donna; And Others

    1989-01-01

    Reviewed are seven software packages for Apple and IBM computers. Included are: "Toxicology"; "Science Corner: Space Probe"; "Alcohol and Pregnancy"; "Science Tool Kit Plus"; "Computer Investigations: Plant Growth"; "Climatrolls"; and "Animal Watch: Whales." (CW)

  7. Reusable Software.

    DTIC Science & Technology

    1984-03-01

    overseeing reusable software, the Reusable Software Organization (RUSO). This author does not feel at this time that establishment of such a specific...49] have not been accompanied by establishment of RUSO-like activities. There is need, however, for assurance that functions which a RUSO might be...assurance 6. establishment and maintenance of reuse archival facilities and activities. Actual establishment of a RUSO is best dictated by size of the

  8. Weed mapping in early-season maize fields using object-based analysis of unmanned aerial vehicle (UAV) images.

    PubMed

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images, using a six-band multispectral camera (visible and near-infrared range), with the ultimate objective of generating a weed map of an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide applications and other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance.
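
    Phase 3 of the procedure above, turning per-cell weed coverage into a categorical infestation map, can be sketched as follows. The three categories mirror those reported in the abstract, but the thresholds and function name are illustrative assumptions, not taken from the paper's code.

```python
def weed_map(weed_fraction_grid, low_thresh=0.0, high_thresh=0.05):
    """Sketch of grid-based weed mapping: convert per-cell weed coverage
    fractions into three categories (weed-free, low <5%, high >=5%)."""
    categories = []
    for row in weed_fraction_grid:
        cat_row = []
        for frac in row:
            if frac <= low_thresh:
                cat_row.append('weed-free')
            elif frac < high_thresh:
                cat_row.append('low (<5%)')
            else:
                cat_row.append('high (>=5%)')
        categories.append(cat_row)
    return categories
```

    A map like this is what would feed the herbicide-requirement calculation mentioned at the end of the abstract.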

  9. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images

    PubMed Central

    Peña, José Manuel; Torres-Sánchez, Jorge; de Castro, Ana Isabel; Kelly, Maggi; López-Granados, Francisca

    2013-01-01

    The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images, using a six-band multispectral camera (visible and near-infrared range), with the ultimate objective of generating a weed map of an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with 86% overall accuracy. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide applications and other weed operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance. PMID:24146963

  10. Object-based benthic habitat mapping in the Florida Keys from hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun; Selch, Donna; Xie, Zhixiao; Roberts, Charles; Cooper, Hannah; Chen, Ge

    2013-12-01

    Accurate mapping of benthic habitats in the Florida Keys is essential in developing effective management strategies for this unique coastal ecosystem. In this study, we evaluated the applicability of hyperspectral imagery collected from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) for benthic habitat mapping in the Florida Keys. An overall accuracy of 84.3% and 86.7% was achieved respectively for a group-level (3-class) and code-level (12-class) classification by integrating object-based image analysis (OBIA), hyperspectral image processing methods, and machine learning algorithms. Accurate and informative object-based benthic habitat maps were produced. Three commonly used image correction procedures (atmospheric, sun-glint, and water-column corrections) were proved unnecessary for small area mapping in the Florida Keys. Inclusion of bathymetry data in the mapping procedure did not increase the classification accuracy. This study indicates that hyperspectral systems are promising in accurate benthic habitat mapping at a fine detail level.

  11. Developmental Changes in Mental Rotation: A Dissociation Between Object-Based and Egocentric Transformations.

    PubMed

    Kaltner, Sandra; Jansen, Petra

    2016-01-01

    The present study was conducted to investigate developmental changes in mental rotation performance. We compared children, adults, and older adults regarding their performance in object-based and egocentric transformations. Both children and older adults showed higher overall reaction times compared to adults. Results were interpreted against the background of impaired working memory capacity in both children and older adults. Since age-related changes in working memory are mediated by age differences in cognitive processing speed, cognitive speed is presumed to be the underlying factor. Regarding the two types of transformation, an advantage of egocentric over object-based transformations of human figures was found only in adults, which led us to tentatively propose that children and older adults show deficits in perspective taking compared to adults.

  12. Developmental Changes in Mental Rotation: A Dissociation Between Object-Based and Egocentric Transformations

    PubMed Central

    Kaltner, Sandra; Jansen, Petra

    2016-01-01

    The present study was conducted to investigate developmental changes in mental rotation performance. We compared children, adults, and older adults regarding their performance in object-based and egocentric transformations. Both children and older adults showed higher overall reaction times compared to adults. Results were interpreted against the background of impaired working memory capacity in both children and older adults. Since age-related changes in working memory are mediated by age differences in cognitive processing speed, cognitive speed is presumed to be the underlying factor. Regarding the two types of transformation, an advantage of egocentric over object-based transformations of human figures was found only in adults, which led us to tentatively propose that children and older adults show deficits in perspective taking compared to adults. PMID:27512525

  13. Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Hölbling, Daniel

    2015-05-01

    This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.

  14. Object-based inversion of crosswell radar tomography data to monitor vegetable oil injection experiments

    USGS Publications Warehouse

    Lane, Jr., John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.

    2004-01-01

    Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions. A principal advantage of
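
    The object-based parameterization described above reduces the inversion to a few unknowns per rectangle. A minimal linear sketch, assuming straight rays, known path lengths, and only two rectangles, solves the 2x2 normal equations for the slowness change in each rectangle; the actual inversion is nonlinear and constrained, so this is illustrative only.

```python
def solve_slowness_changes(paths, dt):
    """Least-squares estimate of slowness change in two rectangles from
    ray path lengths (illustrative linear version of the object-based
    parameterization).

    paths : list of (L1, L2) path lengths of each ray through
            rectangles 1 and 2
    dt    : observed travel-time change per ray

    Solves the normal equations (A^T A) x = A^T dt for the 2x2 case.
    """
    s11 = sum(l1 * l1 for l1, _ in paths)
    s12 = sum(l1 * l2 for l1, l2 in paths)
    s22 = sum(l2 * l2 for _, l2 in paths)
    b1 = sum(l1 * t for (l1, _), t in zip(paths, dt))
    b2 = sum(l2 * t for (_, l2), t in zip(paths, dt))
    det = s11 * s22 - s12 * s12
    # Cramer's rule for the two slowness changes
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)
```

    With more rays than rectangles the system is overdetermined, which is exactly the property the abstract cites as reducing the need for prior information.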

  15. Goal-directed attention alters the tuning of object-based representations in extrastriate cortex

    PubMed Central

    Chen, Anthony J.-W.; Britton, Michael; Turner, Gary R.; Vytlacil, Jason; Thompson, Todd W.; D'Esposito, Mark

    2012-01-01

    Humans survive in environments that contain a vast quantity and variety of visual information. All items of perceived visual information must be represented within a limited number of brain networks. The human brain requires mechanisms for selecting only a relevant fraction of perceived information for more in-depth processing, where neural representations of that information may be actively maintained and utilized for goal-directed behavior. Object-based attention is crucial for goal-directed behavior and yet remains poorly understood. Thus, in this study we investigated how neural representations of visual object information are guided by selective attention. The magnitude of activation in human extrastriate cortex has been shown to be modulated by attention; however, object-based attention is not likely to be fully explained by a localized gain mechanism. Thus, we measured information coded in spatially distributed patterns of brain activity with fMRI while human participants performed a task requiring selective processing of a relevant visual object category that differed across conditions. Using pattern classification and spatial correlation techniques, we found that the direction of selective attention is implemented as a shift in the tuning of object-based information representations within extrastriate cortex. In contrast, we found that representations within lateral prefrontal cortex (PFC) coded for the attention condition rather than the concrete representations of object category. In sum, our findings are consistent with a model of object-based selective attention in which representations coded within extrastriate cortex are tuned to favor the representation of goal-relevant information, guided by more abstract representations within lateral PFC. PMID:22737117

  16. Spatial attention is necessary for object-based attention: Evidence from temporal-order judgments.

    PubMed

    Donovan, Ian; Pratt, Jay; Shomstein, Sarah

    2017-04-01

    Attentional selection is a dynamic process that relies on multiple types of representations. That object representations contribute to attentional selection has been known for decades; however, most evidence for this contribution has been gleaned from studies that have relied on various forms of spatial cueing (some endogenous and some exogenous). It has thus remained unclear whether object-based attentional selection is a direct result of spatial cueing, or whether it still emerges without any spatial marker. Here we used a novel method, the temporal-order judgment (TOJ), to examine whether object-based guidance emerges in the absence of spatial cueing. Participants were presented with two rectangles oriented either horizontally or vertically. Following a 150-ms preview time, two target stimuli were presented on the same or on different objects, and participants were asked to report which of the two stimuli had appeared first. The targets consisted of stimuli that formed a percept of a "hole" or a "hill." First, we demonstrated that the "hill" target was indeed processed faster, as evidenced by a positive point of subjective simultaneity (PSS) measure. We then demonstrated that if two targets appeared with equal probabilities on the same and on different objects, the PSS values, although positive, were not modulated by the objects. In a subsequent set of experiments, we showed that objects can modulate attentional allocation, but only when they are biased by a spatial (endogenous) cue. In other words, in the absence of a spatial cue or bias, object representations do not guide attentional selection. In addition to providing new constraints for theories of object-based attentional guidance, these experiments introduce a novel paradigm for measuring object-based attentional effects.
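
    The PSS measure used above is conventionally estimated from the psychometric function of a TOJ experiment. A minimal sketch, assuming simple linear interpolation rather than the cumulative-Gaussian or logistic fit a real study would use:

```python
def estimate_pss(soas, p_first):
    """Estimate the point of subjective simultaneity (PSS) from TOJ data
    by linearly interpolating the psychometric function at p = 0.5.

    soas    : stimulus-onset asynchronies (ms), in ascending order
    p_first : proportion of 'target appeared first' responses per SOA
    """
    pairs = list(zip(soas, p_first))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0 and y0 != y1:
            # crossing found: interpolate the SOA at which p = 0.5
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("psychometric function never crosses 0.5")
```

    A PSS shifted away from zero indicates that one target is perceived earlier than the other at physical simultaneity, which is the effect the study uses as its attentional measure.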

  17. Effect of Pansharpened Image on Some of Pixel Based and Object Based Classification Accuracy

    NASA Astrophysics Data System (ADS)

    Karakus, P.; Karabork, H.

    2016-06-01

    Classification is the most important method for determining the type of crop grown in a region for agricultural planning. There are two types of classification: pixel-based and object-based. While pixel-based classification methods are based on the information in each pixel, object-based classification is based on image objects formed by combining information from sets of similar pixels. A multispectral image has higher spectral resolution than a panchromatic image, while a panchromatic image has higher spatial resolution. Pan-sharpening is the process of merging high-spatial-resolution panchromatic and high-spectral-resolution multispectral imagery to create a single high-resolution color image. The aim of the study was to compare the classification accuracy achievable with a pan-sharpened image. In this study, a SPOT 5 image dated April 2013 was used; the 5 m panchromatic and 10 m multispectral images were pan-sharpened. Four different classification methods were investigated: maximum likelihood, decision tree, and support vector machine at the pixel level, and an object-based classification method. The SPOT 5 pan-sharpened image was used to classify sunflower and corn at a study site in the Kadirli region of Osmaniye, Turkey. The effects of the pan-sharpened image on classification results were also examined. Accuracy assessment showed that object-based classification yielded better overall accuracy than the other methods. The results indicate that these classification methods can be used for identifying sunflower and corn and for estimating crop areas.
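
    Pan-sharpening itself can be illustrated with the Brovey transform, one common algorithm; the abstract does not state which method was applied to the SPOT 5 imagery, so the choice of algorithm and the function name here are assumptions.

```python
def brovey_pansharpen(ms_bands, pan):
    """Brovey-transform pan-sharpening sketch: each (already upsampled)
    multispectral band is rescaled by the ratio of the panchromatic
    value to the mean of the MS bands at that pixel.

    ms_bands : list of per-pixel band lists, e.g. [[R, G, B], ...]
    pan      : list of per-pixel panchromatic values
    A real workflow operates on co-registered rasters, not lists.
    """
    sharpened = []
    for pixel_bands, p in zip(ms_bands, pan):
        mean_ms = sum(pixel_bands) / len(pixel_bands)
        sharpened.append([b * p / mean_ms for b in pixel_bands])
    return sharpened
```

    The transform injects the pan band's spatial detail while preserving the relative proportions between the multispectral bands, which is why it is popular for visual products despite some spectral distortion.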

  18. Information extraction with object based support vector machines and vegetation indices

    NASA Astrophysics Data System (ADS)

    Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun

    2016-07-01

    Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information, but it remains challenging because several factors affect classification accuracy: the resolution of the imagery, the number and homogeneity of land cover classes, the purity of the training data, and the characteristics of the adopted classifiers, among others. Object-based image classification has advantages over pixel-based classification for high-resolution images, since it uses geometry and structure information in addition to spectral information. Vegetation indices are also commonly used in the classification process, since they provide additional spectral information for vegetated, forested, and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object-based support vector machines were implemented for the classification of crop types in the study area, located in the Aegean region of Turkey. Results demonstrated that incorporating NDRE increased the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreased it from 79.96% to 78.90%. Moreover, object-based classification with RapidEye data was shown to give promising results for crop type mapping and analysis.
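
    The two vegetation indices compared above are simple normalized band ratios. A minimal sketch, with band reflectances as plain numbers (function names are illustrative); in the study these values would be computed per object and appended to the SVM feature vector:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index: same form as NDVI, but with
    RapidEye's red-edge band in place of the red band."""
    return (nir - red_edge) / (nir + red_edge)
```

    NDRE exploits the red-edge region, where crop reflectance is sensitive to chlorophyll content, which is consistent with the accuracy gain the abstract reports for NDRE over NDVI.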

  19. Luminance contrast has little influence on the spread of object-based attention.

    PubMed

    Watson, Poppy; Korjoukov, Ilia; Vartak, Devavrat; Roelfsema, Pieter R

    2013-06-07

    We direct our attention to those visual stimuli that are relevant to our behavioral goals. Some of the visual stimuli that surround us are represented more strongly because they have a higher luminance contrast; however, selective attention also boosts the representation of visual stimuli. It is not yet well understood how attention and contrast interact. Some previous theories proposed that attentional effects are strongest at low contrast, others that they are strongest at high contrast, and yet others that the effects of selective attention are largely independent of contrast. In the present study, we investigated the interaction between selective attention and luminance contrast with a contour-grouping task that provides a sensitive measure of the spread of object-based attention, with delays of several hundreds of milliseconds. We find that the spread of object-based attention is largely independent of contrast, and that subjects experience little difficulty in grouping low-contrast contour elements in the presence of other contour elements with a much higher contrast. The results imply that object-based attention and contrast have largely independent effects on visual processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Online decoding of object-based attention using real-time fMRI.

    PubMed

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity.
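
    Scan-by-scan decoding of the attended category can be sketched with a nearest-prototype classifier over voxel patterns; the study's whole-brain classifier was more sophisticated than this, so the training scheme and function names here are illustrative assumptions.

```python
def train_prototypes(patterns, labels):
    """Average the training patterns per category ('face'/'place') into
    one prototype voxel pattern each (a minimal stand-in for the study's
    whole-brain classifier)."""
    sums, counts = {}, {}
    for p, lab in zip(patterns, labels):
        acc = sums.setdefault(lab, [0.0] * len(p))
        for i, v in enumerate(p):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def decode_scan(scan, prototypes):
    """Decode the attended category for one scan as the label of the
    nearest prototype (squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda lab: sq_dist(scan, prototypes[lab]))
```

    In a real-time setting, `decode_scan` would be called once per incoming fMRI volume after motion correction and detrending.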

  1. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  2. Space-Based but not Object-Based Inhibition of Return is Impaired in Parkinson's Disease

    PubMed Central

    Possin, Katherine L.; Filoteo, J. Vincent; Song, David D.; Salmon, David P.

    2009-01-01

    Impairments in certain aspects of attention have frequently been reported in Parkinson's disease (PD), including reduced inhibition of return (IOR). Recent evidence suggests that IOR can occur when attention is directed at objects or locations, but previous investigations of IOR in PD have not systematically compared these two frames of reference. The present study compared the performance of 18 nondemented patients with PD and 18 normal controls on an IOR task with two conditions. In the “object-present” condition, objects surrounded the cues and targets so that attention was cued to both a spatial location and to a specific object. In the “object-absent” condition, surrounding objects were not presented so that attention was cued only to a spatial location. When participants had to rely on space-based cues, PD patients demonstrated reduced IOR compared to controls. In contrast, when objects were present in the display and participants could use object-based cues, PD patients exhibited normal IOR. These results suggest that PD patients are impaired in inhibitory aspects of space-based attention, but are able to overcome this impairment when their attention can be directed at object-based frames of reference. This dissociation supports the view that space-based and object-based components of attention involve distinct neurocognitive processes. PMID:19397864
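IOR is conventionally quantified as the reaction-time cost of returning to a cued location or object: RT(cued) minus RT(uncued), with positive values indicating inhibition. A minimal sketch with made-up reaction times (the numbers are not from the study):

```python
# Hypothetical mean reaction times (ms) in the two cueing conditions.
rt = {
    ("object-absent", "cued"): 452.0,    # attention cued to a location only
    ("object-absent", "uncued"): 430.0,
    ("object-present", "cued"): 458.0,   # attention cued to location + object
    ("object-present", "uncued"): 431.0,
}

def ior_effect(rt, condition):
    """IOR magnitude: RT(cued) - RT(uncued). Positive values mean return
    of attention to the previously cued location/object is inhibited."""
    return rt[(condition, "cued")] - rt[(condition, "uncued")]

space_ior = ior_effect(rt, "object-absent")    # space-based IOR
object_ior = ior_effect(rt, "object-present")  # IOR with object-based cues
```

The study's dissociation would appear as a reduced `space_ior` but a normal `object_ior` in the PD group relative to controls.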

  3. Remote sensing clustering analysis based on object-based interval modeling

    NASA Astrophysics Data System (ADS)

    He, Hui; Liang, Tianheng; Hu, Dan; Yu, Xianchuan

    2016-09-01

    In object-based clustering, image data are segmented into objects (groups of pixels) and then clustered based on the objects' features. This method can be used to automatically classify high-resolution remote sensing images, but requires accurate descriptions of object features. In this paper, we ascertain that an interval-valued data model is appropriate for describing clustering prototype features. With this in mind, we developed an object-based interval modeling method for high-resolution, multiband remote sensing data. We also designed an adaptive interval-valued fuzzy clustering method. We ran experiments using SPOT-5 satellite imagery of the Pearl River Delta region and Beijing. The results indicate that the proposed algorithm considers both the anisotropy of the remote sensing data and the ambiguity of objects. Additionally, we present a new dissimilarity measure for interval vectors, which better separates the interval vectors generated by features of the segmentation units (objects). This approach effectively limits classification errors caused by spectral mixing between classes. Compared with the object-based unsupervised classification method proposed earlier, the proposed algorithm improves the classification accuracy without increasing computational complexity.
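As a sketch of the interval-valued representation, each segmented object can carry a [min, max] interval per spectral band, and objects can be compared with a Hausdorff-type interval distance. The measure below is one common choice and is not necessarily the new dissimilarity proposed in the paper; the feature values are invented.

```python
import numpy as np

# Hypothetical object features: for each of 3 spectral bands, the interval
# [min, max] of the pixel values inside the segmented object.
obj_a = np.array([[0.10, 0.30], [0.40, 0.60], [0.20, 0.25]])
obj_b = np.array([[0.12, 0.33], [0.38, 0.55], [0.21, 0.27]])  # similar to a
obj_c = np.array([[0.70, 0.90], [0.10, 0.20], [0.50, 0.70]])  # different class

def interval_dissimilarity(u, v):
    """Hausdorff-type distance between interval vectors: per band, the
    larger of the lower-bound and upper-bound gaps, summed over bands."""
    return float(np.sum(np.maximum(np.abs(u[:, 0] - v[:, 0]),
                                   np.abs(u[:, 1] - v[:, 1]))))
```

An interval-valued fuzzy clusterer would use such a distance in place of the usual point-vector Euclidean distance when updating memberships and prototypes.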

  4. Software Tools for Software Maintenance

    DTIC Science & Technology

    1988-10-01

    The "Amisteant" project was commissioned to study the problems of software maintenance and to investigate the concept of bringing together a combination of loosely... integrated tools that could improve the productivity of maintenance programmers and increase the reliability of modified programs. One area of study has... that can aid in understanding the program and modifying it. Background work for study in this area included an examination of existing software tools

  5. Documentation Driven Software Development

    DTIC Science & Technology

    2010-06-01

    Students supported: Andrew Chen, Roberto Sandoval, Doug Anunciado, Robert Halle, John Evans, Paul Dailey, Joey Rivera, Bruce Lewis, Doug Lange... 09-146, Sep. 2009. PhD Theses: 1. R. Sandoval, "Security Software Development and Integration Testing for Advanced Concept Technology..." (2) Student/supported personnel material for this reporting period: (a) Graduate Students: 16 (A. Chen, R. Sandoval, D. Anunciado, R. Halle, J...

  6. Addressing Software Security

    NASA Technical Reports Server (NTRS)

    Bailey, Brandon

    2015-01-01

    Historically security within organizations was thought of as an IT function (web sites/servers, email, workstation patching, etc.) Threat landscape has evolved (Script Kiddies, Hackers, Advanced Persistent Threat (APT), Nation States, etc.) Attack surface has expanded -Networks interconnected!! Some security posture factors Network Layer (Routers, Firewalls, etc.) Computer Network Defense (IPS/IDS, Sensors, Continuous Monitoring, etc.) Industrial Control Systems (ICS) Software Security (COTS, FOSS, Custom, etc.)

  7. Software engineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan

    1993-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.

  8. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported re-engineering to be the primary concern of over four hundred top MIS executives.

  9. Automatic segmentation software in locally advanced rectal cancer: READY (REsearch program in Auto Delineation sYstem)-RECTAL 02: prospective study.

    PubMed

    Gambacorta, Maria A; Boldrini, Luca; Valentini, Chiara; Dinapoli, Nicola; Mattiucci, Gian C; Chiloiro, Giuditta; Pasini, Danilo; Manfrida, Stefania; Caria, Nicola; Minsky, Bruce D; Valentini, Vincenzo

    2016-07-05

    To validate autocontouring software (AS) in clinical practice, including a two-step delineation quality assurance (QA) procedure. The existing delineation agreement among experts for rectal cancer and the overlap and time criteria that have to be verified to allow the use of AS were defined. Median Dice Similarity Coefficient (MDSC), Mean Slicewise Hausdorff Distance (MSHD) and Total Time saving (TT) were analyzed. Two expert Radiation Oncologists reviewed CT scans of 44 patients and agreed the reference CTV: the first 14 consecutive cases were used to populate the software Atlas and 30 were used as the Test set. Each expert performed a manual (group A) and an automatic delineation (group B) of 15 Test patients. The delineations were compared with the reference contours. The overlap between the manual and automatic delineations with MDSC and MSHD and the TT were analyzed. Three acceptance criteria were set: MDSC ≥ 0.75, MSHD ≤ 1 mm and TT saving ≥ 50%. At least two criteria had to be met, one of which had to be TT saving, to validate the system. The MDSC was 0.75, the MSHD 2.00 mm and the TT saving 55.5% between group A and group B. The MDSC among experts was 0.84. Autosegmentation systems in rectal cancer partially met the acceptability criteria with the present version.
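The two overlap metrics and the acceptance rule stated in the abstract are simple to operationalize. Below is a sketch of the Dice similarity coefficient on binary contour masks plus the study's two-of-three validation rule; the example masks are hypothetical, and MSHD is passed in as a given number rather than computed.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def meets_criteria(mdsc, mshd_mm, time_saving):
    """The study's rule: at least two of the three criteria must hold
    (MDSC >= 0.75, MSHD <= 1 mm, TT saving >= 50%), and one of the two
    must be the time saving."""
    passed = [mdsc >= 0.75, mshd_mm <= 1.0, time_saving >= 0.50]
    return time_saving >= 0.50 and sum(passed) >= 2

# Hypothetical single-slice CTV masks (1 = voxel inside the contour).
auto   = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
manual = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
dsc = dice(auto, manual)  # 2*3 / (4+3)
```

With the reported numbers (MDSC 0.75, MSHD 2.00 mm, TT saving 55.5%) the rule is formally satisfied by the MDSC and TT criteria even though the MSHD criterion fails, which matches the abstract's "partially met" conclusion.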

  10. Automatic segmentation software in locally advanced rectal cancer: READY (REsearch program in Auto Delineation sYstem)-RECTAL 02: prospective study

    PubMed Central

    Dinapoli, Nicola; Mattiucci, Gian C.; Chiloiro, Giuditta; Pasini, Danilo; Manfrida, Stefania; Caria, Nicola; Minsky, Bruce D.

    2016-01-01

    To validate autocontouring software (AS) in a clinical practice including a two-step delineation quality assurance (QA) procedure. The existing delineation agreement among experts for rectal cancer and the overlap and time criteria that have to be verified to allow the use of AS were defined. Median Dice Similarity Coefficient (MDSC), Mean Slicewise Hausdorff Distances (MSHD) and Total Time saving (TT) were analyzed. Two expert Radiation Oncologists reviewed CT scans of 44 patients and agreed the reference CTV: the first 14 consecutive cases were used to populate the software Atlas and 30 were used as the Test set. Each expert performed a manual (group A) and an automatic delineation (group B) of 15 Test patients. The delineations were compared with the reference contours. The overlap between the manual and automatic delineations with MDSC and MSHD and the TT were analyzed. Three acceptance criteria were set: MDSC ≥ 0.75, MSHD ≤ 1 mm and TT sparing ≥ 50%. At least two criteria had to be met, one of which had to be TT saving, to validate the system. The MDSC was 0.75, MSHD 2.00 mm and the TT saving 55.5% between group A and group B. MDSC among experts was 0.84. Autosegmentation systems in rectal cancer partially met acceptability criteria with the present version. PMID:27302924

  11. Antiterrorist Software

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture (see the following figure), and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.

  12. Control Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Real-Time Innovations, Inc. (RTI) collaborated with Ames Research Center, the Jet Propulsion Laboratory and Stanford University to leverage NASA research to produce ControlShell software. RTI is the first "graduate" of Ames Research Center's Technology Commercialization Center. The ControlShell system was used extensively on a cooperative project to enhance the capabilities of a Russian-built Marsokhod rover being evaluated for eventual flight to Mars. RTI's ControlShell is complex, real-time command and control software, capable of processing information and controlling mechanical devices. One ControlShell tool is StethoScope. As a real-time data collection and display tool, StethoScope allows a user to see how a program is running without changing its execution. RTI has successfully applied its software savvy in other arenas, such as telecommunications, networking, video editing, semiconductor manufacturing, automobile systems, and medical imaging.

  13. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function such as photosynthesis and productivity are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines); and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed including object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because
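The object-based step that distinguishes OBIA from per-pixel classification is the aggregation of pixels into segment-level feature vectors (spectral statistics, texture, size, shape) that the classifiers then consume. A minimal sketch with an invented image and segmentation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-band image and a segmentation into labeled objects.
image = rng.normal(0.5, 0.1, (6, 6))
segments = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [2, 2, 3, 3, 3, 3],
    [2, 2, 3, 3, 3, 3],
    [2, 2, 3, 3, 3, 3],
])

def object_features(image, segments):
    """Per-object feature vector: mean, std (a crude texture proxy), and
    size. An OBIA classifier is trained on these object-level features
    rather than on raw pixels."""
    feats = {}
    for label in np.unique(segments):
        vals = image[segments == label]
        feats[label] = (float(vals.mean()), float(vals.std()), int(vals.size))
    return feats

feats = object_features(image, segments)
```

Varying the segmentation scale changes the average object size and therefore the feature vectors, which is why the study's accuracies differ across segmentation scales.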

  14. Comparison of Object-Based Image Analysis Approaches to Mapping New Buildings in Accra, Ghana Using Multi-Temporal QuickBird Satellite Imagery

    PubMed Central

    Tsai, Yu Hsin; Stow, Douglas; Weeks, John

    2013-01-01

    The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings based on QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Two software packages were evaluated: Feature Analyst, based on a spatial contextual classifier, and ENVI Feature Extraction, which uses a true object-based image analysis approach of image segmentation and segment classification. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results compared to the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R2 = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover and land use and demographic change. PMID:24415810
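The two object-based accuracy measures used here have standard definitions: completeness is the fraction of reference buildings that were detected, and correctness is the fraction of detected buildings that match a reference building. A sketch with hypothetical counts (not the study's data):

```python
def completeness_correctness(n_reference, n_detected, n_matched):
    """Object-based accuracy for building detection.
    completeness = matched / reference objects (how many real new
    buildings were found); correctness = matched / detected objects
    (how many detections correspond to a real building)."""
    completeness = n_matched / n_reference
    correctness = n_matched / n_detected
    return completeness, correctness

# Hypothetical evaluation: 120 reference buildings, 110 detections,
# 95 of which match a reference object.
comp, corr = completeness_correctness(n_reference=120, n_detected=110,
                                      n_matched=95)
```

The two measures trade off in the usual precision/recall fashion: aggressive detection raises completeness at the cost of correctness, which is why both are reported.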

  15. [Software version and medical device software supervision].

    PubMed

    Peng, Liang; Liu, Xiaoyan

    2015-01-01

    The importance of software version in medical device software supervision does not currently receive enough attention. This paper first discusses the role of software version in medical device software supervision, then analyzes its necessity in light of common misunderstandings of software version, and finally offers concrete suggestions on software version naming rules, the supervision of software versions in medical devices, and software version supervision schemes.

  16. Mapping gully-affected areas in the region of Taroudannt, Morocco based on Object-Based Image Analysis (OBIA)

    NASA Astrophysics Data System (ADS)

    d'Oleire-Oltmanns, Sebastian; Marzolff, Irene; Tiede, Dirk; Blaschke, Thomas

    2015-04-01

    The need for area-wide landform mapping approaches, especially in terms of land degradation, can be ascribed to the fact that such approaches consider the (spatial) context of erosional landforms by providing additional information on the physiography neighboring the distinct landform. This study presents an approach for the detection of gully-affected areas by applying object-based image analysis in the region of Taroudannt, Morocco, which is highly affected by gully erosion while simultaneously representing a major region of agro-industry with a high demand for arable land. Various sensors provide readily available high-resolution optical satellite data with a much better temporal resolution than 3D terrain data, which led to the development of an area-wide mapping approach that extracts gully-affected areas using only optical satellite imagery. The classification rule-set was developed with a clear focus on virtual spatial independence within the software environment of eCognition Developer. This allows the incorporation of knowledge about the target objects under investigation. Only optical QuickBird-2 satellite data and freely available OpenStreetMap (OSM) vector data were used as input data. The OSM vector data were incorporated in order to mask out plantations and residential areas. Optical input data are more readily available to a broad range of users than terrain data, which is considered a major advantage. The methodology additionally incorporates expert knowledge and freely available vector data in a cyclic object-based image analysis approach, connecting the two fields of geomorphology and remote sensing. The classification results allow conclusions on the current distribution of gullies. The results of the classification were checked against manually delineated reference data incorporating expert knowledge from several field campaigns in the area, resulting in an overall classification accuracy of 62
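The masking step can be sketched as raster logic: candidate gully pixels are flagged by an image rule and then removed wherever the rasterized OSM vector layer marks plantations or residential areas. The threshold rule and arrays below are invented stand-ins for the eCognition rule-set.

```python
import numpy as np

# Hypothetical inputs: a normalized brightness band and a rasterized OSM
# mask where plantations/residential areas are True (to be excluded).
brightness = np.array([[0.20, 0.80, 0.30],
                       [0.70, 0.25, 0.90],
                       [0.10, 0.60, 0.20]])
osm_mask = np.array([[False, False, True],
                     [False, False, False],
                     [True,  False, False]])

# Toy rule: dark pixels outside the OSM-masked land uses are flagged as
# candidate gully-affected areas.
candidates = (brightness < 0.3) & ~osm_mask
```

In the actual workflow the rule operates on segmented objects rather than single pixels, but the exclusion logic against the vector-derived mask is the same.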

  17. Educational Software.

    ERIC Educational Resources Information Center

    Northwest Regional Educational Lab., Portland, OR.

    The third session of IT@EDU98 consisted of five papers on educational software and was chaired by Tran Van Hao (University of Education, Ho Chi Minh City, Vietnam). "Courseware Engineering" (Nguyen Thanh Son, Ngo Ngoc Bao Tran, Quan Thanh Tho, Nguyen Hong Lam) briefly describes the use of courseware. "Machine Discovery Theorems in Geometry: A…

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are seven computer software packages for IBM and/or Apple Computers. Included are "Windows on Science: Volume 1--Physical Science"; "Science Probe--Physical Science"; "Wildlife Adventures--Grizzly Bears"; "Science Skills--Development Programs"; "The Clean Machine"; "Rock Doctor";…

  19. Software Review.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed is a computer software package entitled "Audubon Wildlife Adventures: Grizzly Bears" for Apple II and IBM microcomputers. Included are availability, hardware requirements, cost, and a description of the program. The murder-mystery flavor of the program is stressed in this program that focuses on illegal hunting and game…

  20. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six software packages for Apple and/or IBM computers. Included are "Autograph,""The New Game Show,""Science Probe-Earth Science,""Pollution Patrol,""Investigating Plant Growth," and "AIDS: The Investigation." Discussed are the grade level, function, availability, cost, and hardware requirements of each. (CW)

  1. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews five software packages for use with school age children. Includes "Science Toolkit Module 2: Earthquake Lab"; "Adaptations and Identification"; "Geoworld"; "Body Systems II Series: The Blood System: A Liquid of Life," all for Apple II, and "Science Courseware: Life Science/Biology" for…

  2. Reviews: Software.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Reviews four computer software packages including: "The Physical Science Series: Sound" which demonstrates making waves, speed of sound, doppler effect, and human hearing; "Andromeda" depicting celestial motions in any direction; "Biology Quiz: Humans" covering chemistry, cells, viruses, and human biology; and…

  3. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews six software packages for use with school age children ranging from grade 3 to grade 12. Includes "The Microcomputer Based Lab Project: Motion, Sound"; "Genetics"; "Geologic History"; "The Microscope Simulator"; and "Wiz Works" all for Apple II and "Reading for Information: Level…

  4. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1988-01-01

    Contains evaluations of two computer software packages, "Simulation Experiments 45-48 in Epstein's Laboratory Manual for Chemistry" and "Maps and Legends--the Cartographer (Ver 3.0)." Includes a brief description, applications, and the perceived strengths and weaknesses for each package. (CW)

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Describes computer software for use with various age groups. Topics include activities involving temperature, simulations, earth science, the circulatory system, human body, reading in science, and ecology. Provides information on equipment needed, availability, package contents, and price. Comments of reviews are presented by classroom teachers.…

  6. Star Software.

    ERIC Educational Resources Information Center

    Kloza, Brad

    2000-01-01

    Presents a collection of computer software programs designed to spark learning enthusiasm at every grade level and across the curriculum. They include Reader Rabbit's Learn to Read, Spelling Power, Mind Twister Math, Community Construction Kit, Breaking the Code, Encarta Africana 2000, Virtual Serengeti, Operation: Frog (Deluxe), and My First…

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1989-01-01

    Describes three software packages: (1) "MacMendeleev"--database/graphic display for chemistry, grades 10-12, Macintosh; (2) "Geometry One: Foundations"--geometry tutorial, grades 7-12, IBM; (3) "Mathematics Exploration Toolkit"--algebra and calculus tutorial, grades 8-12, IBM. (MVL)

  8. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1988

    1988-01-01

    Reviewed three computer software packages for Apple II series computers. Includes "The Right Job," a career counseling program; "Zoyon Patrol," a problem-solving program; and "Adventures with Charts and Graphs: Project Zoo," a graphing, mathematics, and science skills program. Each review includes strengths, weaknesses, and suggestions for use.…

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Teles, Elizabeth, Ed.; And Others

    1990-01-01

    Reviewed are two computer software packages for Macintosh microcomputers including "Phase Portraits," an exploratory graphics tool for studying first-order planar systems; and "MacMath," a set of programs for exploring differential equations, linear algebra, and other mathematical topics. Features, ease of use, cost, availability, and hardware…

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1990-01-01

    Reviews three computer software: (1) "Elastic Lines: The Electronic Geoboard" on elementary geometry; (2) "Wildlife Adventures: Whales" on environmental science; and (3) "What Do You Do with a Broken Calculator?" on computation and problem solving. Summarizes the descriptions, strengths and weaknesses, and…

  11. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1990-01-01

    Reviews two computer software programs: (1) "Conquering Ratios and Proportions" using a medieval theme for guided practice in identifying and forming ratios for grades 5-8, and (2) "Percent Word Problems" providing problems for finding a percentage of a number and a number from a percentage. (YP)

  12. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Programs in use today generally have all of the function and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. The Johnson Space Center (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to modern languages such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. A brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem is provided. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB experience in software reengineering are followed by the STB's new FORTRAN standards, methodology, and the concepts for a software environment.

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1989

    1989-01-01

    Reviews of seven software packages are presented including "The Environment I: Habitats and EcoSystems; II Cycles and Interactions"; "Super Sign Maker"; "The Great Knowledge Race: Substance Abuse"; "Exploring Science: Temperature"; "Fast Food Calculator and RD Aide"; "The Human Body:…

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are seven computer software packages including "Frog Dissection Lab Report,""Backyard Birds,""LEGO TC Logo,""Alcohol--Four Interactive Programs,""Windows on Science--Life Science,""Climate and Weather/Our Town Database," and "Weeds to Trees." Discussed are availability, features, strengths, and weaknesses. (CW)

  15. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are two computer software packages: "Super Solvers Midnight Rescue!" a problem-solving program for IBM PCs; and "Interactive Physics," a simulation program for the Macintosh computer. The functions of the package are discussed including strengths and weaknesses and teaching suggestions. (CW)

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are seven computer software packages including "Frog Dissection Lab Report,""Backyard Birds,""LEGO TC Logo,""Alcohol--Four Interactive Programs,""Windows on Science--Life Science,""Climate and Weather/Our Town Database," and "Weeds to Trees." Discussed are availability, features, strengths, and weaknesses. (CW)

  17. Reviews: Software.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Reviews four computer software packages including: "The Physical Science Series: Sound" which demonstrates making waves, speed of sound, doppler effect, and human hearing; "Andromeda" depicting celestial motions in any direction; "Biology Quiz: Humans" covering chemistry, cells, viruses, and human biology; and…

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six software packages for Apple and/or IBM computers. Included are "Autograph,""The New Game Show,""Science Probe-Earth Science,""Pollution Patrol,""Investigating Plant Growth," and "AIDS: The Investigation." Discussed are the grade level, function, availability, cost, and hardware requirements of each. (CW)

  19. Software Reviews.

    ERIC Educational Resources Information Center

    History Microcomputer Review, 1988

    1988-01-01

    Reviews seven educational computer software packages covering such topics as presidential elections, the American Revolution, the Vietnam War, the construction of historical time lines, and general U.S. history. Also reviews a program designed to help tailor data entry files. Provides ordering information, price, and computer compatibility…

  20. Software Reviews.

    ERIC Educational Resources Information Center

    Sidwell, Joseph C.; And Others

    1988-01-01

    Gives a review of four software packages including "Science Toolkit: Module 3--Body Lab" for measuring heart rate, lung capacity, and response time; "Project Zoo: Adventures with Charts and Graphs" for developing process skills; "The Body Electric" for explaining electrical activity in the body; and "M-ss-ng…

  1. Software Patents.

    ERIC Educational Resources Information Center

    Burke, Edmund B.

    1994-01-01

    Outlines basic patent law information that pertains to computer software programs. Topics addressed include protection in other countries; how to obtain patents; kinds of patents; duration; classes of patentable subject matter, including machines and processes; patentability searches; experimental use prior to obtaining a patent; and patent…

  2. Statistical Software.

    ERIC Educational Resources Information Center

    Callamaras, Peter

    1983-01-01

    This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)

  3. Star Software.

    ERIC Educational Resources Information Center

    Kloza, Brad

    2000-01-01

    Presents a collection of computer software programs designed to spark learning enthusiasm at every grade level and across the curriculum. They include Reader Rabbit's Learn to Read, Spelling Power, Mind Twister Math, Community Construction Kit, Breaking the Code, Encarta Africana 2000, Virtual Serengeti, Operation: Frog (Deluxe), and My First…

  4. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are computer software packages: "Where in Time Is Carmen Sandiego," "The Bio Sci Videodisc," and "Bio Sci Stacks." Included are hardware requirements, costs, emphasis, grade level, and availability. Functions of the packages are discussed including strengths and weaknesses and teaching suggestions. (CW)

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1989

    1989-01-01

    Reviews of seven software packages are presented including "The Environment I: Habitats and EcoSystems; II Cycles and Interactions"; "Super Sign Maker"; "The Great Knowledge Race: Substance Abuse"; "Exploring Science: Temperature"; "Fast Food Calculator and RD Aide"; "The Human Body:…

  6. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed are two computer software programs for Apple II computers on weather for upper elementary and middle school grades. "Weather" introduces the major factors (temperature, humidity, wind, and air pressure) affecting weather. "How Weather Works" uses simulation and auto-tutorial formats on sun, wind, fronts, clouds, and…

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1989-01-01

    Reviews three software packages: (1) "The Weather Machine Courseware Kit" for grades 7-12; (2) "Exploring Measurement, Time, and Money--Level I," for primary level mathematics; and (3) "Professor DOS with SmartGuide for DOS" providing an extensive tutorial covering DOS 2.1 to 4.0. Discusses the strengths and…

  8. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1988

    1988-01-01

    Reviewed are three computer software packages for Apple II series computers. Included are "The Right Job," a career counseling program; "Zoyon Patrol," a problem-solving program; and "Adventures with Charts and Graphs: Project Zoo," a graphing, mathematics, and science skills program. Each review includes strengths, weaknesses, and suggestions for use.…

  9. Reviews, Software.

    ERIC Educational Resources Information Center

    Science Teacher, 1988

    1988-01-01

    Reviews two software programs for Apple series computers. Includes "Orbital Mech," a basic planetary orbital simulation for the Macintosh, and "START: Stimulus and Response Tools for Experiments in Memory, Learning, Cognition, and Perception," a program that demonstrates basic psychological principles and experiments. (CW)

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Kinnaman, Daniel E.; And Others

    1988-01-01

    Reviews four educational software packages for Apple, IBM, and Tandy computers. Includes "How the West was One + Three x Four," "Mavis Beacon Teaches Typing," "Math and Me," and "Write On." Reviews list hardware requirements, emphasis, levels, publisher, purchase agreements, and price. Discusses the strengths…

  11. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed are two computer software programs for Apple II computers on weather for upper elementary and middle school grades. "Weather" introduces the major factors (temperature, humidity, wind, and air pressure) affecting weather. "How Weather Works" uses simulation and auto-tutorial formats on sun, wind, fronts, clouds, and…

  12. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  13. Software Surveyor

    DTIC Science & Technology

    2003-10-01

    1995. [7] A. Carzaniga. Siena: A Wide-Area Event Notification Service. University of Colorado Software Engineering Research Laboratory (SERL...Architecture, http://java.sun.com/j2se/1.4/docs/guide/jpda/jdi/index.html [1] A. Carzaniga, D.S. Rosenblum, and A.L. Wolf "Design and Evaluation of a

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are computer software packages: "Where in Time Is Carmen Sandiego," "The Bio Sci Videodisc," and "Bio Sci Stacks." Included are hardware requirements, costs, emphasis, grade level, and availability. Functions of the packages are discussed including strengths and weaknesses and teaching suggestions. (CW)

  15. Software Reviews.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Describes computer software for use with various age groups. Topics include activities involving temperature, simulations, earth science, the circulatory system, human body, reading in science, and ecology. Provides information on equipment needed, availability, package contents, and price. Comments of reviews are presented by classroom teachers.…

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1987

    1987-01-01

    Presented are reviews of several microcomputer software programs. Included are reviews of: (1) Microstat (Zenith); (2) MathCAD (MathSoft); (3) Discrete Mathematics (True Basic); (4) CALCULUS (True Basic); (5) Linear-Kit (John Wiley); and (6) Geometry Sensei (Broderbund). (RH)

  17. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1990-01-01

    Reviews three computer software packages: (1) "Elastic Lines: The Electronic Geoboard" on elementary geometry; (2) "Wildlife Adventures: Whales" on environmental science; and (3) "What Do You Do with a Broken Calculator?" on computation and problem solving. Summarizes the descriptions, strengths and weaknesses, and…

  18. A 3D object-based model to simulate highly-heterogeneous, coarse, braided river deposits

    NASA Astrophysics Data System (ADS)

    Huber, E.; Huggenberger, P.; Caers, J.

    2016-12-01

    There is a critical need in hydrogeological modeling for geologically more realistic representation of the subsurface. Indeed, widely used representations of subsurface heterogeneity based on smooth basis functions, such as cokriging or the pilot-point approach, fail to reproduce the connectivity of highly permeable geological structures that control subsurface solute transport. To realistically model the connectivity of the highly permeable structures of coarse, braided river deposits, multiple-point statistics and object-based models are promising alternatives. We therefore propose a new object-based model that, according to a sedimentological model, mimics the dominant processes of floodplain dynamics. Contrary to existing models, this object-based model possesses the following properties: (1) it is consistent with field observations (outcrops, ground-penetrating radar data, etc.); (2) it allows different sedimentological dynamics to be modeled that result in different subsurface heterogeneity patterns; (3) it is light in memory and computationally fast; and (4) it can be conditioned to geophysical data. In this model, the main sedimentological elements (scour fills with open-framework-bimodal gravel cross-beds, gravel sheet deposits, open-framework and sand lenses) and their internal structures are described by geometrical objects. Several spatial distributions are proposed that allow simulating the horizontal position of the objects on the floodplain as well as the net rate of sediment deposition. The model is grid-independent and any vertical section can be computed algebraically. Furthermore, model realizations can serve as training images for multiple-point statistics. The significance of this model is shown by its impact on the subsurface flow distribution, which strongly depends on the sedimentological dynamics modeled. The code will be provided as a free and open-source R package.
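
The abstract above describes geometric objects deposited on an aggrading floodplain, with the facies at any point of a vertical section evaluated algebraically from the stored geometries. A minimal sketch of that idea (not the authors' R code; object shapes, parameter ranges, and all names are invented for illustration):

```python
import random

def simulate_scours(n_objects, floodplain_length, net_aggradation_rate, seed=0):
    """Return scour-fill objects as (center_x, top_z, half_width, depth) tuples."""
    rng = random.Random(seed)
    objects = []
    surface = 0.0
    for _ in range(n_objects):
        center = rng.uniform(0.0, floodplain_length)  # uniform horizontal placement
        half_width = rng.uniform(5.0, 20.0)
        depth = rng.uniform(0.5, 2.0)
        objects.append((center, surface, half_width, depth))
        surface += net_aggradation_rate  # net sediment deposition between events
    return objects

def facies_at(objects, x, z):
    """Evaluate facies at a point algebraically: youngest scour fill wins."""
    for center, top, hw, depth in reversed(objects):  # youngest object first
        dx = (x - center) / hw
        if abs(dx) <= 1.0:
            base = top - depth * (1.0 - dx * dx) ** 0.5  # elliptical scour base
            if base <= z <= top:
                return "scour_fill"
    return "gravel_sheet"  # background deposit
```

Because objects are stored as parameters rather than rasterized, any section can be queried at arbitrary resolution, which is the grid-independence property the abstract emphasizes.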

  19. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  20. The effect of input data transformations on object-based image analysis.

    PubMed

    Lippitt, Christopher D; Coulter, Lloyd L; Freeman, Mary; Lamantia-Bishop, Jeffrey; Pang, Wyson; Stow, Douglas A

    2012-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest.
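
As one concrete instance of the spectral transformations compared in this study, an NDVI image can be derived from red and near-infrared bands before handing the data to a segmentation step. A minimal sketch (the band values, epsilon guard, and function name are illustrative, not the study's actual pipeline):

```python
def ndvi_transform(red, nir, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red) for 2-D lists of reflectance.

    `eps` guards against division by zero on dark pixels.
    """
    return [
        [(n - r) / (n + r + eps) for r, n in zip(red_row, nir_row)]
        for red_row, nir_row in zip(red, nir)
    ]
```

The transformed image would then replace (or augment) the untransformed bands as segmentation input, which is exactly the manipulation whose effect on segmentation quality the study measures.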

  1. Object-based mental rotation and visual perspective-taking in typical development and Williams syndrome.

    PubMed

    Broadbent, Hannah J; Farran, Emily K; Tolmie, Andrew

    2014-01-01

    This study examined Object-based (OB) rotation and Visual Perspective-Taking (VPT) abilities in Williams syndrome (WS) compared to typically developing (TD) 5-10-year-olds. Extensive difficulties with both types of imagined rotation were observed in WS; WS performance was in line with the level of ability observed in TD 5-year-olds. However, an atypical pattern of errors on OB and VPT tasks was observed in WS compared to TD groups. Deficits in imagined rotations are consistent with known atypical cortical development in WS. Such difficulties in updating the position of the self following movement in WS may have implications for large-scale spatial navigation.

  2. Object-based attention in Chinese readers of Chinese words: beyond Gestalt principles.

    PubMed

    Li, Xingshan; Logan, Gordon D

    2008-10-01

    Most object-based attention studies use objects defined bottom-up by Gestalt principles. In the present study, we defined objects top-down, using Chinese words that were seen as objects by skilled readers of Chinese. Using a spatial cuing paradigm, we found that a target character was detected faster if it was in the same word as the cued character than if it was in a different word. Because there were no bottom-up factors that distinguished the words, these results showed that objects defined by subjects' knowledge--in this case, lexical information--can also constrain the deployment of attention.

  3. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quickbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers shape, texture and content information associated with the image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters were analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with its optimal parameters of scale, shape and compactness, which were defined after an extensive trial process on the data sets. A nearest neighbour classifier was then applied to the segmented images, followed by accuracy assessment. Results show that segmentation parameters have a direct effect on the classification accuracy, and low values of scale-shape combinations produce the highest classification accuracies. Also, the compactness parameter was found to have minimal effect on the construction of image objects; hence, it can be set to a constant value in image classification.
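
The parameter-tuning loop described above (trying scale/shape combinations and keeping the one that yields the best classification accuracy) can be sketched generically; `segment_and_classify` here is a hypothetical callable standing in for the real multi-resolution segmentation plus nearest neighbour classification and accuracy assessment:

```python
def best_parameters(candidates, segment_and_classify):
    """Return ((scale, shape), accuracy) for the best-scoring combination.

    `candidates` is an iterable of (scale, shape) pairs; `segment_and_classify`
    maps a pair to an overall classification accuracy.
    """
    scored = [((scale, shape), segment_and_classify(scale, shape))
              for scale, shape in candidates]
    return max(scored, key=lambda item: item[1])
```

With a mocked accuracy table where low scale-shape values score best, the sweep returns the low combination, mirroring the study's finding.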

  4. Object-based cropland degradation identification: a case study in Uzbekistan

    NASA Astrophysics Data System (ADS)

    Dubovyk, Olena; Menz, Gunter; Conrad, Christopher; Khamzina, Asia

    2012-10-01

    Sustainability of irrigated agriculture-based economies, such as in Central Asia, is threatened by cropland degradation. Field-based identification of the degraded agricultural areas can aid in developing appropriate land rehabilitation and monitoring programs. This paper combined object-based change detection and spectral mixture analysis to develop an approach for identifying parcels of irrigated degraded cropland in Northern Uzbekistan, Central Asia. A linear spectral unmixing, followed by object-based change vector analysis, was applied to Landsat TM images acquired in 1987 and 2009. Considering the spectral dimensionality of Landsat TM, a 4-endmember model (green vegetation, water, dark soil, and bright soil) was set up for the analysis. The spectral unmixing results were valid, as indicated by overall root mean square errors of <2.5% reflectance for all images. The results of change detection revealed that about 33% (84,540 ha) of the cropland in the study area was affected by degradation processes to varying degrees. The spatial distribution of degraded fields was mainly associated with abandoned fields and lands with inherently low-fertility soils. The proposed approach could be elaborated for field-based monitoring of cropland degradation in similar landscapes of Central Asia and elsewhere.
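
The two building blocks named in the abstract, linear spectral unmixing and change vector analysis on the resulting abundance fractions, can be sketched as follows. This is a generic, unconstrained two-endmember least-squares version with made-up spectra, not the study's 4-endmember implementation:

```python
def unmix(pixel, em1, em2):
    """Least-squares abundances (a1, a2) so that a1*em1 + a2*em2 ~ pixel.

    Solved via the 2x2 normal equations; no sum-to-one or non-negativity
    constraints are imposed in this sketch.
    """
    s11 = sum(a * a for a in em1)
    s22 = sum(b * b for b in em2)
    s12 = sum(a * b for a, b in zip(em1, em2))
    p1 = sum(a * p for a, p in zip(em1, pixel))
    p2 = sum(b * p for b, p in zip(em2, pixel))
    det = s11 * s22 - s12 * s12
    return ((p1 * s22 - p2 * s12) / det, (p2 * s11 - p1 * s12) / det)

def change_magnitude(abund_t1, abund_t2):
    """Euclidean length of the change vector between two abundance vectors."""
    return sum((a - b) ** 2 for a, b in zip(abund_t1, abund_t2)) ** 0.5
```

For example, with orthogonal unit endmembers (1, 0) and (0, 1), a pixel (0.7, 0.3) unmixes to abundances (0.7, 0.3); in the study, the change vector would be computed per object between the 1987 and 2009 abundance maps.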

  5. Image Mining in Remote Sensing for Coastal Wetlands Mapping: from Pixel Based to Object Based Approach

    NASA Astrophysics Data System (ADS)

    Farda, N. M.; Danoedoro, P.; Hartono; Harjoko, A.

    2016-11-01

    The availability of remote sensing image data is now enormous, and this large amount of data creates a “knowledge gap” in the extraction of selected information, especially for coastal wetlands. Coastal wetlands provide ecosystem services essential to people and the environment. The aim of this research is to extract coastal wetlands information from satellite data using pixel-based and object-based image mining approaches. Landsat MSS, Landsat 5 TM, Landsat 7 ETM+, and Landsat 8 OLI images of the Segara Anakan lagoon were selected to represent data at various acquisition times. The inputs for image mining are the visible and near-infrared bands, PCA bands, inverse PCA bands, mean shift segmentation bands, bare soil index, vegetation index, wetness index, elevation from SRTM and ASTER GDEM, and GLCM (Haralick) variability texture. Three methods were applied to extract coastal wetlands using image mining: pixel-based Decision Tree C4.5, pixel-based Back Propagation Neural Network, and object-based Mean Shift segmentation with Decision Tree C4.5. The results show that remote sensing image mining can be used to map coastal wetlands ecosystems, with Decision Tree C4.5 achieving the highest accuracy (0.75 overall kappa). The availability of remote sensing image mining for mapping coastal wetlands is very important to provide a better understanding of their spatiotemporal dynamics and distribution.
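
The accuracies above are reported as overall kappa. For reference, a minimal sketch of computing Cohen's kappa from a confusion matrix (rows are reference classes, columns are mapped classes; the matrix in the usage note is invented, not the study's):

```python
def overall_kappa(cm):
    """Cohen's kappa from a square confusion matrix given as nested lists."""
    total = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / total  # diagonal agreement
    row_tot = [sum(row) for row in cm]
    col_tot = [sum(col) for col in zip(*cm)]
    # Chance agreement from the marginal totals
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / (total * total)
    return (observed - expected) / (1.0 - expected)
```

For the 2x2 matrix [[45, 5], [5, 45]], observed agreement is 0.90, chance agreement is 0.50, and kappa is 0.80.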

  6. An automated object-based classification approach for updating CORINE land cover data

    NASA Astrophysics Data System (ADS)

    Wehrmann, Thilo; Dech, Stefan; Glaser, Ruediger

    2004-10-01

    In this paper, an object-based classification approach for land cover and land use classes is presented, and first test results are shown. There is an increasing demand from planning, administration, and science institutions for information on actual land cover and land use. Remote sensing provides timely information products at different geometric and thematic scales, but the effort to classify land use data manually is still very high. A new approach is therefore required to bring automated image classification closer to human image understanding. The proposed approach couples an object-based classification technique, a rather new trend in image classification, with machine learning capacities (a Support Vector Classifier) depending on information levels. To ensure spatial and spectral transferability of the classification scheme, the data are passed through several generalisation levels. The segmentation generates homogeneous and contiguous image objects. The hierarchical rule type uses direct and derived spectral attributes combined with spatial features and information extracted from the metadata. After classification, the identified land cover objects can be converted into the current CORINE classes.

  7. Object based change detection of Central Asian Tugai vegetation with very high spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Gärtner, Philipp; Förster, Michael; Kurban, Alishir; Kleinschmit, Birgit

    2014-09-01

    Ecological restoration of degraded riparian Tugai forests in north-western China is a key driver to combat desertification in this region. Recent restoration efforts attempt to recover the forest along with its most dominant tree species, Populus euphratica. The present research observed the response of natural vegetation using an object-based change detection method on QuickBird (2005) and WorldView-2 (2011) data. We applied a region growing approach to derived Normalized Difference Vegetation Index (NDVI) values in order to identify single P. euphratica trees, delineate tree crown areas, and quantify crown diameter changes. Results were compared to 59 reference trees. The findings confirmed positive tree crown growth and suggest a crown diameter increase of 1.14 m on average. On a single-tree basis, the diameters of larger crowns were generally underestimated, while small crowns were slightly underestimated in QuickBird and overestimated in WorldView-2 images. The results of the automated tree crown delineation show a moderate relation to field reference data, with R² = 0.36 for 2005 and R² = 0.48 for 2011. The object-based image analysis (OBIA) method proved to be applicable in sparse riparian Tugai forests and showed great suitability for evaluating ecological restoration efforts in an endangered ecosystem.
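
The crown-delineation step described above (region growing on NDVI values from a seed pixel, then a crown diameter from the grown area) might look roughly like this; the 4-connected growth rule, the threshold, and the equivalent-circle diameter are illustrative assumptions, not the authors' exact procedure:

```python
import math
from collections import deque

def grow_crown(ndvi, seed, threshold):
    """4-connected region growing on an NDVI grid from a seed pixel."""
    rows, cols = len(ndvi), len(ndvi[0])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region and ndvi[nr][nc] >= threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

def crown_diameter(region, pixel_size_m):
    """Equivalent-circle diameter of the grown crown area."""
    area = len(region) * pixel_size_m ** 2
    return 2.0 * math.sqrt(area / math.pi)
```

Comparing `crown_diameter` values derived from the 2005 and 2011 images for the same tree would then give the per-tree crown change the study reports.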

  8. The role of closure in defining the "objects" of object-based attention.

    PubMed

    Marino, Alexandria C; Scholl, Brian J

    2005-10-01

    Many recent studies have concluded that the underlying units of visual attention are often discrete objects whose boundaries constrain the allocation of attention. However, relatively few studies have explored the particular stimulus cues that determine what counts as an "object" of attention. We explore this issue in the context of the two-rectangles stimuli previously used by many investigators. We first show, using both spatial-cuing and divided-attention paradigms, that same-object advantages occur even when the ends of the two rectangles are not drawn. This is consistent with previous reports that have emphasized the importance of individual contours in guiding attention, and our study shows that such effects can occur in displays that also contain grouping cues. In our divided-attention experiment, however, this contour-driven same-object advantage was significantly weaker than that obtained with the standard stimulus, with the added cue of closure--demonstrating that contour-based processes are not the whole story. These results confirm and extend the observation that same-object advantages can be observed even without full-fledged objects. At the same time, however, these studies show that boundary closure--one of the most important cues to objecthood per se--can directly influence attention. We conclude that object-based attention is not an all-or-nothing phenomenon; object-based effects can be independently strengthened or weakened by multiple cues to objecthood.

  9. Vineyard parcel identification from Worldview-2 images using object-based classification model

    NASA Astrophysics Data System (ADS)

    Sertel, Elif; Yay, Irmak

    2014-01-01

    Accurate identification of the spatial distribution and characteristics of vineyard parcels is an important task for the effective management of vineyard areas, precision viticulture, and farmer registries. This study aimed to develop rule sets to be used in object-based classification of Worldview-2 satellite images to accurately delineate the boundaries of vineyards having different plantation styles. Multilevel segmentation was applied to Worldview-2 images to create different sizes of image objects representing different land cover categories with respect to the scale parameter. Texture analysis and several new spectral indices were applied to objects at different segmentation levels to accurately classify the land cover classes of forest, cultivated areas, harvested areas, impervious surfaces, bareland, and vineyards. Specific attention was given to the vineyard class to identify vine areas at the parcel level considering their different plantation styles. The results illustrated that the combined usage of a newly developed decision tree and image segmentation during the object-based classification process could provide highly accurate results for the identification of vineyard parcels. Linearly planted vineyards could be classified with 100% producer's accuracy due to their regular textural characteristics, whereas regular gridwise and irregular gridwise (distributed) vineyard parcels could be classified with 94.87% producer's accuracy in this research.

  10. Automated spine and vertebrae detection in CT images using object-based image analysis.

    PubMed

    Schwier, M; Chitiboi, T; Hülnhagen, T; Hahn, H K

    2013-09-01

    Although computer assistance has become common in medical practice, some of the most challenging tasks that remain unsolved are in the area of automatic detection and recognition. Human visual perception is in general far superior to computer vision algorithms. Object-based image analysis is a relatively new approach that aims to lift image analysis from pixel-based processing to semantic region-based processing of images. It allows effective integration of reasoning processes and contextual concepts into the recognition method. In this paper, we present an approach that applies object-based image analysis to the task of detecting the spine in computed tomography images. Spine detection would be of great benefit in several contexts, from the automatic labeling of vertebrae to the assessment of spinal pathologies. We show with our approach how region-based features, contextual information, and domain knowledge, especially concerning the typical shape and structure of the spine and its components, can be used effectively in the analysis process. The results of our approach are promising, with a detection rate for vertebral bodies of 96% and a precision of 99%. We also obtain a good two-dimensional segmentation of the spine along the more central slices and a coarse three-dimensional segmentation.

  11. Lossy to lossless object-based coding of 3-D MRI data.

    PubMed

    Menegaz, Gloria; Thiran, Jean-Philippe

    2002-01-01

    We propose a fully three-dimensional (3-D) object-based coding system exploiting the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting scheme provides an integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region-of-interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to a bitstream overhead with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performance. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
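
The lossless property mentioned above comes from implementing the wavelet transform with integer lifting steps. A generic 1-D illustration of such an integer-to-integer transform (a Haar-like S-transform, not the paper's 3-D filter bank) whose inverse reconstructs the input exactly:

```python
def forward(signal):
    """Split pairs of samples into integer averages and differences."""
    avg, diff = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        d = b - a
        avg.append(a + (d >> 1))  # integer mean via floor shift (lifting update)
        diff.append(d)            # detail coefficient (lifting predict)
    return avg, diff

def inverse(avg, diff):
    """Undo the lifting steps exactly, recovering the original integers."""
    out = []
    for s, d in zip(avg, diff):
        a = s - (d >> 1)
        out.extend([a, a + d])
    return out
```

For example, forward([5, 9, 3, -2]) gives averages [7, 0] and differences [4, -5], and the inverse recovers the input exactly, including the negative sample; it is this exact invertibility that makes up-to-lossless reconstruction possible after entropy coding.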

  12. Visual field meridians modulate the reallocation of object-based attention.

    PubMed

    Barnas, Adam J; Greenberg, Adam S

    2016-10-01

    Object-based attention (OBA) enhances processing within the boundaries of a selected object. Larger OBA effects have been observed for horizontal compared to vertical rectangles, effects that were eliminated when controlling for attention shifts across the visual field meridians. We aimed to elucidate the modulatory role of the meridians in OBA. We hypothesized that the contralateral organization of visual cortex accounts for these differences in OBA prioritization. Participants viewed "L"-shaped objects and, following a peripheral cue at the object vertex, detected the presence of a target at the cued location (valid) or at a non-cued location (invalid) offset either horizontally or vertically. In Experiment 1, the single displayed object contained components crossing both meridians. In Experiment 2, one cued object and one non-cued object were displayed such that both crossed the meridians. In Experiment 3, one cued object was sequestered into one screen quadrant, with its vertex either near or far from fixation. Results from Experiments 1 and 2 revealed a horizontal shift advantage (faster RTs for horizontal shifts across the vertical meridian compared to vertical shifts across the horizontal meridian), regardless of whether shifts took place within a cued object (Experiment 1) or between objects (Experiment 2). Results from Experiment 3 revealed no difference between horizontal and vertical shifts for objects positioned far from fixation, although the horizontal shift advantage reappeared for objects near fixation. These findings suggest a critical modulatory role of the visual field meridians in the efficiency of reorienting object-based attention.

  13. Object-based and action-based visual perception in children with spina bifida and hydrocephalus.

    PubMed

    Dennis, Maureen; Fletcher, Jack M; Rogers, Tracey; Hetherington, Ross; Francis, David J

    2002-01-01

    Children with spina bifida and hydrocephalus (SBH) have long been known to have difficulties with visual perception. We studied how children with SBH perform 12 visual perception tasks requiring object identification, multistable representations of visual space, or visually guided overt actions. Four tasks required object-based processing (visual constancy illusions, face recognition, recognition of fragmented objects, line orientation). Four tasks required the representation of visual space in egocentric coordinates (stereopsis, visual figure-ground identification, perception of multistable figures, egocentric mental rotation). Four tasks required the coupling of visual space to overt movement (visual pursuit, figure drawing, visually guided route finding, visually guided route planning). Effect sizes, measuring the magnitude of the difference between SBH children and controls, were consistently larger for action-based than object-based visual perception tasks. Within action-based tasks, effect sizes were large and roughly comparable for tasks requiring the representation of visual space and for tasks requiring visually guided action. The results are discussed in terms of the physical and brain problems of children with SBH that limit their ability to build effective situation models of space.

  14. Preliminary Results of Earthquake-Induced Building Damage Detection with Object-Based Image Classification

    NASA Astrophysics Data System (ADS)

    Sabuncu, A.; Uca Avci, Z. D.; Sunar, F.

    2016-06-01

    Earthquakes are among the most destructive natural disasters, resulting in massive loss of life, infrastructure damage, and financial losses. Earthquake-induced building damage detection is a very important step after an earthquake, since building damage is one of the most critical threats to cities and countries in terms of the area of damage, the rate of collapsed buildings, the damage grade near the epicenter, and the damage types across all constructions. The Van-Ercis (Turkey) earthquake (Mw = 7.1) occurred on October 23rd, 2011, at 10:41 UTC (13:41 local time), centered at 38.75 N, 43.36 E, which places the epicenter about 30 kilometers north of the city of Van. It is recorded that 604 people died and approximately 4000 buildings collapsed or were seriously damaged by the earthquake. In this study, high-resolution satellite images of Van-Ercis, acquired by Quickbird-2 (Digital Globe Inc.) after the earthquake, were used to detect the debris areas using an object-based image classification. Two different land surfaces, having homogeneous and heterogeneous land covers, were selected as case study areas. As a first step of the object-based image processing, segmentation was applied with a convenient scale parameter and homogeneity criterion parameters. As a next step, condition-based classification was used. In the final step of this preliminary study, outputs were compared with street views/orthophotos for verification and evaluation of the classification accuracy.

  15. Topic Modelling for Object-Based Classification of Vhr Satellite Images Based on Multiscale Segmentations

    NASA Astrophysics Data System (ADS)

    Shen, Li; Wu, Linmei; Li, Zhipeng

    2016-06-01

    Multiscale segmentation is a key prerequisite step for object-based classification methods. However, it is often not possible to determine a single optimal scale for the image to be classified, because different geo-objects, and even the same geo-object, may appear at different scales in one image. In this paper, an object-based classification method based on multiscale segmentation results, set in the framework of topic modelling, is proposed to classify VHR satellite images in an entirely unsupervised fashion. In the topic-modelling stage, grayscale histogram distributions for each geo-object class and each segment are learned in an unsupervised manner from multiscale segments. In the classification stage, each segment is assigned a geo-object class label by comparing the similarity between the grayscale histogram distributions of that segment and of each geo-object class. Experimental results show that the proposed method can outperform traditional methods based on topic modelling.
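
    The classification stage reduces to a similarity comparison between histogram distributions. A minimal sketch, assuming hand-set class distributions in place of the topic-model estimates and histogram intersection as the similarity measure (the paper does not specify the measure used):

```python
def histogram(values, bins=4, vmax=256):
    """Normalised grayscale histogram with `bins` equal-width bins."""
    h = [0.0] * bins
    for v in values:
        h[min(v * bins // vmax, bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]

def intersection(p, q):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(p, q))

# Hypothetical per-class distributions (in the paper these are learned
# without supervision via topic modelling; here they are hand-set).
classes = {
    'water':    histogram([10, 20, 30, 40, 50, 60]),
    'built-up': histogram([180, 200, 210, 220, 240, 250]),
}

segment_pixels = [190, 205, 215, 230, 70]
seg_hist = histogram(segment_pixels)
label = max(classes, key=lambda c: intersection(seg_hist, classes[c]))
print(label)
```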

  16. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach.

    PubMed

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications.
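
    The group-of-slices (GOS) idea, coding each group independently so that a segment of slices can be decoded on its own, can be illustrated with a stand-in coder. In this sketch zlib replaces the 3D wavelet/SPIHT machinery, and the slice and group sizes are arbitrary:

```python
import zlib

def encode_gos(volume, gos_size=4):
    """Split a stack of slices into groups-of-slices (GOS) and code each
    group independently, so one group can be decoded without the rest."""
    groups = [volume[i:i + gos_size] for i in range(0, len(volume), gos_size)]
    return [zlib.compress(bytes(b for sl in g for b in sl)) for g in groups]

def decode_gos(streams, index, gos_size=4, slice_len=8):
    """Random access: decompress only the group containing slice `index`."""
    raw = zlib.decompress(streams[index // gos_size])
    k = index % gos_size
    return list(raw[k * slice_len:(k + 1) * slice_len])

# 16 toy "slices" of 8 voxels each
volume = [[(s * 8 + i) % 256 for i in range(8)] for s in range(16)]
streams = encode_gos(volume)
print(len(streams), decode_gos(streams, 9) == volume[9])
```

    The design trade-off the abstract describes is visible here: smaller groups mean cheaper random access and lower decode memory, at some cost in compression efficiency.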

  17. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  18. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-09-19

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
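
    The grouping-neuron loop (integrate local features feedforward, return modulatory feedback, apply inhibition) can be caricatured with toy dynamics. These few lines are not the published model; the gain, weights, and normalization are invented for illustration of the feedforward/feedback structure only:

```python
def settle(features, weights, gain=0.2, steps=20):
    """Toy recurrent loop: a grouping unit integrates feature activity and
    feeds back a multiplicative boost to the features that drive it."""
    f = list(features)
    for _ in range(steps):
        g = sum(w * x for w, x in zip(weights, f))            # feedforward grouping
        f = [x * (1 + gain * w * g) for w, x in zip(weights, f)]  # modulatory feedback
        norm = max(f)                                          # crude divisive inhibition
        f = [x / norm for x in f]
    return f

# two features belong to the proto-object (weight 1), one does not (weight 0)
out = settle([0.5, 0.5, 0.5], [1.0, 1.0, 0.0])
print(out)
```

    Starting from equal activity, the features consistent with the proto-object are enhanced by the feedback loop while the unrelated feature is suppressed, mirroring the qualitative behavior the abstract describes.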

  19. Sensor-independent approach to recognition: the object-based approach

    NASA Astrophysics Data System (ADS)

    Morrow, Jim C.; Hossain, Sqama

    1994-03-01

    This paper introduces a fundamentally different approach to recognition -- the object-based approach -- which is inherently knowledge-based and sensor independent. The paper begins with a description of an object-based recognition system, contrasting it with the image-based approach. Next, the multilevel stage of the system, incorporating several sensor data sources, is described. From these sources, elements of the situation hypothesis are generated as directed by the recognition goal. Depending on the degree of correspondence between the sensor-fed elements and the object-model-fed elements, a hypothetical element is created. The hypothetical element is further employed to develop evidence for the sensor-fed element through the inclusion of secondary sensor outputs. The sensor-fed element is thus modeled in more detail, and further evidence is added to the hypothetical element. Several levels of reasoning and data integration are involved in this overall process; further, a self-adjusting correction mechanism is included through feedback from the hypothetical element to the sensors, thus defining secondary output connections to the sensor-fed element. Some preliminary work based on this approach has been carried out, and initial results show improvements over the conventional image-based approach.

  20. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
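
    The BPT construction, iteratively merging the most similar adjacent regions while recording the hierarchy, can be sketched over a 1-D chain of initial regions standing in for SLIC superpixels. The merge criterion below is a simple mean-intensity difference rather than the paper's spectral/spatial criteria:

```python
def build_bpt(region_means):
    """Minimal Binary Partition Tree sketch over a 1-D chain of initial
    regions: repeatedly merge the two adjacent regions with the most
    similar means, recording each merge (index pair, merged mean)."""
    regions = [(m, 1) for m in region_means]  # (mean, pixel count)
    merges = []
    while len(regions) > 1:
        i = min(range(len(regions) - 1),
                key=lambda k: abs(regions[k][0] - regions[k + 1][0]))
        (m1, n1), (m2, n2) = regions[i], regions[i + 1]
        merged = ((m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2)
        merges.append((i, i + 1, round(merged[0], 2)))
        regions[i:i + 2] = [merged]
    return merges

print(build_bpt([10, 12, 60, 62, 200]))
```

    The recorded merge sequence is the hierarchy; "filtering the hierarchy" then amounts to cutting this sequence at the level where a chosen homogeneity criterion stops being satisfied.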

  1. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.

    2015-12-01

    High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) polygon neighbor analysis separates melt ponds and submerged ice based on their spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, while shadow detection shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted through batch processing of the aerial photos, and their uncertainties are estimated.
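
    Step (3), the polygon neighbor analysis, can be sketched as a rule over an object adjacency graph. The rule used below (a GSI object touching open water becomes submerged ice, a GSI object enclosed by ice/snow becomes a melt pond) is a plausible reading of the spatial-relationship step, not the authors' exact criterion:

```python
def separate_gsi(objects, adjacency):
    """Relabel each 'GSI' object by inspecting the classes of its
    neighboring objects in the adjacency graph."""
    out = dict(objects)
    for oid, cls in objects.items():
        if cls != 'GSI':
            continue
        neighbor_classes = {objects[n] for n in adjacency[oid]}
        out[oid] = 'submerged ice' if 'water' in neighbor_classes else 'melt pond'
    return out

# toy scene: object 2 is GSI enclosed by ice/snow, object 4 is GSI touching water
objects = {1: 'ice/snow', 2: 'GSI', 3: 'water', 4: 'GSI'}
adjacency = {1: [2, 4], 2: [1], 3: [4], 4: [1, 3]}
print(separate_gsi(objects, adjacency))
```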

  2. An operational framework for object-based land use classification of heterogeneous rural landscapes

    NASA Astrophysics Data System (ADS)

    Watmough, Gary R.; Palm, Cheryl A.; Sullivan, Clare

    2017-02-01

    The characteristics of very high resolution (VHR) satellite data are encouraging development agencies to investigate its use in monitoring and evaluation programmes. VHR data pose challenges for land use classification of heterogeneous rural landscapes, as it is not possible to develop generalised and transferable land use classification definitions and algorithms. We present an operational framework for classifying VHR satellite data in heterogeneous rural landscapes using an object-based approach with a random forest classifier. The framework overcomes the challenges of classifying VHR data in anthropogenic landscapes by using an image stack of RGB-NIR, Normalised Difference Vegetation Index (NDVI) and textural bands in a two-phase object-based classification. The framework can be applied to data acquired by different sensors, with different view and illumination geometries, at different times of the year. Even with these complex input data the framework can produce classification results that are comparable across time. Here we describe the framework and present an example of its application using data from the QuickBird (2 images) and GeoEye (1 image) sensors.
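
    The NDVI band in the image stack is computed per pixel as (NIR - R) / (NIR + R). A minimal sketch of building such a stack (the framework's texture bands are omitted, and the band values are invented reflectances):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red + eps)

def build_stack(red, green, blue, nir):
    """Per-pixel feature stack: RGB-NIR plus NDVI."""
    return [(r, g, b, n, round(ndvi(n, r), 3))
            for r, g, b, n in zip(red, green, blue, nir)]

# two pixels: vegetation (high NIR) and bare soil (NIR close to red)
stack = build_stack(red=[0.1, 0.3], green=[0.2, 0.3],
                    blue=[0.1, 0.25], nir=[0.6, 0.32])
print(stack)
```

    Because NDVI is a ratio, it is less sensitive than raw bands to the illumination differences between acquisitions, which is one reason such a stack can support classification that is comparable across sensors and dates.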

  3. Joint source-channel coding for wireless object-based video communications utilizing data hiding.

    PubMed

    Wang, Haohong; Tsaftaris, Sotirios A; Katsaggelos, Aggelos K

    2006-08-01

    In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection.
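
    The Lagrangian relaxation mentioned above reduces the constrained problem to minimizing the cost D + λR. A sketch of the per-unit mode decision, assuming independent coding units (the paper's dynamic program additionally handles dependencies among source, channel, and embedding choices):

```python
def rd_optimal_modes(units, lam):
    """Lagrangian sketch: independently per unit, pick the
    (rate, distortion) option minimising D + lambda * R."""
    choices = []
    for options in units:  # options: list of (rate, distortion) pairs
        best = min(options, key=lambda o: o[1] + lam * o[0])
        choices.append(best)
    return choices

# two units, each with a cheap/lossy and an expensive/accurate option
units = [[(2, 10.0), (8, 1.0)],
         [(1, 4.0), (6, 0.5)]]
print(rd_optimal_modes(units, lam=0.5))   # small lambda favours quality
print(rd_optimal_modes(units, lam=5.0))   # large lambda favours low rate
```

    Sweeping λ traces out the rate-distortion curve; in joint source-channel coding the "distortion" term is the expected distortion after channel losses and concealment, not just quantization error.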

  4. Computer-Aided Software Engineering - An approach to real-time software development

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.
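
    The generate-code-from-specification idea can be sketched with a toy spec format. The dict-based notation below is invented for illustration and is far simpler than the engineering block diagrams the ALS CASE subsystem accepts:

```python
def generate_code(spec):
    """Toy 'automated programming' step: emit an executable function from a
    declarative block-diagram-like spec of gain and sum blocks."""
    lines = ["def model(u):"]
    for var, (kind, arg) in spec.items():
        if kind == 'gain':
            lines.append(f"    {var} = {arg[0]} * {arg[1]}")
        elif kind == 'sum':
            lines.append(f"    {var} = {' + '.join(arg)}")
    lines.append(f"    return {list(spec)[-1]}")
    return "\n".join(lines)

spec = {'a': ('gain', (2, 'u')),      # a = 2*u
        'b': ('gain', (3, 'u')),      # b = 3*u
        'y': ('sum',  ('a', 'b'))}    # y = a + b
src = generate_code(spec)
ns = {}
exec(src, ns)                         # compile the generated source
print(ns['model'](4))                 # (2*4) + (3*4)
```

    The point is the workflow, not the notation: requirements are stated declaratively, and both the executable code and (in ALS CASE) the documentation are derived from that single source.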

  5. Computer-Aided Software Engineering - An approach to real-time software development

    NASA Technical Reports Server (NTRS)

    Walker, Carrie K.; Turkovich, John J.

    1989-01-01

    A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.

  6. Automated Digital Image Analysis of Islet Cell Mass Using Nikon's Inverted Eclipse Ti Microscope and Software to Improve Engraftment may Help to Advance the Therapeutic Efficacy and Accessibility of Islet Transplantation across Centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r² = 0.91) and total islet number (r² = 0.88), which increased to r² = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 μm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
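
    The IEQ normalization behind the counts reported above expresses each islet's volume relative to a standard islet of 150 μm diameter, i.e. IEQ ≈ Σ (d/150)³. This is the usual volume normalisation in principle; clinical protocols typically use size-class conversion tables rather than exact per-islet diameters:

```python
def islet_eq(diameters_um):
    """Islet equivalents (IEQ): each islet's volume expressed relative to a
    standard 150-um-diameter islet, i.e. sum((d/150)**3). A sketch of the
    volume principle, not the tabulated clinical procedure."""
    return sum((d / 150.0) ** 3 for d in diameters_um)

# three measured islets: one standard-size, one large, one small
print(round(islet_eq([150, 300, 75]), 3))
```

    The cubic dependence on diameter is why automated per-islet sizing matters: a single mis-sized large islet shifts the IEQ total far more than many small ones.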

  7. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r² = 0.91) and total islet number (r² = 0.88), which increased to r² = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this

  8. Software Epistemology

    DTIC Science & Technology

    2016-03-01

    epistemology have focused on two contrary goals: first, small signatures that are able to identify malware that may have polymorphic presentation and...one version of a library can interoperate with another version of the same library. In the case of small signatures for malware , signatures must be...from source code or machine binaries—enables the rapid identification of known software vulnerabilities, unsafe use cases, and hidden malware in

  9. The EOSDIS software challenge

    NASA Astrophysics Data System (ADS)

    Jaworski, Allan

    1993-08-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) will serve as a major resource for the earth science community, supporting both command and control of complex instruments onboard the EOS spacecraft and the archiving, distribution, and analysis of data. The scale of EOSDIS and the volume of multidisciplinary research to be conducted using EOSDIS resources will produce unparalleled needs for technology transparency, data integration, and system interoperability. The scale of this effort far exceeds that of any previous scientific data system in its breadth and its operational and performance needs. Modern hardware technology can meet the EOSDIS technical challenge. Multiprocessing speeds of many gigaflops are being realized by modern computers. Online storage disk, optical disk, and videocassette libraries with storage capacities of many terabytes are now commercially available. Radio frequency and fiber optic communications networks with gigabit rates are demonstrable today. It remains, of course, to perform the system engineering to establish the requirements, architectures, and designs that will implement the EOSDIS systems. Software technology, however, has not enjoyed the price/performance advances of hardware. Although we have learned to engineer hardware systems with several orders of magnitude greater complexity and performance than those built in the 1960s, we have not made comparable progress in dramatically reducing the cost of software development. This lack of progress may significantly reduce our capability to achieve economically the types of highly interoperable, responsive, integrated, and productive environments needed by the earth science community. This paper describes some of the EOSDIS software requirements and current activities in the software community which are applicable to meeting the EOSDIS challenge. Some of these areas include intelligent user interfaces, software reuse libraries, and domain engineering.

  10. Know Your Software Options.

    ERIC Educational Resources Information Center

    Moxley, Doug

    1986-01-01

    Advice on how to find the best software for institutional needs is presented. Purchasing prewritten software, acquiring custom-written software, and improving ready-made software are discussed. Questions to ask before buying software are provided. (MLW)

  11. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  12. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  13. Hierarchical Object-based Image Analysis approach for classification of sub-meter multispectral imagery in Tanzania

    NASA Astrophysics Data System (ADS)

    Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.

    2015-12-01

    Increasing agricultural production while at the same time preserving the environment has become a challenging task. New approaches are needed for the use of multi-scale and multi-source remote sensing data, together with ground-based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link ground-based data to the more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania this is an important concern, and the traditional pixel-based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationships between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.
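
    The Kappa index of agreement quoted above is computed from the classification confusion matrix as (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e chance agreement. A self-contained sketch with a toy two-class matrix (the study's actual matrices are not reproduced here):

```python
def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: reference,
    columns: classified): (p_o - p_e) / (1 - p_e)."""
    n = sum(sum(row) for row in cm)
    p_o = sum(cm[i][i] for i in range(len(cm))) / n            # observed agreement
    p_e = sum(sum(row) * sum(col)                              # chance agreement
              for row, col in zip(cm, zip(*cm))) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# toy 2-class accuracy assessment: 45 + 40 correct out of 100 samples
cm = [[45, 5],
      [10, 40]]
print(round(cohens_kappa(cm), 3))
```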

  14. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    NASA Astrophysics Data System (ADS)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications, including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km² each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominating land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m² and contained additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters. Fifteen homogeneous training areas representing each cover class were chosen. Classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which integrates both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage due to
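
    The nDSM raster used above is the per-cell difference between the surface model and the bare-earth terrain model, nDSM = DSM - DTM, i.e. object height above ground. A minimal sketch on toy elevations (values invented):

```python
def ndsm(dsm, dtm):
    """Normalised Digital Surface Model: per-cell object height above
    ground, nDSM = DSM - DTM (surface minus terrain elevation)."""
    return [[s - t for s, t in zip(rs, rt)] for rs, rt in zip(dsm, dtm)]

dsm = [[210.0, 215.5],   # surface elevations (terrain plus buildings/trees)
       [208.0, 206.0]]
dtm = [[205.0, 205.5],   # bare-earth terrain elevations
       [205.0, 205.0]]
print(ndsm(dsm, dtm))
```

    Because the nDSM encodes height rather than absolute elevation, buildings and tall vegetation separate cleanly from ground classes regardless of the local terrain level, which is why it is such a useful band in both classification approaches.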

  15. Advancing the integration of hospital IT. Pitfalls and perspectives when replacing specialized software for high-risk environments with enterprise system extensions.

    PubMed

    Engelmann, Carsten; Ametowobla, Dzifa

    2017-05-17

    Planning and controlling surgical operations hugely impact productivity, patient safety, and surgeons' careers. Established, specialized software for this task is being increasingly replaced by "Operating Room (OR) modules" appended to enterprise-wide resource planning (ERP) systems. As a result, usability problems are re-emerging and require developers' attention. Objective: systematic evaluation of the functionality and social repercussions of a global, market-leading IT business control system (SAP R/3, Germany), adapted for real-time OR process steering. Methods: field study involving document analyses, interviews, and a 73-item survey addressed to 77 qualified (> 1-year system experience) senior planning executives (end users; "planners") working in surgical departments of university hospitals. Planners reported that 57% of electronic operation requests contained contradictory information. Key screens contained clinically irrelevant areas (36 +/- 29%). Compared to the legacy system, users reported either no improvement or worse performance with regard to co-ordination of OR stakeholders, intra-day program changes, and safety. Planners concluded that the ERP planning module was "non-intuitive" (66%), increased planning work (56%, p = 0.002), and did not affect either the spectrum or the frequency of organizational mishaps. Interviews evidenced intra-institutional power shifts due to increased system complexity. Planners resented, e.g., a trend towards increased personal culpability for mishaps. Highly complex enterprise system extensions may not be directly suited to specific process steering tasks in a high-risk/low-error environment like the OR. In view of surgeons' high primary task load, the repeated call for simpler IT is an imperative for ERP extensions. System design should consider (a) that current OR IT suffers from an input limitation regarding planning-relevant real-time data, and (b) that there are social processes that strongly affect planning and particularly ERP use beyond

  16. Mapping Arctic Ocean Coastline Change With Landsat Archive Data And Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Hulslander, D.

    2010-12-01

    The melting of arctic permafrost is a significant effect of climate change. The combination of rising sea level, longer periods of ice-free conditions in the Arctic Ocean and melting permafrost can greatly accelerate coastline changes in general and arctic coastal erosion in particular. Anderson et al. (2009; Geology News) have measured erosion rates of 15 m per year at sites along the Alaskan Arctic Ocean coastline dominated by ice-cemented peats and silt-rich permafrost. With over 45,000 km of Arctic Ocean coastline, it is important that coastline movement and transgressive oceanic regimes be mapped and tracked with accurate data. Determining historic coastal erosion rates for this region is as important as mapping the current extent of the phenomenon, to create as complete a picture as possible and locate where rapid erosion is an emergent process. The extent of the area involved, combined with its inaccessibility and inhospitable conditions, makes geologic remote sensing an appropriate tool for characterizing Arctic Ocean coastal erosion. Traditional weaknesses associated with using remote sensing in the geosciences have included a lack of historical data or baseline information as well as difficulties in systematization of feature mapping. Using object-based image analysis on Landsat archive data can overcome these issues and may allow a multi-decadal map of Arctic Ocean coastline changes. The Landsat family of sensors (MSS 1-3 and TM/ETM 4, 5, and 7) has been providing imagery as frequently as every 16 days since July 1972. The frequent revisits maximize the chance of getting cloud-free imagery at least once per year in most study areas. Also, Landsat data are well characterized, extensively studied, and freely available from the USGS EROS Data Center Archive, making it an ideal and stable source of data for mapping the Arctic Ocean coastline. 
Delineating large sections of coastline from imagery by hand digitization would be impractical due to the

  17. Using object-based geomorphometry for hydro-geomorphological analysis in a Mediterranean research catchment

    NASA Astrophysics Data System (ADS)

    Guida, Domenico; Cuomo, Albina; Palmieri, Vincenzo

    2016-08-01

    The aim of the paper is to apply an object-based geomorphometric procedure to define the runoff contribution areas and support a hydro-geomorphological analysis of a 3 km2 Mediterranean research catchment (southern Italy). Daily and sub-hourly discharge and electrical conductivity data were collected and recorded during a 3-year monitoring activity. Hydro-chemograph analyses carried out on these data revealed a strongly seasonal hydrological response in the catchment, with stormflow events that differ between wet and dry periods. This analysis enabled us to define the hydro-chemograph signatures related to increasing flood magnitude, which progressively involves various runoff components (baseflow, subsurface flow and surficial flow) and an increasing area contributing to discharge. Field surveys and water table/discharge measurements carried out during a selected storm event enabled us to identify and map specific runoff source areas with homogeneous geomorphological units previously defined as hydro-geomorphotypes (spring points, diffuse seepage along the main channel, seepage along the riparian corridors, diffuse outflow from hillslope taluses and concentrated sapping from colluvial hollows). Following the procedures previously proposed and used by the authors for object-based geomorphological mapping, a hydro-geomorphologically oriented segmentation and classification was performed with the eCognition (Trimble, Inc.) package. The best agreement with the expert-based geomorphological mapping was obtained with weighted plan curvature at different-sized windows. By combining the hydro-chemical analysis and the object-based hydro-geomorphotype map, the variability of the contribution areas was graphically modeled for the selected event, which occurred during the wet season, by using the log values of flow accumulation that better fit the contribution areas. 
The results allow us to identify the runoff component on hydro-chemographs for each time step

  18. Mental rotation performance in soccer players and gymnasts in an object-based mental rotation task

    PubMed Central

    Jansen, Petra; Lehmann, Jennifer

    2013-01-01

    In this study, the effect of motor expertise on an object-based mental rotation task was investigated. 60 males and 60 females (40 soccer players, 40 gymnasts, and 40 non-athletes, equivalent males and females in each group) solved a psychometric mental rotation task with both cube and human figures. The results revealed that all participants had a higher mental rotation accuracy for human figures compared to cubed figures, that the gender difference was reduced with human figures, and that gymnasts demonstrated a better mental rotation performance than non-athletes. The results are discussed against the background of the existing literature on motor experts, mental rotation performance as well as the importance of the testing situation and the test construction. PMID:23833695

  19. 3D-modeling of deformed halite hopper crystals by Object Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Leitner, Christoph; Hofmann, Peter; Marschallinger, Robert

    2014-12-01

    Object Based Image Analysis (OBIA) is an established method for analyzing multiscale and multidimensional imagery in a range of disciplines. In the present study this method was used for the 3D reconstruction of halite hopper crystals in a mudrock sample, based on Computed Tomography data. To quantitatively assess the reliability of OBIA results, they were benchmarked against a corresponding "gold standard", a reference 3D model of the halite crystals that was derived by manual expert digitization of the CT images. For accuracy assessment, classical per-scene statistics were extended to per-object statistics. The strength of OBIA was to recognize all objects similar to halite hopper crystals and in particular to eliminate cracks. Using a support vector machine (SVM) classifier on top of OBIA, unsuitable objects like halite crystal clusters, polyhalite-coated crystals and spherical halite crystals were effectively dismissed, but simultaneously the number of well-shaped halites was reduced.
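
The object-dismissal step described above can be illustrated with a deliberately simplified stand-in: instead of the study's SVM classifier on OBIA features, the sketch below thresholds two hypothetical per-object shape statistics. The feature names (`cubicity`, `sphericity`) and the threshold values are invented for illustration only.

```python
# Hypothetical per-object shape features; the study's actual OBIA feature
# set and SVM classifier are not reproduced here.
def filter_halite_candidates(objects, min_cubicity=0.6, max_sphericity=0.8):
    """Keep objects whose shape statistics resemble hopper crystals.

    Each object is a dict with 'cubicity' (fit to a cube, 0..1) and
    'sphericity' (fit to a sphere, 0..1) -- illustrative features only.
    """
    kept = []
    for obj in objects:
        if obj["cubicity"] >= min_cubicity and obj["sphericity"] <= max_sphericity:
            kept.append(obj)
    return kept

candidates = [
    {"id": 1, "cubicity": 0.85, "sphericity": 0.40},  # hopper-like -> keep
    {"id": 2, "cubicity": 0.30, "sphericity": 0.95},  # spherical   -> dismiss
    {"id": 3, "cubicity": 0.70, "sphericity": 0.55},  # hopper-like -> keep
]
print([o["id"] for o in filter_halite_candidates(candidates)])  # -> [1, 3]
```

As in the study, such a filter trades recall for precision: tightening the thresholds dismisses crystal clusters and coated crystals but also drops some well-shaped halites.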

  20. A proto-object based saliency model in three-dimensional space.

    PubMed

    Hu, Brian; Kane-Jackson, Ralinkae; Niebur, Ernst

    2016-02-01

    Most models of visual saliency operate on two-dimensional images, using elementary image features such as intensity, color, or orientation. The human visual system, however, needs to function in complex three-dimensional environments, where depth information is often available and may be used to guide the bottom-up attentional selection process. In this report we extend a model of proto-object based saliency to include depth information and evaluate its performance on three separate three-dimensional eye tracking datasets. Our results show that the additional depth information provides a small, but statistically significant, improvement in the model's ability to predict perceptual saliency (eye fixations) in natural scenes. The computational mechanisms of our model have direct neural correlates, and our results provide further evidence that proto-objects help to establish perceptual organization of the scene. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Evaluation of an object-based data model implemented over a proprietary, legacy data model.

    PubMed Central

    Pollard, D. L.; Hales, J. W.

    1995-01-01

    Most computerized medical information today is contained in legacy systems. As vendors slowly move to open systems, legacy systems remain in use and contain valuable information. This paper evaluates the use of an object model imposed on an existing database to improve the ease with which data can be accessed. This study demonstrates that data elements can be retrieved without specific programming knowledge of the underlying data structure. It also suggests that underlying data structures can be changed without updating application code. Programs written using the object model were easier to develop but ran more than an order of magnitude slower than traditionally coded programs. In this paper, the legacy information system is introduced, the methods used to implement and evaluate the object-based data model are explained, and the results and conclusions are presented. PMID:8563303
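
The indirection the study evaluates can be sketched in a few lines: an object layer resolves logical attribute names through a mapping that describes the legacy storage layout, so application code never touches the raw structure. The field names and record layout below are invented for illustration; the extra lookup on every access also hints at why such layers can run much slower than direct access.

```python
# Invented legacy layout: each record is a plain list, and this mapping
# says which position holds which logical field.
LEGACY_LAYOUT = {"patient_name": 0, "patient_dob": 1}

class PatientRecord:
    """Object wrapper over a raw legacy record (illustrative sketch)."""

    def __init__(self, raw_record):
        self._raw = raw_record

    def __getattr__(self, name):
        # Resolve logical names via the layout table, hiding the structure.
        try:
            return self._raw[LEGACY_LAYOUT[name]]
        except KeyError:
            raise AttributeError(name)

rec = PatientRecord(["Doe, Jane", "1950-04-12"])
print(rec.patient_name)  # -> Doe, Jane
```

If the legacy layout changes, only `LEGACY_LAYOUT` needs updating; application code such as `rec.patient_name` is untouched, which is the maintainability benefit the paper reports alongside the performance cost.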

  3. Object formation in visual working memory: Evidence from object-based attention.

    PubMed

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects from sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception.

  4. Reprint of: Object-based attentional facilitation and inhibition are neuropsychologically dissociated.

    PubMed

    Smith, Daniel T; Ball, Keira; Swalwell, Robert; Schenk, Thomas

    2016-11-01

    Salient peripheral cues produce a transient shift of attention which is superseded by a sustained inhibitory effect. Cueing part of an object produces an inhibitory cueing effect (ICE) that spreads throughout the object. In dynamic scenes the ICE stays with objects as they move. We examined object-centred attentional facilitation and inhibition in a patient with visual form agnosia. There was no evidence of object-centred attentional facilitation. In contrast, object-centred ICE was observed in 3 out of 4 tasks. These inhibitory effects were strongest where cues to objecthood were highly salient. These data are evidence of a neuropsychological dissociation between the facilitatory and inhibitory effects of attentional cueing. From a theoretical perspective the findings suggest that 'grouped arrays' are sufficient for object-based inhibition, but insufficient to generate object-centred attentional facilitation.

  5. Building and Programming a Smart Robotic System for Distinguishing Objects Based on their Shape and Colour

    NASA Astrophysics Data System (ADS)

    Sharari, T. M.

    2015-03-01

    This paper presents a robotic system designed for holding and placing objects based on their colour and shape. The robot is given a complete set of instructions specifying the position and orientation angle of each manipulation motion. The main feature of this paper is that the developed robot uses a combination of vision and motion systems for holding and placing work-objects, mounted on a flat work-plane, based on their shapes and colours. This combination improves the flexibility of manipulation, which may help eliminate the need for some expensive manipulation tasks in a variety of industrial applications. The robotic system presented in this paper is designed as an educational robot capable of holding-and-placing operations with limited load. To process the various instructions for holding and placing the work objects, a main control unit, the Manipulation Control Unit (MCU), is used along with a slave unit that performs the actual instructions from the MCU.

  6. Survey of Object-Based Data Reduction Techniques in Observational Astronomy

    NASA Astrophysics Data System (ADS)

    Łukasik, Szymon; Moitinho, André; Kowalski, Piotr A.; Falcão, António; Ribeiro, Rita A.; Kulczycki, Piotr

    2016-01-01

    Dealing with astronomical observations represents one of the most challenging areas of big data analytics. Besides the huge variety of data types and the dynamics of continuous data flows from multiple sources, handling enormous volumes of data is essential. This paper provides an overview of methods aimed at reducing both the number of features/attributes and the number of data instances. It concentrates on data mining approaches that are not tied to specific instruments and observation tools but instead work on processed, object-based data. The main goal of this article is to describe existing datasets on which algorithms are frequently tested, to characterize and classify available data reduction algorithms, and to identify promising solutions capable of addressing present and future challenges in astronomy.
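
The two reduction directions the survey distinguishes, fewer features and fewer instances, can be sketched minimally: drop near-constant attributes and subsample rows. This is pure stdlib Python with invented data; real astronomical pipelines use far more sophisticated selection criteria.

```python
import random
import statistics

def reduce_features(rows, min_variance=1e-6):
    """Drop attributes whose variance across instances is ~zero."""
    cols = list(zip(*rows))
    keep = [i for i, col in enumerate(cols)
            if statistics.pvariance(col) > min_variance]
    return [[row[i] for i in keep] for row in rows], keep

def reduce_instances(rows, fraction=0.5, seed=0):
    """Uniform random subsampling of instances."""
    rng = random.Random(seed)
    k = max(1, int(len(rows) * fraction))
    return rng.sample(rows, k)

# Invented table: column 1 is constant and carries no information.
data = [[1.0, 7.2, 5.0], [2.0, 7.2, 3.0], [3.0, 7.2, 4.0], [4.0, 7.2, 6.0]]
reduced, kept_cols = reduce_features(data)
print(kept_cols)                         # -> [0, 2]
print(len(reduce_instances(data, 0.5)))  # -> 2
```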

  7. Object-based class modelling for multi-scale riparian forest habitat mapping

    NASA Astrophysics Data System (ADS)

    Strasser, Thomas; Lang, Stefan

    2015-05-01

    Object-based class modelling allows for mapping complex, hierarchical habitat systems. The riparian zone, including its forests, represents such a complex ecosystem. Forests within riparian zones are biologically highly productive and characterized by rich biodiversity; they are thus considered of high community interest, with an imperative to be protected and regularly monitored. Satellite earth observation (EO) provides tools for capturing the current state of forest habitats, such as forest composition including the intermixture of non-native tree species. Here we present a semi-automated object-based image analysis (OBIA) approach for the mapping of riparian forests by applying class modelling of habitats based on the European Nature Information System (EUNIS) habitat classifications and Annex 1 of the European Habitats Directive (HabDir). A very high resolution (VHR) WorldView-2 satellite image provided the required spatial and spectral detail for multi-scale image segmentation and rule-base composition to generate a six-level hierarchical representation of riparian forest habitats. Habitats were thereby hierarchically represented within an image object hierarchy as forest stands, stands of homogeneous tree species, and single trees represented by sunlit tree crowns. 522 EUNIS level 3 (EUNIS-3) habitat patches with a mean patch size (MPS) of 12,349.64 m2 were modelled from 938 forest stand patches (MPS = 6868.20 m2) and 43,742 tree stand patches (MPS = 140.79 m2). The delineation quality of the modelled EUNIS-3 habitats (focal level) was quantitatively assessed against an expert-based visual interpretation, showing a mean deviation of 11.71%.

  8. Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.

    2012-10-01

    The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.
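
The core idea of reading accuracy off a matrix diagonal can be sketched generically: accumulate overlap areas by (reference class, classified class) pair and take each class's diagonal share. This is a simplified confusion-style illustration, not the OFA method itself; the class names and areas below are invented.

```python
from collections import defaultdict

def coincidence_scores(pairs):
    """Per-class coincidence from (reference, classified, area) tuples.

    The (ref, ref) 'diagonal' cells hold the area where the reference
    class and the classification agree; each score is that diagonal
    area as a share of the reference class's total area.
    """
    matrix = defaultdict(float)
    totals = defaultdict(float)
    for ref, cls, area in pairs:
        matrix[(ref, cls)] += area
        totals[ref] += area
    return {ref: matrix[(ref, ref)] / tot for ref, tot in totals.items()}

pairs = [("pine", "pine", 70.0), ("pine", "shrub", 30.0),
         ("shrub", "shrub", 40.0), ("shrub", "pine", 60.0)]
scores = coincidence_scores(pairs)
print(scores["pine"])   # -> 0.7
print(scores["shrub"])  # -> 0.4
```

A class scoring well (like the 70% "pine" here) agrees with the reference both spatially and thematically; a low score (the 40% "shrub") mirrors the weak shrub-stand result reported above.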

  9. Object-based "dynamic cover types" - a new framework for monitoring landscape-level ecosystem change

    NASA Astrophysics Data System (ADS)

    Dronova, I.; Wang, L.; Gong, P.; Zhong, L.

    2012-12-01

    Traditional analyses of ecosystem change with remote sensing data often focus on transitions between 'static' landscape cover types. However, in dynamic landscapes with frequent disturbance, long-term surface trends may be obscured by intermediate shorter-term variation. Availability of high-quality remote sensing data is often inconsistent among change periods, which contributes to the uncertainty in change detection among 'static' classes. As an alternative, we propose Dynamic Cover Types (DCTs) to characterize highly variable areas based on their nested change regimes shaped by climate, phenology and disturbance. We define DCTs as sequences of surface transformations that have distinct temporal trajectories observable across landscapes within a given change period. To illustrate and test this concept, we combined multispectral and microwave satellite imagery to classify DCTs for a large, complex, seasonally inundated freshwater wetland in China in 2007-2008. Instead of using pixels, we mapped DCTs using object-based image analysis and supervised machine-learning algorithms to characterize common change types based on their spatial and temporal context. Spatial distributions of mapped DCTs simultaneously reflected several key drivers of wetland change, including broad-scale changes in submersion times, vegetation phenology and prevalence of plant cover, and localized fine-scale disturbance. We further examined DCT response to a hypothetical scenario of a warmer, wetter early spring by substituting spring 2008 images with 2007 ones. In this comparison, the strongest response was detected from DCTs that were closely associated with the water body and represented critical habitat for wintering migratory waterbirds in this area. Results indicate that object-based dynamic class boundaries may provide useful spatial units to highlight characteristic types of landscape change for environmental research, ecosystem monitoring and management considerations.

  10. Influence of galvanic vestibular stimulation on egocentric and object-based mental transformations.

    PubMed

    Lenggenhager, Bigna; Lopez, Christophe; Blanke, Olaf

    2008-01-01

    The vestibular system analyses angular and linear accelerations of the head, which are important information for perceiving the location of one's own body in space. Vestibular stimulation, in particular galvanic vestibular stimulation (GVS), which allows systematic modification of vestibular signals, has so far mainly been used to investigate vestibular influence on sensori-motor integration in eye movements and postural control. Comparatively few behavioural and imaging studies have investigated how cognition of space and body may depend on vestibular processing. This study was designed to differentiate the influence of left versus right anodal GVS, compared to sham stimulation, on object-based versus egocentric mental transformations. While GVS was applied, subjects made left-right judgments about pictures of a plant or a human body presented at different orientations in the roll plane. All subjects reported illusory sensations of body self-motion and/or visual field motion during GVS. Response times in the mental transformation task were increased during right but not left anodal GVS for the more difficult stimuli and the larger angles of rotation. Post-hoc analyses suggested that the interfering effect of right anodal GVS was present only in subjects who reported having imagined turning themselves to solve the mental transformation task (egocentric transformation), as compared to those who had imagined turning the picture in space (object-based mental transformation). We suggest that this effect relies on shared functional and cortical mechanisms in the posterior parietal cortex associated with both right anodal GVS and mental imagery.

  11. Top-down facilitation of visual object recognition: object-based and context-based contributions.

    PubMed

    Fenske, Mark J; Aminoff, Elissa; Gronau, Nurit; Bar, Moshe

    2006-01-01

    The neural mechanisms subserving visual recognition are traditionally described in terms of bottom-up analysis, whereby increasingly complex aspects of the visual input are processed along a hierarchical progression of cortical regions. However, the importance of top-down facilitation in successful recognition has been emphasized in recent models and research findings. Here we consider evidence for top-down facilitation of recognition that is triggered by early information about an object, as well as by contextual associations between an object and other objects with which it typically appears. The object-based mechanism is proposed to trigger top-down facilitation of visual recognition rapidly, using a partially analyzed version of the input image (i.e., a blurred image) that is projected from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC information that is back-projected as "initial guesses" to the temporal cortex where it presensitizes the most likely interpretations of the input object. In addition to this object-based facilitation, a context-based mechanism is proposed to trigger top-down facilitation through contextual associations between objects in scenes. These contextual associations activate predictive information about which objects are likely to appear together, and can influence the "initial guesses" about an object's identity. We have shown that contextual associations are analyzed by a network that includes the parahippocampal cortex and the retrosplenial complex. The integrated proposal described here is that object- and context-based top-down influences operate together, promoting efficient recognition by framing early information about an object within the constraints provided by a lifetime of experience with contextual associations.

  12. Object-based cloud detection of multitemporal high-resolution stationary satellite images

    NASA Astrophysics Data System (ADS)

    Zheng, Lijuan; Wu, Yu; Yu, Tao; Yang, Jian; Zhang, Zhouwei

    2017-07-01

    Satellite remote sensing with highly accurate cloud detection is important for monitoring natural disasters. GaoFen-4, China's first high-resolution stationary satellite, was recently launched and acquires imagery at a spatial resolution of 50 m and a high temporal resolution (up to 10 min). An object-based cloud detection method was applied to a time series of GaoFen-4 images: cloudy objects were first detected in the individual images and then refined by outlier detection across the multitemporal objects. In the initial cloud detection, objects were segmented by the mean-shift algorithm, and their morphological features were extracted by extended attribute profiles. Threshold-detected cloudy objects were used for training on spectral and morphological features, and the initial objects were classified as cloudy or clear by a regularized least-squares classifier. The medians and standard deviations of the classified cloudy and clear object features were then calculated for refinement by outlier detection across the multitemporal images: clear objects whose features deviated by more than a multiple of the standard deviation from the medians of the clear objects were reclassified as cloudy. The refined clear objects were obtained by a similar outlier detection method. Flood event monitoring using GaoFen-4 images showed that the average overall accuracy of the initial cloud detection was 83.4% and increased to 93.3% after refinement. This object-based cloud detection method is insensitive to variations in land objects and can effectively improve cloud detection within small or thin areas, which can be helpful for the monitoring of natural disasters.
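
The refinement step, reclassifying objects whose features deviate strongly from their class median, can be sketched for a single scalar feature per object. The feature values and the multiplier k below are invented; the paper operates on multitemporal, multi-feature objects.

```python
import statistics

def refine_clear_objects(features, k=2.0):
    """Flip initially 'clear' objects to 'cloudy' when their feature value
    deviates more than k standard deviations from the clear-class median.
    One scalar feature per object -- a simplification of the paper's setup."""
    med = statistics.median(features.values())
    std = statistics.pstdev(features.values())
    return {oid: ("cloudy" if abs(v - med) > k * std else "clear")
            for oid, v in features.items()}

# Brightness-like values for objects first labelled 'clear'; object 'e'
# is an outlier and gets reclassified as cloudy.
clear = {"a": 0.20, "b": 0.22, "c": 0.19, "d": 0.21, "e": 0.90}
labels = refine_clear_objects(clear, k=2.0)
print(labels["e"])  # -> cloudy
print(labels["a"])  # -> clear
```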

  13. A novel fractal monocular and stereo video codec with object-based functionality

    NASA Astrophysics Data System (ADS)

    Zhu, Shiping; Li, Liyun; Wang, Zaikuo

    2012-12-01

    Based on the classical fractal video compression method, an improved monocular fractal compression method is proposed that uses a more effective macroblock partition scheme instead of the classical quadtree partition scheme, improved fast motion estimation to increase calculation speed, and an I-frame scheme similar to that of H.264. The monocular codec uses a motion-compensated prediction (MCP) structure. A stereo fractal video coding method is also proposed that matches each macroblock against two reference frames in the left and right views, increasing the compression ratio and reducing the bit rate/bandwidth needed to transmit compressed video data; the stereo codec combines MCP with disparity-compensated prediction. Finally, a new object-based fractal video coding method is proposed in which each object can be encoded and decoded independently, with a higher compression ratio and speed and a lower bit rate/bandwidth when transmitting compressed stereo video data. Experimental results indicate that the proposed monocular method raises the compression ratio 3.6 to 7.5 times, speeds up compression 5.3 to 22.3 times, and improves image quality by 3.81 to 9.24 dB in comparison with circular prediction mapping and non-contractive interframe mapping. The PSNR of the proposed stereo video coding is about 0.17 dB higher than that of the proposed monocular video coding, and 0.69 dB higher than that of JMVC 4.0 on average. Compared with the bit rates of the proposed monocular video coding and JMVC 4.0, the proposed stereo video coding achieves, on average, bit rate savings of 2.53 and 21.14 Kbps, respectively. The proposed object-based fractal monocular and stereo video coding methods are simple and effective, and they make applications of fractal monocular and stereo video coding more flexible and practicable.

  14. Evaluating fuzzy operators of an object-based image analysis for detecting landslides and their changes

    NASA Astrophysics Data System (ADS)

    Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei

    2017-09-01

    This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from a landslide training dataset. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
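
The fuzzy operators compared in the study have standard definitions that are easy to state in code. The sketch below shows the minimum (AND), maximum (OR), algebraic product (often written AND (*)), and arithmetic mean operators applied to invented membership values; it does not reproduce the paper's membership functions or data.

```python
def fuzzy_and(memberships):   # minimum operator
    return min(memberships)

def fuzzy_or(memberships):    # maximum operator
    return max(memberships)

def fuzzy_and_product(memberships):  # algebraic product, AND (*)
    p = 1.0
    for m in memberships:
        p *= m
    return p

def fuzzy_mean(memberships):  # arithmetic mean operator
    return sum(memberships) / len(memberships)

# Membership values of one image object in, say, slope, texture and
# spectral landslide criteria (invented numbers):
m = [0.9, 0.7, 0.8]
print(fuzzy_and(m))                    # -> 0.7
print(fuzzy_or(m))                     # -> 0.9
print(round(fuzzy_and_product(m), 3))  # -> 0.504
print(round(fuzzy_mean(m), 3))         # -> 0.8
```

The operators differ in strictness: AND is driven by the weakest criterion, the product penalizes every weak criterion at once, and OR is satisfied by the strongest one, which is one reason their classification accuracies can diverge as reported above.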

  15. A bio-inspired method and system for visual object-based attention and segmentation

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
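
The flooding step, breaking an image into proto-objects by growing regions of similar feature values, can be sketched as a breadth-first flood fill. This is a simplified stand-in for the published system: 4-connectivity, a single scalar feature per pixel, and the threshold value are all assumptions made for illustration.

```python
from collections import deque

def grow_proto_objects(feature_map, threshold=0.2):
    """Partition a 2-D feature map into labelled proto-object regions.

    4-connected pixels join a region while their feature value stays
    within `threshold` of the region's seed value.
    """
    h, w = len(feature_map), len(feature_map[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue  # already assigned to a region
            next_label += 1
            seed = feature_map[sy][sx]
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:  # breadth-first flood fill from the seed
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and abs(feature_map[ny][nx] - seed) <= threshold):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels

img = [[0.1, 0.1, 0.9],
       [0.1, 0.2, 0.9],
       [0.8, 0.9, 0.9]]
print(grow_proto_objects(img))  # -> [[1, 1, 2], [1, 1, 2], [2, 2, 2]]
```

Each labelled region is a candidate proto-object whose boundary can then be handed to a recognition module, mirroring the front-end role the system plays above.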

  16. Software Prototyping

    PubMed Central

    Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R.

    2016-01-01

    Summary Background Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Objective To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Methods Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Results Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) the time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system. PMID:27081404
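    The SUS evaluation mentioned above uses a fixed scoring rule; a sketch of the standard computation (the response vector is hypothetical, not the study's data):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled to a 0-100 range."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```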

  17. Software Surrogate

    NASA Technical Reports Server (NTRS)

    1999-01-01

    In 1994, Blackboard Technology received a NASA Phase I SBIR award entitled "A Blackboard-Based Framework for Mixed-Initiative, Crewed- Space-System Applications." This research continued in Phase II at JSC, where a generic architecture was developed in which a software surrogate serves as the operator's representative in the fast-paced realm of nearly autonomous, intelligent systems. This SBIR research effort addressed the need to support human-operator monitoring and intervention with intelligent systems such as those being developed for NASA's crewed space program.

  18. Analysis Software

    NASA Technical Reports Server (NTRS)

    1994-01-01

    General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.

  19. Simulation Software

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Various NASA Small Business Innovation Research grants from Marshall Space Flight Center, Langley Research Center and Ames Research Center were used to develop the 'kernel' of COMCO's modeling and simulation software, the PHLEX finite element code. NASA needed it to model designs of flight vehicles; one of many customized commercial applications is UNISIM, a PHLEX-based code for analyzing underground flows in oil reservoirs for Texaco, Inc. COMCO's products simulate a computational mechanics problem, estimate the solution's error and produce the optimal hp-adapted mesh for the accuracy the user chooses. The system is also used as a research or training tool in universities and in mechanical design in industrial corporations.

  20. Expert System Software Assistant for Payload Operations

    NASA Technical Reports Server (NTRS)

    Rogers, Mark N.

    1997-01-01

    The broad objective of this expert system software based application was to demonstrate the enhancements and cost savings that can be achieved through expert system software utilization in a spacecraft ground control center. Spacelab provided a valuable proving ground for this advanced software technology; a technology that will be exploited and expanded for future ISS operations. Our specific focus was on demonstrating payload cadre command and control efficiency improvements through the use of "smart" software which monitors flight telemetry, provides enhanced schematic-based data visualization, and performs advanced engineering data analysis.

  1. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  2. CLARAty Functional-Layer Software

    NASA Technical Reports Server (NTRS)

    Nesnas, Issa; Volpe, Richard; Das, Hari; Mutz, Darren; Petras, Richard; Estlin, Tara

    2003-01-01

    Functional-layer software for the Coupled Layer Architecture for Robotics Autonomy (CLARAty) is being developed. [CLARAty was described in Coupled-Layer Architecture for Advanced Software for Robots (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: CLARAty was proposed to improve the modularity of robotic software while tightening the coupling between planning/execution and control subsystems. Whereas prior robotic software architectures have typically contained three levels, the CLARAty architecture contains two layers: a decision layer and a functional layer.] Just as an operating system provides abstraction from computational hardware, the CLARAty functional-layer software provides abstraction for the different robotic systems. The functional-layer software establishes interrelated, object-oriented hierarchies that contain active and passive objects representing the different levels of system abstractions and components. The functional-layer software is decomposed into a set of reusable core components and a set of extended components that adapt the reusable set to specific hardware implementations. The reusable components (a) provide behavior and interface definitions and implementations of basic functionality, (b) provide local executive capabilities, (c) manage local resources, and (d) support state and resource queries by the decision layer. Software for robotic systems can be built by use of these components.
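    The reusable-core / extended-component split can be sketched in object-oriented terms (the class names below are hypothetical illustrations, not actual CLARAty identifiers):

```python
from abc import ABC, abstractmethod

class MotorBase(ABC):
    """Reusable core component: behavior and interface definition
    plus an implementation of basic shared functionality."""
    @abstractmethod
    def set_speed(self, rad_per_s: float) -> None: ...

    def stop(self) -> None:
        # Basic functionality shared by every motor, hardware-independent.
        self.set_speed(0.0)

class SimulatedMotor(MotorBase):
    """Extended component: adapts the reusable interface to one specific
    'hardware' implementation (here a simulation stand-in)."""
    def __init__(self):
        self.speed = None

    def set_speed(self, rad_per_s):
        # A real adapter would talk to a device driver here.
        self.speed = rad_per_s

m = SimulatedMotor()
m.set_speed(1.5)
m.stop()
print(m.speed)  # 0.0
```

Higher layers (the decision layer, in CLARAty terms) program against `MotorBase` only, so swapping hardware means swapping the extended component, not the client code.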

  3. Quantification of Impervious Surfaces Along the Wasatch Front, Utah: AN Object-Based Image Analysis Approach to Identifying AN Indicator for Wetland Stress

    NASA Astrophysics Data System (ADS)

    Leydsman-McGinty, E. I.; Ramsey, R. D.; McGinty, C.

    2013-12-01

    The Remote Sensing/GIS Laboratory at Utah State University, in cooperation with the United States Environmental Protection Agency, is quantifying impervious surfaces for three watershed sub-basins in Utah. The primary objective of developing watershed-scale quantifications of impervious surfaces is to provide an indicator of potential impacts to wetlands that occur within the Wasatch Front and along the Great Salt Lake. A geospatial layer of impervious surfaces can assist state agencies involved with Utah's Wetlands Program Plan (WPP) in understanding the impacts of impervious surfaces on wetlands, as well as support them in carrying out goals and actions identified in the WPP. The three watershed sub-basins, Lower Bear-Malad, Lower Weber, and Jordan, span the highly urbanized Wasatch Front and are consistent with focal areas in need of wetland monitoring and assessment as identified in Utah's WPP. Geospatial layers of impervious surface currently exist in the form of national and regional land cover datasets; however, these datasets are too coarse to be utilized in fine-scale analyses. In addition, the pixel-based image processing techniques used to develop these coarse datasets have proven insufficient in smaller scale or detailed studies, particularly when applied to high-resolution satellite imagery or aerial photography. Therefore, object-based image analysis techniques are being implemented to develop the geospatial layer of impervious surfaces. Object-based image analysis techniques employ a combination of both geospatial and image processing methods to extract meaningful information from high-resolution imagery. Spectral, spatial, textural, and contextual information is used to group pixels into image objects and then subsequently used to develop rule sets for image classification. eCognition, an object-based image analysis software program, is being utilized in conjunction with one-meter resolution National Agriculture Imagery Program (NAIP) aerial

  4. Object-Based Classification as an Alternative Approach to the Traditional Pixel-Based Classification to Identify Potential Habitat of the Grasshopper Sparrow

    NASA Astrophysics Data System (ADS)

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.
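    The threshold-based assignment of objects to classes can be sketched as follows (the attributes, thresholds, and class names are hypothetical; the actual interpretation key is study-specific):

```python
def classify_object(obj):
    """Assign a land cover class to an image object from its attributes
    via simple threshold rules (illustrative interpretation key)."""
    if obj["ndvi"] < 0.2:
        return "bare/urban"
    if obj["ndvi"] > 0.6 and obj["texture"] < 0.3:
        return "grassland"          # candidate open-habitat class
    return "other vegetation"

# Hypothetical segmented objects with per-object mean attributes:
objects = [
    {"ndvi": 0.7, "texture": 0.1},
    {"ndvi": 0.1, "texture": 0.5},
]
print([classify_object(o) for o in objects])  # ['grassland', 'bare/urban']
```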

  5. Object-based classification as an alternative approach to the traditional pixel-based classification to identify potential habitat of the grasshopper sparrow.

    PubMed

    Jobin, Benoît; Labrecque, Sandra; Grenier, Marcelle; Falardeau, Gilles

    2008-01-01

    The traditional method of identifying wildlife habitat distribution over large regions consists of pixel-based classification of satellite images into a suite of habitat classes used to select suitable habitat patches. Object-based classification is a new method that can achieve the same objective based on the segmentation of spectral bands of the image, creating polygons that are homogeneous with regard to spatial or spectral characteristics. The segmentation algorithm does not rely solely on the single pixel value, but also on shape, texture, and pixel spatial continuity. Object-based classification is a knowledge-based process in which an interpretation key is developed using ground control points and objects are assigned to specific classes according to threshold values of determined spectral and/or spatial attributes. We developed a model using the eCognition software to identify suitable habitats for the Grasshopper Sparrow, a rare and declining species found in southwestern Québec. The model was developed in a region with known breeding sites and applied to other images covering adjacent regions where potential breeding habitats may be present. We were successful in locating potential habitats in areas where dairy farming prevailed but failed in an adjacent region covered by a distinct Landsat scene and dominated by annual crops. We discuss the added value of this method, such as the possibility of using the contextual information associated with objects and the ability to eliminate unsuitable areas in the segmentation and land cover classification processes, as well as technical and logistical constraints. A series of recommendations on the use of this method and on conservation issues of Grasshopper Sparrow habitat is also provided.

  6. Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Syed, Abdul Haleem

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single pixels to groups of pixels, called 'objects', through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: hierarchical segmentation and supervised classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined object map. Classification performance is evaluated by comparing results of the multi-cut-and-fuse approach (proposed) to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories. Another active obstacle, presented by today's remotely sensed images, is the volume of information produced by our modern sensors with high spatial and
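    The scale-fusion idea can be sketched as follows: each category keeps the label map produced at its own characteristic scale, and the per-category maps are merged into one combined map (the priority ordering and flat map layout are assumptions for illustration, not the thesis's exact procedure):

```python
def fuse_scales(maps_by_scale, scale_of_class, priority):
    """Fuse per-scale classification maps into one combined object map.
    maps_by_scale[s] is a flat label map produced at segmentation scale s;
    scale_of_class[c] is the characteristic scale estimated for class c;
    classes earlier in `priority` (e.g. rare ones) claim pixels first."""
    n = len(next(iter(maps_by_scale.values())))
    fused = [None] * n
    for c in priority:
        m = maps_by_scale[scale_of_class[c]]
        for i in range(n):
            if fused[i] is None and m[i] == c:
                fused[i] = c
    return fused

# Hypothetical maps: a fine scale preserves the rare class, a coarse
# scale is more reliable for the abundant class.
maps_by_scale = {
    "fine":   ["mine", "grass", "grass", "mine"],
    "coarse": ["grass", "grass", "grass", "grass"],
}
scale_of_class = {"mine": "fine", "grass": "coarse"}
print(fuse_scales(maps_by_scale, scale_of_class, priority=["mine", "grass"]))
# ['mine', 'grass', 'grass', 'mine']
```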

  7. Object Based Image Analysis Combining High Spatial Resolution Imagery and Laser Point Clouds for Urban Land Cover

    NASA Astrophysics Data System (ADS)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    With the rapid development of sensor technology, high spatial resolution imagery and airborne Lidar point clouds can be captured nowadays, which makes classification, extraction, evaluation and analysis of a broad range of object features available. High resolution imagery, Lidar datasets and parcel maps can be widely used for classification as information carriers. Therefore, refinement of object classification is made possible for urban land cover. The paper presents an approach to object-based image analysis (OBIA) combining high spatial resolution imagery and airborne Lidar point clouds. The advanced workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and laser point clouds were pre-processed to derive the parcel map of water bodies and the nDSM, respectively. Secondly, image objects are created via multi-resolution image segmentation integrating the scale parameter and the colour and shape properties with a compactness criterion. The image can be subdivided into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a rule set of a knowledge decision tree. These image objects are classified into six classes: water bodies, low vegetation/grass, tree, low building, high building and road. Finally, in order to assess the validity of the classification results for the six classes, accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area focuses on the test site Vaihingen/Enz and a patch of test datasets comes from the benchmark of the ISPRS WG III/4 test project. The classification results show high overall accuracy for most types of urban land cover. Overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high
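    The reported overall accuracy and Kappa coefficient are computed directly from the confusion matrix; a minimal sketch (the 2x2 matrix below is illustrative, not the study's data):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix,
    where cm[i][j] counts reference class i classified as class j."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n              # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)            # chance agreement
             for i in range(k)) / n ** 2
    return po, (po - pe) / (1 - pe)

cm = [[45, 5],
      [5, 45]]
oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))  # 0.9 0.8
```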

  8. Stand Replacing Disturbance History from Object-Based Image Analysis (OBIA) of LiDAR Data

    NASA Astrophysics Data System (ADS)

    Sanchez Lopez, N.; Hudak, A. T.; Boschetti, L.

    2016-12-01

    Spatially explicit information on the location, the extent, and the time since a stand-replacing forest disturbance occurred has the potential to improve the accuracy of carbon cycle models, and ultimately to reduce the uncertainties in the global carbon budget (Frolking et al., 2009). Earth observation optical satellite data offers a unique opportunity for systematic monitoring of stand-replacing disturbances (Hansen et al., 2013) by detecting the abrupt spectral changes induced by the disturbance, but discriminates poorly between stands of different age, as the spectral response of optical data saturates on closed-canopy forests. Thus, the potential of optical satellite data to reconstruct the disturbance history of a forest is limited by the short time series of suitable data (starting with the launch of Landsat-1 in 1972). In contrast, LiDAR data directly reflects stand characteristics such as height and density that can be correlated to the time since disturbance. In this study we focus on Object-Based Image Analysis (OBIA) of LiDAR data to identify forest stands (objects) based on the age since the last disturbance, to test whether it is possible to extend the disturbance history of a forest beyond what is possible with Landsat data. The study area was located in the Clear Creek watershed and the Selway River & Elk Creek (~54,000 ha) inside the Nez Perce-Clearwater National Forests (Idaho), using airborne LiDAR data collected in 2009 (Clear Creek watershed) and 2012 (Selway River & Elk Creek). Extensive datasets of disturbances are available over the study area: decadal maps of stand-replacing fires compiled from historical photographs are available from 1870 to 1940, and yearly clearcut maps compiled from timber harvest records are available from 1950 as part of the US Forest Service FACTS (Forest ACtivity Tracking System) dataset. Additionally, a field campaign was conducted in the summer of 2016 to collect additional measurements on plots of known

  9. Quantification and Analysis of Icebergs in a Tidewater Glacier Fjord Using an Object-Based Approach.

    PubMed

    McNabb, Robert W; Womble, Jamie N; Prakash, Anupma; Gens, Rudiger; Haselwimmer, Christian E

    2016-01-01

    Tidewater glaciers are glaciers that terminate in, and calve icebergs into, the ocean. In addition to the influence that tidewater glaciers have on physical and chemical oceanography, floating icebergs serve as habitat for marine animals such as harbor seals (Phoca vitulina richardii). The availability and spatial distribution of glacier ice in the fjords is likely a key environmental variable that influences the abundance and distribution of selected marine mammals; however, the amount of ice and the fine-scale characteristics of ice in fjords have not been systematically quantified. Given the predicted changes in glacier habitat, there is a need for the development of methods that could be broadly applied to quantify changes in available ice habitat in tidewater glacier fjords. We present a case study to describe a novel method that uses object-based image analysis (OBIA) to classify floating glacier ice in a tidewater glacier fjord from high-resolution aerial digital imagery. Our objectives were to (i) develop workflows and rule sets to classify high spatial resolution airborne imagery of floating glacier ice; (ii) quantify the amount and fine-scale characteristics of floating glacier ice; and (iii) develop processes for automating the object-based analysis of floating glacier ice for a large number of images from a representative survey day during June 2007 in Johns Hopkins Inlet (JHI), a tidewater glacier fjord in Glacier Bay National Park, southeastern Alaska. On 18 June 2007, JHI was comprised of brash ice (x̄ = 45.2%, SD = 41.5%), water (x̄ = 52.7%, SD = 42.3%), and icebergs (x̄ = 2.1%, SD = 1.4%). Average iceberg size per scene was 5.7 m2 (SD = 2.6 m2). We estimate the total area (± uncertainty) of iceberg habitat in the fjord to be 455,400 ± 123,000 m2. The method works well for classifying icebergs across scenes (classification accuracy of 75.6%); the largest classification errors occur in areas with

  10. Quantification and Analysis of Icebergs in a Tidewater Glacier Fjord Using an Object-Based Approach

    PubMed Central

    McNabb, Robert W.; Womble, Jamie N.; Prakash, Anupma; Gens, Rudiger; Haselwimmer, Christian E.

    2016-01-01

    Tidewater glaciers are glaciers that terminate in, and calve icebergs into, the ocean. In addition to the influence that tidewater glaciers have on physical and chemical oceanography, floating icebergs serve as habitat for marine animals such as harbor seals (Phoca vitulina richardii). The availability and spatial distribution of glacier ice in the fjords is likely a key environmental variable that influences the abundance and distribution of selected marine mammals; however, the amount of ice and the fine-scale characteristics of ice in fjords have not been systematically quantified. Given the predicted changes in glacier habitat, there is a need for the development of methods that could be broadly applied to quantify changes in available ice habitat in tidewater glacier fjords. We present a case study to describe a novel method that uses object-based image analysis (OBIA) to classify floating glacier ice in a tidewater glacier fjord from high-resolution aerial digital imagery. Our objectives were to (i) develop workflows and rule sets to classify high spatial resolution airborne imagery of floating glacier ice; (ii) quantify the amount and fine-scale characteristics of floating glacier ice; and (iii) develop processes for automating the object-based analysis of floating glacier ice for a large number of images from a representative survey day during June 2007 in Johns Hopkins Inlet (JHI), a tidewater glacier fjord in Glacier Bay National Park, southeastern Alaska. On 18 June 2007, JHI was comprised of brash ice (x̄ = 45.2%, SD = 41.5%), water (x̄ = 52.7%, SD = 42.3%), and icebergs (x̄ = 2.1%, SD = 1.4%). Average iceberg size per scene was 5.7 m2 (SD = 2.6 m2). We estimate the total area (± uncertainty) of iceberg habitat in the fjord to be 455,400 ± 123,000 m2. The method works well for classifying icebergs across scenes (classification accuracy of 75.6%); the largest classification errors occur in areas with densely-packed ice, low contrast between

  11. A Multidimensional Software Engineering Course

    ERIC Educational Resources Information Center

    Barzilay, O.; Hazzan, O.; Yehudai, A.

    2009-01-01

    Software engineering (SE) is a multidimensional field that involves activities in various areas and disciplines, such as computer science, project management, and system engineering. Though modern SE curricula include designated courses that address these various subjects, an advanced summary course that synthesizes them is still missing. Such a…

  13. A Mathematics Software Database Update.

    ERIC Educational Resources Information Center

    Cunningham, R. S.; Smith, David A.

    1987-01-01

    Contains an update of an earlier listing of software for mathematics instruction at the college level. Topics are: advanced mathematics, algebra, calculus, differential equations, discrete mathematics, equation solving, general mathematics, geometry, linear and matrix algebra, logic, statistics and probability, and trigonometry. (PK)

  14. Object Based Land Use and Land Cover Classification Using Sumbandilasat Imagery in Pretoria, South Africa

    NASA Astrophysics Data System (ADS)

    Mhangara, P.; Odindi, J.

    2012-04-01

    The launch of the SumbandilaSat satellite on 17 September 2009 ushered in new possibilities in land use/cover classification. SumbandilaSat has a ground sampling resolution of 6.25 m and possesses red, red-edge and NIR spectral bands. The objective of this study was to demonstrate the feasibility of using SumbandilaSat imagery to map urban land use and land cover in Pretoria, South Africa, with an object-based classification ruleset that uses spectral indices, Haralick texture measures and the original spectral bands. High classification accuracies were achieved for water, forest, urban built-up areas, irrigated-riparian vegetation, and grass and sparse vegetation, as shown by KIA values above 0.85. Bare soil achieved a relatively low classification accuracy of 0.65. This study also showcases the value of SumbandilaSat imagery and the efficacy of object-oriented techniques in land use/cover classification, as revealed by the overall accuracy of 0.93 and KIA of 0.91.

  15. Object-based image analysis for the impact of sewage pollution in Malad Creek, Mumbai, India.

    PubMed

    Shirke, Shivani; Pinto, Shannon M; Kushwaha, Vikash K; Mardikar, Trupti; Vijay, Ritesh

    2016-02-01

    Today, object-based image analysis provides an option for integrating spatial information beyond conventional pixel-based classifications for high-resolution imagery. Because object-based analysis has rarely been applied to pollution assessment, an attempt has been made here to assess the spatial extent of sewage pollution in Malad Creek, Mumbai, India. Based on multiresolution segmentation of an IRS P6 (LISS IV) image and the Normalized Difference Turbidity Index (NDTI), the various water quality regions in the creek were classified. The existing literature implies that the reflectance of turbid water is similar to that of bare soil, which gives positive NDTI values. In contrast, negative values of NDTI are observed in the present study due to the presence of organic matter, which absorbs light and imparts turbidity; this is supported by the significant correlation between NDTI and turbidity. A strong relationship is observed between turbidity and water quality parameters, implying the impact of organic matter through discharges of sewage into the creek. Based on the classified regions and the water quality parameters, the extent of pollution was ranked as high, moderate, low and least. The methodology developed in the present study was successfully applied to an IKONOS image of the same study area from a different time frame. The approach will help in impact assessment of sewage pollution and its spatial extent in other water bodies.
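    NDTI is commonly defined from the red and green bands as (Red - Green)/(Red + Green); the abstract does not restate the formula, so this definition is an assumption here, and the reflectance values are hypothetical:

```python
def ndti(red, green):
    """Normalized Difference Turbidity Index, commonly defined as
    (Red - Green) / (Red + Green) on per-pixel or per-object reflectance."""
    return (red - green) / (red + green)

# Hypothetical per-object mean reflectances:
print(round(ndti(0.08, 0.12), 3))  # -0.2  green-dominated, organic-rich water
print(round(ndti(0.15, 0.10), 3))  # 0.2   sediment-like (bare-soil-like) turbidity
```

Negative values arise whenever the green band exceeds the red band, which matches the organic-matter interpretation given above.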

  16. Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on Uav Video and Tls Data

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.

    2016-06-01

    This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which is equipped with a consumer digital camera, is used to collect dynamic videos to overcome its limited endurance capacity. Then, a set of 3D point clouds is generated from the video image sequences using the automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information beyond the reach of UAV imaging, e.g., partial building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds (UAV image reconstruction and TLS scanning) for complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm is introduced that uses local terrain-invariant regions in the combined registration. The experimental study is conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
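    A toy 2-D version of the ICP idea (nearest-neighbour matching followed by a closed-form rigid update, iterated; the paper's variant additionally restricts matching to local terrain-invariant regions, which is not reproduced here):

```python
import math

def best_rigid_2d(src, dst):
    """Closed-form 2D rigid transform (theta, t) minimizing
    sum |R p + t - q|^2 over paired points src[i] -> dst[i]."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n; cdy = sum(q[1] for q in dst) / n
    sdot = scross = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - csx, py - csy          # centered source point
        bx, by = qx - cdx, qy - cdy          # centered target point
        sdot += ax * bx + ay * by
        scross += ax * by - ay * bx
    theta = math.atan2(scross, sdot)
    c, s = math.cos(theta), math.sin(theta)
    return theta, (cdx - (c * csx - s * csy), cdy - (s * csx + c * csy))

def icp(src, dst, iters=10):
    """Toy ICP: pair each source point with its nearest target point,
    re-estimate the rigid transform, apply it, and repeat."""
    pts = list(src)
    for _ in range(iters):
        pairs = [min(dst, key=lambda q, p=p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
                 for p in pts]
        theta, (tx, ty) = best_rigid_2d(pts, pairs)
        c, s = math.cos(theta), math.sin(theta)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in pts]
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(x + 0.3, y + 0.1) for x, y in square]
aligned = icp(square, shifted)
print([(round(x, 2), round(y, 2)) for x, y in aligned])
# [(0.3, 0.1), (1.3, 0.1), (1.3, 1.1), (0.3, 1.1)]
```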

  17. Region segmentation techniques for object-based image compression: a review

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2004-10-01

    Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirement. This paper overviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region-growing techniques. Implementational discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.
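    The codebook-substitution idea (encode each region as the index of its nearest codebook entry, decode by table lookup) can be sketched with toy 1-D "textures"; the codebook and region vectors are hypothetical:

```python
def nearest_index(codebook, vec):
    """Codebook search: index of the entry closest to vec (squared distance)."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(codebook[k], vec)))

def encode(regions, codebook):
    """Compression step: each region content becomes a codebook pointer."""
    return [nearest_index(codebook, r) for r in regions]

def decode(indices, codebook):
    """Fast decompression via substitution (pure table lookup)."""
    return [codebook[i] for i in indices]

codebook = [[0, 0, 0], [5, 5, 5], [9, 9, 9]]   # region-content primitives
regions = [[1, 0, 1], [5, 6, 5], [8, 9, 9]]
idx = encode(regions, codebook)
print(idx)                    # [0, 1, 2]
print(decode(idx, codebook))  # [[0, 0, 0], [5, 5, 5], [9, 9, 9]]
```

The asymmetry described above is visible here: encoding pays for a search over the codebook, while decoding is a constant-time lookup per region.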

  18. A building extraction approach for Airborne Laser Scanner data utilizing the Object Based Image Analysis paradigm

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas

    2016-10-01

    In the past two decades, Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image sources such as Airborne Laser Scanner (ALS) point clouds. ALS data are represented as a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data to identify and extract buildings as 2D polygons representing roof outlines in a top-down mapping approach. We rasterized the ALS data into a height raster to generate a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Objects were then generated in conjunction with point statistics from the linked point cloud. Using class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). To demonstrate adaptation-free transferability to another data set, the algorithm was applied "as is" to the ISPRS benchmark data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy above 80%). The very high performance within the ISPRS benchmark without any modification of the algorithm or adaptation of parameters is particularly noteworthy.
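
The DSM/DEM step above is commonly followed by thresholding their difference (the normalized DSM) to flag elevated cells as building candidates. A minimal sketch under that assumption; the height threshold is illustrative, not the paper's parameter:

```python
def ndsm_building_mask(dsm, dem, min_height=2.5):
    """Normalized DSM (DSM - DEM) thresholded to flag elevated cells as
    building candidates. dsm and dem are row-major grids of heights in
    metres; min_height is an illustrative cutoff, not the paper's value."""
    return [[(s - g) >= min_height for s, g in zip(srow, grow)]
            for srow, grow in zip(dsm, dem)]
```

In an OBIA workflow the resulting mask would then be segmented into objects and refined with the per-object point statistics mentioned above.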

  19. Object-based illumination normalization for multi-temporal satellite images in urban area

    NASA Astrophysics Data System (ADS)

    Su, Nan; Zhang, Ye; Tian, Shu; Yan, Yiming

    2016-09-01

    Acquisition of multi-temporal satellite images under different illumination conditions causes radiometric differences that strongly affect image quality during remote sensing image processing. In particular, matching satellite stereo images with widely separated acquisition dates is very difficult for high-precision DSM generation in satellite photogrammetry. Illumination normalization is therefore a key technique for eliminating radiometric differences in image matching and other image applications. In this paper, we propose a novel object-based illumination normalization method to improve the matching of multi-temporal satellite stereo images of urban areas. The proposed method has two main steps: 1) object extraction and 2) multi-level illumination normalization. First, we propose an object extraction method that identifies the same objects across the multi-temporal satellite images while preserving their structural attributes. The multi-level illumination normalization then combines a gradient-domain method with singular value decomposition (SVD) according to the characteristic information of the relevant objects. The method substantially improves the illumination of object areas, benefiting image matching in urban scenes with multiple objects. A histogram similarity parameter and the matching rate are used for quantitative evaluation of illumination consistency. Experiments were conducted on satellite images of the same urban area with different acquisition dates to verify the effectiveness of the proposed method. The experimental results demonstrate good performance in comparison with other methods.
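
A single-level stand-in for the SVD part of the normalization can be sketched by rescaling an image's singular values against a reference; the paper's actual method is multi-level and also uses the gradient domain, which this toy omits:

```python
import numpy as np

def svd_illumination_match(image, reference):
    """Rescale the singular values of `image` so its dominant energy
    matches `reference`. A simplified, single-level illustration of
    SVD-based illumination compensation, not the paper's full scheme."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    _, s_ref, _ = np.linalg.svd(reference, full_matrices=False)
    scale = s_ref[0] / s[0] if s[0] else 1.0
    return U @ np.diag(s * scale) @ Vt
```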

  20. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2014-09-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  1. An object-based classification method for automatic detection of lunar impact craters from topographic data

    NASA Astrophysics Data System (ADS)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. Craters are also used as proxies for measuring the relative ages of various planetary or satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R² > 0.85) with the diameters of manually detected impact craters.
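
One plausible morphometric rule of the kind used to keep crater-like segments is circularity, 4πA/P²; the threshold below is illustrative, not the paper's:

```python
import math

def circularity(area, perimeter):
    """Shape criterion 4*pi*A / P^2: exactly 1.0 for a perfect circle,
    smaller for less circular objects."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_crater_like(area, perimeter, threshold=0.85):
    """Keep a segment as a crater candidate if it is circular enough.
    The 0.85 threshold is an illustrative assumption."""
    return circularity(area, perimeter) >= threshold
```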

  2. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  3. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    PubMed Central

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023

  4. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    PubMed

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-09-06

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study.
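
The decoding step both records rely on, finding the most likely hidden-state sequence of an HMM, is the Viterbi algorithm. A compact sketch with toy probabilities; the state names and numbers are illustrative, not the psychiatric model's:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence
    under an HMM (dynamic programming over path probabilities)."""
    # Forward pass: each layer maps state -> (best prob, best predecessor).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                          for p in states), key=lambda x: x[0])
                  for s in states})
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for layer in reversed(V[1:]):
        path.append(layer[path[-1]][1])
    return list(reversed(path))
```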

  5. Fully automated objective-based method for master recession curve separation.

    PubMed

    Posavec, Kristijan; Parlov, Jelena; Nakić, Zoran

    2010-01-01

    The fully automated objective-based method for master recession curve (MRC) separation was developed using a Microsoft Excel spreadsheet and Visual Basic for Applications (VBA) code. The core of the program code constructs an MRC using the adapted matching strip method (Posavec et al. 2006). Criteria for separating the MRC into two or three segments are determined from the flow-duration curve and are represented as the probable range of percent of flow rate duration. Successive separations are performed automatically on two- and three-segment MRCs using sets of percent of flow rate duration from the selected ranges, and the optimal separation scenario, having the highest average coefficient of determination R², is selected as the most appropriate one. The resulting separated master recession curves are presented graphically, whereas the statistics are presented numerically, all in separate sheets. Examples of field data obtained from two springs in Istria, Croatia, are used to illustrate the application. The freely available Excel spreadsheet and VBA program ensure ease of use and applicability to larger data sets.
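
The selection criterion above, the coefficient of determination R², is the standard goodness-of-fit measure; a one-function sketch of how each candidate segment fit would be scored:

```python
def r_squared(observed, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot, used to
    rank candidate MRC separation scenarios: the separation whose
    segment fits give the highest average R^2 wins."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```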

  6. Object-based approach to national land cover mapping using HJ satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Li, Xiaosong; Yuan, Quanzhi; Liu, Yu

    2014-01-01

    To meet the carbon storage estimate in ecosystems for a national carbon strategy, we introduce a consistent database of China land cover. The Chinese Huan Jing (HJ) satellite is proven efficient in the cloud-free acquisition of seasonal image series in a monsoon region and in vegetation identification for mesoscale land cover mapping. Thirty-eight classes of level II land cover are generated based on the Land Cover Classification System of the United Nations Food and Agriculture Organization, which follows a standard and quantitative definition. Twenty-four layers of derivative spectral, environmental, and spatial features compose the classification database. An object-based approach characterizing additional nonspectral features is used throughout the mapping, and multiscale segmentations are applied to match object boundaries to real-world conditions. This method fully exploits spatial information, in addition to spectral characteristics, to improve classification accuracy. A hierarchical classification algorithm is employed to follow step-by-step procedures that effectively control classification quality. The algorithm is divided into dual structures of universal and local trees: consistent universal trees suitable for most regions are applied first, followed by local trees that depend on the specific features of nine climate stratifications. Independent validation indicates the overall accuracy reaches 86%.

  7. Object-based forest classification to facilitate landscape-scale conservation in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Mitchell, Michael; Wilson, R. Randy; Twedt, Daniel J.; Mini, Anne E.; James, J. Dale

    2016-01-01

    The Mississippi Alluvial Valley is a floodplain along the southern extent of the Mississippi River extending from southern Missouri to the Gulf of Mexico. This area once encompassed nearly 10 million ha of floodplain forests, most of which has been converted to agriculture over the past two centuries. Conservation programs in this region revolve around protection of existing forest and reforestation of converted lands. Therefore, an accurate and up-to-date classification of forest cover is essential for conservation planning, including efforts that prioritize areas for conservation activities. We used object-based image analysis with Random Forest classification to quickly and accurately classify forest cover. We used Landsat band, band ratio, and band index statistics to identify and define similar objects as our training sets instead of selecting individual training points. This provided a single rule-set that was used to classify each of the 11 Landsat 5 Thematic Mapper scenes that encompassed the Mississippi Alluvial Valley. We classified 3,307,910±85,344 ha (32% of this region) as forest. Our overall classification accuracy was 96.9% with a Kappa statistic of 0.96. Because this method of forest classification is rapid and accurate, assessment of forest cover can be regularly updated and progress toward forest habitat goals identified in conservation plans can be periodically evaluated.
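
Per-object band statistics of the kind fed to a Random Forest classifier can be sketched as below; the band names and the NDVI-style ratio are illustrative assumptions, not the study's exact feature set:

```python
def object_features(pixels):
    """Summarize one image object's pixels into classifier features:
    band means plus a simple normalized band ratio. `red`/`nir` keys
    and the NDVI-style index are illustrative, not the paper's bands."""
    n = len(pixels)
    red = sum(p['red'] for p in pixels) / n
    nir = sum(p['nir'] for p in pixels) / n
    ndvi = (nir - red) / (nir + red) if (nir + red) else 0.0
    return {'red_mean': red, 'nir_mean': nir, 'ndvi': ndvi}
```

A feature table built this way, one row per segmented object, would then be used to train the Random Forest instead of individual training pixels.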

  8. Geographic Object-Based Image Analysis – Towards a new paradigm

    PubMed Central

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, ontologies, and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm. PMID:24623958

  9. A Framework for Geographic Object-Based Image Analysis (GEOBIA) based on geographic ontology

    NASA Astrophysics Data System (ADS)

    Gu, H. Y.; Li, H. T.; Yan, L.; Lu, X. J.

    2015-06-01

    GEOBIA (Geographic Object-Based Image Analysis) is not only a hot topic of current remote sensing and geographical research; it is believed to be a paradigm in remote sensing and GIScience. The lack of a systematic approach designed to conceptualize and formalize class definitions makes GEOBIA a highly subjective and difficult method to reproduce. This paper aims to put forward a framework for GEOBIA based on geographic ontology theory that can faithfully reproduce the chain "Geographic entities - Image objects - Geographic objects". It consists of three steps: first, geographical entities are described by a geographic ontology; second, a semantic network model is built based on OWL (Web Ontology Language); finally, geographical objects are classified with decision rules or other classifiers. A case study of a farmland ontology was conducted to illustrate the framework. The strength of this framework is that it provides interpretation strategies and a global framework for GEOBIA that is objective, comprehensive, and universal, which avoids inconsistencies caused by different experts' experience and provides an objective model for image analysis.

  10. Efficient reconstruction of dielectric objects based on integral equation approach with Gauss-Newton minimization.

    PubMed

    Tong, Mei Song; Yang, Kuo; Sheng, Wei Tian; Zhu, Zhen Ying

    2013-12-01

    Reconstruction of unknown objects by microwave illumination requires efficient inversion for measured electromagnetic scattering data. In the integral equation approach for reconstructing dielectric objects based on the Born iterative method or its variations, the volume integral equations are involved because the imaging domain is fully inhomogeneous. When solving the forward scattering integral equation, the Nyström method is used because the traditional method of moments may be inconvenient due to the inhomogeneity of the imaging domain. The benefits of the Nyström method include the simple implementation without using any basis and testing functions and low requirement on geometrical discretization. When solving the inverse scattering integral equation, the Gauss-Newton minimization approach with a line search method (LSM) and multiplicative regularization method (MRM) is employed. The LSM can optimize the search of step size in each iteration, whereas the MRM may reduce the number of numerical experiments for choosing the regularization parameter. Numerical examples for reconstructing typical dielectric objects under limited observation angles are presented to illustrate the inversion approach.
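
The Gauss-Newton update at the core of the inversion solves a linearized least-squares problem at each step. A bare sketch of that loop, omitting the paper's line search and multiplicative regularization:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton iteration: at each step solve the linearized
    normal equations J dx = -r in the least-squares sense and update x.
    The paper adds a line search (LSM) and multiplicative regularization
    (MRM), both omitted in this bare sketch."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)                              # current misfit
        J = jacobian(x)                              # sensitivity matrix
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        x = x + dx
    return x
```

As a usage example, fitting the rate `a` of an exponential model `exp(a*t)` to noiseless data converges in a handful of iterations.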

  11. Temporal segmentation of video objects for hierarchical object-based motion description.

    PubMed

    Fu, Yue; Ekin, Ahmet; Tekalp, A Murat; Mehrotra, Rajiv

    2002-01-01

    This paper describes a hierarchical approach for object-based motion description of video in terms of object motions and object-to-object interactions. We present a temporal hierarchy for object motion description, which consists of low-level elementary motion units (EMU) and high-level action units (AU). Likewise, object-to-object interactions are decomposed into a hierarchy of low-level elementary reaction units (ERU) and high-level interaction units (IU). We then propose an algorithm for temporal segmentation of video objects into EMUs, whose dominant motion can be described by a single representative parametric model. The algorithm also computes a representative (dominant) affine model for each EMU. We also provide algorithms for identification of ERUs and for classification of the type of ERUs. Experimental results demonstrate that segmenting the life-span of video objects into EMUs and ERUs facilitates the generation of high-level visual summaries for fast browsing and navigation. At present, the formation of high-level action and interaction units is done interactively. We also provide a set of query-by-example results for low-level EMU retrieval from a database based on similarity of the representative dominant affine models.

  12. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    PubMed Central

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
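
A histogram-matching classifier of the general kind evaluated here can be sketched with the histogram-intersection similarity; class names and bin values are illustrative:

```python
def hist_intersection(h1, h2):
    """Similarity between two normalized histograms: the sum of
    bin-wise minima (1.0 for identical histograms)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def classify(obj_hist, class_hists):
    """Assign the object to the class whose reference histogram it
    intersects most, instead of comparing only means."""
    return max(class_hists,
               key=lambda c: hist_intersection(obj_hist, class_hists[c]))
```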

  13. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    NASA Astrophysics Data System (ADS)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image; no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is developed from the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric, and contextual object properties. The classes of interest are tree, lawn, bare soil, and water for natural classes, and building, road, and parking lot for man-made classes. Fuzzy logic is integrated into our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge, and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of Sherbrooke city (Canada). An overall extraction accuracy of 80% was observed; the correctness rates obtained for the building, road, and parking lot classes are 81%, 75%, and 60%, respectively.
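
Fuzzy rule bases of this kind typically combine membership functions with a min operator for rule antecedents. A generic sketch, not the authors' rule base:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], holds 1.0 on
    [b, c], falls on [c, d] -- a common building block of fuzzy rules."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzy_and(*degrees):
    """Min-combination of rule antecedents (fuzzy AND), so a rule like
    'high NDVI AND compact shape -> tree' yields a certainty degree."""
    return min(degrees)
```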

  14. Object-based model verification by a genetic algorithm approach: Application in archeological targets

    NASA Astrophysics Data System (ADS)

    Başokur, Ahmet T.; Akca, Irfan

    2011-08-01

    A new target-oriented parameterization scheme, named the object-based model, is suggested to represent man-made or natural targets as regular shapes embedded in a two-dimensional resistivity background. The numerical values of the target parameters (size, depth, location and resistivity) are estimated in three steps consisting of conventional regularized inversion, exclusion of anomalous regions and delineation of target bodies. The method produces sharp edges and sharp variation in intrinsic resistivity between the targets and background. The number of target objects is decided by the visual inspection of the 2D resistivity section derived from the application of a conventional cell-based regularized inversion. The 2D background is also extracted from the same section. A genetic algorithm approach is used at the final stage to test a large number of distinct models. Each test model consists of the same number of objects buried in the 2D background. The size, depth, location and resistivity of the targets are estimated from a class of models generated by the application of biological rules. The derived images of buried bodies have sharp edges and can then be understood by engineers and archeologists. However, if the hypothesis about the 'conceptual model' is very different from the geometry of the subsurface, the proposed approach will not be able to produce satisfactory results.
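
A genetic algorithm of the general kind used in the final stage can be sketched as below; the selection, crossover, and mutation choices and all hyper-parameters are illustrative, not the paper's:

```python
import random

def genetic_search(fitness, bounds, pop=30, gens=60, seed=1):
    """Bare-bones genetic algorithm for minimizing `fitness` over a box:
    keep the best half (elitism), breed children by arithmetic
    crossover, and perturb one gene with Gaussian mutation."""
    rng = random.Random(seed)
    def new():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    popn = [new() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[:pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            i = rng.randrange(len(child))          # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        popn = elite + children
    return min(popn, key=fitness)
```

In the paper's setting, each chromosome would encode object parameters (size, depth, location, resistivity) and the fitness would be the misfit between modeled and observed resistivity data.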

  15. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are now widely used to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time at which it passes a node of the network, but not the object's exact movement inside an edge, which leads to uncertainty. Modeling and querying this uncertainty of network-constrained moving objects from RFID data is therefore a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied in four steps: spatial filter, spatial refinement, temporal filter, and probability calculation. Finally, experiments are conducted on simulated data to study the performance of the index. The precision and recall of the result set are defined, and how the query arguments affect them is also discussed.
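
One simple uncertainty model for an object known only to have entered and left an edge at recorded times bounds its feasible position by a maximum speed. This is an illustrative model, not necessarily the paper's:

```python
def position_interval(t1, t2, L, t, vmax):
    """Feasible position interval (from the entry node) on an edge of
    length L for an object detected by RFID entering at t1 and exiting
    at t2, assuming its speed never exceeds vmax. At query time t the
    object cannot have outrun vmax, and must still be able to reach the
    exit by t2."""
    lo = max(0.0, L - vmax * (t2 - t))   # must still make the exit in time
    hi = min(L, vmax * (t - t1))         # cannot have traveled faster than vmax
    return lo, hi
```

A range query would then intersect this interval with the queried edge segment, and a probability could be assigned by a distribution over the interval.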

  16. Object-Based Multiple Foreground Video Co-Segmentation via Multi-State Selection Graph.

    PubMed

    Fu, Huazhu; Xu, Dong; Zhang, Bao; Lin, Stephen; Ward, Rabab Kreidieh

    2015-11-01

    We present a technique for multiple foreground video co-segmentation in a set of videos. This technique is based on category-independent object proposals. To identify the foreground objects in each frame, we examine the properties of the various regions that reflect the characteristics of foregrounds, considering the intra-video coherence of the foreground as well as the foreground consistency among the different videos in the set. Multiple foregrounds are handled via a multi-state selection graph in which a node representing a video frame can take multiple labels that correspond to different objects. In addition, our method incorporates an indicator matrix that for the first time allows accurate handling of cases with common foreground objects missing in some videos, thus preventing irrelevant regions from being misclassified as foreground objects. An iterative procedure is proposed to optimize our new objective function. As demonstrated through comprehensive experiments, this object-based multiple foreground video co-segmentation method compares well with related techniques that co-segment multiple foregrounds.

  17. An object-based assessment of the High Resolution Rapid Refresh Model over the western US

    NASA Astrophysics Data System (ADS)

    Bytheway, J. L.; Kummerow, C. D.

    2015-12-01

    The High Resolution Rapid Refresh (HRRR) model is an hourly-updated, radar-initialized, 3km storm-resolving model run operationally in the US since September 2014, with the main goal of improving forecasts of convective precipitation. Here, we identify precipitating objects in the forecast and observed fields in a manner similar to the Method for Object Based Deterministic Evaluation (MODE). The identified objects are used to perform an assessment of HRRR model forecasts of warm season convective precipitation over the complex terrain of the western US (west of 105W). The constellation of satellites comprising the Global Precipitation Measurement (GPM) Mission provide observations of precipitating objects in the domain, and can be used to assess not only the quality of the rain/no-rain forecast, but also the model's representation of the microphysical and other environmental properties governing precipitation behavior (e.g. latent heating, vertically integrated liquid, etc.). Special attention will be paid to vertical profiles of liquid and ice water contents that can be inferred from the observed reflectivity profiles of the GPM Dual-frequency Precipitation Radar (DPR) and other imaging and sounding instruments. Potential relationships between these properties and the success or failure of the model forecast will be discussed.
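
The object-identification step of a MODE-style comparison can be sketched as thresholding the precipitation field and labeling connected components; the 4-connected flood fill below is a generic illustration, not the operational MODE code:

```python
def rain_objects(field, threshold):
    """Label contiguous grid cells at or above a rain threshold using a
    4-connected flood fill; returns (object count, label grid). This is
    the object-identification step of a MODE-style forecast/observation
    comparison."""
    rows, cols = len(field), len(field[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if field[i][j] >= threshold and labels[i][j] == 0:
                count += 1
                stack = [(i, j)]
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and field[r][c] >= threshold
                            and labels[r][c] == 0):
                        labels[r][c] = count
                        stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return count, labels
```

Matched forecast and observed objects would then be compared by attributes such as area, centroid distance, and intensity.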

  18. Geologic Sequestration Software Suite

    SciTech Connect

    Black, Gary; Bonneville, Alain; Sivaramakrishnan, Chandrika; Purohit, Sumit; White, Signe; Lansing, Carina; Gosink, Luke; Guillen, Zoe; Moeglein, William; Gorton, Ian

    2013-11-04

    GS3 is the bundling of the Geological Sequestration Software Suite domain tools with the Velo wiki user interface, rich client interface, and data store. Velo is an application domain independent collaborative user environment for modeling and simulation. Velo has a web browser based wiki interface integrated with a sophisticated content management system supporting data and knowledge management required for large-scale scientific modeling projects. GS3 adds tools and capability specifically in the area of modeling subsurface reservoirs for the purpose of carbon sequestration. Velo is a core software framework to create scientific domain user environments. Velo is not tied to a specific domain although it provides novel capability needed by many application areas. A well-defined Velo integration layer allows custom applications such as GS3 to leverage the core Velo components to reduce development cost/time and ultimately provide a more capable software product. Compared with previous efforts like ECCE and SALSSA, Velo is a major advancement being a web browser based interface, having a more comprehensive data management architecture, and having intrinsic support for collaboration through the wiki. GS3 adds specific domain tools for looking at site data, developing conceptual and numerical models, building simulation input files, launching and monitoring the progress of those simulations and being able to look at and interpret simulation output.

  19. Use of Remote Imagery and Object-based Image Methods to Count Plants in an Open-field Container Nursery

    NASA Astrophysics Data System (ADS)

    Leiva Lopez, Josue Nahun

    In general, the nursery industry lacks an automated inventory control system. Object-based image analysis (OBIA) software and aerial images could be used to count plants in nurseries. The objectives of this research were: 1) to evaluate the effect of unmanned aerial vehicle (UAV) flight altitude and plant canopy separation of container-grown plants on count accuracy using aerial images, and 2) to evaluate the effect of plant canopy shape, presence of flowers, and plant status (living or dead) on counting accuracy of container-grown plants using remote sensing images. Images were analyzed using Feature Analyst® (FA) and an algorithm trained using MATLAB®. Total count error, false positives, and unidentified plants were recorded from output images using FA; only total count error was reported for the MATLAB algorithm. For objective 1, images were taken at 6, 12, and 22 m above the ground using a UAV. Plants were placed on black fabric and gravel, and spaced as follows: 5 cm between canopy edges, canopy edges touching, and 5 cm of canopy edge overlap. In general, when both methods were considered, total count error was smaller [ranging from -5 (undercount) to 4 (overcount)] when plants were fully separated, with the exception of images taken at 22 m. FA showed a smaller total count error (-2) than MATLAB (-5) for plants placed on black fabric rather than on gravel. For objective 2, the plan was to continue using the UAV; however, due to the unexpected disruption of GPS-based navigation by heightened solar flare activity in 2013, a boom lift that could provide images on a more reliable basis was used.
When images obtained using a boom lift were analyzed using FA, there was no difference in the measured variables when an algorithm trained on an image displaying a regular or irregular plant canopy shape was applied to images displaying both canopy shapes, even though the canopy shape of 'Sea Green' juniper is less compact than 'Plumosa Compacta

  20. Sandia software guidelines: Software quality planning

    SciTech Connect

    Not Available

    1987-08-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies procedures to follow in producing a Software Quality Assurance Plan for an organization or a project, and provides an example project SQA plan. 2 figs., 4 tabs.

  1. How a face may affect object-based attention: evidence from adults and 8-month-old infants

    PubMed Central

    Valenza, Eloisa; Franchin, Laura; Bulf, Hermann

    2013-01-01

    Object-based attention operates on perceptual objects, opening the possibility that the costs and benefits humans pay to move attention between objects might be affected by the nature of the stimuli. The current study reports two experiments with adults and 8-month-old infants investigating whether object-based attention is affected by the type of stimulus (face vs. non-face stimuli). Using the well-known cueing task developed by Egly et al. (1994) to study the object-based component of attention, in Experiment 1 adult participants were presented with two upright, inverted, or scrambled faces, and an eye-tracker measured their saccadic latencies to find a target that could appear on the same object that was just cued or on the other, uncued object. Data showed that an object-based effect (a smaller cost to shift attention within- compared to between-objects) occurred only with scrambled faces, but not with upright or inverted faces. In Experiment 2 the same task was performed with 8-month-old infants, using upright and inverted faces. Data revealed that an object-based effect emerged only for inverted faces but not for upright faces. Overall, these findings suggest that object-based attention is modulated by the type of stimulus and by the experience the viewer has acquired with different objects. PMID:24723860

  2. EDS operator and control software

    SciTech Connect

    Ott, L.L.

    1985-04-01

    The Enrichment Diagnostic System (EDS) was developed at Lawrence Livermore National Laboratory (LLNL) to acquire, display, and analyze large quantities of transient data for a real-time Advanced Vapor Laser Isotope Separation (AVLIS) experiment. Major topics discussed in this paper are the EDS operator interface (SHELL) program, the data acquisition and analysis scheduling software, and the graphics software. The workstation concept used in EDS, the software used to configure a user's workstation, and the ownership and management of a diagnostic are described. An EDS diagnostic is a combination of hardware and software designed to study specific aspects of the process. Overall system performance is discussed from the standpoint of scheduling techniques, evaluation tools, optimization techniques, and program-to-program communication methods. EDS is based on a data-driven design that keeps the need to modify software to a minimum. This design requires a fast and reliable data base management system. A third-party data base management product, Berkeley Software System Database, written explicitly for HP 1000s, is used for all EDS data bases. All graphics are done with an in-house graphics product, the Device Independent Graphics Library (DIGLIB). Examples of devices supported by DIGLIB are Versatec printer/plotters, Raster Technologies graphic display controllers, and HP terminals (HP264x and HP262x). The benefits derived from using HP hardware and software, as well as the obstacles imposed by the HP environment, are presented in relation to EDS development and implementation.

  3. An object-based image analysis of pinyon and juniper woodlands treated to reduce fuels.

    PubMed

    Hulet, April; Roundy, Bruce A; Petersen, Steven L; Jensen, Ryan R; Bunting, Stephen C

    2014-03-01

    Mechanical and prescribed fire treatments are commonly used to reduce fuel loads and maintain or restore sagebrush steppe rangelands across the Great Basin, where pinyon (Pinus) and juniper (Juniperus) trees are encroaching and infilling. Geospatial technologies, particularly remote sensing, could potentially be used in these ecosystems to (1) evaluate the longevity of fuel-reduction treatments, (2) provide data for planning and designing future fuel-reduction treatments, and (3) assess the spatial distribution of horizontal fuel structure following fuel-reduction treatments. High-spatial-resolution color-infrared imagery (0.06-m pixels) was acquired for pinyon and juniper woodland plots where fuels were reduced by prescribed fire, tree cutting, or mastication at five sites in Oregon, California, Nevada, and Utah. Imagery was taken with a Vexcel UltraCam X digital camera in June 2009. Within each treatment plot, ground cover was measured as part of the Sagebrush Steppe Treatment Evaluation Project. Trimble eCognition Developer was used to classify land cover classes using object-based image analysis (OBIA) techniques. Differences between cover estimates from OBIA and ground measurements were not consistently higher or lower for any land cover class and, when evaluated for individual sites, were within ±5% of each other. The overall accuracy and the K hat statistic for the classified thematic maps for each treatment were: prescribed burn, 85% and 0.81; cut-and-fell, 82% and 0.77; and mastication, 84% and 0.80. Although cover assessments from OBIA differed somewhat from ground measurements, they are sufficiently accurate to evaluate treatment success and to support a broad range of management concerns.
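
The overall accuracy and K hat (kappa) statistics quoted above are standard summaries of a classification confusion matrix. A minimal sketch, using a hypothetical two-class matrix rather than the study's actual counts:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa (the 'K hat' statistic)
    from a square confusion matrix (rows: reference, cols: classified)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n              # overall accuracy
    # Chance agreement from the product of row and column marginals
    expected = (confusion.sum(0) * confusion.sum(1)).sum() / n**2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical 2-class matrix (e.g. tree cover vs. non-tree cover)
cm = [[85, 15],
      [10, 90]]
acc, k = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {k:.2f}")
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside raw overall accuracy.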

  4. An Object-Based Image Analysis of Pinyon and Juniper Woodlands Treated to Reduce Fuels

    NASA Astrophysics Data System (ADS)

    Hulet, April; Roundy, Bruce A.; Petersen, Steven L.; Jensen, Ryan R.; Bunting, Stephen C.

    2014-03-01

    Mechanical and prescribed fire treatments are commonly used to reduce fuel loads and maintain or restore sagebrush steppe rangelands across the Great Basin, where pinyon (Pinus) and juniper (Juniperus) trees are encroaching and infilling. Geospatial technologies, particularly remote sensing, could potentially be used in these ecosystems to (1) evaluate the longevity of fuel reduction treatments, (2) provide data for planning and designing future fuel-reduction treatments, and (3) assess the spatial distribution of horizontal fuel structure following fuel-reduction treatments. High-spatial-resolution color-infrared imagery (0.06-m pixels) was acquired for pinyon and juniper woodland plots where fuels were reduced by prescribed fire, tree cutting, or mastication at five sites in Oregon, California, Nevada, and Utah. Imagery was taken with a Vexcel UltraCam X digital camera in June 2009. Within each treatment plot, ground cover was measured as part of the Sagebrush Steppe Treatment Evaluation Project. Trimble eCognition Developer was used to classify land cover classes using object-based image analysis (OBIA) techniques. Differences between cover estimates from OBIA and ground measurements were not consistently higher or lower for any land cover class and, when evaluated for individual sites, were within ±5% of each other. The overall accuracy and the K hat statistic for the classified thematic maps for each treatment were: prescribed burn, 85% and 0.81; cut-and-fell, 82% and 0.77; and mastication, 84% and 0.80. Although cover assessments from OBIA differed somewhat from ground measurements, they are sufficiently accurate to evaluate treatment success and to support a broad range of management concerns.

  5. Mapping temporal changes in connectivity using high-resolution aerial data and object based image analysis

    NASA Astrophysics Data System (ADS)

    Masselink, Rens; Anders, Niels; Keesstra, Saskia; Seeger, Manuel

    2014-05-01

    Within geomorphology, mapping has always been an important tool for interpreting the spatial and temporal distributions of phenomena and processes at the surface. In the field of connectivity, however, although many articles have been published over the past decade, only very few address the mapping of connectivity. This study aimed to develop a new, automated method for mapping connectivity within agricultural catchments. The method, a combination of Object-Based Image Analysis (OBIA) and traditional geomorphological field mapping, was applied to two agricultural catchments in Navarre, Spain, each with an area of approximately 2 km². An unmanned aerial vehicle (UAV) was used to take aerial photographs with a resolution of 6 cm, from which a DEM with a 12-cm resolution was created using structure-from-motion photogrammetry. Connectivity was mapped within the study areas using OBIA in a top-down manner, meaning that connectivity was mapped at different scale levels, starting at the largest scale. First, sub-catchments were automatically delineated, after which several characteristics and features that affect connectivity within the sub-catchments were classified, e.g. land use, landslides, rills, gullies, riparian vegetation, changes in slope, ploughing direction, etc. Photographs were taken in two consecutive years (2013-2014), and the connectivity of both catchments in both years will be compared. Future work will include quantification of the mapped connectivity (highly connected vs. weakly connected years), the causes and consequences of these differences in connectivity, comparison with existing connectivity indices, and comparison of mapped connectivity in sub-catchments with measured discharge.

  6. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter is crucially important for increasing classification accuracy, and it depends on image resolution, image object size, and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters separately for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation carried out on the classified images found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
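
The LV-RoC scale selection mentioned above rests on a simple computation: the rate of change (RoC) of local variance (LV) across a series of trial scale parameters, with peaks of the RoC curve marking candidate scales. A sketch under the assumption that LV has already been computed per segmentation level (the values below are invented for illustration):

```python
import numpy as np

def rate_of_change(scales, local_variance):
    """Rate of change of local variance between consecutive scale
    parameters, as used in ESP-style LV-RoC graphs:

        RoC_l = (LV_l - LV_{l-1}) / LV_{l-1} * 100

    Local maxima of RoC indicate candidate segmentation scales."""
    lv = np.asarray(local_variance, dtype=float)
    roc = np.full_like(lv, np.nan)        # RoC undefined at the first scale
    roc[1:] = (lv[1:] - lv[:-1]) / lv[:-1] * 100.0
    # Interior local maxima of the RoC curve (skip the undefined entry)
    peaks = [i for i in range(2, len(roc) - 1)
             if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
    return roc, [scales[i] for i in peaks]

# Hypothetical LV values for scale parameters 10, 20, ..., 80
scales = [10, 20, 30, 40, 50, 60, 70, 80]
lv     = [4.0, 4.8, 5.0, 5.9, 6.0, 6.1, 6.9, 7.0]
roc, candidates = rate_of_change(scales, lv)
print(candidates)   # candidate scales, e.g. "moderate" and "coarse"
```

Run per sub-region, this yields region-specific fine/moderate/coarse scales; run once on the whole image, it yields the single-scale candidates of the second strategy.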

  7. High resolution mapping of development in the wildland-urban interface using object based image extraction.

    PubMed

    Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J

    2016-10-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object-Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Agriculture Imagery Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building-footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI across diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA

  8. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High-resolution remotely sensed images provide current, detailed, and accurate information for large areas of the Earth's surface, which can be used for change detection analyses. Conventional methods of image processing permit the detection of changes by comparing remotely sensed multitemporal images. However, for a successful analysis it is desirable to use images from the same sensor, acquired at the same time of the season, at the same time of day, and (for electro-optical sensors) in cloudless conditions. Thus, a change detection analysis can be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for buildings destroyed as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator of change.
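
The 'Detected Part of Contour' (DPC) feature is described above only by name; a plausible reading is the fraction of a building's vector contour that is confirmed by edges detected in the post-event image. The sketch below implements that reading and should not be taken as the authors' exact definition; the tolerance parameter and toy data are assumptions.

```python
import numpy as np

def detected_part_of_contour(contour_px, edge_map, tol=1):
    """Sketch of a DPC-like feature: fraction of a building's vector
    contour pixels that coincide with detected image edges within
    `tol` pixels. (The published feature may differ in detail.)"""
    h, w = edge_map.shape
    hits = 0
    for r, c in contour_px:
        r0, r1 = max(r - tol, 0), min(r + tol + 1, h)
        c0, c1 = max(c - tol, 0), min(c + tol + 1, w)
        if edge_map[r0:r1, c0:c1].any():
            hits += 1
    return hits / len(contour_px)

# Toy example: a square building outline whose edges are detected
# only along its upper part (as for a partially collapsed building)
edges = np.zeros((20, 20), dtype=bool)
edges[5, 5:15] = True          # top wall detected
edges[5:10, 5] = True          # part of the left wall detected
contour = [(5, c) for c in range(5, 15)] + [(15, c) for c in range(5, 15)]
dpc = detected_part_of_contour(contour, edges)
print(round(dpc, 2))   # low values indicate a possibly destroyed building
```

An intact building should score near 1.0; rubble that no longer produces the mapped outline scores low, which is what makes the feature a change indicator.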

  9. An Object-Based Method for Estimation of River Discharge from Remotely-Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Burgett, D. A.; Blesius, L.; Davis, J. D.

    2010-12-01

    High-resolution satellite and aerial imagery of fluvial systems contains much information about planform river channel features. However, not much is known about how these forms quantitatively relate to river channel process, specifically discharge. This research explores methods for remote image-based river discharge estimation using Object-Based Image Analysis (OBIA) and GIS techniques. Previous efforts in image-based discharge estimation have relied primarily on manual delineation of river features and the input of reach-averaged values of these features into statistically based estimation models. In addition to analyzing OBIA techniques for channel feature delineation and measurement, this approach investigates techniques for discharge estimation model design, validation, and correction along a reach, utilizing variation in “standard” channel features (e.g. water surface width) along with less tangible channel feature metrics derived from OBIA. Rather than predefining the channel unit of analysis, this work also considers the accuracy of model parameters derived from a range of channel scales, from longer reach-averaged to cross-sectional. High-resolution (1 m) color-infrared orthoimagery from the 2005 and 2009 National Agriculture Imagery Program (NAIP) covering 50 river reaches (ranging in discharge from approximately 13 m³ s⁻¹ to 856 m³ s⁻¹) was utilized for this analysis. These reaches, all near United States Geological Survey (USGS) river gages in California, USA, were split randomly and evenly into 25 reaches each for model design and validation. This approach allows better isolation of error resulting from user bias in channel feature measurement, and moves toward a more precise, standardized system of measurement for remotely observable channel form.
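
One simple statistically based model of the kind referenced above is a hydraulic-geometry power law relating discharge to water-surface width, fitted in log-log space. The data below are invented for illustration; the study's actual model form and feature set may differ.

```python
import numpy as np

# Hypothetical reach-averaged water-surface widths (m) and gauged
# discharges (m^3/s) for a handful of calibration reaches
widths     = np.array([12.0, 20.0, 35.0, 55.0, 80.0])
discharges = np.array([14.0, 40.0, 120.0, 300.0, 650.0])

# Least-squares fit of the power law Q = a * W**b in log-log space:
# log Q = log a + b * log W
b, log_a = np.polyfit(np.log(widths), np.log(discharges), 1)
a = np.exp(log_a)

def estimate_discharge(width_m):
    """Estimate discharge (m^3/s) from a remotely measured width (m)."""
    return a * width_m ** b

print(f"Q ≈ {a:.3f} * W^{b:.2f}")
print(f"estimated Q at W = 40 m: {estimate_discharge(40.0):.0f} m^3/s")
```

With OBIA, the width inputs come from delineated water objects rather than manual digitizing, and validation reaches held out from the fit quantify the model's error.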

  10. Modelling of cooperating robotized systems with the use of object-based approach

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    Today's robotized manufacturing systems are characterized by high efficiency, and the emphasis is placed mainly on the simultaneous work of machines. This can manifest in many ways, the most spectacular being the cooperation of several robots working on the same part. Moreover, dual-arm robots that can mimic the manipulative skills of human hands have recently come into use. As a result, it is often hard to deal with situations where it is necessary to maintain not only sufficient precision, but also the coordination and proper sequencing of movements of the individual robots' arms. The successful completion of this task depends on the individual robot control systems and their respective programs, but also on well-functioning communication between robot controllers. A major problem in the case of cooperating robots is the possibility of collision between particular links of the robots' kinematic chains. This is not a simple case, because the manufacturers of robotic systems do not disclose the details of their control algorithms, so it is hard to detect such situations. Another problem with the cooperation of robots is how to inform the other units about the start or completion of part of the task, so that the other robots can take further actions. This paper focuses on communication between cooperating robotic units, assuming that every robot is represented by an object-based model. This problem requires developing a form of communication protocol that the objects can use for collecting information about their environment. The approach presented in the paper is not limited to robots and could be used more widely, for example when modelling a complete workcell or production line.

  11. Neural activity associated with self, other, and object-based counterfactual thinking.

    PubMed

    De Brigard, Felipe; Nathan Spreng, R; Mitchell, Jason P; Schacter, Daniel L

    2015-04-01

    Previous research has shown that autobiographical episodic counterfactual thinking, i.e., mental simulations about alternative ways in which one's life experiences could have occurred, engages the brain's default network (DN). However, it remains unknown whether the DN is also engaged during impersonal counterfactual thoughts, specifically those involving other people or objects. The current study compares brain activity during counterfactual simulations involving the self, others, and objects. In addition, counterfactual thoughts involving others were manipulated in terms of similarity and familiarity with the simulated characters. The results indicate greater involvement of the DN during person-based (i.e., self and other) as opposed to object-based counterfactual simulations. However, the involvement of different regions of the DN during other-based counterfactual simulations was modulated by how close and/or similar the simulated character was perceived to be by the participant. Simulations involving unfamiliar characters preferentially recruited dorsomedial prefrontal cortex. Simulations involving unfamiliar similar characters, characters with whom participants identified personality traits, recruited lateral temporal gyrus. Finally, our results also revealed differential coupling of the right hippocampus with lateral prefrontal and temporal cortex during counterfactual simulations involving familiar similar others, but with left transverse temporal gyrus and medial frontal and inferior temporal gyri during counterfactual simulations involving either oneself or unfamiliar dissimilar others. These results suggest that different brain mechanisms are involved in the simulation of personal and impersonal counterfactual thoughts, and that the extent to which regions associated with autobiographical memory are recruited during the simulation of counterfactuals involving others depends on the perceived similarity and familiarity with the simulated individuals.

  12. Coregistration refinement of hyperspectral images and DSM: An object-based approach using spectral information

    NASA Astrophysics Data System (ADS)

    Avbelj, Janja; Iwaszczuk, Dorota; Müller, Rupert; Reinartz, Peter; Stilla, Uwe

    2015-02-01

    For image fusion in remote sensing applications, the georeferencing accuracy obtained from position, attitude, and camera calibration measurements can be insufficient. Thus, image processing techniques should be employed for precise coregistration of images. In this article a method for multimodal object-based coregistration refinement between hyperspectral images (HSI) and digital surface models (DSM) is presented. The method is divided into three parts: object outline detection in HSI and DSM, matching, and determination of transformation parameters. The novelty of our proposed coregistration refinement method is the use of material properties and height information of urban objects from HSI and DSM, respectively. We refer to urban objects as objects typical of urban environments and focus on buildings, describing them with 2D outlines. Furthermore, the geometric accuracy of these detected building outlines is taken into account in the matching step and in the determination of transformation parameters. Hence, a stochastic model is introduced to compute optimal transformation parameters. The feasibility of the method is shown by testing it on two aerial HSI of different spatial and spectral resolution and two DSM of different spatial resolution. The evaluation is carried out by comparing the accuracies of the transformation parameters to reference parameters, determined by considering object outlines at much higher resolution, and also by computing the correctness and quality rate of the extracted outlines before and after coregistration refinement. Results indicate that using outlines of objects instead of only line segments is advantageous for coregistration of HSI and DSM. The extraction of building outlines, in comparison to line cue extraction, provides a larger number of assigned lines between the images and is more robust to outliers, i.e. false matches.

  13. Mapping Eroded Areas on Mountain Grassland with Terrestrial Photogrammetry and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens

    2016-06-01

    In the Alps, as in other mountain regions, steep grassland is frequently affected by shallow erosion. Often small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare-earth surface patches within the grass-covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high-resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective, and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow that consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested on ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The method proved to be insensitive to differences in illumination of the scenes and greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded area in the field with a high level of detail and quality. In the future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser-resolution aerial orthophotos acquired at the same time.
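
The classification step above pairs the Excess Green index (ExG = 2g - r - b on chromatic coordinates) with automated threshold selection. A minimal sketch using Otsu's method as a plausible stand-in for the threshold-selection step; the scene and pixel values are synthetic, and the paper's selection procedure may differ.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on chromatic coordinates."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0              # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b

def otsu_threshold(values, bins=256):
    """Automated threshold selection (Otsu): maximise the
    between-class variance of the value histogram."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 weight per threshold
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

# Toy scene: a green grass patch next to a brownish eroded patch
img = np.zeros((10, 20, 3))
img[:, :10] = [60, 160, 60]    # grass: high ExG
img[:, 10:] = [120, 90, 70]    # bare earth: low ExG
exg = excess_green(img)
t = otsu_threshold(exg)
grass = exg > t
print(grass.mean())   # fraction of pixels classified as grass
```

Because ExG is computed on chromatic (normalised) coordinates, the split is largely insensitive to overall brightness, consistent with the workflow's robustness to illumination differences.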

  14. Applying object-based segmentation in the temporal domain to characterise snow seasonality

    NASA Astrophysics Data System (ADS)

    Thompson, Jeffery A.; Lees, Brian G.

    2014-11-01

    In the context of a changing climate it is important to be able to monitor and map descriptors of snow seasonality. Because of its relatively low elevation range, Australia's alpine bioregion is a marginal area for seasonal snow-cover, with high inter-annual variability. It has been predicted that snow-cover will become increasingly ephemeral within the alpine bioregion as warming continues. To assist the monitoring of snow seasonality and ephemeral snow-cover, a remote sensing method is proposed. The method adapts principles of object-based image analysis that have traditionally been used in the spatial domain and applies them in the temporal domain. The method allows for a more comprehensive characterisation of snow seasonality relative to other methods. Using high-temporal-resolution (daily) MODIS image time-series, remotely sensed descriptors were derived and validated using in situ observations. Overall, moderate to strong relationships were observed between the remotely sensed descriptors of the persistent snow-covered period (start r = 0.70, p < 0.001; end r = 0.88, p < 0.001; and duration r = 0.88, p < 0.001) and their in situ counterparts. Although only weak correspondence (r = 0.39, p < 0.05) was observed for the number of ephemeral events detected using remote sensing, this was thought to be related to differences in the sampling frequency of the in situ observations relative to the remotely sensed observations. For 2009, the mapped results for the number of snow-cover events suggested that snow-cover between 1400 and 1799 m was characterised by a high number of ephemeral events.
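
Applying segmentation along the time axis, as described above, amounts to grouping consecutive snow-covered days into temporal "objects" (runs) and deriving descriptors from them. A sketch on a synthetic daily series; the 5-day persistence cut-off is an assumed parameter, not the paper's.

```python
import numpy as np

def snow_season_descriptors(snow, min_persist=5):
    """Derive seasonality descriptors from a daily binary snow-cover
    series by segmenting it into runs of consecutive snow-covered days.

    Returns (start, end, duration) of the longest run (the persistent
    snow-covered period) and the number of ephemeral events (runs
    shorter than `min_persist` days)."""
    snow = np.asarray(snow, dtype=bool)
    # Locate run boundaries of consecutive True values
    padded = np.concatenate(([False], snow, [False]))
    diff = np.diff(padded.astype(int))
    starts = np.where(diff == 1)[0]
    ends = np.where(diff == -1)[0]          # exclusive end indices
    lengths = ends - starts
    ephemeral = int((lengths < min_persist).sum())
    k = int(np.argmax(lengths))
    return int(starts[k]), int(ends[k] - 1), int(lengths[k]), ephemeral

# Toy year: two 1-day flurries, then a 10-day persistent snow period
series = [0]*30 + [1] + [0]*10 + [1] + [0]*20 + [1]*10 + [0]*30
start, end, duration, n_ephemeral = snow_season_descriptors(series)
print(start, end, duration, n_ephemeral)
```

Run per pixel over a MODIS snow-cover time-series, this yields maps of start, end, duration, and event counts directly.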

  15. A fuzzy rule base system for object-based feature extraction and classification

    NASA Astrophysics Data System (ADS)

    Jin, Xiaoying; Paswaters, Scott

    2007-04-01

    In this paper, we present a fuzzy rule-base system for object-based feature extraction and classification of remote sensing imagery. First, object primitives are generated in the segmentation step. Object primitives are defined as individual regions with a set of attributes computed on the regions. The computed attributes include spectral, texture, and shape measurements. Crisp rules are very intuitive to users. They are usually represented as "GT (greater than)", "LT (less than)", and "IB (in between)" with numerical values. Features can be generated manually by querying the attributes using these crisp rules and monitoring the resulting selected object primitives. However, the attributes of different features usually overlap. The information is inexact and not suitable for traditional digital on/off decisions. Here, a fuzzy rule-base system is built to better model the uncertainty inherent in the data and vague human knowledge. Rather than representing attributes in linguistic terms like "Small", "Medium", and "Large", we propose a new method for automatic fuzzification of the traditional crisp concepts "GT", "LT", and "IB". Two sets of membership functions are defined to model those concepts: one is based on piecewise linear functions, the other on S-type membership functions. A novel concept, "fuzzy tolerance", is proposed to control the degree of fuzziness of each rule. Experimental results on classifying and extracting features such as water, buildings, trees, fields, and urban areas show that this newly designed fuzzy rule-base system is intuitive and allows users to easily generate fuzzy rules.
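
The automatic fuzzification of the crisp concepts "GT", "LT", and "IB" with a "fuzzy tolerance" parameter can be illustrated with the piecewise-linear variant. The exact membership shapes and the tolerance semantics in the paper may differ; this is a plausible reconstruction.

```python
import numpy as np

def fuzzy_gt(x, threshold, tolerance):
    """Piecewise-linear fuzzification of the crisp rule 'GT threshold'.
    Membership rises linearly from 0 to 1 across a band of width
    2*tolerance centred on the threshold (the 'fuzzy tolerance')."""
    x = np.asarray(x, dtype=float)
    return np.clip((x - (threshold - tolerance)) / (2 * tolerance), 0.0, 1.0)

def fuzzy_lt(x, threshold, tolerance):
    """Fuzzy 'LT' as the complement of fuzzy 'GT'."""
    return 1.0 - fuzzy_gt(x, threshold, tolerance)

def fuzzy_ib(x, low, high, tolerance):
    """Fuzzy 'In Between': AND (minimum) of fuzzy GT(low) and LT(high)."""
    return np.minimum(fuzzy_gt(x, low, tolerance),
                      fuzzy_lt(x, high, tolerance))

# Membership of a hypothetical 'area' attribute in the rule "IB(100, 200)"
areas = np.array([80.0, 100.0, 150.0, 200.0, 230.0])
print(fuzzy_ib(areas, low=100, high=200, tolerance=20))
```

A larger tolerance widens the transition band, making the rule fuzzier; tolerance approaching zero recovers the crisp on/off rule.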

  16. Object-based gully system prediction from medium resolution imagery using Random Forests

    NASA Astrophysics Data System (ADS)

    Shruthi, Rajesh B. V.; Kerle, Norman; Jetten, Victor; Stein, Alfred

    2014-07-01

    Erosion, in particular gully erosion, is a widespread problem. Its mapping is crucial for erosion monitoring and remediation of degraded areas. In addition, mapping of areas with high potential for future gully erosion can be used to assist prevention strategies. Topographic variables collected in the field relate well to gullying and are appropriate for determining susceptible areas. Image analysis of high resolution remotely sensed imagery (HRI) in combination with field verification has proven to be a good approach, although dependent on expensive imagery. Automatic and semi-automatic methods, such as object-oriented analysis (OOA), are rapid and reproducible. However, HRI data are not always available. We therefore attempted to identify gully systems using statistical modeling of image features from medium resolution imagery, here ASTER. These data were used for determining areas within gully system boundaries (GSB) using a semi-automatic method based on OOA. We assess whether the selection of useful object features can be done in an objective and transferable way, using Random Forests (RF) for prediction of gully systems at regional scale, here in the Sehoul region, near Rabat, Morocco. Moderate success was achieved using a semi-automatic object-based RF model (out-of-bag error of 18.8%). Besides compensating for the imbalance between gully and non-gully classes, the procedure followed in this study enabled us to balance the classification error rates. The user's and producer's accuracies of the data with balanced classes showed an improved accuracy of the spatial estimates of gully systems, when compared to the data with imbalanced classes. The model over-predicted the area within the GSB (13-27%), but its overall performance demonstrated that medium resolution satellite images contain sufficient information to identify gully systems, so that large areas can be mapped with relatively little effort and acceptable accuracy.
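    The core modeling step above, a Random Forest on object features with an out-of-bag (OOB) error estimate and compensation for class imbalance, can be sketched with scikit-learn. This is not the authors' code: the features and data are synthetic, and `class_weight="balanced"` is only one of several ways to handle the gully / non-gully imbalance.

    ```python
    # Sketch: fit a Random Forest on per-object features and read the
    # out-of-bag error, weighting classes to offset gully/non-gully
    # imbalance. Feature semantics are invented for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 400
    # Synthetic object attributes (e.g. spectral mean, texture, slope proxy).
    X = rng.normal(size=(n, 3))
    # Imbalanced labels loosely tied to the features (~minority "gully" class).
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

    rf = RandomForestClassifier(
        n_estimators=200,
        oob_score=True,            # OOB estimate, as reported in the abstract
        class_weight="balanced",   # balance error rates across classes
        random_state=0,
    )
    rf.fit(X, y)
    print("OOB error: %.3f" % (1.0 - rf.oob_score_))
    ```

    The OOB error is computed from trees that did not see each sample during bootstrapping, which is why the study could quote it without a separate hold-out set.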

  17. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters comprises a very small proportion of studies in the field. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods such as neural networks and rapid classification via down-sampling. In this study, we attempted to use accurate bathymetric data obtained with a multi-beam echo sounder (MBES) as complementary data to the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rules on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps that had higher overall accuracy, 93.78±0.85%, as compared to pixel-based methods that had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
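    The area rules quoted above translate directly into a small rule-set classifier over segmented objects. The area cut-offs (≤700 px for fish, 700-10,000 px for rubble) follow the abstract; the standard-deviation threshold used here as a texture proxy is an invented placeholder.

    ```python
    # Sketch of the OBIA rule set: classify a segmented object from its
    # pixel area and the standard deviation of its pixel values (a crude
    # texture measure). Area thresholds follow the abstract; the texture
    # threshold is hypothetical.

    def classify_object(area_px, std_dev, texture_threshold=12.0):
        if area_px <= 700 and std_dev > texture_threshold:
            return "fish"
        if 700 < area_px <= 10_000 and std_dev > texture_threshold:
            return "rubble"
        return "other"

    print(classify_object(500, 20.0))     # fish
    print(classify_object(5_000, 20.0))   # rubble
    print(classify_object(5_000, 3.0))    # other
    ```

    In practice these rules sit on top of a segmentation step that also supplies the shape, relative-distance, and spectral attributes the abstract mentions.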

  18. High resolution mapping of development in the wildland-urban interface using object based image extraction

    USGS Publications Warehouse

    Caggiano, Michael D.; Tinkham, Wade T.; Hoffman, Chad; Cheng, Antony S.; Hawbaker, Todd J.

    2016-01-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census-based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m2) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA

  19. Forecast bias analysis using object-based verification of regional WRF summertime convective forecasts

    NASA Astrophysics Data System (ADS)

    Starzec, Mariusz

    Forecast verification remains a crucial component of improving model forecasts, but is still a challenge to perform. An objective method is developed to verify simulated reflectivity against radar reflectivity at 1 km altitude utilizing the Method for Object-based Diagnostic Evaluation (MODE) Tool. Comparing the reflectivity field allows for an instantaneous view of what is occurring in simulations, without the averaging that may occur when analyzing fields such as accumulated precipitation. The objective method is applied to high resolution 3 km and 1 km local convective WRF summertime forecasts in the Northern Plains region. The bulk verification statistics reveal that forecasts generate too many objects, over-forecast the areal coverage of convection, and over-intensify convection. No noteworthy increase in skill is found when moving to 1 km resolution; instead, the finer grid leads to a significant over-forecasting of small cells. A sensitivity study is performed to investigate the forecast biases found by varying the cloud droplet concentration, microphysical scheme, and horizontal resolution on a case day containing weakly forced convection mostly below the freezing level. Changing the cloud droplet concentration has a strong impact on the object-count and area biases. Increasing droplet counts to observed values generates a forecast that more closely resembles the observations in terms of area and object counts, but does not generate enough rain. Changing the microphysical scheme produces the most pronounced effects on object counts and intensity, which is attributed to differences in autoconversion formulations. Coarsening the resolution from 3 km to 9 km leads to a decrease in skill, showing that 3 km simulations are more effective for convective forecasts. Increasing the resolution to 1 km amplifies the object-count bias and is found not to be worth the additional computational expense.
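    The object-identification step underlying this kind of verification can be sketched simply: threshold the 2-D reflectivity field and count connected components, giving the object counts whose forecast bias the study examines. MODE additionally convolves (smooths) the field before thresholding and matches forecast objects to observed ones; both steps are omitted in this illustration.

    ```python
    # Sketch: threshold a 2-D reflectivity grid (dBZ) and count 4-connected
    # "convective objects" via flood fill. A stand-in for MODE's object
    # identification; MODE also smooths the field first.

    def count_objects(field, threshold, min_cells=1):
        rows, cols = len(field), len(field[0])
        seen = [[False] * cols for _ in range(rows)]
        count = 0
        for r in range(rows):
            for c in range(cols):
                if field[r][c] >= threshold and not seen[r][c]:
                    # Flood-fill one 4-connected object.
                    stack, size = [(r, c)], 0
                    seen[r][c] = True
                    while stack:
                        i, j = stack.pop()
                        size += 1
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < rows and 0 <= nj < cols
                                    and not seen[ni][nj]
                                    and field[ni][nj] >= threshold):
                                seen[ni][nj] = True
                                stack.append((ni, nj))
                    if size >= min_cells:
                        count += 1
        return count

    dbz = [[10, 40, 40,  5],
           [ 5, 45,  5,  5],
           [ 5,  5,  5, 38],
           [ 5,  5, 38, 38]]
    print(count_objects(dbz, threshold=35))  # 2 objects
    ```

    Comparing this count between the forecast and the observed reflectivity grid gives the object-count bias; the `min_cells` filter mimics discarding objects below a minimum area.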

  20. Office Computer Software: A Comprehensive Review of Software Programs.

    ERIC Educational Resources Information Center

    Secretary, 1992

    1992-01-01

    Describes types of software including system software, application software, spreadsheets, accounting software, graphics packages, desktop publishing software, database, desktop and personal information management software, project and records management software, groupware, and shareware. (JOW)