Automated reusable components system study results
NASA Technical Reports Server (NTRS)
Gilroy, Kathy
1989-01-01
The Automated Reusable Components System (ARCS) was developed under a Phase 1 Small Business Innovation Research (SBIR) contract for the U.S. Army CECOM. The objectives of the ARCS program were: (1) to investigate issues associated with automated reuse of software components, identify alternative approaches, and select promising technologies, and (2) to develop tools that support component classification and retrieval. The approach followed was to research emerging techniques and experimental applications associated with reusable software libraries, to investigate the more mature information retrieval technologies for applicability, and to investigate the applicability of specialized technologies to improve the effectiveness of a reusable component library. Various classification schemes and retrieval techniques were identified and evaluated for potential application in an automated library system for reusable components. Strategies for library organization and management, component submittal and storage, and component search and retrieval were developed. A prototype ARCS was built to demonstrate the feasibility of automating the reuse process. The prototype was created using a subset of the classification and retrieval techniques that were investigated. The demonstration system was exercised and evaluated using reusable Ada components selected from the public domain. A requirements specification for a production-quality ARCS was also developed.
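As a rough illustration of the classification-and-retrieval strategies the report surveys, the sketch below implements a toy faceted component library in Python; the Component/ComponentLibrary names and the facet vocabulary are invented for the example, not taken from ARCS.

```python
# Hypothetical sketch of a faceted component library of the kind ARCS
# investigated; all names and facet values are illustrative, not from the report.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    facets: dict  # e.g. {"function": "sort", "object": "array"}

class ComponentLibrary:
    def __init__(self):
        self.components = []

    def submit(self, component: Component):
        """Component submittal and storage."""
        self.components.append(component)

    def retrieve(self, **query):
        """Return components whose facet values match every query term."""
        return [c for c in self.components
                if all(c.facets.get(k) == v for k, v in query.items())]

lib = ComponentLibrary()
lib.submit(Component("quick_sort", {"function": "sort", "object": "array"}))
lib.submit(Component("b_tree", {"function": "store", "object": "record"}))
print([c.name for c in lib.retrieve(function="sort")])  # ['quick_sort']
```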
The SoRReL papers: Recent publications of the Software Reuse Repository Lab
NASA Technical Reports Server (NTRS)
Eichmann, David A. (Editor)
1992-01-01
Presented in full are some of the papers recently published by the SoRReL. Typical titles are as follows: Design of a Lattice-Based Faceted Classification System; A Hybrid Approach to Software Reuse Repository Retrieval; Selecting Reusable Components Using Algebraic Specifications; Neural Network-Based Retrieval from Reuse Repositories; and A Neural Net-Based Approach to Software Metrics.
Neural network-based retrieval from software reuse repositories
NASA Technical Reports Server (NTRS)
Eichmann, David A.; Srinivas, Kankanahalli
1992-01-01
A significant hurdle confronts the software reuser attempting to select candidate components from a software repository - discriminating between those components without resorting to inspection of the implementation(s). We outline an approach to this problem based upon neural networks which avoids requiring the repository administrators to define a conceptual closeness graph for the classification vocabulary.
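The following is a minimal sketch of the underlying idea, term closeness derived from the repository's own data rather than an administrator-defined graph; it substitutes simple co-occurrence vectors for the paper's neural network, and all component names are hypothetical.

```python
# Sketch: retrieval without a hand-built conceptual-closeness graph. Term
# similarity is derived from co-occurrence in component descriptors rather
# than defined by a repository administrator. The paper's actual neural
# architecture is not reproduced here; co-occurrence vectors are a stand-in.
import numpy as np

components = {
    "stack_pkg": ["push", "pop", "lifo"],
    "queue_pkg": ["enqueue", "dequeue", "fifo"],
    "list_pkg":  ["insert", "delete", "lifo", "fifo"],
}
vocab = sorted({t for ts in components.values() for t in ts})
index = {t: i for i, t in enumerate(vocab)}

# Co-occurrence vectors play the role of learned term representations.
M = np.zeros((len(vocab), len(vocab)))
for terms in components.values():
    for a in terms:
        for b in terms:
            M[index[a], index[b]] += 1

def closeness(a, b):
    va, vb = M[index[a]], M[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def retrieve(query_terms):
    scores = {name: sum(max(closeness(q, t) for t in terms) for q in query_terms)
              for name, terms in components.items()}
    return max(scores, key=scores.get)

print(retrieve(["lifo"]))  # best match, with no administrator-defined graph
```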
Design of a lattice-based faceted classification system
NASA Technical Reports Server (NTRS)
Eichmann, David A.; Atkins, John
1992-01-01
We describe a software reuse architecture supporting component retrieval by facet classes. The facets are organized into a lattice of facet sets and facet n-tuples. The query mechanism supports precise retrieval and flexible browsing.
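A compact sketch of lattice-style faceted retrieval, assuming classifications are modeled as sets of (facet, value) pairs ordered by inclusion; the catalog and facet names are invented, and the real system's facet n-tuples are richer than this.

```python
# Illustrative sketch of lattice-style faceted retrieval: classifications are
# sets of (facet, value) pairs ordered by inclusion, so a query can be answered
# precisely (exact match) or by browsing everything below the query point.
catalog = {
    "array_sort":   frozenset({("function", "sort"), ("object", "array")}),
    "list_sort":    frozenset({("function", "sort"), ("object", "list")}),
    "array_search": frozenset({("function", "search"), ("object", "array")}),
}

def precise(query):
    return [n for n, c in catalog.items() if c == query]

def browse(query):
    # every classification at or below the query point in the lattice
    return [n for n, c in catalog.items() if query <= c]

q = frozenset({("function", "sort")})
print(browse(q))  # ['array_sort', 'list_sort']
print(precise(frozenset({("function", "sort"), ("object", "array")})))  # ['array_sort']
```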
Partial Automation of Requirements Tracing
NASA Technical Reports Server (NTRS)
Hayes, Jane; Dekhtyar, Alex; Sundaram, Senthil; Vadlamudi, Sravanthi
2006-01-01
Requirements Tracing on Target (RETRO) is software for after-the-fact tracing of textual requirements to support independent verification and validation of software. RETRO applies one of three user-selectable information-retrieval techniques: (1) term frequency/inverse document frequency (TF/IDF) vector retrieval, (2) TF/IDF vector retrieval with simple thesaurus, or (3) keyword extraction. One component of RETRO is the graphical user interface (GUI) for use in initiating a requirements-tracing project (a pair of artifacts to be traced to each other, such as a requirements spec and a design spec). Once the artifacts have been specified and the IR technique chosen, another component constructs a representation of the artifact elements and stores it on disk. Next, the IR technique is used to produce a first list of candidate links (potential matches between the two artifact levels). This list, encoded in Extensible Markup Language (XML), is optionally processed by a filtering component designed to make the list somewhat smaller without sacrificing accuracy. Through the GUI, the user examines a number of links and returns decisions (yes, these are links; no, these are not links). Coded in XML, these decisions are provided to a "feedback processor" component that prepares the data for the next application of the IR technique. The feedback reduces the incidence of erroneous candidate links. Unlike related prior software, RETRO does not require the user to assign keywords, and automatically builds a document index.
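To make the TF/IDF tracing step concrete, here is a hedged sketch using off-the-shelf scikit-learn components; the artifacts, threshold, and variable names are invented, and this is not RETRO's actual code.

```python
# Minimal sketch of the TF/IDF tracing step RETRO automates: score each design
# element against each requirement and keep pairs above a threshold as
# candidate links. Artifacts and the threshold value are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["The system shall log all telemetry packets",
                "The system shall encrypt stored data"]
design = ["Telemetry logger writes packets to disk",
          "AES module encrypts the data store"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(requirements + design)
req_m, des_m = tfidf[:len(requirements)], tfidf[len(requirements):]

sims = cosine_similarity(req_m, des_m)
THRESHOLD = 0.1
candidate_links = [(r, d, round(float(sims[r, d]), 3))
                   for r in range(sims.shape[0])
                   for d in range(sims.shape[1])
                   if sims[r, d] > THRESHOLD]
print(candidate_links)  # e.g. [(0, 0, ...), (1, 1, ...)]
```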
2002-06-01
techniques for addressing the software component retrieval problem. Steigerwald [Ste91] introduced the use of algebraic specifications for defining the ... provided in terms of a specification written using Luqi’s Prototype Specification Description Language (PSDL) [LBY88] augmented with an algebraic
Knowledge-based reusable software synthesis system
NASA Technical Reports Server (NTRS)
Donaldson, Cammie
1989-01-01
The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval, for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and assessing retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions, where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and that each workflow component can be hosted and maintained by its respective domain experts.
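A minimal sketch of the uniform-interface idea described above: each stage consumes and produces a parameter dictionary, so any engine can chain them. The stage names and toy computations are invented; real OODT-style components would sit behind Web Service endpoints.

```python
# Sketch of uniform-interface workflow chaining: the chain only sees
# "parameters in, parameters out", so stages can be composed in any order.
# Stage names and the toy arithmetic are invented for illustration.
def surface_albedo(params):
    params["reflectance"] = [0.1, 0.3, 0.5]
    return params

def radiative_transfer(params):
    params["radiance"] = [r * 2.0 for r in params["reflectance"]]
    return params

def retrieval(params):
    params["retrieved_reflectance"] = [r / 2.0 for r in params["radiance"]]
    return params

def run_workflow(stages, params):
    for stage in stages:  # any workflow engine could drive this loop
        params = stage(params)
    return params

result = run_workflow([surface_albedo, radiative_transfer, retrieval], {})
print(result["retrieved_reflectance"])  # round-trip check against ground truth
```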
Software Framework for Peer Data-Management Services
NASA Technical Reports Server (NTRS)
Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
The Apache OODT Project: An Introduction
NASA Astrophysics Data System (ADS)
Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.
2012-12-01
Apache OODT is a science data system framework, born over the past decade, with 100s of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components. The first deals with information integration from existing science data repositories and archives that themselves have already-in-use business processes and models for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access and retrieval. The other suite of services and components within Apache OODT handles population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration, all are part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.
CARDS: A blueprint and environment for domain-specific software reuse
NASA Technical Reports Server (NTRS)
Wallnau, Kurt C.; Solderitsch, Anne Costa; Smotherman, Catherine
1992-01-01
CARDS (Central Archive for Reusable Defense Software) exploits advances in domain analysis and domain modeling to identify, specify, develop, archive, retrieve, understand, and reuse domain-specific software components. An important element of CARDS is to provide visibility into the domain model artifacts produced by, and services provided by, commercial computer-aided software engineering (CASE) technology. The use of commercial CASE technology is important to provide rich, robust support for the varied roles involved in a reuse process. We refer to this kind of use of knowledge representation systems as supporting 'knowledge-based integration.'
NASA Technical Reports Server (NTRS)
Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
NA-42 TI Shared Software Component Library FY2011 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudson, Christa K.; Rutz, Frederick C.; Dorow, Kevin E.
The NA-42 TI program initiated an effort in FY2010 to standardize its software development efforts with the long-term goal of migrating toward a software management approach that will allow for the sharing and reuse of code developed within the TI program, improve integration, ensure a level of software documentation, and reduce development costs. The Pacific Northwest National Laboratory (PNNL) has been tasked with two activities that support this mission. PNNL has been tasked with the identification, selection, and implementation of a Shared Software Component Library. The intent of the library is to provide a common repository that is accessible by all authorized NA-42 software development teams. The repository facilitates software reuse through a searchable and easy-to-use, web-based interface. As software is submitted to the repository, the component registration process captures meta-data and provides version control for compiled libraries, documentation, and source code. This meta-data is then available for retrieval and review as part of library search results. In FY2010, PNNL and staff from the Remote Sensing Laboratory (RSL) teamed up to develop a software application with the goal of replacing the aging Aerial Measuring System (AMS). The application under development includes an Advanced Visualization and Integration of Data (AVID) framework and associated AMS modules. Throughout development, PNNL and RSL have utilized a common AMS code repository for collaborative code development. The AMS repository is hosted by PNNL, is restricted to the project development team, is accessed via two different geographic locations, and continues to be used. The knowledge gained from the collaboration and hosting of this repository, in conjunction with PNNL software development and systems engineering capabilities, was used in the selection of a package to be used in the implementation of the software component library on behalf of NA-42 TI. The second task managed by PNNL is the development and continued maintenance of the NA-42 TI Software Development Questionnaire. This questionnaire is intended to help software development teams working under NA-42 TI in documenting their development activities. When sufficiently completed, the questionnaire illustrates that the software development activities recorded incorporate significant aspects of the software engineering lifecycle. The questionnaire template is updated as comments are received from NA-42 and/or its development teams, and revised versions are distributed to those using the questionnaire. PNNL also maintains a list of questionnaire recipients. The blank questionnaire template, the AVID and AMS software being developed, and the completed AVID AMS specific questionnaire are being used as the initial content to be established in the TI Component Library. This report summarizes the approach taken to identify requirements, search for and evaluate technologies, and the approach taken for installation of the software needed to host the component library. Additionally, it defines the process by which users request access for the contribution and retrieval of library content.
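The sketch below illustrates the registration-and-search workflow described above, assuming a simple metadata schema; the field names and component names are invented, not the NA-42 TI library's actual schema.

```python
# Illustrative sketch of the registration step: submitting a component captures
# metadata and a version history that later searches can query. Field names
# are assumptions, not the actual NA-42 TI library schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class ComponentRecord:
    name: str
    version: str
    description: str
    submitted: date

class SharedLibrary:
    def __init__(self):
        self.records = {}  # name -> list of versions, newest last

    def register(self, rec: ComponentRecord):
        self.records.setdefault(rec.name, []).append(rec)

    def search(self, keyword):
        return [v[-1] for v in self.records.values()
                if keyword.lower() in v[-1].description.lower()]

lib = SharedLibrary()
lib.register(ComponentRecord("avid-core", "1.0", "AVID visualization framework", date(2010, 9, 1)))
lib.register(ComponentRecord("avid-core", "1.1", "AVID visualization framework, AMS module", date(2011, 3, 1)))
print([r.version for r in lib.search("AMS")])  # ['1.1'] -- newest version matches
```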
Automation and hypermedia technology applications
NASA Technical Reports Server (NTRS)
Jupin, Joseph H.; Ng, Edward W.; James, Mark L.
1993-01-01
This paper represents a progress report on HyLite (Hypermedia Library technology): a research and development activity to produce a versatile system as part of NASA's technology thrusts in automation, information sciences, and communications. HyLite can be used as a system or tool to facilitate the creation and maintenance of large distributed electronic libraries. The contents of such a library may be software components, hardware parts or designs, scientific data sets or databases, configuration management information, etc. Proliferation of computer use has made the diversity and quantity of information too large for any single user to sort, process, and utilize effectively. In response to this information deluge, we have created HyLite to enable the user to process relevant information into a more efficient organization for presentation, retrieval, and readability. To accomplish this end, we have incorporated various AI techniques into the HyLite hypermedia engine to enhance the system's capabilities. The proposed techniques include intelligent searching tools for the libraries, intelligent retrievals, and navigational assistance based on user histories. HyLite itself is based on an earlier project, the Encyclopedia of Software Components (ESC), which used hypermedia to facilitate and encourage software reuse.
ERIC Educational Resources Information Center
Martins, Rosane Maria; Chaves, Magali Ribeiro; Pirmez, Luci; Rust da Costa Carmo, Luiz Fernando
2001-01-01
Discussion of the need to filter and retrieve relevant information from the Internet focuses on the use of mobile agents, specific software components which are based on distributed artificial intelligence and integrated systems. Surveys agent technology and discusses the agent building package used to develop two applications using IBM's Aglet…
Ground-Based Correction of Remote-Sensing Spectral Imagery
NASA Technical Reports Server (NTRS)
Alder-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander
2007-01-01
Software has been developed for an improved method of correcting for the atmospheric optical effects (primarily, effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, a version of FLAASH has been developed that utilizes the retrieved atmospheric parameters to process spectral image data.
Spatial Data Management System (SDMS)
NASA Technical Reports Server (NTRS)
Hutchison, Mark W.
1994-01-01
The Spatial Data Management System (SDMS) is a testbed for retrieval and display of spatially related material. SDMS permits the linkage of large graphical display objects with detail displays and explanations of its smaller components. SDMS combines UNIX workstations, MIT's X Window system, TCP/IP and WAIS information retrieval technology to prototype a means of associating aggregate data linked via spatial orientation. SDMS capitalizes upon and extends previous accomplishments of the Software Technology Branch in the area of Virtual Reality and Automated Library Systems.
PcapDB: Search Optimized Packet Capture, Version 0.1.0.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrell, Paul; Steinfadt, Shannon
PcapDB is a packet capture system designed to optimize the captured data for fast search in the typical (network incident response) use case. The technology involved in this software has been submitted via the IDEAS system and has been filed as a provisional patent. It includes the following primary components: capture: The capture component utilizes existing capture libraries to retrieve packets from network interfaces. Once retrieved, the packets are passed to additional threads for sorting into flows and indexing. The sorted flows and indexes are passed to other threads so that they can be written to disk. These components are written in the C programming language. search: The search components provide a means to find relevant flows and the associated packets. A search query is parsed and represented as a search tree. Various search commands, written in C, are then used to resolve this tree into a set of search results. The tree generation and search execution management components are written in Python. interface: The PcapDB web interface is written in Python on the Django framework. It provides a series of pages, APIs, and asynchronous tasks that allow the user to manage the capture system, perform searches, and retrieve results. Web page components are written in HTML, CSS, and Javascript.
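As a toy illustration of the query-to-search-tree design, the sketch below builds a tiny AND/OR tree and resolves it against in-memory flow records; PcapDB's real grammar, indexes, and on-disk formats are far richer, and everything here is invented.

```python
# Toy illustration of "query parsed into a search tree": a tiny AND/OR
# expression tree is resolved against flow records held in memory.
flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "port": 443},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "port": 22},
]

def leaf(field, value):
    return lambda flow: flow[field] == value

def AND(*kids):  # interior node: all children must match
    return lambda flow: all(k(flow) for k in kids)

def OR(*kids):   # interior node: any child may match
    return lambda flow: any(k(flow) for k in kids)

# (dst == 10.0.0.9) AND (port == 443 OR port == 80)
tree = AND(leaf("dst", "10.0.0.9"), OR(leaf("port", 443), leaf("port", 80)))
print([f for f in flows if tree(f)])  # only the HTTPS flow matches
```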
Reusable Software Component Retrieval via Normalized Algebraic Specifications
1991-12-01
outputs. In fact, this method of query is simpler for matching since it relieves the system from the burden of generating a test set. Eichmann [Eich91] ... September 1991. [Eich91] Eichmann, David A., "Selecting Reusable Components Using Algebraic Specifications", Proceedings of the Second International ...
An application of machine learning to the organization of institutional software repositories
NASA Technical Reports Server (NTRS)
Bailin, Sidney; Henderson, Scott; Truszkowski, Walt
1993-01-01
Software reuse has become a major goal in the development of space systems, as a recent NASA-wide workshop on the subject made clear. The Data Systems Technology Division of Goddard Space Flight Center has been working on tools and techniques for promoting reuse, in particular in the development of satellite ground support software. One of these tools is the Experiment in Libraries via Incremental Schemata and Cobweb (ElvisC). ElvisC applies machine learning to the problem of organizing a reusable software component library for efficient and reliable retrieval. In this paper we describe the background factors that have motivated this work, present the design of the system, and evaluate the results of its application.
21 CFR 111.35 - Under this subpart D, what records must you make and keep?
Code of Federal Regulations, 2013 CFR
2013-04-01
..., and any other contact surfaces that are used to manufacture, package, label, or hold components or... current software is not able to retrieve such records) and of data entered into computer systems that you use to manufacture, package, label, or hold dietary supplements. (i) Your backup file (e.g., a hard...
21 CFR 111.35 - Under this subpart D, what records must you make and keep?
Code of Federal Regulations, 2010 CFR
2010-04-01
..., and any other contact surfaces that are used to manufacture, package, label, or hold components or... current software is not able to retrieve such records) and of data entered into computer systems that you use to manufacture, package, label, or hold dietary supplements. (i) Your backup file (e.g., a hard...
21 CFR 111.35 - Under this subpart D, what records must you make and keep?
Code of Federal Regulations, 2011 CFR
2011-04-01
..., and any other contact surfaces that are used to manufacture, package, label, or hold components or... current software is not able to retrieve such records) and of data entered into computer systems that you use to manufacture, package, label, or hold dietary supplements. (i) Your backup file (e.g., a hard...
QCS : a system for querying, clustering, and summarizing documents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.
2006-08-01
Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
QCS: a system for querying, clustering and summarizing documents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Schlesinger, Judith D.; O'Leary, Dianne P.
2006-10-01
Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
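For readers who want the shape of the pipeline, here is a hedged sketch assembled from scikit-learn parts: TF-IDF plus truncated SVD stands in for Latent Semantic Indexing, k-means on L2-normalized vectors approximates spherical k-means, and the "summary" is simply the document nearest each centroid (the trimming/HMM/QR summarizer is not reproduced).

```python
# Sketch of the QCS pipeline shape using off-the-shelf parts; not QCS's code.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

docs = ["solar wind data from the probe", "probe telemetry and solar panels",
        "kriging interpolation of rainfall", "rainfall gauges and interpolation error"]

X = TfidfVectorizer().fit_transform(docs)
# TF-IDF + SVD stands in for Latent Semantic Indexing.
lsi = normalize(TruncatedSVD(n_components=2, random_state=0).fit_transform(X))
# k-means on L2-normalized vectors approximates spherical k-means.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(lsi)

for c in range(2):
    members = np.where(km.labels_ == c)[0]
    center = km.cluster_centers_[c]
    # "Summary" here is just the member document closest to the centroid.
    best = members[np.argmin([np.linalg.norm(lsi[i] - center) for i in members])]
    print(f"cluster {c} summary: {docs[best]}")
```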
The Chandra X-ray Center data system: supporting the mission of the Chandra X-ray Observatory
NASA Astrophysics Data System (ADS)
Evans, Janet D.; Cresitello-Dittmar, Mark; Doe, Stephen; Evans, Ian; Fabbiano, Giuseppina; Germain, Gregg; Glotfelty, Kenny; Hall, Diane; Plummer, David; Zografou, Panagoula
2006-06-01
The Chandra X-ray Center Data System provides end-to-end scientific software support for Chandra X-ray Observatory mission operations. The data system includes the following components: (1) observers' science proposal planning tools; (2) science mission planning tools; (3) science data processing, monitoring, and trending pipelines and tools; and (4) data archive and database management. A subset of the science data processing component is ported to multiple platforms and distributed to end-users as a portable data analysis package. Web-based user tools are also available for data archive search and retrieval. We describe the overall architecture of the data system and its component pieces, and consider the design choices and their impacts on maintainability. We discuss the many challenges involved in maintaining a large, mission-critical software system with limited resources. These challenges include managing continually changing software requirements and ensuring the integrity of the data system and resulting data products while being highly responsive to the needs of the project. We describe our use of COTS and OTS software at the subsystem and component levels, our methods for managing multiple release builds, and adapting a large code base to new hardware and software platforms. We review our experiences during the life of the mission so far, and our approaches for keeping a small, but highly talented, development team engaged during the maintenance phase of a mission.
yourSky: Custom Sky-Image Mosaics via the Internet
NASA Technical Reports Server (NTRS)
Jacob, Joseph
2003-01-01
yourSky (http://yourSky.jpl.nasa.gov) is a computer program that supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. [yourSky is an upgraded version of the software reported in Software for Generating Mosaics of Astronomical Images (NPO-21121), NASA Tech Briefs, Vol. 25, No. 4 (April 2001), page 16a.] A requester no longer has to engage in the tedious process of determining what subset of images is needed, nor even to know how the images are indexed in image archives. Instead, in response to a requester's specification of the size and location of the sky area (and optionally of the desired set and type of data, resolution, coordinate system, projection, and image format), yourSky automatically retrieves the component image data from archives totaling tens of terabytes stored on computer tape and disk drives at multiple sites and assembles the component images into a mosaic image by use of a high-performance parallel code. yourSky runs on the server computer where the mosaics are assembled. Because yourSky includes a Web-interface component, no special client software is needed: ordinary Web browser software is sufficient.
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
Image retrieval and processing system version 2.0 development work
NASA Technical Reports Server (NTRS)
Slavney, Susan H.; Guinness, Edward A.
1991-01-01
The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs were developed that are compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
NASA Astrophysics Data System (ADS)
Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.
2004-11-01
Surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process massive amounts of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g., telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable, and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition and computing capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over a RAM-based (Random Access Memory) architecture for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
NSWC-NADC interactive communication links for AN/UYS-1 loadtape creation and retrieval
NASA Astrophysics Data System (ADS)
Greathouse, D. M.
1984-09-01
This report presents an alternative method of communication (interactive vs. remote batch) with the Naval Air Development Center for the creation and retrieval of AN/UYS-1 Advanced Signal Processor (ASP) operational software loadtapes. Operational software for the Digital Acoustic Sensor Simulator (DASS) program is developed and maintained at the Naval Air Development Center (NADC). The Facility for Automated Software Production (FASP), an NADC-resident software generation facility, provides the support tools necessary for data base creation, software development and maintenance, and loadtape generation. Once a loadtape file is generated at NADC, it must be retrieved via telephone transmission and placed in a format suitable for loading into the AN/UYS-1 Advanced Signal Processor (ASP).
Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer
NASA Technical Reports Server (NTRS)
Broderick, Daniel
2012-01-01
This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model, and a single-scattering dominated atmosphere model, to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales. The surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to surface properties. The software also takes advantage of an intelligent initial condition given by the solutions of neighboring pixels.
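The sketch below shows the retrieval step in miniature: a Levenberg-Marquardt fit of model parameters to multi-angle reflectances via scipy. The one-line "BRDF" is a toy stand-in, not the modified Rahman-Pinty-Verstraete model or the MISR forward model.

```python
# Hedged sketch of the retrieval step: fit parameters to multi-angle
# reflectance with Levenberg-Marquardt. The toy model below is invented.
import numpy as np
from scipy.optimize import least_squares

angles = np.deg2rad([0, 26, 46, 60, 70])  # MISR-like view angles

def toy_brdf(params, theta):
    rho, k = params  # amplitude and angular-shape parameter (invented)
    return rho * np.cos(theta) ** k

truth = np.array([0.3, 0.8])
observed = toy_brdf(truth, angles) + 0.001 * np.random.default_rng(0).standard_normal(5)

def residuals(params):
    return toy_brdf(params, angles) - observed

fit = least_squares(residuals, x0=[0.1, 1.0], method="lm")
print(fit.x)  # recovers ~[0.3, 0.8]; a neighbor pixel's fit.x would seed the next pixel
```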
DOE Office of Scientific and Technical Information (OSTI.GOV)
RIECK, C.A.
1999-02-23
This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.
How to Choose a Media Retrieval System.
ERIC Educational Resources Information Center
Huber, Joe
1995-01-01
Provides guidelines for schools choosing a media retrieval system. Topics include broadband, baseband, coaxial cable, or fiber optic decisions; the control network; selecting scheduling software; presentation software; device control; control from the classroom; and a comparison of systems offered by five companies. (LRW)
NASA Technical Reports Server (NTRS)
deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher
2013-01-01
This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on the efficiency and analysis capabilities of existing database software and offers better flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
Exploiting IoT Technologies and Open Source Components for Smart Seismic Network Instrumentation
NASA Astrophysics Data System (ADS)
Germenis, N. G.; Koulamas, C. A.; Foundas, P. N.
2017-12-01
The data collection infrastructure of any seismic network poses a number of requirements and trade-offs related to accuracy, reliability, power autonomy, and installation and operational costs. Given the right hardware design at the edge of this infrastructure, the embedded software running inside the instruments is the heart of the pre-processing and communication services and of their integration with the central storage and processing facilities of the seismic network. This work demonstrates the feasibility and benefits of exploiting software components from heterogeneous sources in order to realize a smart seismic data logger, achieving higher reliability, faster integration, and less development and testing costs of critical functionality that is in turn responsible for the cost- and power-efficient operation of the device. The instrument's software builds on top of widely used open source components around the Linux kernel with real-time extensions, the core Debian Linux distribution, the earthworm and seiscomp tooling frameworks, as well as components from the Internet of Things (IoT) world, such as the CoAP and MQTT protocols for the signaling planes, besides the widely used de facto standards of the application domain at the data plane, such as the SeedLink protocol. By using an innovative integration of features based on lower level GPL components of the seiscomp suite with higher level processing earthworm components, coupled with IoT protocol extensions to the latter, the instrument can implement smart functionality such as network controlled, event triggered data transmission in parallel with edge archiving and on demand, short term historical data retrieval.
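As an illustration of the event-triggered transmission such an instrument performs, here is a sketch using the classic STA/LTA trigger; the thresholds and window lengths are invented, and the publish() stub stands in for the MQTT or CoAP transfer the text describes.

```python
# Sketch of event-triggered transmission: a short-term/long-term average
# (STA/LTA) ratio decides when a waveform window is worth sending.
import numpy as np

def sta_lta(signal, nsta=20, nlta=200):
    """Short-term / long-term average ratio, the standard seismic trigger."""
    sq = signal.astype(float) ** 2
    sta = np.convolve(sq, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(sq, np.ones(nlta) / nlta, mode="same")
    return sta / np.maximum(lta, 1e-12)

def publish(window):
    print(f"would transmit {window.size} samples")  # MQTT/CoAP in a real device

rng = np.random.default_rng(1)
trace = rng.standard_normal(2000)
trace[1200:1300] += 8 * rng.standard_normal(100)  # synthetic seismic event

ratio = sta_lta(trace)
if ratio.max() > 4.0:                             # illustrative trigger threshold
    onset = int(np.argmax(ratio > 4.0))
    publish(trace[max(0, onset - 100): onset + 400])  # pre/post-event context
```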
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsitsin, A.G.
A project is discussed which is aimed at creating the International Center for certification of software complexes (SC) intended for solving various heat and mass transfer problems. Information on the experience gained in the operation of an information retrieval SC system is presented.
Brandt, J-M; Guenther, L; O'Brien, S; Vecherya, A; Turgeon, T R; Bohm, E R
2013-12-01
The surface characteristics of the femoral component affect polyethylene wear in modular total knee replacements. In the present retrieval study, the surface characteristics of cobalt-chromium (CoCr) alloy and oxidized zirconium (OxZr) femoral components were assessed and compared. Twenty-six retrieved CoCr alloy femoral components were matched with twenty-six retrieved OxZr femoral components for implantation period, body-mass index, patient gender, implant type, and polyethylene insert thickness. The surface damage on the retrieved femoral components was evaluated using a semi-quantitative assessment method, scanning electron microscopy, and contact profilometry. The retrieved CoCr alloy femoral components showed less posterior surface gouging than OxZr femoral components; however, at a higher magnification, the grooving damage features on the retrieved CoCr alloy femoral components confirmed an abrasive wear mechanism. The surface roughness values Rp, Rpm, and Rpk for the retrieved CoCr alloy femoral components were found to be significantly higher than those of the retrieved OxZr femoral components (p≤0.031). The surface roughness values were higher on the medial condyles than on the lateral condyles of the retrieved CoCr alloy femoral components; such a difference was not observed on the retrieved OxZr femoral components. The surface roughness of CoCr alloy femoral components increased while the surface roughness of the OxZr femoral components remained unchanged after in vivo service. Therefore, the OxZr femoral components' resistance to abrasive wear may enable lower polyethylene wear and ensure long-term durability in vivo. Level IV.
A Multimodal Search Engine for Medical Imaging Studies.
Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos
2017-02-01
The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
Liu, Fei; Wang, Yuan-zhong; Deng, Xing-yan; Jin, Hang; Yang, Chun-yan
2014-06-01
The infrared spectra of the stems of 165 trees of 23 Dendrobium varieties were obtained by the Fourier transform infrared (FTIR) spectroscopy technique. The spectra of all the samples were similar, and the main component of Dendrobium stems is cellulose. Using the spectral software Omnic 8.0, three spectral databases were constructed: Lib01 contains the average spectra of the first four trees of every variety, while Lib02 and Lib03 were constructed from the first-derivative and second-derivative spectra of those averages, respectively. The correlation search, the square difference retrieval, and the square differential difference retrieval against Lib01 in the specified range of 1800-500 cm(-1) yielded correct rates of 92.7%, 74.5%, and 92.7%, respectively. The square differential difference retrieval of the first-derivative and second-derivative spectra, carried out with Lib02 and Lib03 in the same range, gave correct rates of 93.9% for the former and 90.3% for the latter. The results show that the first-derivative spectral retrieval with the square differential difference algorithm is the most suitable for discerning Dendrobium varieties, and that FTIR combined with spectral library retrieval can identify different varieties of Dendrobium; the correlation retrieval, the square difference retrieval, and the first- and second-derivative retrievals in the specified spectral range are effective and simple ways of distinguishing different varieties of Dendrobium.
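The three library-search modes compared above can be sketched in a few lines on synthetic spectra; the Gaussian "spectra" and variety names are invented, and this is only meant to show the arithmetic of correlation, squared-difference, and derivative-based matching.

```python
# Minimal sketch of the three library-search modes on made-up spectra.
import numpy as np

wavenumbers = np.linspace(1800, 500, 300)
library = {f"variety_{i}": np.exp(-((wavenumbers - c) / 60.0) ** 2)
           for i, c in enumerate([1650, 1050, 700])}
query = library["variety_1"] + 0.02 * np.random.default_rng(2).standard_normal(300)

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

def sq_diff(a, b):
    return np.sum((a - b) ** 2)

def deriv_sq_diff(a, b):  # squared difference of first derivatives
    return np.sum((np.gradient(a) - np.gradient(b)) ** 2)

best_corr = max(library, key=lambda k: correlation(query, library[k]))
best_sq   = min(library, key=lambda k: sq_diff(query, library[k]))
best_der  = min(library, key=lambda k: deriv_sq_diff(query, library[k]))
print(best_corr, best_sq, best_der)  # all three should identify variety_1
```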
The design, deployment, and testing of kriging models in GEOframe with SIK-0.9.8
NASA Astrophysics Data System (ADS)
Bancheri, Marialaura; Serafin, Francesco; Bottazzi, Michele; Abera, Wuletawu; Formetta, Giuseppe; Rigon, Riccardo
2018-06-01
This work presents a software package for the interpolation of climatological variables, such as temperature and precipitation, using kriging techniques. The purposes of the paper are (1) to present geostatistical software that is easy to use and easy to plug into a hydrological model; (2) to provide a practical example of accurately designed software from the perspective of reproducible research; and (3) to demonstrate the quality of the software's results and thus provide a reliable alternative to other, more traditional tools. A total of 11 types of theoretical semivariograms and four types of kriging were implemented and gathered into Object Modeling System-compliant components. The package provides real-time optimization for semivariogram and kriging parameters. The software was tested using a year's worth of hourly temperature readings and a rain storm event (11 h) recorded in 2008 and retrieved from 97 meteorological stations in the Isarco River basin, Italy. For both variables, good interpolation results were obtained and then compared to the results from the R package gstat.
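For orientation, the core computation the kriging components package up looks roughly like the following ordinary-kriging sketch with an exponential semivariogram; the station coordinates and variogram parameters are invented, and the package's real-time parameter optimization is not shown.

```python
# Compact ordinary-kriging sketch with an exponential semivariogram.
import numpy as np

def gamma(h, nugget=0.0, sill=1.0, rng=500.0):
    return nugget + sill * (1.0 - np.exp(-h / rng))  # exponential model

stations = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 300.0]])
temps = np.array([10.0, 14.0, 12.0])
target = np.array([150.0, 100.0])

# Ordinary-kriging system: semivariogram matrix bordered by ones, zero corner.
n = len(stations)
A = np.ones((n + 1, n + 1)); A[n, n] = 0.0
for i in range(n):
    for j in range(n):
        A[i, j] = gamma(np.linalg.norm(stations[i] - stations[j]))
b = np.append(gamma(np.linalg.norm(stations - target, axis=1)), 1.0)

weights = np.linalg.solve(A, b)[:n]  # last entry is the Lagrange multiplier
print(float(weights @ temps))        # interpolated temperature at the target
```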
ENVISAT Land Surface Processes. Phase 2
NASA Technical Reports Server (NTRS)
vandenHurk, B. J. J. M.; Su, Z.; Verhoef, W.; Menenti, M.; Li, Z.-L.; Wan, Z.; Moene, A. F.; Roerink, G.; Jia, I.
2002-01-01
This is a progress report of the 2nd phase of the project ENVISAT - Land Surface Processes, which has a 3-year scope. In this project, preparative research is carried out aiming at the retrieval of land surface characteristics from the ENVISAT sensors MERIS and AATSR, for assimilation into a system for Numerical Weather Prediction (NWP). Where in the 1st phase a number of first-shot experiments were carried out (aiming at gaining experience with the retrievals and data assimilation procedures), the current 2nd phase has put more emphasis on the assessment and improvement of the quality of the retrieved products. The forthcoming phase will be devoted mainly to the data assimilation experiments and the assessment of the added value of the future ENVISAT products for NWP forecast skill. Referring to the retrieval of albedo, leaf area index, and atmospheric corrections, preliminary radiative transfer calculations have been carried out that should enable the retrieval of these parameters once AATSR and MERIS data become available. However, much of this work is still to be carried out. An essential part of work in this area is the design and implementation of software that enables an efficient use of the MODTRAN4 radiative transfer code, and during the current project phase familiarization with these new components has been achieved. Significant progress has been made with the retrieval of component temperatures from directional ATSR images, and the calculation of surface turbulent heat fluxes from these data. The impact of vegetation cover on the retrieved component temperatures appears manageable, and preliminary comparisons of foliage temperatures to air temperatures were encouraging. The calculation of surface fluxes using the SEBI concept, which includes a detailed model of the surface roughness ratio, appeared to give results that were in reasonable agreement with local measurements with scintillometer devices. The specification of the atmospheric boundary conditions appears a crucial component, and the use of first-guess estimates from the RACMO models partially explains the success. Earlier data assimilation experiments with directional surface temperatures have been analysed further and were also compared to results obtained from directly modeling the surface roughness ratio. Results between these calculations and the data assimilation results appeared well comparable, but a full test in which the surface roughness model is allowed to play a free role during the data assimilation process has yet to be carried out. A considerable number of tasks to be carried out during Phase 3 have been formulated.
ERIC Educational Resources Information Center
Sieverts, Eric G.; And Others
1993-01-01
Reports on tests evaluating nine microcomputer software packages designed for information storage and retrieval: BRS-Search, dtSearch, InfoBank, Micro-OPC, Q&A, STN-PFS, Strix, TINman, and ZYindex. Tables and narrative evaluations detail results related to security, hardware, user features, search capability, indexing, input, maintenance of files,…
Knowledge Retrieval Solutions.
ERIC Educational Resources Information Center
Khan, Kamran
1998-01-01
Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…
A web-based approach for electrocardiogram monitoring in the home.
Magrabi, F; Lovell, N H; Celler, B G
1999-05-01
A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components including the Web server, Web page, the specialised client side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data is sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report along with reminders and suggestions for action is sent to the doctor and patient by email.
NASA Technical Reports Server (NTRS)
2001-01-01
In this contract, which is a component of a larger contract that we plan to submit in the coming months, we plan to study the preprocessing issues which arise in applying natural language processing techniques to NASA-KSC problem reports. The goals of this work will be to deal with the issues of: a) automatically obtaining the problem reports from NASA-KSC data bases, b) the format of these reports and c) the conversion of these reports to a format that will be adequate for our natural language software. At the end of this contract, we expect that these problems will be solved and that we will be ready to apply our natural language software to a text database of over 1000 KSC problem reports.
Develop advanced nonlinear signal analysis topographical mapping system
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1993-01-01
This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an advanced nonlinear signal analysis topographical mapping system (ATMS) of nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbopump families.
EOS MLS Level 2 Data Processing Software Version 3
NASA Technical Reports Server (NTRS)
Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; wang, Shuhui; Manney, Gloria L.;
2011-01-01
This software accepts the EOS MLS calibrated measurements of microwave radiances products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition, to do conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
NASA Astrophysics Data System (ADS)
Al-Mishwat, Ali T.
2016-05-01
PHASS99 is a FORTRAN program designed to retrieve and decode radiometric and other physical age information of igneous rocks contained in the international database IGBADAT (Igneous Base Data File). In the database, ages are stored in a proprietary format using mnemonic representations. The program can handle up to 99 ages in an igneous rock specimen and caters to forty radiometric age systems. The radiometric age alphanumeric strings assigned to each specimen description in the database consist of four components: the numeric age and its exponential modifier, a four-character mnemonic method identification, a two-character mnemonic name of analysed material, and the reference number in the rock group bibliography vector. For each specimen, the program searches for radiometric age strings, extracts them, parses them, decodes the different age components, and converts them to high-level English equivalents. IGBADAT and similarly-structured files are used for input. The output includes three files: a flat raw ASCII text file containing retrieved radiometric age information, a generic spreadsheet-compatible file for data import to spreadsheets, and an error file. PHASS99 builds on the old program TSTPHA (Test Physical Age) decoder program and expands greatly its capabilities. PHASS99 is simple, user friendly, fast, efficient, and does not require users to have knowledge of programing.
BanTeC: a software tool for management of corneal transplantation.
López-Alvarez, P; Caballero, F; Trias, J; Cortés, U; López-Navidad, A
2005-11-01
Until recently, all cornea information at our tissue bank was managed manually, no specific database or computer tool had been implemented to provide electronic versions of documents and medical reports. The main objective of the BanTeC project was therefore to create a computerized system to integrate and classify all the information and documents used in the center in order to facilitate management of retrieved, transplanted corneal tissues. We used the Windows platform to develop the project. Microsoft Access and Microsoft Jet Engine were used at the database level and Data Access Objects was the chosen data access technology. In short, the BanTeC software seeks to computerize the tissue bank. All the initial stages of the development have now been completed, from specification of needs, program design and implementation of the software components, to the total integration of the final result in the real production environment. BanTeC will allow the generation of statistical reports for analysis to improve our performance.
ASIST 2001. Information in a Networked World: Harnessing the Flow. Part III: Poster Presentations.
ERIC Educational Resources Information Center
Proceedings of the ASIST Annual Meeting, 2001
2001-01-01
Topics of Poster Presentations include: electronic preprints; intranets; poster session abstracts; metadata; information retrieval; watermark images; video games; distributed information retrieval; subject domain knowledge; data mining; information theory; course development; historians' use of pictorial images; information retrieval software;…
Positive facial expressions during retrieval of self-defining memories.
Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad
2017-11-14
In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories and autobiographical retrieval was analyzed with a facial analysis software. This software (Facereader) synthesizes the facial expression information (i.e., cheek, lips, muscles, eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of Self-defining memories. These findings provide valuable physiological information about the emotional experience of the past.
Image acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Reardon, Frank J.; Salutz, James R.
1991-07-01
The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate to the other components of the system are termed Image Acquisition Units (LAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.
Specifications for Thesaurus Software.
ERIC Educational Resources Information Center
Milstead, Jessica L.
1991-01-01
Presents specifications for software that is designed to support manual development and maintenance of information retrieval thesauri. Evaluation of existing software and design of custom software is discussed, requirements for integration with larger systems and for the user interface are described, and relationships among terms are discussed.…
Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software
Williams, Linda; Grayson, Diana; Gosbee, John
2001-01-01
Drawing software from Lassalle Technologies1 (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.
Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software
Williams, Linda; Grayson, Diana; Gosbee, John
2002-01-01
Drawing software from Lassalle Technologies1 (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.
International Inventory of Software Packages in the Information Field.
ERIC Educational Resources Information Center
Keren, Carl, Ed.; Sered, Irina, Ed.
Designed to provide guidance in selecting appropriate software for library automation, information storage and retrieval, or management of bibliographic databases, this inventory describes 188 computer software packages. The information was obtained through a questionnaire survey of 600 software suppliers and developers who were asked to describe…
Incorporating a Human-Computer Interaction Course into Software Development Curriculums
ERIC Educational Resources Information Center
Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph
2015-01-01
Individuals have increasing options on retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. Also, the number of software applications has significantly increased the user's capability to access data. Software applications include the traditional web site, smart device…
NASA Technical Reports Server (NTRS)
Erickson, J.; Goode, R.; Grimm, K.; Hess, C.; Norsworthy, R.; Anderson, G.; Merkel, L.; Phinney, D.
1992-01-01
The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a heirarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive realtime behavior requirements, are discussed.
NASA Astrophysics Data System (ADS)
Erickson, Jon D.; Goode, R.; Grimm, K. A.; Hess, Clifford W.; Norsworthy, Robert S.; Anderson, Greg D.; Merkel, L.; Phinney, Dale E.
1992-03-01
The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice- supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the space station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive realtime behavior requirements, are discussed.
Facilitating Internet-Scale Code Retrieval
ERIC Educational Resources Information Center
Bajracharya, Sushil Krishna
2010-01-01
Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…
2016-01-01
The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Six out of five simulator components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols were not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR containing activities other than walking may be warranted. PMID:27597955
Independent Orbiter Assessment (IOA): Analysis of the remote manipulator system
NASA Technical Reports Server (NTRS)
Tangorra, F.; Grasmeder, R. F.; Montgomery, A. D.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items (PCIs). To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The independent analysis results for the Orbiter Remote Manipulator System (RMS) are documented. The RMS hardware and software are primarily required for deploying and/or retrieving up to five payloads during a single mission, capture and retrieve free-flying payloads, and for performing Manipulator Foot Restraint operations. Specifically, the RMS hardware consists of the following components: end effector; displays and controls; manipulator controller interface unit; arm based electronics; and the arm. The IOA analysis process utilized available RMS hardware drawings, schematics and documents for defining hardware assemblies, components and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 574 failure modes analyzed, 413 were determined to be PCIs.
Schematic memory components converge within angular gyrus during retrieval
Wagner, Isabella C; van Buuren, Mariët; Kroes, Marijn CW; Gutteling, Tjerk P; van der Linden, Marieke; Morris, Richard G; Fernández, Guillén
2015-01-01
Mental schemas form associative knowledge structures that can promote the encoding and consolidation of new and related information. Schemas are facilitated by a distributed system that stores components separately, presumably in the form of inter-connected neocortical representations. During retrieval, these components need to be recombined into one representation, but where exactly such recombination takes place is unclear. Thus, we asked where different schema components are neuronally represented and converge during retrieval. Subjects acquired and retrieved two well-controlled, rule-based schema structures during fMRI on consecutive days. Schema retrieval was associated with midline, medial-temporal, and parietal processing. We identified the multi-voxel representations of different schema components, which converged within the angular gyrus during retrieval. Critically, convergence only happened after 24-hour-consolidation and during a transfer test where schema material was applied to novel but related trials. Therefore, the angular gyrus appears to recombine consolidated schema components into one memory representation. DOI: http://dx.doi.org/10.7554/eLife.09668.001 PMID:26575291
Applying Hypertext Structures to Software Documentation.
ERIC Educational Resources Information Center
French, James C.; And Others
1997-01-01
Describes a prototype system for software documentation management called SLEUTH (Software Literacy Enhancing Usefulness to Humans) being developed at the University of Virginia. Highlights include information retrieval techniques, hypertext links that are installed automatically, a WAIS (Wide Area Information Server) search engine, user…
WWW Entrez: A Hypertext Retrieval Tool for Molecular Biology.
ERIC Educational Resources Information Center
Epstein, Jonathan A.; Kans, Jonathan A.; Schuler, Gregory D.
This article describes the World Wide Web (WWW) Entrez server which is based upon the National Center for Biotechnology Information's (NCBI) Entrez retrieval database and software. Entrez is a molecular sequence retrieval system that contains an integrated view of portions of Medline and all publicly available nucleotide and protein databases,…
HUC--A User Designed System for All Recorded Knowledge and Information.
ERIC Educational Resources Information Center
Hilton, Howard J.
This paper proposes a user designed system, HUC, intended to provide a single index and retrieval system covering all recorded knowledge and information capable of being retrieved from all modes of storage, from manual to the most sophisticated retrieval system. The concept integrates terminal hardware, software, and database structure to allow…
Semantics of User Interface for Image Retrieval: Possibility Theory and Learning Techniques.
ERIC Educational Resources Information Center
Crehange, M.; And Others
1989-01-01
Discusses the need for a rich semantics for the user interface in interactive image retrieval and presents two methods for building such interfaces: possibility theory applied to fuzzy data retrieval, and a machine learning technique applied to learning the user's deep need. Prototypes developed using videodisks and knowledge-based software are…
NASA Technical Reports Server (NTRS)
Quach, William L.; Sesplaukis, Tadas; Owen-Mankovich, Kyran J.; Nakamura, Lori L.
2012-01-01
WMD provides a centralized interface to access data stored in the Mission Data Processing and Control System (MPCS) GDS (Ground Data Systems) databases during MSL (Mars Science Laboratory) Testbeds and ATLO (Assembly, Test, and Launch Operations) test sessions. The MSL project organizes its data based on venue (Testbed, ATLO, Ops), with each venue's data stored on a separate database, making it cumbersome for users to access data across the various venues. WMD allows sessions to be retrieved through a Web-based search using several criteria: host name, session start date, or session ID number. Sessions matching the search criteria will be displayed and users can then select a session to obtain and analyze the associated data. The uniqueness of this software comes from its collection of data retrieval and analysis features provided through a single interface. This allows users to obtain their data and perform the necessary analysis without having to worry about where and how to get the data, which may be stored in various locations. Additionally, this software is a Web application that only requires a standard browser without additional plug-ins, providing a cross-platform, lightweight solution for users to retrieve and analyze their data. This software solves the problem of efficiently and easily finding and retrieving data from thousands of MSL Testbed and ATLO sessions. WMD allows the user to retrieve their session in as little as one mouse click, and then to quickly retrieve additional data associated with the session.
Lidar stand-alone retrieval of atmospheric aerosol microphysical properties during SLOPE
NASA Astrophysics Data System (ADS)
Ortiz-Amezcua, Pablo; Samaras, Stefanos; Böckmann, Christine; Antonio Benavent-Oltra, Jose; Luis Guerrero-Rascado, Juan; Román, Roberto; Alados-Arboledas, Lucas
2018-04-01
Two cases from SLOPE campaign at Granada are analyzed in terms of particle microphysical properties using novel software developed at Potsdam University. Multiwavelength Raman lidar measurements of particle extinction and backscatter coefficients as well as linear particle depolarization ratios are used as input for the software. The result of the retrieval is a 2-dimensional particle volume distribution as a function of radius and aspect ratio, from which the particle microphysical properties are obtained.
Code of Federal Regulations, 2013 CFR
2013-07-01
... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...
Code of Federal Regulations, 2011 CFR
2011-07-01
... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...
ERIC Educational Resources Information Center
Davies, Denise M.
1985-01-01
Discusses design, development, and use of a database to provide organization and access to a computer software collection at the University of Hawaii School of Library Studies. Field specifications, samples of report forms, and a description of the physical organization of the software collection are included. (MBR)
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: Wind and Temperature Spectrometry of the Upper Atmosphere in Low-Earth Orbit; Health Monitor for Multitasking, Safety-Critical, Real-Time Software; Stereo Imaging Miniature Endoscope; Early Oscillation Detection Technique for Hybrid DC/DC Converters; Parallel Wavefront Analysis for a 4D Interferometer; Schottky Heterodyne Receivers With Full Waveguide Bandwidth; Carbon Nanofiber-Based, High-Frequency, High-Q, Miniaturized Mechanical Resonators; Ultracapacitor-Based Uninterrupted Power Supply System; Coaxial Cables for Martian Extreme Temperature Environments; Using Spare Logic Resources To Create Dynamic Test Points; Autonomous Coordination of Science Observations Using Multiple Spacecraft; Autonomous Phase Retrieval Calibration; EOS MLS Level 1B Data Processing Software, Version 3; Cassini Tour Atlas Automated Generation; Software Development Standard Processes (SDSP); Graphite Composite Panel Polishing Fixture; Material Gradients in Oxygen System Components Improve Safety; Ridge Waveguide Structures in Magnesium-Doped Lithium Niobate; Modifying Matrix Materials to Increase Wetting and Adhesion; Lightweight Magnetic Cooler With a Reversible Circulator; The Invasive Species Forecasting System; Method for Cleanly and Precisely Breaking Off a Rock Core Using a Radial Compressive Force; Praying Mantis Bending Core Breakoff and Retention Mechanism; Scoring Dawg Core Breakoff and Retention Mechanism; Rolling-Tooth Core Breakoff and Retention Mechanism; Vibration Isolation and Stabilization System for Spacecraft Exercise Treadmill Devices; Microgravity-Enhanced Stem Cell Selection; Diagnosis and Treatment of Neurological Disorders by Millimeter-Wave Stimulation; Passive Vaporizing Heat Sink; Remote Sensing and Quantization of Analog Sensors; Phase Retrieval for Radio Telescope and Antenna Control; Helium-Cooled Black Shroud for Subscale Cryogenic Testing; Receive Mode Analysis and Design of Microstrip Reflectarrays; and Chance-Constrained Guidance With Non-Convex Constraints.
Software for Managing Personal Files.
ERIC Educational Resources Information Center
Lundeen, Gerald
1989-01-01
Discusses the special characteristics of personal file management software and compares four microcomputer software packages: Notebook II with Bibliography and Convert, Pro-Cite with Biblio-Links, askSam, and Reference Manager. Each package is evaluated in terms of the user interface, file maintenance, retrieval capabilities, output, and…
NASA Astrophysics Data System (ADS)
Niblack, Carlton W.; Zhu, Xiaoming; Hafner, James L.; Breuel, Tom; Ponceleon, Dulce B.; Petkovic, Dragutin; Flickner, Myron D.; Upfal, Eli; Nin, Sigfredo I.; Sull, Sanghoon; Dom, Byron E.; Yeo, Boon-Lock; Srinivasan, Savitha; Zivkovic, Dan; Penner, Mike
1997-12-01
QBICTM (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project such as indexing for high-dimensional multimedia data, retrieval of gray level images, and storyboard generation suitable for video. It describes aspects of QBIC software including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.
More emotional facial expressions during episodic than during semantic autobiographical retrieval.
El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis
2016-04-01
There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed by a software for facial analysis that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.
Design implications for task-specific search utilities for retrieval and re-engineering of code
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif
2017-05-01
The importance of information retrieval systems is unquestionable in the modern society and both individuals as well as enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context such as development language, technology framework, goal of the project, project complexity and developer's domain expertise. They also impose additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed in order to support their work-related activities. In order to investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation has achieved promising initial results on the precision and recall performance of the system.
MaROS: Information Management Service
NASA Technical Reports Server (NTRS)
Allard, Daniel A.; Gladden, Roy E.; Wright, Jesse J.; Hy, Franklin H.; Rabideau, Gregg R.; Wallick, Michael N.
2011-01-01
This software is provided by the Mars Relay Operations Service (MaROS) task to a variety of Mars projects for the purpose of coordinating communications sessions between landed spacecraft assets and orbiting spacecraft assets at Mars. The Information Management Service centralizes a set of functions previously distributed across multiple spacecraft operations teams, and as such, greatly improves visibility into the end-to-end strategic coordination process. Most of the process revolves around the scheduling of communications sessions between the spacecraft during periods of time when a landed asset on Mars is geometrically visible by an orbiting spacecraft. These relay sessions are used to transfer data both to and from the landed asset via the orbiting asset on behalf of Earth-based spacecraft operators. This software component is an application process running as a Java virtual machine. The component provides all service interfaces via a Representational State Transfer (REST) protocol over https to external clients. There are two general interaction modes with the service: upload and download of data. For data upload, the service must execute logic specific to the upload data type and trigger any applicable calculations including pass delivery latencies and overflight conflicts. For data download, the software must retrieve and correlate requested information and deliver to the requesting client. The provision of this service enables several key advancements over legacy processes and systems. For one, this service represents the first time that end-to-end relay information is correlated into a single shared repository. The software also provides the first multimission latency calculator; previous latency calculations had been performed on a mission-by-mission basis.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...
Code of Federal Regulations, 2013 CFR
2013-10-01
... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...
Code of Federal Regulations, 2012 CFR
2012-10-01
... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...
Code of Federal Regulations, 2011 CFR
2011-10-01
... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...
Code of Federal Regulations, 2010 CFR
2010-10-01
... management information required by the Medicaid single State agency and Federal Government for program... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid...
Artificial Intelligence and Information Retrieval.
ERIC Educational Resources Information Center
Teodorescu, Ioana
1987-01-01
Compares artificial intelligence and information retrieval paradigms for natural language understanding, reviews progress to date, and outlines the applicability of artificial intelligence to question answering systems. A list of principal artificial intelligence software for database front end systems is appended. (CLB)
NASA Technical Reports Server (NTRS)
Eichmann, David A.
1992-01-01
We present a user interface for software reuse repository that relies both on the informal semantics of faceted classification and the formal semantics of type signatures for abstract data types. The result is an interface providing both structural and qualitative feedback to a software reuser.
Methods and Software for Building Bibliographic Data Bases.
ERIC Educational Resources Information Center
Daehn, Ralph M.
1985-01-01
This in-depth look at database management systems (DBMS) for microcomputers covers data entry, information retrieval, security, DBMS software and design, and downloading of literature search results. The advantages of in-house systems versus online search vendors are discussed, and specifications of three software packages and 14 sources are…
Critical Design Decisions of The Planck LFI Level 1 Software
NASA Astrophysics Data System (ADS)
Morisset, N.; Rohlfs, R.; Türler, M.; Meharga, M.; Binko, P.; Beck, M.; Frailis, M.; Zacchei, A.
2010-12-01
The PLANCK satellite with two on-board instruments, a Low Frequency Instrument (LFI) and a High Frequency Instrument (HFI) has been launched on May 14th with Ariane 5. The ISDC Data Centre for Astrophysics in Versoix, Switzerland has developed and maintains the Planck LFI Level 1 software for the Data Processing Centre (DPC) in Trieste, Italy. The main tasks of the Level 1 processing are to retrieve the daily available scientific and housekeeping (HK) data of the LFI instrument, the Sorption Cooler and the 4k Cooler data from Mission Operation Centre (MOC) in Darmstadt; to sort them by time and by type (detector, observing mode, etc...); to extract the spacecraft attitude information from auxiliary files; to flag the data according to several criteria; and to archive the resulting Time Ordered Information (TOI), which will then be used to produce maps of the sky in different spectral bands. The output of the Level 1 software are the TOI files in FITS format, later ingested into the Data Management Component (DMC) database. This software has been used during different phases of the LFI instrument development. We started to reuse some ISDC components for the LFI Qualification Model (QM) and we completely rework the software for the Flight Model (FM). This was motivated by critical design decisions taken jointly with the DPC. The main questions were: a) the choice of the data format: FITS or DMC? b) the design of the pipelines: use of the Planck Process Coordinator (ProC) or a simple Perl script? c) do we adapt the existing QM software or do we restart from scratch? The timeline and available manpower are also important issues to be taken into account. We present here the orientation of our choices and discuss their pertinence based on the experience of the final pre-launch tests and the start of real Planck LFI operations.
Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2
NASA Technical Reports Server (NTRS)
Davis, Charles
1991-01-01
The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over the established form of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) much greater data access rate. Software for Optical Archive and Retrieval (SOAR) user's guide is presented.
Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T
2015-01-01
Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool, operated based on text mining algorithm to extract specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is C# coded in visual studio 2010, which extracts the publicly available patent information from the web pages like Google Patent and simultaneously study the various technology trends based on user-defined parameters. In other words, IPAT combined with the manual categorization will act as an excellent technology assessment tool in competitive intelligence and due diligence for predicting the future R&D forecast.
New Software Architecture Options for the TCL Data Acquisition System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valenton, Emmanuel
2014-09-01
The Turbulent Combustion Laboratory (TCL) conducts research on combustion in turbulent flow environments. To conduct this research, the TCL utilizes several pulse lasers, a traversable wind tunnel, flow controllers, scientific grade CCD cameras, and numerous other components. Responsible for managing these different data-acquiring instruments and data processing components is the Data Acquisition (DAQ) software. However, the current system is constrained to running through VXI hardware—an instrument-computer interface—that is several years old, requiring the use of an outdated version of the visual programming language, LabVIEW. A new Acquisition System is being programmed which will borrow heavily from either a programming modelmore » known as the Current Value Table (CVT) System or another model known as the Server-Client System. The CVT System model is in essence, a giant spread sheet from which data or commands may be retrieved or written to, and the Server-Client System is based on network connections between a server and a client, very much like the Server-Client model of the Internet. Currently, the bare elements of a CVT DAQ Software have been implemented, consisting of client programs in addition to a server program that the CVT will run on. This system is being rigorously tested to evaluate the merits of pursuing the CVT System model and to uncover any potential flaws which may result in further implementation. If the CVT System is chosen, which is likely, then future work will consist of build up the system until enough client programs have been created to run the individual components of the lab. The advantages of such a System will be flexibility, portability, and polymorphism. Additionally, the new DAQ software will allow the Lab to replace the VXI with a newer instrument interface—the PXI—and take advantage of the capabilities of current and future versions of LabVIEW.« less
Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.
1996-01-01
This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 31 Sep. 1995. The report deals with the development and investigation of potential use of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First data were retrieved using the I/O software and manually processed using MicroSoft Excel. Then the data retrieval and processing process was automated using a program written in C which is able to read the telemetry data and produce plots of time responses of sample temperatures and other desired variables. LabView was also employed to automatically retrieve and process the telemetry data.
A new vector radiative transfer model as a part of SCIATRAN 3.0 software package.
NASA Astrophysics Data System (ADS)
Rozanov, Alexei; Rozanov, Vladimir; Burrows, John P.
The SCIATRAN 3.0 package is a result of further development of the SCIATRAN 2.x software family which, similar to previous versions, comprises a radiative transfer model and a retrieval block. A major improvement was achieved in comparison to previous software versions by adding the vector mode to the radiative transfer model. Thus, the well-established Discrete Ordinate solver can now be run in the vector mode to calculate the scattered solar radiation including polarization, i.e., to simulate all four components of the Stockes vector. Similar to the scalar version, the simulations can be performed for any viewing geometry typical for atmospheric observations in the UV-Vis-NIR spectral range (nadir, limb, off-axis, etc.) as well as for any observer position within or outside the Earth's atmosphere. Similar to the precursor version, the new model is freely available for non-commercial use via the web page of the University of Bremen. In this presentation a short description of the software package, especially of the new vector radiative transfer model will be given, including remarks on the availability for the scientific community. Furthermore, comparisons to other vector models will be shown and some example problems will be considered where the polarization of the observed radiation must be accounted for to obtain high quality results.
Beyond Information Retrieval: Ways To Provide Content in Context.
ERIC Educational Resources Information Center
Wiley, Deborah Lynne
1998-01-01
Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…
Software Assists in Responding to Anomalous Conditions
NASA Technical Reports Server (NTRS)
James, Mark; Kronbert, F.; Weiner, A.; Morgan, T.; Stroozas, B.; Girouard, F.; Hopkins, A.; Wong, L.; Kneubuhl, J.; Malina, R.
2004-01-01
Fault Induced Document Retrieval Officer (FIDO) is a computer program that reduces the need for a large and costly team of engineers and/or technicians to monitor the state of a spacecraft and associated ground systems and respond to anomalies. FIDO includes artificial-intelligence components that imitate the reasoning of human experts with reference to a knowledge base of rules that represent failure modes and to a database of engineering documentation. These components act together to give an unskilled operator instantaneous expert assistance and access to information that can enable resolution of most anomalies, without the need for highly paid experts. FIDO provides a system state summary (a configurable engineering summary) and documentation for diagnosis of a potentially failing component that might have caused a given error message or anomaly. FIDO also enables high-level browsing of documentation by use of an interface indexed to the particular error message. The collection of available documents includes information on operations and associated procedures, engineering problem reports, documentation of components, and engineering drawings. FIDO also affords a capability for combining information on the state of ground systems with detailed, hierarchically-organized, hypertext- enabled documentation.
Resources, Equipment and Logistics in Support of Long-Term Monitoring at Fort Benning
2005-08-01
access to cell phone transceivers for data retrieval. Sensor manufacturers, state climatologists, and the EPA provide standards, guidelines...have cell phone access for data retrieval and any software driven maintenance. Figure 3. Typical meteorological data acquisition station Sensor...collect and store data every 30 minutes. Of three current stations, one has a cell phone data link for daily retrieval. Table 2 lists equipment
Biological data integration: wrapping data and tools.
Lacroix, Zoé
2002-06-01
Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views mapping the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to access seamlessly data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces.
The JPL Library information retrieval system
NASA Technical Reports Server (NTRS)
Walsh, J.
1975-01-01
The development, capabilities, and products of the computer-based retrieval system of the Jet Propulsion Laboratory Library are described. The system handles books and documents, produces a book catalog, and provides a machine search capability. Programs and documentation are available to the public through NASA's computer software dissemination program.
Data storage and retrieval system abstract
NASA Technical Reports Server (NTRS)
Matheson, Barbara
1992-01-01
The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team lead by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.
Data storage and retrieval system abstract
NASA Astrophysics Data System (ADS)
Matheson, Barbara
1992-09-01
The STX mass storage system design is intended for environments requiring high speed access to large volumes of data (terabyte and greater). Prior to commitment to a product design plan, STX conducted an exhaustive study of the commercially available off-the-shelf hardware and software. STX also conducted research into the area of emerging technologies in networks and storage media so that the design could easily accommodate new interfaces and peripherals as they came on the market. All the selected system elements were brought together in a demo suite sponsored jointly by STX and ALLIANT where the system elements were evaluated based on actual operation using a client-server mirror image configuration. Testing was conducted to assess the various component overheads and results were compared against vendor data claims. The resultant system, while adequate to meet our capacity requirements, fell short of transfer speed expectations. A product team lead by STX was assembled and chartered with solving the bottleneck issues. Optimization efforts yielded a 60 percent improvement in throughput performance. The ALLIANT computer platform provided the I/O flexibility needed to accommodate a multitude of peripheral interfaces including the following: up to twelve 25MB/s VME I/O channels; up to five HiPPI I/O full duplex channels; IPI-s, SCSI, SMD, and RAID disk array support; standard networking software support for TCP/IP, NFS, and FTP; open architecture based on standard RISC processors; and V.4/POSIX-based operating system (Concentrix). All components including the software are modular in design and can be reconfigured as needs and system uses change. Users can begin with a small system and add modules as needed in the field. Most add-ons can be accomplished seamlessly without revision, recompilation or re-linking of software.
A model for the electronic support of practice-based research networks.
Peterson, Kevin A; Delaney, Brendan C; Arvanitis, Theodoros N; Taweel, Adel; Sandberg, Elisabeth A; Speedie, Stuart; Richard Hobbs, F D
2012-01-01
The principal goal of the electronic Primary Care Research Network (ePCRN) is to enable the development of an electronic infrastructure to support clinical research activities in primary care practice-based research networks (PBRNs). We describe the model that the ePCRN developed to enhance the growth and to expand the reach of PBRN research. Use cases and activity diagrams were developed from interviews with key informants from 11 PBRNs from the United States and United Kingdom. Discrete functions were identified and aggregated into logical components. Interaction diagrams were created, and an overall composite diagram was constructed describing the proposed software behavior. Software for each component was written and aggregated, and the resulting prototype application was pilot tested for feasibility. A practical model was then created by separating application activities into distinct software packages based on existing PBRN business rules, hardware requirements, network requirements, and security concerns. We present an information architecture that provides for essential interactions, activities, data flows, and structural elements necessary for providing support for PBRN translational research activities. The model describes research information exchange between investigators and clusters of independent data sites supported by a contracted research director. The model was designed to support recruitment for clinical trials, collection of aggregated anonymous data, and retrieval of identifiable data from previously consented patients across hundreds of practices. The proposed model advances our understanding of the fundamental roles and activities of PBRNs and defines the information exchange commonly used by PBRNs to successfully engage community health care clinicians in translational research activities. By describing the network architecture in a language familiar to that used by software developers, the model provides an important foundation for the development of electronic support for essential PBRN research activities.
Mobile medical visual information retrieval.
Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning
2012-01-01
In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.
The GRAPE aerosol retrieval algorithm
NASA Astrophysics Data System (ADS)
Thomas, G. E.; Poulsen, C. A.; Sayer, A. M.; Marsh, S. H.; Dean, S. M.; Carboni, E.; Siddans, R.; Grainger, R. G.; Lawrence, B. N.
2009-11-01
The aerosol component of the Oxford-Rutherford Aerosol and Cloud (ORAC) combined cloud and aerosol retrieval scheme is described and the theoretical performance of the algorithm is analysed. ORAC is an optimal estimation retrieval scheme for deriving cloud and aerosol properties from measurements made by imaging satellite radiometers and, when applied to cloud free radiances, provides estimates of aerosol optical depth at a wavelength of 550 nm, aerosol effective radius and surface reflectance at 550 nm. The aerosol retrieval component of ORAC has several incarnations - this paper addresses the version which operates in conjunction with the cloud retrieval component of ORAC (described by Watts et al., 1998), as applied in producing the Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) data-set. The algorithm is described in detail and its performance examined. This includes a discussion of errors resulting from the formulation of the forward model, sensitivity of the retrieval to the measurements and a priori constraints, and errors resulting from assumptions made about the atmospheric/surface state.
Maximizing Accessibility to Spatially Referenced Digital Data.
ERIC Educational Resources Information Center
Hunt, Li; Joselyn, Mark
1995-01-01
Discusses some widely available spatially referenced datasets, including raster and vector datasets. Strategies for improving accessibility include: acquisition of data in a software-dependent format; reorganization of data into logical geographic units; acquisition of intelligent retrieval software; improving computer hardware; and intelligent…
Code of Federal Regulations, 2010 CFR
2010-07-01
... Administration COURT SERVICES AND OFFENDER SUPERVISION AGENCY FOR THE DISTRICT OF COLUMBIA DISCLOSURE OF RECORDS... proprietary interest in the information. (e) Computer software means tools by which records are created, stored, and retrieved. Normally, computer software, including source code, object code, and listings of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guyer, H.B.; McChesney, C.A.
The overall primary objective of HDAR is to create a repository of historical personnel security documents and provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve and manage documents currently contained in the personnel security folders in DOE Operations Offices vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.
Information Retrieval Using ADABAS-NATURAL (with Applications for Television and Radio).
ERIC Educational Resources Information Center
Silbergeld, I.; Kutok, P.
1984-01-01
Describes use of the software ADABAS (general purpose database management system) and NATURAL (interactive programming language) in the development and implementation of an information retrieval system for the National Television and Radio Network of Israel. General design considerations, files contained in each archive, search strategies, and keywords…
Automation of the CAS Document Delivery Service.
ERIC Educational Resources Information Center
Steensland, M. C.; Soukup, K. M.
1986-01-01
The automation of online order retrieval for Chemical Abstracts Service Document Delivery Service was accomplished by shifting to an order retrieval/dispatch process linked to a Unix network. The Unix-based environment, its terminal emulation, page-break, and user-friendly interface software, and later enhancements are reviewed. Resultant increase…
2007-03-01
software level retrieve state information that can inherently contain more contextual information. As a result, such mechanisms can be applied in more...ease by which state information can be gathered for monitoring purposes. For example, we consider soft security to allow for easier state retrieval...files are to be checked and what parameters are to be verified. The independent auditor periodically retrieves information pertaining to the files in
Design notes for the next generation persistent object manager for CAP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isely, M.; Fischler, M.; Galli, M.
1995-05-01
The CAP query system software at Fermilab has several major components, including SQS (for managing the query), the retrieval system (for fetching auxiliary data), and the query software itself. The central query software in particular is essentially a modified version of the `ptool` product created at UIC (University of Illinois at Chicago) as part of the PASS project under Bob Grossman. The original UIC version was designed for use in a single-user non-distributed Unix environment. The Fermi modifications were an attempt to permit multi-user access to a data set distributed over a set of storage nodes. (The hardware is an IBM SP-x system - a cluster of AIX POWER2 nodes with an IBM-proprietary high speed switch interconnect). Since the implementation work of the Fermi-ized ptool, the CAP members have learned quite a bit about the nature of queries and where the current performance bottlenecks exist. This has led them to design a persistent object manager that will overcome these problems. For backwards compatibility with ptool, the ptool persistent object API will largely be retained, but the implementation will be entirely different.
Using neural networks in software repositories
NASA Technical Reports Server (NTRS)
Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.
1992-01-01
The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.
Digital photogrammetry for quantitative wear analysis of retrieved TKA components.
Grochowsky, J C; Alaways, L W; Siskey, R; Most, E; Kurtz, S M
2006-11-01
The use of new materials in knee arthroplasty demands a way in which to accurately quantify wear in retrieved components. Methods such as damage scoring, coordinate measurement, and in vivo wear analysis have been used in the past. The limitations in these methods illustrate a need for a different methodology that can accurately quantify wear, which is relatively easy to perform and uses a minimal amount of expensive equipment. Off-the-shelf digital photogrammetry represents a potentially quick and easy alternative to what is readily available. Eighty tibial inserts were visually examined for front and backside wear and digitally photographed in the presence of two calibrated reference fields. All images were segmented (via manual and automated algorithms) using Adobe Photoshop and National Institute of Health ImageJ. Finally, wear was determined using ImageJ and Rhinoceros software. The absolute accuracy of the method and repeatability/reproducibility by different observers were measured in order to determine the uncertainty of wear measurements. To determine if variation in wear measurements was due to implant design, 35 implants of the three most prevalent designs were subjected to retrieval analysis. The overall accuracy of area measurements was 97.8%. The error in automated segmentation was found to be significantly lower than that of manual segmentation. The photogrammetry method was found to be reasonably accurate and repeatable in measuring 2-D areas and applicable to determining wear. There was no significant variation in uncertainty detected among different implant designs. Photogrammetry has a broad range of applicability since it is size- and design-independent. A minimal amount of off-the-shelf equipment is needed for the procedure and no proprietary knowledge of the implant is needed. (c) 2006 Wiley Periodicals, Inc.
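As an illustration of the calibrated-reference measurement idea underlying the abstract above, the following minimal Python sketch segments a wear region and converts its pixel count to physical area using a reference patch of known size photographed in the same plane. The names and the simple global threshold are illustrative assumptions, not the study's actual Photoshop/ImageJ/Rhinoceros workflow.

```python
import numpy as np

def segment(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Crude global-threshold segmentation, standing in for the manual and
    automated segmentation steps described in the abstract."""
    return gray > threshold

def physical_area_mm2(mask: np.ndarray, ref_mask: np.ndarray,
                      ref_area_mm2: float) -> float:
    """Scale a segmented region to physical units via a calibrated reference
    field of known area imaged in the same plane (hypothetical setup)."""
    mm2_per_pixel = ref_area_mm2 / ref_mask.sum()
    return float(mask.sum()) * mm2_per_pixel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.uniform(0.0, 1.0, (480, 640))      # stand-in photograph
    wear = segment(image, 0.8)
    ref = np.zeros(image.shape, dtype=bool)
    ref[:100, :100] = True                         # 10,000-pixel reference patch
    print(f"wear area = {physical_area_mm2(wear, ref, 100.0):.1f} mm^2")
```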
Digital Equipment Corporation's CDROM Software and Database Publications.
ERIC Educational Resources Information Center
Adams, Michael Q.
1986-01-01
Acquaints information professionals with Digital Equipment Corporation's compact optical disk read-only-memory (CDROM) search and retrieval software and growing library of CDROM database publications (COMPENDEX, Chemical Abstracts Services). Highlights include MicroBASIS, boolean operators, range operators, word and phrase searching, proximity…
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT
SAS is an EPA standard data and statistical analysis software package, while ORACLE is EPA's standard database management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability....
Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra
NASA Astrophysics Data System (ADS)
Khamatnurova, Marina; Gribanov, Konstantin
2016-04-01
The Levenberg-Marquardt method [1], with an iteratively adjusted damping parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. Retrieved methane vertical profiles are then used for calculation of the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved before the methane vertical profile retrieval for each selected spectrum. A modified version of the FIRE-ARMS software package [3] was used for the numerical experiments. To adjust parameters and validate the method, we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. Comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V. 184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its application to IMG/ADEOS data // JQSRT. 2001. V. 68, No. 4. P. 435-451. 4. http://www.ecmwf.int
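For readers following the method, here is a minimal Python sketch of a Levenberg-Marquardt iteration with an adjustable damping parameter. The forward model and Jacobian are user-supplied placeholders; this is a generic illustration of the scheme named in the abstract, not the FIRE-ARMS implementation.

```python
import numpy as np

def lm_step(x, y, forward, jacobian, lam):
    """One Levenberg-Marquardt update for minimizing ||y - F(x)||^2,
    with damping parameter lam."""
    r = y - forward(x)                        # residual vector
    K = jacobian(x)                           # Jacobian of the forward model
    A = K.T @ K + lam * np.eye(x.size)        # damped normal equations
    return x + np.linalg.solve(A, K.T @ r)

def retrieve(x0, y, forward, jacobian, lam=1.0, n_iter=20):
    """Iterate, relaxing the damping when the fit improves and tightening
    it otherwise (the 'iteratively adjusted parameter' idea)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = lm_step(x, y, forward, jacobian, lam)
        if np.sum((y - forward(x_new)) ** 2) < np.sum((y - forward(x)) ** 2):
            x, lam = x_new, lam * 0.5         # accept step, relax damping
        else:
            lam *= 2.0                        # reject step, tighten damping
    return x
```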
Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma
2014-02-01
Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that were matching with the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.
Evolutionary Computing Methods for Spectral Retrieval
NASA Technical Reports Server (NTRS)
Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna
2009-01-01
A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
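A compact genetic-algorithm sketch of the fitness-function minimization described above, in Python. The population size, truncation selection, and Gaussian mutation are illustrative choices under stated assumptions, not the parameters of the NASA software.

```python
import numpy as np

def genetic_minimize(fitness, bounds, pop=60, gens=200, rate=0.1, seed=0):
    """Evolve parameter vectors that minimize a fitness function measuring
    the misfit between observed and synthetic spectra."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (pop, lo.size))            # initial population
    for _ in range(gens):
        f = np.apply_along_axis(fitness, 1, x)
        parents = x[np.argsort(f)[: pop // 2]]         # truncation selection
        kids = parents[rng.integers(0, len(parents), pop - len(parents))]
        kids = kids + rate * (hi - lo) * rng.standard_normal(kids.shape)
        x = np.clip(np.vstack([parents, kids]), lo, hi)
    f = np.apply_along_axis(fitness, 1, x)
    return x[np.argmin(f)]

# Hypothetical use: recover two parameters of a toy spectral model.
model = lambda p, wl: p[0] * np.exp(-wl / p[1])
wl = np.linspace(1.0, 10.0, 50)
observed = model([2.0, 3.0], wl)
best = genetic_minimize(lambda p: np.sum((observed - model(p, wl)) ** 2),
                        bounds=[(0.1, 5.0), (0.5, 8.0)])
```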
PRISM, Processing and Review Interface for Strong Motion Data Software
NASA Astrophysics Data System (ADS)
Kalkan, E.; Jones, J. M.; Stephens, C. D.; Ng, P.
2016-12-01
A continually increasing number of high-quality digital strong-motion records from stations of the National Strong Motion Project (NSMP) of the U.S. Geological Survey (USGS), as well as data from regional seismic networks within the U.S., calls for automated processing of strong-motion records with human review limited to selected significant or flagged records. The NSMP has developed the Processing and Review Interface for Strong Motion data (PRISM) software to meet this need. PRISM automates the processing of strong-motion records by providing batch-processing capabilities. The PRISM software is platform-independent (coded in Java), open-source, and does not depend on any closed-source or proprietary software. The software consists of two major components: a record processing engine composed of modules for each processing step, and a graphical user interface (GUI) for manual review and processing. To facilitate the use by non-NSMP earthquake engineers and scientists, PRISM (both its processing engine and GUI components) is easy to install and run as a stand-alone system on common operating systems such as Linux, OS X and Windows. PRISM was designed to be flexible and extensible in order to accommodate implementation of new processing techniques. Input to PRISM currently is limited to data files in the Consortium of Organizations for Strong-Motion Observation Systems (COSMOS) V0 format, so that all retrieved acceleration time series need to be converted to this format. Output products include COSMOS V1, V2 and V3 files as: (i) raw acceleration time series in physical units with mean removed (V1), (ii) baseline-corrected and filtered acceleration, velocity, and displacement time series (V2), and (iii) response spectra, Fourier amplitude spectra and common earthquake-engineering intensity measures (V3). A thorough description of the record processing features supported by PRISM is presented with examples and validation results. All computing features have been thoroughly tested.
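The V0-to-V2 steps named above (conversion to physical units, mean removal, filtering, integration) can be sketched as follows in Python; the count-to-acceleration scale and the filter corners are assumptions for illustration, not PRISM's documented defaults.

```python
import numpy as np
from scipy import signal

def process_record(counts, counts_to_g, dt, flo=0.1, fhi=25.0):
    """Sketch of a V0 -> V1 -> V2 pipeline: raw counts to acceleration,
    mean removal, acausal band-pass filtering, and integration to
    velocity and displacement."""
    acc = counts * counts_to_g * 981.0     # counts -> cm/s^2 (assumed scale)
    acc = acc - acc.mean()                 # mean removal (V1-style product)
    sos = signal.butter(4, [flo, fhi], "bandpass", fs=1.0 / dt, output="sos")
    acc = signal.sosfiltfilt(sos, acc)     # zero-phase band-pass filter
    vel = np.cumsum(acc) * dt              # integrate to velocity
    disp = np.cumsum(vel) * dt             # integrate to displacement
    return acc, vel, disp
```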
Peeling the Onion: Okapi System Architecture and Software Design Issues.
ERIC Educational Resources Information Center
Jones, S.; And Others
1997-01-01
Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)
This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • SDMProjectBuilder (which includes the Microbial Source Module as part...
cPath: open source software for collecting, storing, and querying biological pathways.
Cerami, Ethan G; Bader, Gary D; Gross, Benjamin E; Sander, Chris
2006-11-13
Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling.
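A sketch of what a client call to a cPath-style web service might look like. The host, endpoint path, command name, and parameters below are placeholders modeled on the description above, not the documented cPath API.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def fetch_pathways(base_url: str, query: str, fmt: str = "biopax") -> str:
    """Query a hypothetical pathway web service and return the raw response
    (e.g., a BioPAX document) for downstream tools such as Cytoscape."""
    params = urlencode({"cmd": "search", "q": query, "format": fmt})
    with urlopen(f"{base_url}/webservice.do?{params}") as resp:  # assumed endpoint
        return resp.read().decode("utf-8")

# e.g. fetch_pathways("http://example.org/cpath", "BRCA1")
```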
Protein Annotators' Assistant: A Novel Application of Information Retrieval Techniques.
ERIC Educational Resources Information Center
Wise, Michael J.
2000-01-01
Protein Annotators' Assistant (PAA) is a software system which assists protein annotators in assigning functions to newly sequenced proteins. PAA employs a number of information retrieval techniques in a novel setting and is thus related to text categorization, where multiple categories may be suggested, except that in this case none of the…
Software Helps Retrieve Information Relevant to the User
NASA Technical Reports Server (NTRS)
Mathe, Natalie; Chen, James
2003-01-01
The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for a platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.
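ARNIE's relevance-network update is described above only at a high level; as a stand-in, the classical Rocchio relevance-feedback update below shows the general idea of adapting a profile vector from user feedback. ARNIE's actual neural/genetic mechanism differs.

```python
import numpy as np

def rocchio_update(profile, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Move a user/query profile vector toward documents judged relevant and
    away from those judged not relevant (illustrative, not ARNIE's model)."""
    pos = np.mean(relevant, axis=0) if len(relevant) else 0.0
    neg = np.mean(nonrelevant, axis=0) if len(nonrelevant) else 0.0
    return alpha * np.asarray(profile) + beta * pos - gamma * neg
```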
Infrared Sky Imager (IRSI) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Victor R.
2016-04-01
The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.
Development of medical data information systems
NASA Technical Reports Server (NTRS)
Anderson, J.
1971-01-01
Computerized storage and retrieval of medical information is discussed. Tasks which were performed in support of the project are: (1) flight crew health stabilization computer system, (2) medical data input system, (3) graphic software development, (4) lunar receiving laboratory support, and (5) Statos V printer/plotter software development.
This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...
The Bartlesville System; TGISS Software Documentation.
ERIC Educational Resources Information Center
Roberts, Tommy L.; And Others
TGISS (Total Guidance Information Support System) is an information storage and retrieval system specifically designed to meet the needs and requirements of a counselor in the Bartlesville Public School environment. The system, which is a combination of man/machine capabilities, includes the hardware and software necessary to extend the…
GSC configuration management plan
NASA Technical Reports Server (NTRS)
Withers, B. Edward
1990-01-01
The tools and methods used for the configuration management of the artifacts (including software and documentation) associated with the Guidance and Control Software (GCS) project are described. The GCS project is part of a software error studies research program. Three implementations of GCS are being produced in order to study the fundamental characteristics of the software failure process. The Code Management System (CMS) is used to track and retrieve versions of the documentation and software. Application of the CMS for this project is described and the numbering scheme is delineated for the versions of the project artifacts.
NASA Astrophysics Data System (ADS)
Tanci, Claudio; Tosti, Gino; Conforti, Vito; Schwarz, Joseph; Antolini, Elisa; Antonelli, L. A.; Bulgarelli, Andrea; Bigongiari, Ciro; Bruno, Pietro; Canestrari, Rodolfo; Capalbi, Milvia; Cascone, Enrico; Catalano, Osvaldo; Di Paola, Andrea; Di Pierro, Federico; Fioretti, Valentina; Gallozzi, Stefano; Gardiol, Daniele; Gianotti, Fulvio; Giro, Enrico; Grillo, Alessandro; La Palombara, Nicola; Leto, Giuseppe; Lombardi, Saverio; Maccarone, Maria C.; Pareschi, Giovanni; Russo, Federico; Sangiorgi, Pierluca; Scuderi, Salvo; Stringhetti, Luca; Testa, Vincenzo; Trifoglio, Massimo; Vercellone, Stefano; Zoli, Andrea
2016-08-01
The ASTRI mini-array, composed of nine small-size dual mirror (SST-2M) telescopes, has been proposed to be installed at the southern site of the Cherenkov Telescope Array (CTA), as a set of preproduction units of the CTA observatory. The ASTRI mini-array is a collaborative and international effort carried out by Italy, Brazil and South Africa and led by the Italian National Institute of Astrophysics, INAF. We present the main features of the current implementation of the Mini-Array Software System (MASS) now in use for the activities of the ASTRI SST-2M telescope prototype located at the INAF observing station on Mt. Etna, Italy, and the characteristics that make it a prototype for the CTA control software system. CTA Data Management (CTADATA) and CTA Array Control and Data Acquisition (CTA-ACTL) requirements and guidelines as well as the ASTRI use cases were considered in the MASS design; most of its features are derived from the Atacama Large Millimeter/sub-millimeter Array Control software. The MASS will provide a set of tools to manage all onsite operations of the ASTRI mini-array in order to perform the observations specified in the short term schedule (including monitoring and controlling all the hardware components of each telescope and calibration device), to analyze the acquired data online and to store/retrieve all the data products to/from the onsite repository.
Abstracted Workflow Framework with a Structure from Motion Application
NASA Astrophysics Data System (ADS)
Rossi, Adam J.
In scientific and engineering disciplines, from academia to industry, there is an increasing need for the development of custom software to perform experiments, construct systems, and develop products. The natural mindset initially is to shortcut and bypass all overhead and process rigor in order to obtain an immediate result for the problem at hand, with the misconception that the software will simply be thrown away at the end. In a majority of the cases, it turns out the software persists for many years, and likely ends up in production systems for which it was not initially intended. In the current study, a framework that can be used in both industry and academic applications mitigates underlying problems associated with developing scientific and engineering software. This results in software that is much more maintainable, documented, and usable by others, specifically allowing new users to extend capabilities of components already implemented in the framework. There is a multi-disciplinary need in the fields of imaging science, computer science, and software engineering for a unified implementation model, which motivates the development of an abstracted software framework. Structure from motion (SfM) has been identified as one use case where the abstracted workflow framework can improve research efficiencies and eliminate implementation redundancies in scientific fields. The SfM process begins by obtaining 2D images of a scene from different perspectives. Features from the images are extracted and correspondences are established. This provides a sufficient amount of information to initialize the problem for fully automated processing. Transformations are established between views, and 3D points are established via triangulation algorithms. The parameters for the camera models for all views / images are solved through bundle adjustment, establishing a highly consistent point cloud. The initial sparse point cloud and camera matrices are used to generate a dense point cloud through patch based techniques or densification algorithms such as Semi-Global Matching (SGM). The point cloud can be visualized or exploited by both humans and automated techniques. In some cases the point cloud is "draped" with original imagery in order to enhance the 3D model for a human viewer. The SfM workflow can be implemented in the abstracted framework, making it easily leverageable and extensible by multiple users. Like many processes in scientific and engineering domains, the workflow described for SfM is complex and requires many disparate components to form a functional system, often utilizing algorithms implemented by many users in different languages / environments and without knowledge of how the component fits into the larger system. In practice, this generally leads to issues interfacing the components, building the software for desired platforms, understanding its concept of operations, and how it can be manipulated in order to fit the desired function for a particular application. In addition, other scientists and engineers instinctively wish to analyze the performance of the system, establish new algorithms, optimize existing processes, and establish new functionality based on current research. This requires a framework whereby new components can be easily plugged in without affecting the current implemented functionality. The need for a universal programming environment establishes the motivation for the development of the abstracted workflow framework. 
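The two-view core of the SfM pipeline sketched in the paragraph above (feature extraction, correspondence, pose recovery, triangulation) can be illustrated with OpenCV in Python. The camera intrinsic matrix K is assumed known; this is a generic sketch, not the thesis's implementation.

```python
import cv2
import numpy as np

def two_view_structure(img1, img2, K):
    """Minimal two-view structure-from-motion sketch: extract features,
    match correspondences, recover relative pose, triangulate 3D points."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)       # relative camera pose
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)    # homogeneous 3D points
    return (X[:3] / X[3]).T                          # sparse point cloud
```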
This software implementation, named Catena, provides base classes from which new components must derive in order to operate within the framework. The derivation mandates requirements be satisfied in order to provide a complete implementation. Additionally, the developer must provide documentation of the component in terms of its overall function and inputs. The interface input and output values corresponding to the component must be defined in terms of their respective data types, and the implementation uses mechanisms within the framework to retrieve and send the values. This process requires the developer to componentize their algorithm rather than implement it monolithically. Although the requirements of the developer are slightly greater, the benefits realized from using Catena far outweigh the overhead, and results in extensible software. This thesis provides a basis for the abstracted workflow framework concept and the Catena software implementation. The benefits are also illustrated using a detailed examination of the SfM process as an example application.
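A minimal sketch of the component contract such a framework imposes, with hypothetical class and attribute names rather than Catena's actual API: each component must declare its inputs and outputs and document its function before it can participate in a workflow.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Illustrative base class in the spirit of the framework described above
    (names are hypothetical): derived components declare their interface and
    document themselves rather than being implemented monolithically."""
    inputs: dict = {}    # input name -> expected type
    outputs: dict = {}   # output name -> produced type

    @abstractmethod
    def describe(self) -> str:
        """Human-readable statement of the component's function."""

    @abstractmethod
    def run(self, **kwargs) -> dict:
        """Consume the declared inputs, return the declared outputs."""

class FeatureExtractor(Component):
    inputs = {"image": "ndarray"}
    outputs = {"keypoints": "list", "descriptors": "ndarray"}

    def describe(self):
        return "Extracts local image features for correspondence matching."

    def run(self, **kwargs):
        image = kwargs["image"]
        ...  # feature extraction would go here
        return {"keypoints": [], "descriptors": None}
```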
Hirsch, Robert M.; De Cicco, Laura A.
2015-01-01
Evaluating long-term changes in river conditions (water quality and discharge) is an important use of hydrologic data. To carry out such evaluations, the hydrologist needs tools to facilitate several key steps in the process: acquiring the data records from a variety of sources, structuring it in ways that facilitate the analysis, processing the data with routines that extract information about changes that may be happening, and displaying findings with graphical techniques. A pair of tightly linked R packages, called dataRetrieval and EGRET (Exploration and Graphics for RivEr Trends), have been developed for carrying out each of these steps in an integrated manner. They are designed to easily accept data from three sources: U.S. Geological Survey hydrologic data, U.S. Environmental Protection Agency (EPA) STORET data, and user-supplied flat files. The dataRetrieval package not only serves as a “front end” to the EGRET package, it can also be used to easily download many types of hydrologic data and organize it in ways that facilitate many other hydrologic applications. The EGRET package has components oriented towards the description of long-term changes in streamflow statistics (high flow, average flow, and low flow) as well as changes in water quality. For the water-quality analysis, it uses Weighted Regressions on Time, Discharge and Season (WRTDS) to describe long-term trends in both concentration and flux. EGRET also creates a wide range of graphical presentations of the water-quality data and of the WRTDS results. This report serves as a user guide to these two R packages, providing detailed guidance on installation and use of the software, documentation of the analysis methods used, as well as guidance on some of the kinds of questions and approaches that the software can facilitate.
Software component quality evaluation
NASA Technical Reports Server (NTRS)
Clough, A. J.
1991-01-01
The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.
A simple procedure for retrieval of a cement-retained implant-supported crown: a case report.
Buzayan, Muaiyed Mahmoud; Mahmood, Wan Adida; Yunus, Norsiah Binti
2014-02-01
Retrieval of cement-retained implant prostheses can be more demanding than retrieval of screw-retained prostheses. This case report describes a simple and predictable procedure to locate the abutment screw access openings of cement-retained implant-supported crowns in cases of fractured ceramic veneer. A conventional periapical radiography image was captured using a digital camera, transferred to a computer, and manipulated using Microsoft Word document software to estimate the location of the abutment screw access.
Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia
2016-12-06
Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or structure, annotating with the help of chemical codes or drawn using molecule editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components, and their biological activity as well as for prediction of interactions between food component and drugs.
Database Software Selection for the Egyptian National STI Network.
ERIC Educational Resources Information Center
Slamecka, Vladimir
The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…
Archiving a Software Development Project
2013-04-01
an ongoing monitoring system that identifies attempts and requests for retrieval, and ensures that the attempts and requests cannot proceed without...Intelligence Division Peter Fisher has worked as a consultant, systems analyst, software developer and project manager in Australia, Holland, the USA...4 3.1.3 DRMS – Defence Records Management System
Retrieval of the atmospheric compounds using a spectral optical thickness information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ioltukhovski, A.A.
A spectral inversion technique for retrieval of the atmospheric gas and aerosol contents is proposed. The technique is based upon preliminary measurement or retrieval of the spectral optical thickness. The existence of a priori information about the spectral cross sections for some of the atmospheric components makes it possible to retrieve the relative contents of these components in the atmosphere. A method of smooth filtration makes it possible to estimate the contents of atmospheric aerosols with known cross sections and to filter out other aerosols; this is done independently of their relative contribution to the optical thickness.
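With known component cross sections, the inversion described above reduces to solving tau(lambda) = sum_i N_i * sigma_i(lambda) for the non-negative column amounts N_i, for example by non-negative least squares. A minimal Python sketch (names are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def retrieve_amounts(tau, cross_sections):
    """Given a measured spectral optical thickness tau(lambda) and known
    per-component cross sections sigma_i(lambda), solve the linear mixture
    tau = sum_i N_i * sigma_i for the amounts N_i >= 0."""
    S = np.column_stack(cross_sections)   # (n_wavelengths, n_components)
    amounts, residual = nnls(S, tau)
    return amounts, residual
```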
ERIC Educational Resources Information Center
Hawkins, Donald T.; Levy, Louise R.
1985-01-01
This initial article in series of three discusses barriers inhibiting use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…
Metrinome: Continuous Monitoring and Security Validation of Distributed Systems
2014-03-01
Integration into the SDLC (Software Development Life Cycle), Retrieved Nov 06 2013, https://www.owasp.org/images/f/f6/Integration_into_the_SDLC.ppt [2...assessment as part of the software development life cycle, current approaches suffer from a number of shortcomings that limit their application in...with assessing security and correct functionality. Second, integrated and end-to-end testing and experimentation is often postponed until software
Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.
2008-12-01
Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specification and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible, easy to use Graphical User Interface which is driven by cascading style sheets, which make it even simpler for reusable design implementation. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC) and integrated visualization tools. Other features include: Filtering and dynamic sorting of search results, bookmarkable search results, save, retrieve, and modify search criteria.
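A sketch of the config-driven harvester pattern described above: one shared script whose per-project behavior comes from a configuration file. The configuration fields and record handling are assumptions, not Mercury's actual scripts.

```python
import json
import urllib.request

def harvest(config_path: str) -> list:
    """Pull metadata records from the servers listed in a per-project
    configuration file, ready for the indexing component."""
    with open(config_path) as f:
        cfg = json.load(f)                      # e.g. {"metadata_servers": [...]}
    records = []
    for url in cfg["metadata_servers"]:         # assumed field name
        with urllib.request.urlopen(url) as resp:
            records.append(resp.read())
    return records
```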
What is the "Clim-Likely" aerosol product?
Atmospheric Science Data Center
2014-12-08
... identifying a range of components and mixtures for the MISR Standard Aerosol Retrieval Algorithm climatology, and as one standard against ... retrieval results. Six component aerosols included in the model were medium and coarse mode mineral dust, sulfate, sea salt, black ...
Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A
2006-01-06
Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.
NASA Astrophysics Data System (ADS)
Krassovski, M. B.; Riggs, J. S.; Hook, L. A.; Nettles, W. R.; Hanson, P. J.; Boden, T. A.
2015-07-01
Ecosystem-scale manipulation experiments represent large science investments that require well-designed data acquisition and management systems to provide reliable, accurate information to project participants and third party users. The SPRUCE Project (Spruce and Peatland Responses Under Climatic and Environmental Change, http://mnspruce.ornl.gov) is such an experiment funded by the Department of Energy's (DOE), Office of Science, Terrestrial Ecosystem Science (TES) Program. The SPRUCE experimental mission is to assess ecosystem-level biological responses of vulnerable, high carbon terrestrial ecosystems to a range of climate warming manipulations and an elevated CO2 atmosphere. SPRUCE provides a platform for testing mechanisms controlling the vulnerability of organisms, biogeochemical processes, and ecosystems to climatic change (e.g., thresholds for organism decline or mortality, limitations to regeneration, biogeochemical limitations to productivity, the cycling and release of CO2 and CH4 to the atmosphere). The SPRUCE experiment will generate a wide range of continuous and discrete measurements. To successfully manage SPRUCE data collection, achieve SPRUCE science objectives, and support broader climate change research, the research staff has designed a flexible data system using proven network technologies and software components. The primary SPRUCE data system components are: 1. Data acquisition and control system - set of hardware and software to retrieve biological and engineering data from sensors, collect sensor status information, and distribute feedback to control components. 2. Data collection system - set of hardware and software to deliver data to a central depository for storage and further processing. 3. Data management plan - set of plans, policies, and practices to control consistency, protect data integrity, and deliver data. This publication presents our approach to meeting the challenges of designing and constructing an efficient data system for managing high volume sources of in-situ observations in a remote, harsh environmental location. The approach covers data flow starting from the sensors and ending at the archival/distribution points, discusses types of hardware and software used, examines design considerations that were used to choose them, and describes the data management practices chosen to control and enhance the value of the data.
NASA Astrophysics Data System (ADS)
Krassovski, M. B.; Riggs, J. S.; Hook, L. A.; Nettles, W. R.; Hanson, P. J.; Boden, T. A.
2015-11-01
Ecosystem-scale manipulation experiments represent large science investments that require well-designed data acquisition and management systems to provide reliable, accurate information to project participants and third party users. The SPRUCE project (Spruce and Peatland Responses Under Climatic and Environmental Change, http://mnspruce.ornl.gov) is such an experiment funded by the Department of Energy's (DOE), Office of Science, Terrestrial Ecosystem Science (TES) Program. The SPRUCE experimental mission is to assess ecosystem-level biological responses of vulnerable, high carbon terrestrial ecosystems to a range of climate warming manipulations and an elevated CO2 atmosphere. SPRUCE provides a platform for testing mechanisms controlling the vulnerability of organisms, biogeochemical processes, and ecosystems to climatic change (e.g., thresholds for organism decline or mortality, limitations to regeneration, biogeochemical limitations to productivity, and the cycling and release of CO2 and CH4 to the atmosphere). The SPRUCE experiment will generate a wide range of continuous and discrete measurements. To successfully manage SPRUCE data collection, achieve SPRUCE science objectives, and support broader climate change research, the research staff has designed a flexible data system using proven network technologies and software components. The primary SPRUCE data system components are the following: 1. data acquisition and control system - set of hardware and software to retrieve biological and engineering data from sensors, collect sensor status information, and distribute feedback to control components; 2. data collection system - set of hardware and software to deliver data to a central depository for storage and further processing; 3. data management plan - set of plans, policies, and practices to control consistency, protect data integrity, and deliver data. This publication presents our approach to meeting the challenges of designing and constructing an efficient data system for managing high volume sources of in situ observations in a remote, harsh environmental location. The approach covers data flow starting from the sensors and ending at the archival/distribution points, discusses types of hardware and software used, examines design considerations that were used to choose them, and describes the data management practices chosen to control and enhance the value of the data.
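A minimal sketch of the second component (data collection), under stated assumptions: hypothetical logger objects exposing a name attribute and a read() method, with records appended to a central depot for later quality control and archiving.

```python
import json
import time
from pathlib import Path

def collect(loggers, depot: Path, interval_s: int = 300):
    """Poll each data logger on a fixed interval and append timestamped
    records to per-logger files in the central depot."""
    depot.mkdir(parents=True, exist_ok=True)
    while True:
        for logger in loggers:
            record = {"source": logger.name,          # assumed attribute
                      "time": time.time(),
                      "values": logger.read()}        # assumed method
            with open(depot / f"{logger.name}.jsonl", "a") as f:
                f.write(json.dumps(record) + "\n")
        time.sleep(interval_s)
```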
NASA Technical Reports Server (NTRS)
Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael I.
2014-01-01
We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degree of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey Mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess potential improvement in the retrieval of fine- and coarse-mode aerosol optical depth (AOD) through the synergistic use of two future geostationary satellites, GOES-R (Geostationary Operational Environmental Satellite R-series) and TEMPO (Tropospheric Emissions: Monitoring of Pollution). Strong synergy between GOES-R and TEMPO is found, especially in their characterization of surface bi-directional reflectance, and can thereby potentially improve the AOD retrieval to the accuracy required by GEO-CAPE.
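The DFS values mentioned above follow from standard optimal-estimation algebra (the trace of the averaging-kernel matrix); a minimal sketch, assuming a Jacobian K, measurement-error covariance Se, and a priori covariance Sa:

```python
import numpy as np

def degrees_of_freedom_for_signal(K, Se, Sa):
    """DFS = trace(A), where A = G K is the averaging kernel and
    G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 is the retrieval gain matrix."""
    Se_inv = np.linalg.inv(Se)
    G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa)) @ K.T @ Se_inv
    return float(np.trace(G @ K))
```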
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
..., Software, Implants, and Components Thereof; Notice of Receipt of Complaint; Solicitation of Comments... Certain Computerized Orthopedic Surgical Devices, Software, Implants, and Components Thereof, DN 2945; the... importation of certain computerized orthopedic surgical devices, software, implants, and components thereof...
NASA Astrophysics Data System (ADS)
Khamatnurova, M. Yu.; Gribanov, K. G.; Zakharov, V. I.; Rokotyan, N. V.; Imasu, R.
2017-11-01
An algorithm for the retrieval of the atmospheric methane distribution from IASI spectra has been developed. This paper studies the feasibility of the Levenberg-Marquardt method for retrieving the atmospheric methane total column amount from spectra measured by IASI/METOP, modified for the case in which a priori covariance matrices for methane vertical profiles are lacking. The method and algorithm were implemented in a software package, together with iterative estimation of a posteriori covariance matrices and averaging kernels for each individual retrieval. This allows retrieval-quality screening using the properties of both types of matrices. Methane (XCH4) retrieval by the Levenberg-Marquardt method from IASI/METOP spectra is presented in this work. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) were taken as the initial guess. Surface temperature and vertical profiles of air temperature and humidity are retrieved before the methane vertical profile retrieval. Data retrieved from ground-based measurements at the Ural Atmospheric Station and the L2/IASI standard product were used to verify the method and the results of methane retrieval from IASI/METOP spectra.
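For context, one Levenberg-Marquardt iteration in its optimal-estimation form (Rodgers, 2000, Eq. 5.36), together with the a posteriori covariance and averaging kernel used here for quality screening, can be sketched as follows. This is a generic illustration under assumed inputs, not the authors' software.

    import numpy as np

    def lm_step(x, x_a, y, Fx, K, Se_inv, Sa_inv, gamma):
        # One iteration; gamma -> 0 recovers the Gauss-Newton step.
        lhs = (1.0 + gamma) * Sa_inv + K.T @ Se_inv @ K
        rhs = K.T @ Se_inv @ (y - Fx) - Sa_inv @ (x - x_a)
        return x + np.linalg.solve(lhs, rhs)

    def diagnostics(K, Se_inv, Sa_inv):
        # A posteriori covariance and averaging kernel, evaluated per retrieval.
        S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)
        A = S_hat @ K.T @ Se_inv @ K
        return S_hat, A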
Planetary Atmosphere Dynamics and Radiative Transfer
NASA Technical Reports Server (NTRS)
Atkinson, David H.
1996-01-01
This research program has dealt with two projects in the field of planetary atmosphere dynamics and radiative energy transfer, one theoretical and one experimental. The first project, in radiative energy transfer, incorporated the capability to isolate and quantify the contribution of individual atmospheric components to the Venus radiative balance and thermal structure, to greatly improve the current understanding of the radiative processes occurring within the Venus atmosphere. This is possible by varying the mixing ratios of each gas species, and the location, number density, and aerosol size distributions of the clouds. This project was a continuation of the work initiated under a 1992 University Consortium Agreement. Under the just-completed grant, work has continued on the use of a convolution-based algorithm that provided the capability to calculate the k coefficients of a gas mixture at different temperatures, pressures, and spectral intervals from the separate k-distributions of the individual gas species. The second primary goal of this research dealt with the Doppler wind retrieval for the successful Galileo Jupiter probe mission in December 1995. In anticipation of the arrival of Galileo at Jupiter, software development continued to read the radioscience and probe/orbiter trajectory data provided by the Galileo project and required for Jupiter zonal wind measurements. Sample experiment radioscience data records and probe/orbiter trajectory data files, provided by the Galileo Radioscience and Navigation teams at the Jet Propulsion Laboratory, respectively, were used for the first phase of the software development. The software to read the necessary data records was completed in 1995. The wind retrieval procedure proceeds through initial consistency checks of the raw data, preliminary data reductions, wind recoveries, iterative reconstruction of the probe descent profile, and refined wind recoveries. At each stage of the wind recovery, consistency is checked and maintained between the orbiter navigational data, the radioscience data, and the probe descent profile derived by the Atmospheric Instrument Team. Preliminary results show that the zonal winds at Jupiter increase with depth to approximately 150 m/s.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-17
..., Components Thereof, Associated Software, and Products Containing the Same; Notice of Investigation AGENCY: U... scanning devices, components thereof, associated software, and products containing the same by reason of... after importation of certain biometric scanning devices, components thereof, associated software, or...
Experimental evaluation of ontology-based HIV/AIDS frequently asked question retrieval system.
Ayalew, Yirsaw; Moeng, Barbara; Mosweunyane, Gontlafetse
2018-05-01
This study presents the results of experimental evaluations of an ontology-based frequently asked question retrieval system in the domain of HIV and AIDS. The main purpose of the system is to provide answers to questions on HIV/AIDS using an ontology. To evaluate the effectiveness of the frequently asked question retrieval system, we conducted two experiments. The first experiment focused on the evaluation of the quality of the ontology we developed, using the OQuaRE evaluation framework, which is based on software quality metrics and metrics designed for ontology quality evaluation. The second experiment focused on evaluating the effectiveness of the ontology in retrieving relevant answers. For this we used an open-source information retrieval platform, Terrier, with the retrieval models BM25 and PL2. For the measurement of performance, we used the measures mean average precision, mean reciprocal rank, and precision at 5. The results suggest that frequently asked question retrieval with an ontology is more effective than frequently asked question retrieval without one in the domain of HIV/AIDS.
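The three reported measures are standard and easy to state precisely. A minimal sketch (assumed, not the study's evaluation scripts) for a single query with a ranked result list and a set of relevant answers:

    def reciprocal_rank(ranked, relevant):
        for i, doc in enumerate(ranked, start=1):
            if doc in relevant:
                return 1.0 / i
        return 0.0

    def precision_at_k(ranked, relevant, k=5):
        return sum(d in relevant for d in ranked[:k]) / k

    def average_precision(ranked, relevant):
        hits, total = 0, 0.0
        for i, doc in enumerate(ranked, start=1):
            if doc in relevant:
                hits += 1
                total += hits / i
        return total / max(len(relevant), 1)

    # Example: ranked answer ids for one query, with two relevant answers
    ranked = ["a12", "a7", "a3", "a40", "a9"]
    relevant = {"a7", "a9"}
    print(reciprocal_rank(ranked, relevant))    # 0.5
    print(precision_at_k(ranked, relevant))     # 0.4
    print(average_precision(ranked, relevant))  # (1/2 + 2/5)/2 = 0.45

Mean average precision and mean reciprocal rank are then the means of these per-query values over the query set.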
Digital Preservation in Open-Source Digital Library Software
ERIC Educational Resources Information Center
Madalli, Devika P.; Barve, Sunita; Amin, Saiful
2012-01-01
Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…
Wright, T M; Rimnac, C M; Faris, P M; Bansal, M
1988-10-01
The performance of carbon fiber-reinforced ultra-high molecular weight polyethylene was compared with that of plain (non-reinforced) polyethylene on the basis of the damage that was observed on the articulating surfaces of retrieved tibial components of total knee prostheses. Established microscopy techniques for subjectively grading the presence and extent of surface damage and the histological structure of the surrounding tissues were used to evaluate twenty-six carbon fiber-reinforced and twenty plain polyethylene components that had been retrieved after an average of twenty-one months of implantation. All of the tibial components were from the same design of total knee replacement. The two groups of patients from whom the components were retrieved did not differ with regard to weight, the length of time that the component had been implanted, the radiographic position and angular alignment of the component, the original diagnosis, or the reason for removal of the component. The amounts and types of damage that were observed did not differ for the two materials. For both materials, the amount of damage was directly related to the length of time that the component had been implanted. The histological appearance of tissues from the area around the component did not differ for the two materials, except for the presence of fragments of carbon fiber in many of the samples from the areas around carbon fiber-reinforced components.
The joint methane profiles retrieval approach from GOSAT TIR and SWIR spectra
NASA Astrophysics Data System (ADS)
Zadvornykh, Ilya V.; Gribanov, Konstantin G.; Zakharov, Vyacheslav I.; Imasu, Ryoichi
2017-11-01
In this paper we present a method, using methane as an example, that allows more accurate retrieval of greenhouse gases in the Earth's atmosphere. Using the new version of the FIRE-ARMS software, supplemented with the VLIDORT vector radiative transfer model, we carried out joint methane retrieval from TIR (Thermal InfraRed) and SWIR (Short-Wavelength InfraRed) GOSAT spectra using the optimal estimation method. MACC reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF), supplemented by data from aircraft measurements of the HIPPO experiment, were used as a statistical ensemble.
cPath: open source software for collecting, storing, and querying biological pathways
Cerami, Ethan G; Bader, Gary D; Gross, Benjamin E; Sander, Chris
2006-01-01
Background Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. Results We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. Conclusion cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling. PMID:17101041
Data Compression in Full-Text Retrieval Systems.
ERIC Educational Resources Information Center
Bell, Timothy C.; And Others
1993-01-01
Describes compression methods for components of full-text systems such as text databases on CD-ROM. Topics discussed include storage media; structures for full-text retrieval, including indexes, inverted files, and bitmaps; compression tools; memory requirements during retrieval; and ranking and information retrieval. (Contains 53 references.)…
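Of the index structures the survey covers, the inverted file is the central one: a map from each term to the list of documents containing it, which is also the component that benefits most from compression. A minimal, uncompressed sketch (illustrative only):

    from collections import defaultdict

    def build_inverted_index(docs):
        # term -> set of document ids (an uncompressed posting list)
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    docs = {1: "data compression on cd-rom",
            2: "full text retrieval",
            3: "text compression"}
    index = build_inverted_index(docs)
    print(index["text"] & index["compression"])  # conjunctive query -> {3}

In a real full-text system the posting lists are sorted and stored as compressed gap sequences, which is where the memory savings discussed in the article come from.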
Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
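The surrogate goal can be stated compactly: bin the retrieved CO2 values by month (and, in this sketch, by a spatial region index), compute the standard deviation within each bin, and average. A candidate filter is then judged by how much it lowers this statistic on the soundings it keeps. The sketch below is an assumed illustration of the metric, not the delivered software.

    import numpy as np
    from collections import defaultdict

    def mean_monthly_stdev(co2, region, month):
        bins = defaultdict(list)
        for c, r, m in zip(co2, region, month):
            bins[(r, m)].append(c)
        stds = [np.std(v) for v in bins.values() if len(v) > 1]
        return float(np.mean(stds))

    rng = np.random.default_rng(1)
    co2 = 400 + rng.normal(0.0, 2.0, size=1000)  # toy retrieved CO2 values
    region = rng.integers(0, 10, size=1000)      # toy spatial bin index
    month = rng.integers(1, 13, size=1000)
    print(round(mean_monthly_stdev(co2, region, month), 2))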
BARTTest: Community-Standard Atmospheric Radiative-Transfer and Retrieval Tests
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Himes, Michael D.; Cubillos, Patricio E.; Blecic, Jasmina; Challener, Ryan C.
2018-01-01
Atmospheric radiative transfer (RT) codes are used both to predict planetary and brown-dwarf spectra and in retrieval algorithms to infer atmospheric chemistry, clouds, and thermal structure from observations. Observational plans, theoretical models, and scientific results depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. The community needs a suite of test calculations with analytically, numerically, or at least community-verified results. We therefore present the Bayesian Atmospheric Radiative Transfer Test Suite, or BARTTest. BARTTest has four categories of tests: analytically verified RT tests of simple atmospheres (single line in single layer, line blends, saturation, isothermal, multiple line-list combination, etc.), community-verified RT tests of complex atmospheres, synthetic retrieval tests on simulated data with known answers, and community-verified real-data retrieval tests. BARTTest is open-source software intended for community use and further development. It is available at https://github.com/ExOSPORTS/BARTTest. We propose this test suite as a standard for verifying atmospheric RT and retrieval codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G, NASA Astrophysics Data Analysis Program grant NNX13AF38G, and NASA Exoplanets Research Program grant NNX17AB62G.
Detecting Thin Cirrus in Multiangle Imaging Spectroradiometer Aerosol Retrievals
NASA Technical Reports Server (NTRS)
Pierce, Jeffrey R.; Kahn, Ralph A.; Davis, Matt R.; Comstock, Jennifer M.
2010-01-01
Thin cirrus clouds (optical depth (OD) < 0.3) are often undetected by standard cloud masking in satellite aerosol retrieval algorithms. However, the Multiangle Imaging Spectroradiometer (MISR) aerosol retrieval has the potential to discriminate between the scattering phase functions of cirrus and aerosols, thus separating these components. Theoretical tests show that MISR is sensitive to cirrus OD within max{0.05, 20%}, similar to MISR's sensitivity to aerosol OD, and MISR can distinguish between small and large crystals, even at low latitudes, where the range of scattering angles observed by MISR is smallest. Including just two cirrus components in the aerosol retrieval algorithm would capture typical MISR sensitivity to the natural range of cirrus properties; in situations where cirrus is present but the retrieval comparison space lacks these components, the retrieval tends to underestimate OD. Generally, MISR can also distinguish between cirrus and common aerosol types when the proper cirrus and aerosol optical models are included in the retrieval comparison space and total column OD is greater than about 0.2. However, in some cases, especially at low latitudes, cirrus can be mistaken for some combinations of dust and large nonabsorbing spherical aerosols, raising a caution about retrievals in dusty marine regions when cirrus is present. Comparisons of MISR with lidar and Aerosol Robotic Network data show good agreement in a majority of the cases, but situations where cirrus clouds have optical depths >0.15 and are horizontally inhomogeneous on spatial scales shorter than 50 km pose difficulties for cirrus retrieval using the MISR standard aerosol algorithm.
Grasping objects autonomously in simulated KC-135 zero-g
NASA Technical Reports Server (NTRS)
Norsworthy, Robert S.
1994-01-01
The KC-135 aircraft was chosen for simulated zero-gravity testing of the Extravehicular Activity Helper/Retriever (EVAHR). A software simulation of the EVAHR hardware, KC-135 flight dynamics, collision detection, and grasp impact dynamics has been developed to integrate and test the EVAHR software prior to flight testing on the KC-135. The EVAHR software will perform target pose estimation, tracking, and motion estimation for rigid, freely rotating, polyhedral objects. Manipulator grasp planning and trajectory control software has also been developed to grasp targets while avoiding collisions.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-02
... Software and Firmware, and Components Thereof and Products Containing the Same; Institution of..., related software and firmware, and components thereof and products containing the same by reason of... after importation of certain cameras and mobile devices, related software and firmware, and components...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
... Products, Components Thereof, and Related Software; Notice of Institution of Investigation; Institution of... importation of certain GPS navigation products, components thereof, and related software by reason of... importation of certain GPS navigation products, components thereof, and related software that infringe one or...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-13
... Software, and Components Thereof Final Determination of Violation; Issuance of a Limited Exclusion Order... importation of certain mobile devices, associated software, and components thereof by reason of infringement... importation of certain mobile devices, associated software, and components thereof containing same by reason...
A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory
NASA Technical Reports Server (NTRS)
Elander, Valjean; Koshak, William; Phanord, Dieudonne
2004-01-01
The ZEUS long-range VLF arrival-time-difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global-scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing ZEUS sites (as well as potential future sites) to simulate arrival time data between each source and each ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
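The simulation step described above is straightforward to illustrate. In the sketch below (an assumed, flat-geometry illustration; the actual ZEAL/IO code works on an oblate Earth with variable VLF propagation speed), true propagation times from a source to each receiver are perturbed with Gaussian timing errors of mean 0 s and standard deviation 20 microseconds:

    import numpy as np

    C = 2.99792458e8  # propagation speed approximated as c (assumption)

    def noisy_arrival_times(source, receivers, n_trials=100, sigma=20e-6, seed=0):
        # receivers: (n, 3) array of site coordinates in meters
        rng = np.random.default_rng(seed)
        t_true = np.linalg.norm(receivers - source, axis=1) / C
        # One row per trial: true times plus Gaussian timing errors
        return t_true + rng.normal(0.0, sigma, size=(n_trials, len(receivers)))

Each noisy realization is then inverted for a source location, and the spread of the 100 retrieved locations at each of the 5000 sites yields the color-coded error maps.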
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Robert Woolsey; Thomas M. McGee; Carol Blanton Lutken
2007-03-31
The Gulf of Mexico Hydrates Research Consortium (GOM-HRC) was established in 1999 to assemble leaders in gas hydrates research. The Consortium is administered by the Center for Marine Resources and Environmental Technology (CMRET) at the University of Mississippi. The primary objective of the group is to design and emplace a remote monitoring station or sea-floor observatory (MS/SFO) on the sea floor in the northern Gulf of Mexico by the year 2007, in an area where gas hydrates are known to be present at, or just below, the sea floor. This mission, although unavoidably delayed by hurricanes and other disturbances, necessitates assembling a station that will monitor physical and chemical parameters of the marine environment, including sea water and sea-floor sediments, on a more-or-less continuous basis over an extended period of time. In 2005, biological monitoring, as a means of assessing environmental health, was added to the mission of the MS/SFO. Establishment of the Consortium has succeeded in fulfilling the critical need to coordinate activities, avoid redundancies, and communicate effectively among researchers in the arena of gas hydrates research. Complementary expertise, both scientific and technical, has been assembled to promote innovative research methods and construct necessary instrumentation. The observatory has now achieved a microbial dimension in addition to the geophysical, geological, and geochemical components it had already included. Initial components of the observatory, a probe that collects pore-fluid samples and another that records sea-floor temperatures, were deployed in Mississippi Canyon 118 (MC118) in May of 2005. Follow-up deployments, planned for fall 2005, had to be postponed due to the catastrophic effects of Hurricane Katrina (and later, Rita) on the Gulf Coast. Station/observatory completion, anticipated for 2007, will likely be delayed by at least one year. These delays caused scheduling and deployment difficulties, but many sensors and instruments were completed during this period. Software has been written that will accommodate the data that the station retrieves, when it begins to be delivered. In addition, new seismic data processing software has been written to treat the peculiar data to be received by the vertical line array (VLA), and additional software has been developed to address the horizontal line array (HLA) data. These packages have been tested on data from the test deployments of the VLA and, in the case of the HLA software, on data from other, similar areas of the Gulf. The CMRET has conducted one very significant research cruise during this reporting period: a March cruise to perform sea trials of the Station Service Device (SSD), the custom Remotely Operated Vehicle (ROV) built to perform several of the unique functions required for the observatory to become fully operational. March's efforts included test deployments of the SSD, Florida Southern University's mass spectrometer designed to measure hydrocarbon gases in the water column, and The University of Georgia's microbial collector. The University of Georgia's rotational sea-floor camera was retrieved, as was Specialty Devices' storm monitor array. The former was deployed in September and the latter in June 2006. Both were retrieved by acoustic release from a dispensable weight. Cruise participants also went prepared to recover any and all instruments left on the sea floor during the September Johnson SeaLink submersible cruise.
One of the pore-fluid samplers, a small "peeper," was retrieved successfully and in fine condition. Other instrumentation was left on the sea floor until modifications of the SSD are complete and a return cruise is accomplished.
Barlow, Paul M.; Cunningham, William L.; Zhai, Tong; Gray, Mark
2015-01-01
This report is a user guide for the streamflow-hydrograph analysis methods provided with version 1.0 of the U.S. Geological Survey (USGS) Groundwater Toolbox computer program. These include six hydrograph-separation methods to determine the groundwater-discharge (base-flow) and surface-runoff components of streamflow—the Base-Flow Index (BFI; Standard and Modified), HYSEP (Fixed Interval, Sliding Interval, and Local Minimum), and PART methods—and the RORA recession-curve displacement method and associated RECESS program to estimate groundwater recharge from streamflow data. The Groundwater Toolbox is a customized interface built on the nonproprietary, open-source MapWindow geographic information system software. The program provides graphing, mapping, and analysis capabilities in a Microsoft Windows computing environment. In addition to the hydrograph-analysis methods, the Groundwater Toolbox allows for the retrieval of hydrologic time-series data (streamflow, groundwater levels, and precipitation) from the USGS National Water Information System, downloading of a suite of preprocessed geographic information system coverages and meteorological data from the National Oceanic and Atmospheric Administration National Climatic Data Center, and analysis of data with several preprocessing and postprocessing utilities. With its data retrieval and analysis tools, the Groundwater Toolbox provides methods to estimate many of the components of the water budget for a hydrologic basin, including precipitation; streamflow; base flow; runoff; groundwater recharge; and total, groundwater, and near-surface evapotranspiration.
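As an example of one of the six separation methods, HYSEP's Fixed Interval variant can be sketched in a few lines: streamflow is divided into intervals of width 2N* days (N* = A^0.2, with drainage area A in square miles; the published method rounds 2N* to the nearest odd integer between 3 and 11, which this simplified sketch skips), and base flow within each interval is set to the interval's minimum daily flow. This is an assumed simplification of the published description, not the Groundwater Toolbox code.

    import numpy as np

    def hysep_fixed_interval(q, drainage_area_mi2):
        # q: array of daily mean streamflow; returns the base-flow series
        q = np.asarray(q, dtype=float)
        width = max(int(2 * drainage_area_mi2 ** 0.2), 1)
        base = np.empty(len(q), dtype=float)
        for start in range(0, len(q), width):
            base[start:start + width] = q[start:start + width].min()
        return np.minimum(base, q)  # base flow cannot exceed streamflow

Surface runoff then follows as streamflow minus the separated base flow.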
MERIS Retrieval of Water Quality Components in the Turbid Albemarle-Pamlico Sound Estuary, USA
Two remote-sensing optical algorithms for the retrieval of the water quality components (WQCs) in the Albemarle-Pamlico Estuarine System (APES) have been developed and validated for chlorophyll a (Chl) concentration. Both algorithms are semiempirical because they incorporate some...
Determinants to trigger memory reconsolidation: The role of retrieval and updating information.
Rodriguez-Ortiz, Carlos J; Bermúdez-Rattoni, Federico
2017-07-01
Long-term memories can undergo destabilization/restabilization processes, collectively called reconsolidation. However, the parameters that trigger memory reconsolidation are poorly understood and are a matter of intense investigation. In particular, memory retrieval is widely held to be a requisite for initiating reconsolidation. This assumption makes sense, since only relevant cues will induce reconsolidation of a specific memory. However, recent studies show that pharmacological inhibition of retrieval does not prevent memory from undergoing reconsolidation, indicating that memory reconsolidation occurs through a process that can be dissociated from retrieval. We propose that retrieval is not a unitary process but has two dissociable components, one leading to the expression of memory and the other to reconsolidation, referred to herein as the executer and the integrator, respectively. The executer would lead to the behavioral expression of the memory. This component would be the one disrupted in the studies that show that reconsolidation is independent of retrieval. The integrator would deal with reconsolidation. This component of retrieval would lead to long-term memory destabilization when specific conditions are met. We think that an important number of reports are consistent with the hypothesis that reconsolidation is initiated only when updating information is acquired. We suggest that the integrator would initiate reconsolidation to integrate updating information into long-term memory. Copyright © 2016 Elsevier Inc. All rights reserved.
Polyethylene Wear in Retrieved Reverse Total Shoulder Components
Day, Judd S; MacDonald, Daniel W; Olsen, Madeline; Getz, Charles; Williams, Gerald R; Kurtz, Steven M
2011-01-01
Background Reverse total shoulder arthroplasty has been used to treat rotator cuff tear arthropathy, proximal humeral fractures, and failed conventional total shoulder prostheses. It has been suggested that polyethylene wear is potentially higher in reverse shoulder replacements than in conventional shoulder replacements. The modes and degree of polyethylene wear have not been completely elucidated. The purpose of this study was to evaluate polyethylene wear patterns in seven specimens retrieved at revision arthroplasty and to identify factors that may be associated with increased wear. Methods Reverse total shoulder components were retrieved from 7 patients during revision arthroplasty for loosening and/or pain. Pre-operative glenoid tilt and placement, and scapular notching, were evaluated using pre-operative radiographs. Polyethylene wear was evaluated using microCT and optical microscopy. Results Wear on the rim of the polyethylene humeral cup was identified on all retrieved components. The extent of rim wear varied from a penetration depth of 0.1 to 4.7 mm. We could not demonstrate a correlation between scapular notching and rim wear. However, rim wear was more extensive when the inferior screw had made contact with the liner. Metal-on-metal wear between the humeral component and the inferior screw of one component was also observed. Wear of the intended bearing surface was minimal. Discussion Rim damage was the predominant cause of polyethylene wear in our retrieved specimens. Direct contact between the humeral component and inferior metaglene screws is concerning because this could lead to accelerated UHMWPE wear and also induce mechanical loosening of the glenoid component. PMID:21724419
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-23
... Entertainment Consoles, Related Software, and Components Thereof; Notice of Investigation AGENCY: U.S..., related software, and components thereof by reason of infringement of certain claims of U.S. Patent No. 5... gaming and entertainment consoles, related software, and components thereof that infringe one or more of...
Leveraging Existing Mission Tools in a Re-Usable, Component-Based Software Environment
NASA Technical Reports Server (NTRS)
Greene, Kevin; Grenander, Sven; Kurien, James; O'Reilly, Taifun
2006-01-01
Emerging methods in component-based software development offer significant advantages but may seem incompatible with existing mission operations applications. In this paper we relate our positive experiences integrating existing mission applications into component-based tools we are delivering to three missions. In most operations environments, a number of software applications have been integrated together to form the mission operations software. In contrast, with component-based software development, chunks of related functionality and data structures, referred to as components, can be individually delivered, integrated, and re-used. With the advent of powerful tools for managing component-based development, complex software systems can potentially see significant benefits in ease of integration, testability, and reusability from these techniques. These benefits motivate us to ask how component-based development techniques can be relevant in a mission operations environment, where there is significant investment in software tools that are not component-based and may not be written in languages for which component-based tools even exist. Trusted and complex software tools for sequencing, validation, navigation, and other vital functions cannot simply be re-written or abandoned in order to gain the advantages offered by emerging component-based software techniques. Thus some middle ground must be found. We have faced exactly this issue, and have found several solutions. Ensemble is an open platform for development, integration, and deployment of mission operations software that we are developing. Ensemble itself is an extension of an open-source, component-based software development platform called Eclipse. Due to the advantages of component-based development, we have been able to very rapidly develop mission operations tools for three surface missions by mixing and matching from a common set of mission operations components. We have also had to determine how to integrate existing mission applications for sequence development, sequence validation, high-level activity planning, and other functions into a component-based environment. For each of these, we used a somewhat different technique based upon the structure and usage of the existing application.
A MultiDiscipline Approach to Digitizing Historic Seismograms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Andrew
2016-04-07
Retriever Technology has developed, and makes available free of charge, a seismogram digitization software package called SKATE (Seismogram Kit for Automatic Trace Extraction). We have developed an extensive set of algorithms that process seismogram image files, provide editing tools, and output time-series data. The software is available online at seismo.redfish.com. To demonstrate the speed and cost effectiveness of the software, we have processed over 30,000 images.
Natural language processing-based COTS software and related technologies survey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stickland, Michael G.; Conrad, Gregory N.; Eaton, Shelley M.
Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.
Supporting geoscience with graphical-user-interface Internet tools for the Macintosh
NASA Astrophysics Data System (ADS)
Robin, Bernard
1995-07-01
This paper describes a suite of Macintosh graphical-user-interface (GUI) software programs that can be used in conjunction with the Internet to support geoscience education. These software programs allow science educators to access and retrieve a large body of resources from an increasing number of network sites, taking advantage of the intuitive, simple-to-use Macintosh operating system. With these tools, educators easily can locate, download, and exchange not only text files but also sound resources, video movie clips, and software application files from their desktop computers. Another major advantage of these software tools is that they are available at no cost and may be distributed freely. The following GUI software tools are described, including examples of how they can be used in an educational setting:
∗ Eudora—an e-mail program
∗ NewsWatcher—a newsreader
∗ TurboGopher—a Gopher program
∗ Fetch—a software application for easy File Transfer Protocol (FTP)
∗ NCSA Mosaic—a worldwide hypertext browsing program.
An explosive growth of online archives currently is underway as new electronic sites are being added continuously to the Internet. Many of these resources may be of interest to science educators who learn they can share not only ASCII text files, but also graphic image files, sound resources, QuickTime movie clips, and hypermedia projects with colleagues from locations around the world. These powerful, yet simple-to-learn GUI software tools are providing a revolution in how knowledge can be accessed, retrieved, and shared.
Development of a platform-independent receiver control system for SISIFOS
NASA Astrophysics Data System (ADS)
Lemke, Roland; Olberg, Michael
1998-05-01
Up to now, receiver control software has been a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We present a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a Structured Query Language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication, and SCPI as a command control language. The complete system is in principle platform-independent, but for cost-saving reasons we currently run it on a 486 PC under Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC, we use a device-driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like ordinary devices. The advantage of this system is firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.
X-Ray Phase Imaging for Breast Cancer Detection
2012-09-01
the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details... developed efficient coding with the software modules for the image registration, flat-field correction, and phase retrievals. In addition, we... X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging
Databank Software for the 1990s and Beyond--Part 1: The User's Wish List.
ERIC Educational Resources Information Center
Basch, Reva
1990-01-01
Describes desired software enhancements identified by the Southern California Online Users Group in the areas of search language, database selection, document retrieval and display, user interface, customer support, and cost and economic issues. The need to prioritize these wishes and to determine whether features should reside in the mainframe or…
ERIC Educational Resources Information Center
Roberts, Tommy L.; And Others
The Total Guidance Information Support System (TGISS), is an information storage and retrieval system for counselors. The total TGISS, including hardware and software, extends the counselor's capabilities by providing ready access to student information under secure conditions. The hardware required includes: (1) IBM 360/50 central processing…
Commercial imagery archive, management, exploitation, and distribution project development
NASA Astrophysics Data System (ADS)
Hollinger, Bruce; Sakkas, Alysa
1999-10-01
The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze, and disseminate in 'push' or 'pull' modes imagery, data, and data products using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, Intelligent Library System (ILS)TM, satisfies requirements for (1) a potentially unbounded data archive (5000 TB range); (2) automated workflow management for increased user productivity; (3) automatic tracking and management of files stored on shelves; (4) the ability to ingest, process, and disseminate data volumes with bandwidths ranging up to multi-gigabit per second; (5) access through a thin client-to-server network environment; (6) multiple interactive users needing retrieval of files in seconds, from archived images or in real time; and (7) scalability that maintains information throughput performance as the size of the digital library grows.
Commercial imagery archive, management, exploitation, and distribution product development
NASA Astrophysics Data System (ADS)
Hollinger, Bruce; Sakkas, Alysa
1999-12-01
The Lockheed Martin (LM) team had garnered over a decade of operational experience on the U.S. Government's IDEX II (Imagery Dissemination and Exploitation) system. Recently, it set out to create a new commercial product to serve the needs of large-scale imagery archiving and analysis markets worldwide. LM decided to provide a turnkey commercial solution to receive, store, retrieve, process, analyze, and disseminate in 'push' or 'pull' modes imagery, data, and data products using a variety of sources and formats. LM selected 'best of breed' hardware and software components and adapted and developed its own algorithms to provide added functionality not commercially available elsewhere. The resultant product, Intelligent Library System (ILS)TM, satisfies requirements for (a) a potentially unbounded data archive (5000 TB range); (b) automated workflow management for increased user productivity; (c) automatic tracking and management of files stored on shelves; (d) the ability to ingest, process, and disseminate data volumes with bandwidths ranging up to multi-gigabit per second; (e) access through a thin client-to-server network environment; (f) multiple interactive users needing retrieval of files in seconds, from archived images or in real time; and (g) scalability that maintains information throughput performance as the size of the digital library grows.
Schierjott, Ronja A; Giurea, Alexander; Neuhaus, Hans-Joachim; Schwiesau, Jens; Pfaff, Andreas M; Utzschneider, Sandra; Tozzi, Gianluca; Grupp, Thomas M
2016-01-01
Carbon fiber reinforced poly-ether-ether-ketone (CFR-PEEK) represents a promising alternative material for bushings in total knee replacements, after early clinical failures of polyethylene in this application. The objective of the present study was to evaluate the damage modes and the extent of damage observed on CFR-PEEK hinge mechanism articulation components after in vivo service in a rotating hinge knee (RHK) system, and to compare the results with corresponding components subjected to in vitro wear tests. The key question was whether there were any similarities or differences between in vivo and in vitro damage characteristics. Twelve retrieved RHK systems, after an average of 34.9 months in vivo, underwent wear damage analysis with a focus on the four integrated CFR-PEEK components, with distinction between different damage modes and classification with a scoring system. The analysis included visual examination, scanning electron microscopy, and energy dispersive X-ray spectroscopy, as well as surface roughness and profile measurements. The main wear damage modes were comparable between retrieved and in vitro specimens (n = 3), although the size of the affected area on the retrieved components showed higher variation. Overall, the retrieved specimens appeared to be slightly more heavily damaged, which was probably attributable to the more complex loading and kinematic conditions in vivo.
Modeling and investigative studies of Jovian low frequency emissions
NASA Technical Reports Server (NTRS)
Menietti, J. D.; Green, James L.; Six, N. Frank; Gulkis, S.
1986-01-01
Jovian decametric (DAM) and hectometric (HOM) emissions were first observed over the entire spectrum by the Voyager 1 and 2 flybys of the planet. They display unusual arc-like structures on frequency-versus-time spectrograms. Software for modeling the Jovian plasma and magnetic field environment was developed. In addition, an extensive library of programs was developed for the retrieval of Voyager Planetary Radio Astronomy (PRA) data in both the high and low frequency bands from new noise-free, recalibrated data tapes. This software allows the option of retrieving data sorted with respect to particular sub-Io longitudes, which has proven invaluable in the analyses of the data. Graphics routines were also developed to display the data on color spectrograms.
Modeling and investigative studies of Jovian low frequency emissions
NASA Astrophysics Data System (ADS)
Menietti, J. D.; Green, James L.; Six, N. Frank; Gulkis, S.
1986-08-01
Jovian decametric (DAM) and hectometric (HOM) emissions were first observed over the entire spectrum by the Voyager 1 and 2 flybys of the planet. They display unusual arc-like structures on frequency-versus-time spectrograms. Software for modeling the Jovian plasma and magnetic field environment was developed. In addition, an extensive library of programs was developed for the retrieval of Voyager Planetary Radio Astronomy (PRA) data in both the high and low frequency bands from new noise-free, recalibrated data tapes. This software allows the option of retrieving data sorted with respect to particular sub-Io longitudes, which has proven invaluable in the analyses of the data. Graphics routines were also developed to display the data on color spectrograms.
A database for TMT interface control documents
NASA Astrophysics Data System (ADS)
Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John
2016-08-01
The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
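As a purely hypothetical illustration of the kind of JSON-style description the database ingests (the paper does not give the actual schema, so every field name below is invented), a command-interface file for one component might look like the following Python literal:

    # All field names and values here are illustrative assumptions only.
    component_commands = {
        "subsystem": "TCS",
        "component": "PointingKernel",
        "receive": [
            {
                "name": "setTarget",
                "description": "Slew to the given equatorial coordinates",
                "parameters": [
                    {"name": "ra", "type": "double", "units": "deg"},
                    {"name": "dec", "type": "double", "units": "deg"},
                ],
            }
        ],
        "send": ["MountAssembly.follow"],
    }

An interface control document between two subsystems is then, as the abstract describes, the intersection of such descriptions: the commands and events one subsystem sends that the other receives or subscribes to.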
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-25
... Computing Devices, Related Software, and Components Thereof; Notice of Investigation AGENCY: U.S... devices, related software, and components thereof by reason of infringement of certain claims of U.S... devices, related software, and components thereof that infringe one or more of claims 1 and 5 of the '372...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... Hardware and Software Components Thereof; Notice of Investigation AGENCY: U.S. International Trade... boxes, and hardware and software components thereof by reason of infringement of certain claims of U.S... after importation of certain set-top boxes, and hardware and software components thereof that infringe...
Information Storage and Retrieval: Reports on Evaluation Procedures and Results, 1965-1967.
ERIC Educational Resources Information Center
Salton, Gerald
A detailed analysis of the retrieval evaluation results obtained with the automatic SMART document retrieval system for document collections in the fields of aerodynamics, computer science, and documentation is given in this report. The various components of fully automatic document retrieval systems are discussed in detail, including the forms of…
NASA Astrophysics Data System (ADS)
Gliss, Jonas; Stebel, Kerstin; Kylling, Arve; Solvejg Dinger, Anna; Sihler, Holger; Sudbø, Aasmund
2017-04-01
UV SO2 cameras have become a common method for monitoring SO2 emission rates from volcanoes. Scattered solar UV radiation is measured in two wavelength windows, typically around 310 nm and 330 nm (distinct/weak SO2 absorption), using interference filters. The data analysis comprises the retrieval of plume background intensities (to calculate plume optical densities), the camera calibration (to convert optical densities into SO2 column densities), and the retrieval of gas velocities within the plume as well as of plume distances. SO2 emission rates are then typically retrieved along a projected plume cross section, for instance a straight line perpendicular to the plume propagation direction. Today, for most of the required analysis steps, several alternatives exist due to ongoing developments and improvements related to the measurement technique. We present piscope, a cross-platform, open-source software toolbox for the analysis of UV SO2 camera data. The code is written in the Python programming language and emerged from the idea of a common analysis platform incorporating a selection of the most prevalent methods found in the literature. piscope includes several routines for plume background retrievals, as well as routines for cell- and DOAS-based camera calibration, including two individual methods to identify the DOAS field of view (shape and position) within the camera images. Gas velocities can be retrieved either based on an optical flow analysis or using signal cross-correlation. A correction for signal dilution (due to atmospheric scattering) can be performed based on topographic features in the images. The latter requires distance retrievals to the topographic features used for the correction. These distances can be retrieved automatically on a per-pixel basis using intersections of individual pixel viewing directions with the local topography. The main features of piscope are presented based on a dataset recorded at Mt. Etna, Italy, in September 2015.
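The final step, computing an emission rate from a retrieved column-density cross section, a plume speed, and the pixel size at plume distance, can be sketched as follows. This is an assumed illustration of the standard calculation, not piscope's API.

    import numpy as np

    KG_PER_MOLECULE = 64.066e-3 / 6.022e23  # SO2 molar mass / Avogadro's number

    def emission_rate(cd_cross_section, pix_size_m, v_plume_ms):
        # cd_cross_section: SO2 column densities [molecules/cm^2] along a
        # line perpendicular to the plume propagation direction
        cd_m2 = np.asarray(cd_cross_section) * 1e4       # -> molecules/m^2
        return cd_m2.sum() * pix_size_m * v_plume_ms * KG_PER_MOLECULE  # kg/s

The pixel size at plume distance is where the distance retrievals enter, and the plume speed comes from either the optical-flow or the cross-correlation analysis.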
2016-01-06
of-breed software components and software product lines (SPLs) that are subject to different IP license and cybersecurity requirements. The... commercially priced closed source software components, to be used in the design, implementation, deployment, and evolution of open architecture (OA... breed software components and software product lines (SPLs) that are subject to different IP license and cybersecurity requirements. The Department
Information Retrieval Using UMLS-based Structured Queries
Fagan, Lawrence M.; Berrios, Daniel C.; Chan, Albert; Cucina, Russell; Datta, Anupam; Shah, Maulik; Surendran, Sujith
2001-01-01
During the last three years, we have developed and described components of ELBook, a semantically based information-retrieval system [1-4]. Using these components, domain experts can specify a query model, indexers can use the query model to index documents, and end-users can search these documents for instances of indexed queries.
NASA Astrophysics Data System (ADS)
Bouroubi, Mohamed Yacine
Multi-spectral satellite imagery, especially at high spatial resolution (finer than 30 m on the ground), represents an invaluable source of information for decision making in various domains connected with natural resources management, environment preservation, or urban planning and management. The mapping scales may range from local (resolution finer than 5 m) to regional (resolution coarser than 5 m). The images are characterized by objects' reflectance in the electromagnetic spectrum, which represents the key information in many applications. However, satellite sensor measurements are also affected by spurious contributions due to illumination and observation conditions, to the atmosphere, to topography, and to sensor properties. Two questions have oriented this research. What is the best approach to retrieve surface reflectance from the measured values while taking these spurious factors into account? Is this retrieval a sine qua non condition for reliable extraction of image information in the diverse domains of application of the images (mapping, environmental monitoring, landscape change detection, resources inventory, etc.)? The goals we have delineated for this research are as follows: (1) Develop software to retrieve ground reflectance while taking into account the aspects mentioned earlier. This software had to be modular enough to allow improvement and adaptation to diverse remote sensing application problems; and (2) Apply this software in various contexts (urban, agricultural, forest) and analyse the results to evaluate the accuracy gain of information extracted from remote sensing imagery transformed into ground-reflectance images, to demonstrate the necessity of operating in this way, whatever the type of application. During this research, we have developed a tool to retrieve ground reflectance (the new version of the REFLECT software). This software is based on the formulas (and routines) of the 6S code (Second Simulation of Satellite Signal in the Solar Spectrum) and on the dark-target method to estimate the aerosol optical thickness (AOD), the most difficult factor to correct. Substantial improvements have been made to the existing models. These improvements essentially concern the aerosol properties (integration of a more recent model, improvement of the dark-target selection used to estimate the AOD), the adjacency effect, the adaptation to the most widely used high-resolution (Landsat TM and ETM+, all HR SPOT 1 to 5, EO-1 ALI, and ASTER) and very-high-resolution (QuickBird and Ikonos) sensors, and the correction of topographic effects with a model that separates the direct and diffuse solar radiation components, including an adaptation of this model to forest canopy. Validation has shown that ground reflectance estimation with REFLECT is performed with an accuracy of approximately +/-0.01 in reflectance units (in the visible, near-infrared, and middle-infrared spectral bands), even for a surface with varying topography. This software has made it possible to demonstrate, through apparent-reflectance simulations, how strongly the spurious factors influencing the numerical values of the images can alter the ground reflectance (errors ranging from 10 to 50%). REFLECT has also been used to examine the usefulness of ground reflectance instead of raw data for various common remote sensing applications in domains such as classification, change detection, agriculture, and forestry. In most applications (multi-temporal change monitoring, use of vegetation indices, biophysical parameters estimation, etc.)
image correction is a crucial step for obtaining reliable results. From the computing standpoint, REFLECT is organized as a series of menus corresponding to the successive steps of parameter input, gas-transmittance calculation, AOD estimation, and finally application of the image correction, with the possibility of using a fast option which processes an image of 5000 by 5000 pixels in approximately 15 minutes. (Abstract shortened by UMI.)
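For orientation, the single-layer correction that codes of the 6S family apply can be written compactly: with gas transmittance T_g, atmospheric path reflectance rho_atm, downward and upward scattering transmittances T_down and T_up, and atmospheric spherical albedo s, the surface reflectance follows from the top-of-atmosphere reflectance rho_TOA. The sketch below is a generic textbook form under a uniform Lambertian-surface assumption, not REFLECT's code.

    def surface_reflectance(rho_toa, t_gas, rho_atm, t_down, t_up, s):
        # Invert the standard TOA reflectance equation for a uniform
        # Lambertian surface:
        #   rho_toa = t_gas * (rho_atm + t_down*t_up*rho_s / (1 - s*rho_s))
        y = (rho_toa / t_gas - rho_atm) / (t_down * t_up)
        return y / (1.0 + s * y)

The AOD estimated from dark targets is what fixes rho_atm, s, and the scattering transmittances for a given geometry, which is why it is the most sensitive input of the correction.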
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
..., Associated Software, and Components Thereof; Notice of Investigation AGENCY: U.S. International Trade..., associated software, and components thereof by reason of infringement of certain claims of U.S. Patent No. 5..., associated software, and components thereof that infringe one or more of claims 1-4, 22, 26, 31, and 36 of...
Component-based integration of chemistry and optimization software.
Kenny, Joseph P; Benson, Steven J; Alexeev, Yuri; Sarich, Jason; Janssen, Curtis L; McInnes, Lois Curfman; Krishnan, Manojkumar; Nieplocha, Jarek; Jurrus, Elizabeth; Fahlstrom, Carl; Windus, Theresa L
2004-11-15
Typical scientific software designs make rigid assumptions regarding programming language and data structures, frustrating software interoperability and scientific collaboration. Component-based software engineering is an emerging approach to managing the increasing complexity of scientific software. Component technology facilitates code interoperability and reuse. Through the adoption of methodology and tools developed by the Common Component Architecture Forum, we have developed a component architecture for molecular structure optimization. Using the NWChem and Massively Parallel Quantum Chemistry packages, we have produced chemistry components that provide capacity for energy and energy derivative evaluation. We have constructed geometry optimization applications by integrating the Toolkit for Advanced Optimization, Portable Extensible Toolkit for Scientific Computation, and Global Arrays packages, which provide optimization and linear algebra capabilities. We present a brief overview of the component development process and a description of abstract interfaces for chemical optimizations. The components conforming to these abstract interfaces allow the construction of applications using different chemistry and mathematics packages interchangeably. Initial numerical results for the component software demonstrate good performance, and highlight potential research enabled by this platform.
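The abstract-interface idea is easy to convey in miniature: any chemistry package that can evaluate an energy and its gradient behind a common interface can be driven by any optimizer written against that interface. The Python sketch below is an analogy only; the actual components are defined in a language-neutral interface layer and span the NWChem, MPQC, TAO, PETSc, and Global Arrays packages.

    from abc import ABC, abstractmethod
    import numpy as np

    class ModelEvaluator(ABC):
        # Role played by the chemistry components (energy codes)
        @abstractmethod
        def energy(self, coords): ...
        @abstractmethod
        def gradient(self, coords): ...

    def steepest_descent(model, x0, step=0.1, tol=1e-6, max_iter=500):
        # Role played by the optimization components; it sees only the
        # abstract interface, never a particular chemistry package.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = model.gradient(x)
            if np.linalg.norm(g) < tol:
                break
            x = x - step * g
        return x

Because the optimizer depends only on the interface, chemistry and mathematics packages can be swapped interchangeably, which is exactly the interoperability claim of the component architecture.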
NASA Technical Reports Server (NTRS)
Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin
2000-01-01
The purpose of this paper is to provide a description of NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open inter-operable systems software development and software reuse. It will address what is meant by the term object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.
Assessing repository technology. Where do we go from here?
NASA Technical Reports Server (NTRS)
Eichmann, David
1992-01-01
Three sample information retrieval systems, archie, autoLib, and Wide Area Information Service (WAIS), are compared with regard to their expressiveness and usefulness, first in the general context of information retrieval, and then as prospective software reuse repositories. While the representational capabilities of these systems are limited, they provide a useful foundation for future repository efforts, particularly from the perspective of repository distribution and coherent user interface design.
Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce
2014-03-27
[Fragmentary report text: reference-list entries, table-of-contents lines, and report documentation fields are interleaved. Recoverable details: retrieval results for millions of images running on 20 virtual machines are shown; subject terms are Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop; the report covers HDFS data representation and the Hadoop engine.]
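The record above survives only in fragments, but its title names the technique: hierarchical k-means over image descriptors, i.e. a vocabulary tree built by recursive clustering. A minimal sketch of that structure follows; the clustering itself is standard, while the note about the MapReduce mapping is an assumption about the thesis design, not something stated in the fragment:

```python
import numpy as np

def kmeans(points, k, iters=10, rng=None):
    """Plain Lloyd's k-means; returns (centroids, labels)."""
    rng = rng or np.random.default_rng(0)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid per point.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute centroids of non-empty clusters.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

def build_tree(points, branch=4, depth=3):
    """Recursive hierarchical k-means ('vocabulary tree'): each node splits its
    points into `branch` clusters until `depth` is exhausted. In a MapReduce
    setting the assignment step is a natural map phase and the centroid update
    a reduce phase (an assumption here, not taken from the fragment above)."""
    if depth == 0 or len(points) < branch:
        return {"centroids": None, "children": []}
    centroids, labels = kmeans(points, branch)
    children = [build_tree(points[labels == j], branch, depth - 1)
                for j in range(branch)]
    return {"centroids": centroids, "children": children}

features = np.random.default_rng(1).normal(size=(2000, 8))  # stand-in descriptors
tree = build_tree(features)
```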
NASA Astrophysics Data System (ADS)
Grassi, Davide; Sindoni, Giuseppe; D'Aversa, Emiliano; Oliva, Fabrizio; Filacchione, Gianrico; Adriani, Alberto; Mura, Alessandro; Moriconi, Maria Luisa; Noschese, Raffaella; Cicchetti, Andrea; Piccioni, Giuseppe; Ignatiev, Nikolai; Maestri, Tiziano
2016-04-01
In this contribution, we detail the retrieval scheme developed over the last few years for the analysis of the spectral data expected from the JIRAM experiment on board the NASA Juno mission [1], beginning in the second half of 2016. Our focus is on the analysis of the thermal radiation in the 5 micron transparency window, in regions of lesser cloud opacity (namely, hot spots). Moving on from the preliminary analysis presented in Grassi et al., 2010 [2], a retrieval scheme has been developed and implemented as a complete end-to-end processing software package. Performance in terms of fit quality and retrieval errors is discussed on the basis of tests on simulated spectra. A few examples of usage on Cassini VIMS flyby data are also presented. Following the suggestion originally presented in Irwin et al., 1998 [3] for the analysis of the NIMS data, the state vector to be retrieved has been drastically simplified on a physically sound basis, aiming mostly to distinguish between the 'deep' content of minor gaseous components (water, ammonia, phosphine) and their relative humidity or fractional scale height in the upper troposphere. The retrieval code is based on a Bayesian scheme [4], complemented by a Metropolis algorithm plus simulated annealing [5] for the most problematic cases. The key parameters retrievable from individual JIRAM spectra are the ammonia and phosphine deep content and the water vapour relative humidity, as well as the total aerosol opacity. We also discuss in detail the technical aspects related to the forward radiative transfer scheme: completeness of the line databases used to generate correlated-k tables, comparison of different schemes for the treatment of aerosol scattering, assumptions on cloud radiative properties, and issues related to the analysis of dayside data. This work has been funded through ASI grants I/010/10/0 and 2014-050-R.0. [1] Adriani et al., 2008, doi:10.1089/ast.2007.0167 [2] Grassi et al., 2010, doi:10.1016/j.pss.2010.05.003 [3] Irwin et al., 1998, doi:10.1029/98JE00948 [4] Rodgers, 2000, isbn:9789810227401 [5] Press et al., 1996, isbn:9780521574396
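The abstract names a Bayesian scheme complemented by a Metropolis algorithm with simulated annealing for the difficult cases. A generic sketch of that Metropolis/annealing step is given below; the toy cost function and the geometric cooling schedule are assumptions for illustration, not JIRAM's actual misfit or schedule:

```python
import math, random

def cost(state):
    """Toy stand-in for the retrieval misfit (e.g. a chi-square between measured
    and modelled spectra); the real JIRAM cost function is not shown here."""
    return sum((x - 2.0) ** 2 for x in state)

def metropolis_anneal(state, steps=5000, t0=5.0, cooling=0.999, sigma=0.2):
    """Generic Metropolis sampler with a geometric annealing schedule: uphill
    moves are accepted with probability exp(-dE/T), and T decays so the walk
    settles into a low-cost (high-posterior) region."""
    temp, e = t0, cost(state)
    best, best_e = list(state), e
    for _ in range(steps):
        cand = [x + random.gauss(0.0, sigma) for x in state]
        e_cand = cost(cand)
        if e_cand < e or random.random() < math.exp(-(e_cand - e) / temp):
            state, e = cand, e_cand
            if e < best_e:
                best, best_e = list(state), e
        temp *= cooling
    return best, best_e

print(metropolis_anneal([0.0, 10.0, -5.0]))  # converges near [2, 2, 2]
```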
Mercury: Reusable software application for Metadata Management, Data Discovery and Access
NASA Astrophysics Data System (ADS)
Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.
2009-12-01
Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects use the same harvester scripts, with each project driven by a set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.
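A minimal sketch of the configuration-driven harvester pattern described above, assuming JSON endpoints and invented configuration fields (the real Mercury harvester and its configuration format are not shown in the abstract):

```python
import json, urllib.request

def load_config(path):
    """Per-project configuration: which endpoints to harvest and which
    metadata fields to index. Field names here are invented for illustration."""
    with open(path) as f:
        return json.load(f)

def harvest(config):
    """One shared harvester driven entirely by configuration, in the spirit of
    the Mercury design described above (not its actual code)."""
    for url in config["endpoints"]:
        with urllib.request.urlopen(url) as resp:
            for record in json.load(resp):
                yield {field: record.get(field) for field in config["index_fields"]}

def index(records):
    """Toy inverted keyword index standing in for Mercury's indexing system."""
    inverted = {}
    for i, rec in enumerate(records):
        for word in str(rec.get("title", "")).lower().split():
            inverted.setdefault(word, set()).add(i)
    return inverted
```

The design point the abstract emphasizes is that the harvester code stays identical across projects; only the configuration file changes.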
Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sussman, Alan
2014-10-21
This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.
Graph Visualization for RDF Graphs with SPARQL-EndPoints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Bond, Nathaniel
2014-07-11
RDF graphs are hard to visualize as triples. This software module is a web interface that connects to a SPARQL endpoint and retrieves graph data that the user can explore interactively and seamlessly. The software, written in Python and JavaScript, has been tested on displays ranging from screens as small as smartphones to screens as large as EVEREST.
Objective and Item Banking Computer Software and Its Use in Comprehensive Achievement Monitoring.
ERIC Educational Resources Information Center
Schriber, Peter E.; Gorth, William P.
The current emphasis on objectives and test item banks for constructing more effective tests is being augmented by increasingly sophisticated computer software. Items can be catalogued in numerous ways for retrieval. The items as well as instructional objectives can be stored and test forms can be selected and printed by the computer. It is also…
The astronaut and the banana peel: An EVA retriever scenario
NASA Technical Reports Server (NTRS)
Shapiro, Daniel G.
1989-01-01
To prepare for the problem of accidents in Space Station activities, the Extravehicular Activity Retriever (EVAR) robot is being constructed, whose purpose is to retrieve astronauts and tools that float free of the Space Station. Advanced Decision Systems is at the beginning of a project to develop research software capable of guiding EVAR through the retrieval process. This involves addressing problems in machine vision, dexterous manipulation, real time construction of programs via speech input, and reactive execution of plans despite the mishaps and unexpected conditions that arise in uncontrolled domains. The problem analysis phase of this work is presented. An EVAR scenario is used to elucidate major domain and technical problems. An overview of the technical approach to prototyping an EVAR system is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-08-01
An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, of a standard Computer-Aided Software Engineering (CASE) tool notation, and of a standard CASE tool repository has limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java, and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations, as well as assemble new tools on demand, from existing tools and architecture design repositories.
TCGA-assembler 2: software pipeline for retrieval and processing of TCGA/CPTAC data.
Wei, Lin; Jin, Zhilin; Yang, Shengjie; Xu, Yanxun; Zhu, Yitan; Ji, Yuan
2018-05-01
The Cancer Genome Atlas (TCGA) program has produced huge amounts of cancer genomics data, providing unprecedented opportunities for research. In 2014, we developed TCGA-Assembler, a software pipeline for retrieval and processing of public TCGA data. In 2016, TCGA data were transferred from the TCGA data portal to the Genomic Data Commons (GDC), which is supported by a different set of data storage and retrieval mechanisms. In addition, new proteomics data of TCGA samples have been generated by the Clinical Proteomic Tumor Analysis Consortium (CPTAC) program, which were not available for downloading through TCGA-Assembler. It is desirable to acquire and integrate data from both GDC and CPTAC. We developed TCGA-Assembler 2 (TA2) to automatically download and integrate data from GDC and CPTAC, and made substantial improvements to the functionality of TA2 to enhance user experience and software performance. TA2, together with its previous version, has helped more than 2000 researchers from 64 countries to access and utilize TCGA and CPTAC data in their research. The availability of TA2 will continue to allow existing and new users to conduct reproducible research based on TCGA and CPTAC data. http://www.compgenome.org/TCGA-Assembler/ or https://github.com/compgenome365/TCGA-Assembler-2. zhuyitan@gmail.com or koaeraser@gmail.com. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Olsen, Kevin S.; Toon, Geoffrey C.; Boone, Chris D.; Strong, Kimberly
2016-03-01
Motivated by the initial selection of a high-resolution solar occultation Fourier transform spectrometer (FTS) to fly to Mars on the ExoMars Trace Gas Orbiter, we have been developing algorithms for retrieving volume mixing ratio vertical profiles of trace gases, the primary component of which is a new algorithm and software for retrieving vertical profiles of temperature and pressure from the spectra. In contrast to Earth-observing instruments, which can rely on accurate meteorological models, a priori information, and spacecraft position, Mars retrievals require a method with minimal reliance on such data. The temperature and pressure retrieval algorithms developed for this work were evaluated using Earth-observing spectra from the Atmospheric Chemistry Experiment (ACE) FTS, a solar occultation instrument in orbit since 2003, and the basis for the instrument selected for a Mars mission. ACE-FTS makes multiple measurements during an occultation, separated in altitude by 1.5-5 km, and we analyse 10 CO2 vibration-rotation bands at each altitude, each with a different usable altitude range. We describe the algorithms and present results of their application and their comparison to the ACE-FTS data products. The Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) provides vertical profiles of temperature up to 40 km with high vertical resolution. Using six satellites and GPS radio occultation, COSMIC's data product has excellent temporal and spatial coverage, allowing us to find coincident measurements with ACE with very tight criteria: less than 1.5 h and 150 km. We present an intercomparison of temperature profiles retrieved from ACE-FTS using our algorithm, that of the ACE Science Team (v3.5), and from COSMIC. When our retrievals are compared to ACE-FTS v3.5, we find mean differences between -5 and +2 K and that our retrieved profiles have no seasonal or zonal biases but do have a warm bias in the stratosphere and a cold bias in the mesosphere. When compared to COSMIC, we do not observe a warm/cool bias and mean differences are between -4 and +1 K. COSMIC comparisons are restricted to below 40 km, where our retrievals have the best agreement with ACE-FTS v3.5. When comparing ACE-FTS v3.5 to COSMIC we observe a cold bias in COSMIC of 0.5 K, and mean differences are between -0.9 and +0.6 K.
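The abstract does not give the retrieval equations, but temperature/pressure retrievals of this kind typically tie the pressure profile to the retrieved temperature through hydrostatic balance and the ideal gas law. The standard relation (an assumption here, not quoted from the paper) is:

```latex
% Hydrostatic constraint commonly used to couple pressure to a retrieved
% temperature profile (standard relation, not quoted from the paper):
% \bar{m}: mean molecular mass; k_B: Boltzmann constant; g(z): gravity.
\frac{dp}{dz} = -\rho\,g(z), \qquad \rho = \frac{p\,\bar{m}}{k_B\,T(z)}
\;\;\Longrightarrow\;\;
p(z) = p(z_0)\,\exp\!\left[-\int_{z_0}^{z}\frac{\bar{m}\,g(z')}{k_B\,T(z')}\,dz'\right]
```

This is also why minimal reliance on a priori meteorology is feasible: given temperature from the CO2 bands and one pressure anchor point, the rest of the pressure profile follows from the integral above.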
Initial retrieval sequence and blending strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pemwell, D.L.; Grenard, C.E.
1996-09-01
This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk, and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.
Toward Reusable Graphics Components in Ada
1993-03-01
[Fragmentary scanned abstract; recoverable content: alternatives for obtaining well-engineered reusable software components were examined and analyzed, and Chapter 4 describes detailed design and implementation strategies for building a well-engineered reusable set of graphics components in Ada. Visible section headings include "2.2 The Object-Oriented Paradigm" and "2.2.1 The Need for Object-Oriented Techniques," which notes that the software crisis is well known among software engineers.]
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-04
... INTERNATIONAL TRADE COMMISSION [DN 2891] Certain Cameras and Mobile Devices, Related Software and... complaint entitled Certain Cameras and Mobile Devices, Related Software and Firmware, and Components Thereof... cameras and mobile devices, related software and firmware, and components thereof and products containing...
Word-to-picture recognition is a function of motor components mappings at the stage of retrieval.
Brouillet, Denis; Brouillet, Thibaut; Milhau, Audrey; Heurley, Loïc; Vagnot, Caroline; Brunel, Lionel
2016-10-01
Embodied approaches to cognition argue that retrieval involves the re-enactment of both sensory and motor components of the desired remembering. In this study, we investigated the effect of the motor action performed to produce the response in a recognition task when this action is compatible with the affordance of the objects to be recognised. In our experiment, participants were first asked to learn a list of words referring to graspable objects, and then told to make recognition judgements on pictures. The pictures represented objects whose graspable part pointed either to the same or to the opposite side as the "Yes" response key. Results show a robust effect of compatibility between object affordance and response hand. Moreover, this compatibility improves participants' discrimination ability, suggesting that motor components are a relevant cue for memory judgement at the stage of retrieval in a recognition task. More broadly, our data highlight that memory judgements are a function of motor-component mappings at the stage of retrieval. © 2015 International Union of Psychological Science.
Software packager user's guide
NASA Technical Reports Server (NTRS)
Callahan, John R.
1995-01-01
Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.
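A toy illustration of the spec-to-makefile idea: the miniature "specification" format below is invented for the example (the actual package specification language is far richer), but the generated output is an ordinary makefile of the kind the abstract describes:

```python
def emit_makefile(spec):
    """Turn a tiny, invented package description into Makefile text.
    This only illustrates the spec-to-makefile generation idea from the
    abstract; the real package specification language is richer."""
    objects = " ".join(m["object"] for m in spec["modules"])
    lines = [f"{spec['target']}: {objects}",
             f"\t$(CC) -o {spec['target']} {objects}"]
    for m in spec["modules"]:
        lines.append(f"{m['object']}: {m['source']}")
        lines.append(f"\t$(CC) -c {m['source']} -o {m['object']}")
    return "\n".join(lines) + "\n"

spec = {"target": "app",
        "modules": [{"source": "main.c", "object": "main.o"},
                    {"source": "util.c", "object": "util.o"}]}
print(emit_makefile(spec))
```

In the packager proper, this generation step is also where integration mechanisms such as RPC stub generators would be injected into the makefile when the component types demand them.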
Using an architectural approach to integrate heterogeneous, distributed software components
NASA Technical Reports Server (NTRS)
Callahan, John R.; Purtilo, James M.
1995-01-01
Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
Hutchison, N.E.; Harbaugh, A.W.; Holloway, R.A.; Merk, C.F.
1987-01-01
The Water Resources Division (WRD) of the U.S. Geological Survey is evaluating 32-bit microcomputers to determine how they can complement, and perhaps later replace, the existing network of minicomputers. The WRD is also designing a National Water Information System (NWIS) that will combine and integrate the existing National Water Data Storage and Retrieval System (WATSTORE), the National Water Data Exchange (NAWDEX), and components of several other existing systems. The procedures and testing done in a market evaluation of 32-bit microcomputers are documented here; the results of the testing are documented in the NWIS Project Office. The market evaluation was done to identify commercially available hardware and software that could be used for implementing early NWIS prototypes, in order to determine the applicability of 32-bit microcomputers for database and general computing applications. Three microcomputers will be used for these prototype studies. The results of the prototype studies will be used to compile requirements for a Request for Procurement (RFP) for hardware and software to meet the WRD's needs in the early 1990's. The identification of qualified vendors to provide the prototype hardware and software included reviewing industry literature and making telephone calls and personal visits to prospective vendors. Those vendors that appeared to meet the general requirements were required to run benchmark tests. (Author's abstract)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-08
... Software, and Components Thereof; Determination To Review Final Initial Determination AGENCY: U.S..., and the sale within the United States after importation of certain mobile devices, associated software... software, and components thereof containing same by reason of infringement of one or more of claims 1, 2, 5...
NASA Technical Reports Server (NTRS)
Chang, L. Aron
1995-01-01
This document describes progress on Millimeter-wave Imaging Radiometer (MIR) data processing and on the development of water vapor retrieval algorithms during the second six-month performance period. Aircraft MIR data from two 1995 field experiments were collected and processed with revised data processing software. Two revised versions of the water vapor retrieval algorithm were developed: one for executing the retrieval on a supercomputer platform, and one using pressure as the vertical coordinate. Two implementations incorporate products from other sensors into the water vapor retrieval system: one from the Special Sensor Microwave Imager (SSM/I), the other from the High-resolution Interferometer Sounder (HIS). Water vapor retrievals were performed for both airborne MIR data and spaceborne SSM/T-2 data during the TOGA/COARE, CAMEX-1, and CAMEX-2 field experiments. The climatology of water vapor during TOGA/COARE was examined by SSM/T-2 soundings and conventional rawinsondes.
NASA Astrophysics Data System (ADS)
Hsieh, Feng-Ju; Wang, Wei-Chih
2012-09-01
This paper discusses two improved methods for retrieving the effective refractive indices, impedances, and material properties, such as permittivity (ɛ) and permeability (μ), of metamaterials. The first method, modified from Kong's retrieval method, allows the effective constitutive parameters to be solved over all frequencies, including the anti-resonant band where the imaginary parts of ɛ or μ are negative. The second method is based on genetic algorithms and optimization of properly defined goal functions to retrieve the parameters of the Drude and Lorentz dispersion models. Equations for the effective refractive index and impedance at arbitrary reference planes are derived. Split-ring-resonator/rod based metamaterials operating at terahertz frequencies are designed and investigated with the proposed methods. The retrieved material properties and parameters are used to regenerate S-parameters, which are compared with simulation results generated by the CST Microwave Studio software.
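For context, the widely used S-parameter inversion that Kong-style retrieval methods refine takes the following form for a slab of thickness d and free-space wavenumber k (these are the textbook formulas, not necessarily the exact variants derived in the paper); the sign and branch ambiguities in z and n are precisely what improved methods address:

```latex
% Standard S-parameter homogenization for a metamaterial slab (textbook form):
z = \pm\sqrt{\frac{(1+S_{11})^2 - S_{21}^2}{(1-S_{11})^2 - S_{21}^2}}, \qquad
n = \frac{1}{kd}\cos^{-1}\!\left[\frac{1 - S_{11}^2 + S_{21}^2}{2\,S_{21}}\right],
\qquad \varepsilon = \frac{n}{z}, \quad \mu = n\,z
```

The inverse cosine is multivalued, so the physical branch must be fixed by requirements such as a non-negative imaginary part of n for a passive medium; near anti-resonances this branch selection is what makes naive retrieval fail.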
CLAES Product Improvement by Use of the GSFC Data Assimilation System (DAS)
NASA Technical Reports Server (NTRS)
Kumer, J. B.; Douglass, Anne (Technical Monitor)
2000-01-01
This report presents the Cryogenic Limb Array Etalon Spectrometer (CLAES) product improvement by use of the GSFC Data Assimilation System (DAS). The first task is to plug line of sight gradients derived from the CTM for 2/20/92 into the forward model of our retrieval software (RSW) in order to assess the impact on the retrieved quantities. The reporting period covers 12 May 2000 - 21 December 2000.
A collaborative computer auditing system under SOA-based conceptual model
NASA Astrophysics Data System (ADS)
Cong, Qiushi; Huang, Zuoming; Hu, Jibing
2013-03-01
Some of the current challenges of computer auditing are the obstacles to retrieving, converting, and translating data from different database schemas. During the last few years, many data exchange standards have been under continuous development, such as the Extensible Business Reporting Language (XBRL). These XML document standards can be used for data exchange among companies, financial institutions, and audit firms. However, for many companies it is still expensive and time-consuming to translate and provide XML messages with commercial application packages, because it is complicated and laborious to search for and transform data from thousands of tables in ERP databases. How to transfer transaction documents between audit firms and their client companies, in support of continuous or real-time auditing, is an important topic. In this paper, a collaborative computer auditing system under an SOA-based conceptual model is proposed. By utilizing the widely used XML document standards and the existing data transformation applications developed by different companies and software vendors, we can wrap these applications as commercial web services that are easily implemented in the forthcoming application environment: service-oriented architecture (SOA). Under SOA environments, the multiagency mechanism will help data assurance services over the Internet mature and gain popularity. Wrapping data transformation components for heterogeneous databases or platforms will create new component markets, composed of many software vendors and assurance service companies, to provide data assurance services for audit firms, regulators, or third parties.
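A minimal sketch of the wrapping idea: an existing data-transformation routine exposed as a JSON-over-HTTP service using only the Python standard library. The endpoint, payload fields, and the transformation itself are invented for illustration; a production SOA deployment would sit behind proper service descriptions and security:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def transform(record):
    """Stand-in for a vendor data-transformation component (e.g. ERP rows
    mapped toward an XBRL-like structure); purely illustrative."""
    return {"entity": record.get("company"), "amount": float(record.get("amount", 0))}

class TransformService(BaseHTTPRequestHandler):
    """Wraps the transformation component behind an HTTP POST endpoint."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        result = transform(json.loads(body))
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # An audit firm's client could now POST raw records and receive
    # normalized documents back, regardless of the underlying ERP schema.
    HTTPServer(("localhost", 8080), TransformService).serve_forever()
```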
Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A
2006-01-01
Background Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. Results The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. Conclusion The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development. PMID:16398930
Krassovski, M. B.; Riggs, J. S.; Hook, L. A.; ...
2015-11-09
Ecosystem-scale manipulation experiments represent large science investments that require well-designed data acquisition and management systems to provide reliable, accurate information to project participants and third-party users. The SPRUCE project (Spruce and Peatland Responses Under Climatic and Environmental Change, http://mnspruce.ornl.gov) is such an experiment, funded by the Department of Energy's (DOE) Office of Science, Terrestrial Ecosystem Science (TES) Program. The SPRUCE experimental mission is to assess ecosystem-level biological responses of vulnerable, high-carbon terrestrial ecosystems to a range of climate warming manipulations and an elevated CO2 atmosphere. SPRUCE provides a platform for testing mechanisms controlling the vulnerability of organisms, biogeochemical processes, and ecosystems to climatic change (e.g., thresholds for organism decline or mortality, limitations to regeneration, biogeochemical limitations to productivity, and the cycling and release of CO2 and CH4 to the atmosphere). The SPRUCE experiment will generate a wide range of continuous and discrete measurements. In order to successfully manage SPRUCE data collection, achieve SPRUCE science objectives, and support broader climate change research, the research staff has designed a flexible data system using proven network technologies and software components. The primary SPRUCE data system components are the following: (1) the data acquisition and control system, a set of hardware and software to retrieve biological and engineering data from sensors, collect sensor status information, and distribute feedback to control components; (2) the data collection system, a set of hardware and software to deliver data to a central depository for storage and further processing; and (3) the data management plan, a set of plans, policies, and practices to control consistency, protect data integrity, and deliver data. This publication presents our approach to meeting the challenges of designing and constructing an efficient data system for managing high-volume sources of in situ observations in a remote, harsh environmental location. The approach covers data flow starting from the sensors and ending at the archival/distribution points, discusses the types of hardware and software used, examines the design considerations used to choose them, and describes the data management practices chosen to control and enhance the value of the data.
Retrieval dynamics in self-terminated memory search.
Hussey, Erika K; Dougherty, Michael R; Harbison, J Isaiah; Davelaar, Eddy J
2014-02-01
Most free-recall experiments employ a paradigm in which participants are given a preset amount of time to retrieve items from a list. While much has been learned using this paradigm, it ignores an important component of many real-world retrieval tasks: the decision to terminate memory search. The present study examines the temporal characteristics underlying memory search by comparing within subjects a standard retrieval paradigm with a finite, preset amount of time (closed interval) to a design that allows participants to terminate memory search on their own (open interval). Calling on the results of several presented simulations, we anticipated that the threshold for number of retrieval failures varied as a function of the nature of the recall paradigm, such that open intervals should result in lower thresholds than closed intervals. Moreover, this effect was expected to manifest in interretrieval times (IRTs). Although retrieval-interval type did not significantly impact the number of items recalled or error rates, IRTs were sensitive to the manipulation. Specifically, the final IRTs in the closed-interval paradigm were longer than those of the open-interval paradigm. This pattern suggests that providing participants with a preset retrieval interval not only masks an important component of the retrieval process (the memory search termination decision), but also alters temporal retrieval dynamics. Task demands may compel people to strategically control aspects of their retrieval by implementing different stopping rules.
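A toy simulation of the stopping rule the abstract discusses: memory search terminates after a threshold number of consecutive retrieval failures (open interval) or when a preset time budget runs out (closed interval). The sampling-with-replacement memory model below is an assumed simplification for illustration, not the authors' simulation model:

```python
import random

def search(memory, threshold, max_time=None):
    """Sample items with replacement; stop after `threshold` consecutive
    retrieval failures (open interval) or when `max_time` samples elapse
    (closed interval). Resampling an already-recalled item counts as a
    retrieval failure -- a toy assumption."""
    recalled, failures, t = set(), 0, 0
    while failures < threshold and (max_time is None or t < max_time):
        item = random.randrange(memory)
        t += 1
        if item in recalled:
            failures += 1
        else:
            recalled.add(item)
            failures = 0
    return len(recalled)

# Open interval: participant stops on their own (low failure threshold).
open_runs = [search(memory=40, threshold=5) for _ in range(1000)]
# Closed interval: a preset time budget masks the termination decision.
closed_runs = [search(memory=40, threshold=15, max_time=120) for _ in range(1000)]
print(sum(open_runs) / 1000, sum(closed_runs) / 1000)
```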
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-03
..., Components Thereof, and Related Software; Institution of Investigation AGENCY: U.S. International Trade... navigation products, components thereof, and related software by reason of infringement of certain claims of... related software that infringe one or more of claims 1, 2, 11, and 16 of the '565 patent; claim 1 of the...
An intelligent, free-flying robot
NASA Technical Reports Server (NTRS)
Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.
1988-01-01
The ground-based demonstration of EVA Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out, (2) searches for and acquires the target, (3) plans and executes a rendezvous while continuously tracking the target, (4) avoids stationary and moving obstacles, (5) reaches for and grapples the target, (6) returns to transfer the object, and (7) returns to base.
An intelligent, free-flying robot
NASA Technical Reports Server (NTRS)
Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.; Phinney, Dale E.
1989-01-01
The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
Engineering intelligent tutoring systems
NASA Technical Reports Server (NTRS)
Warren, Kimberly C.; Goodman, Bradley A.
1993-01-01
We have defined an object-oriented software architecture for Intelligent Tutoring Systems (ITS's) to facilitate the rapid development, testing, and fielding of ITS's. This software architecture partitions the functionality of the ITS into a collection of software components with well-defined interfaces and execution concept. The architecture was designed to isolate advanced technology components, partition domain dependencies, take advantage of the increased availability of commercial software packages, and reduce the risks involved in acquiring ITS's. A key component of the architecture, the Executive, is a publish and subscribe message handling component that coordinates all communication between ITS components.
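A minimal sketch of the publish-and-subscribe pattern attributed to the Executive; the class and topic names are invented for illustration, not taken from the ITS architecture itself:

```python
from collections import defaultdict

class Executive:
    """Minimal publish/subscribe broker in the spirit of the ITS 'Executive'
    described above (an illustration, not the actual design): components never
    call each other directly, they only exchange messages by topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = Executive()
# Hypothetical ITS components wired together only through the Executive:
bus.subscribe("student.answer", lambda m: print("tutoring module sees:", m))
bus.subscribe("student.answer", lambda m: print("student model records:", m))
bus.publish("student.answer", {"item": 7, "correct": False})
```

Because components share only message topics, a domain module or commercial package can be replaced without touching the others, which is the isolation the abstract describes.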
NASA Astrophysics Data System (ADS)
Borchert, Otto Jerome
This paper describes a software tool to assist groups of people in the classification and identification of real-world objects, called the Classification, Identification, and Retrieval-based Collaborative Learning Environment (CIRCLE). A thorough literature review identified current pedagogical theories that were synthesized into a series of five tasks: gathering, elaboration, classification, identification, and reinforcement through game play. This approach is detailed as part of an included peer-reviewed paper. Motivation is increased through the use of formative and summative gamification: getting points for completing important portions of the tasks and playing retrieval-learning-based games, respectively, which is also covered in an included peer-reviewed conference proceedings paper. Collaboration is integrated into the experience through specific tasks and communication mediums. Implementation focused on a REST-based client-server architecture. The client is a series of web-based interfaces to complete each of the tasks, support formal classroom interaction through faculty accounts and student tracking, and provide a module for peers to help each other. The server, developed using an in-house JavaMOO platform, stores relevant project data and serves data through a series of messages implemented as a JavaScript Object Notation Application Programming Interface (JSON API). Through a series of two beta tests and two experiments, it was discovered that the second task, elaboration, requires considerable support. While students were able to properly suggest experiments and make observations, the subtask involving cleaning the data for use in CIRCLE required extra support. When supplied with more structured data, students were enthusiastic about the classification and identification tasks, showing marked improvement in usability scores and in open-ended survey responses. CIRCLE tracks a variety of educationally relevant variables, facilitating support for instructors and researchers. Future work will revolve around material development, software refinement, and theory building. Curricula, lesson plans, and instructional materials need to be created to seamlessly integrate CIRCLE into a variety of courses. Further refinement of the software will focus on improving the elaboration interface and developing further game templates to add to the motivation and retrieval-learning aspects of the software. Data gathered from CIRCLE experiments can be used to develop and strengthen theories on teaching and learning.
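To make the REST/JSON API design concrete, here is a hypothetical example of one CIRCLE-style message round-trip; the action names, fields, and responses are invented, since the dissertation abstract does not enumerate them:

```python
import json

# Hypothetical shapes for CIRCLE-style JSON API messages (field and action
# names are invented; the dissertation does not list them in this abstract).
classify_request = {
    "action": "classify",
    "session": "botany-101",
    "object_id": 42,
    "features": {"leaf_margin": "serrate", "venation": "pinnate"},
}

def handle(message_text):
    """Toy server-side dispatcher for such a JSON API."""
    msg = json.loads(message_text)
    if msg.get("action") == "classify":
        return json.dumps({"status": "ok",
                           "candidate_classes": ["Betula", "Ulmus"]})
    return json.dumps({"status": "error", "reason": "unknown action"})

print(handle(json.dumps(classify_request)))
```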
The Role of Metadata Standards in EOSDIS Search and Retrieval Applications
NASA Technical Reports Server (NTRS)
Pfister, Robin
1999-01-01
Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved, and distributed; without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end-user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.
NASA Astrophysics Data System (ADS)
Klein, Christopher R.; Kubánek, Petr; Butler, Nathaniel R.; Fox, Ori D.; Kutyrev, Alexander S.; Rapchun, David A.; Bloom, Joshua S.; Farah, Alejandro; Gehrels, Neil; Georgiev, Leonid; González, J. Jesús; Lee, William H.; Lotkin, Gennadiy N.; Moseley, Samuel H.; Prochaska, J. Xavier; Ramirez-Ruiz, Enrico; Richer, Michael G.; Robinson, Frederick D.; Román-Zúñiga, Carlos; Samuel, Mathew V.; Sparr, Leroy M.; Tucker, Corey; Watson, Alan M.
2012-07-01
The Reionization And Transients InfraRed (RATIR) camera has been built for rapid Gamma-Ray Burst (GRB) followup and will provide quasi-simultaneous imaging in ugriZY JH. The optical component uses two 2048 × 2048 pixel Finger Lakes Imaging ProLine detectors, one optimized for the SDSS u, g, and r bands and one optimized for the SDSS i band. The infrared portion incorporates two 2048 × 2048 pixel Teledyne HgCdTe HAWAII-2RG detectors, one with a 1.7-micron cutoff and one with a 2.5-micron cutoff. The infrared detectors are controlled by Teledyne's SIDECAR (System for Image Digitization Enhancement Control And Retrieval) ASICs (Application Specific Integrated Circuits). While other ground-based systems have used the SIDECAR before, this system also utilizes Teledyne's JADE2 (JWST ASIC Drive Electronics) interface card and IDE (Integrated Development Environment). Here we present a summary of the software developed to interface the RATIR detectors with Remote Telescope System, 2nd Version (RTS2) software. RTS2 is an integrated open source package for remote observatory control under the Linux operating system and will autonomously coordinate observatory dome, telescope pointing, detector, filter wheel, focus stage, and dewar vacuum compressor operations. Where necessary we have developed custom interfaces between RTS2 and RATIR hardware, most notably for cryogenic focus stage motor drivers and temperature controllers. All detector and hardware interface software developed for RATIR is freely available and open source as part of the RTS2 distribution.
Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing
NASA Technical Reports Server (NTRS)
Logan, Thomas L.; Bryant, Nevin A.
1987-01-01
The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.
Advanced laser stratospheric monitoring systems analyses
NASA Technical Reports Server (NTRS)
Larsen, J. C.
1984-01-01
This report describes the software support supplied by Systems and Applied Sciences Corporation for the study of Advanced Laser Stratospheric Monitoring Systems Analyses under contract No. NAS1-15806. It discusses improvements to the Langley spectroscopic database, development of LHS instrument control software, and data analysis and validation software. The effect of diurnal variations on the retrieved concentrations of NO, NO2, and ClO from space- and balloon-borne measurement platforms is discussed, along with the selection of optimum IF channels for sensing stratospheric species from space.
Retrieval analysis of ceramic-coated metal-on-polyethylene total hip replacements.
Khatkar, Harman; Hothi, Harry; de Villiers, Danielle; Lausmann, Christian; Kendoff, Daniel; Gehrke, Thorsten; Skinner, John; Hart, Alister
2017-06-01
Ceramic coatings have been used in metal-on-polyethylene (MOP) hips to reduce the risk of wear and also infection; the clinical efficacy of this remains unclear. This retrieval study sought to better understand the performance of coated bearing surfaces. Forty-three coated MOP components were analysed post-retrieval for evidence of coating loss and gross polyethylene wear. Coating loss was graded using a visual semi-quantitative protocol. Evidence of gross polyethylene wear was determined by radiographic analysis and visual inspection of the retrieved implants. All components with gross polyethylene wear (n = 10) were revised due to a malfunctioning acetabular component; 35 % (n = 15) of implants exhibited visible coating loss and the incidence of polyethylene wear in samples with coating loss was 54 %, significantly (p = 0.02) elevated compared to samples with intact coatings (14 %). In this study we found evidence of coating loss on metal femoral heads which was associated with increased wear of the corresponding polyethylene acetabular cups.
El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis
2018-05-11
Thanks to the current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding the emotional facial expressions observed during the retrieval of autobiographical memories. This review describes research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of the retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). The research also demonstrates variations in facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions the research within the broader context of work on the physiological characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies investigating facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurological and psychiatric populations may trigger few emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help in understanding the functioning and dysfunction of autobiographical memory.
Photochemical Phenomenology Model for the New Millennium
NASA Technical Reports Server (NTRS)
Bishop, James; Evans, J. Scott
2000-01-01
This project tackles the problem of conversion of validated a priori physics-based modeling capabilities, specifically those relevant to the analysis and interpretation of planetary atmosphere observations, to application-oriented software for use in science and science-support activities. The software package under development, named the Photochemical Phenomenology Modeling Tool (PPMT), has particular focus on the atmospheric remote sensing data to be acquired by the CIRS instrument during the CASSINI Jupiter flyby and orbital tour of the Saturnian system. Overall, the project has followed the development outline given in the original proposal, and the Year 1 design and architecture goals have been met. Specific accomplishments and the difficulties encountered are summarized in this report. Most of the effort has gone into complete definition of the PPMT interfaces within the context of today's IT arena: adoption and adherence to the CORBA Component Model (CCM) has yielded a solid architecture basis, and CORBA-related issues (services, specification options, development plans, etc.) have been largely resolved. Implementation goals have been redirected somewhat so as to be more relevant to the upcoming CASSINI flyby of Jupiter, with focus now being more on data analysis and remote sensing retrieval applications.
The ASTRI/CTA mini-array software system
NASA Astrophysics Data System (ADS)
Tosti, Gino; Schwarz, Joseph; Antonelli, Lucio Angelo; Trifoglio, Massimo; Catalano, Osvaldo; Maccarone, Maria Concetta; Leto, Giuseppe; Gianotti, Fulvio; Canestrari, Rodolfo; Giro, Enrico; Fiorini, Mauro; La Palombara, Nicola; Pareschi, Giovanni; Stringhetti, Luca; Vercellone, Stefano; Conforti, Vito; Tanci, Claudio; Bruno, Pietro; Grillo, Alessandro; Testa, Vincenzo; di Paola, Andrea; Gallozzi, Stefano
2014-07-01
ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. The main goals of the ASTRI project are the realization of an end-to-end prototype of a Small Size Telescope (SST) for the Cherenkov Telescope Array (CTA) in a dual-mirror configuration (SST-2M) and, subsequently, of a mini-array comprising seven SST-2M telescopes. The mini-array will be placed at the final CTA Southern Site, which will be part of the CTA seed array, around which the whole CTA observatory will be developed. The Mini-Array Software System (MASS) will provide a comprehensive set of tools to prepare an observing proposal, to perform the observations specified therein (monitoring and controlling all the hardware components of each telescope), to analyze the acquired data online, and to store/retrieve all the data products to/from the archive. Here we present the main features of the MASS and its first version, to be tested on the ASTRI SST-2M prototype that will be installed at the INAF observing station located at Serra La Nave on Mount Etna in Sicily.
Home media server content management
NASA Astrophysics Data System (ADS)
Tokmakoff, Andrew A.; van Vliet, Harry
2001-07-01
With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy to use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.
Monitoring and analysis of data from complex systems
NASA Technical Reports Server (NTRS)
Dollman, Thomas; Webster, Kenneth
1991-01-01
Some of the methods, systems, and prototypes that have been tested for monitoring and analyzing the data from several spacecraft and vehicles at the Marshall Space Flight Center are introduced. For the Huntsville Operations Support Center (HOSC) infrastructure, the Marshall Integrated Support System (MISS) provides a migration path to the state-of-the-art workstation environment. Its modular design makes it possible to implement the system in stages on multiple platforms without the need for all components to be in place at once. The MISS provides a flexible, user-friendly environment for monitoring and controlling orbital payloads. In addition, new capabilities and technology may be incorporated into MISS with greater ease. The use of information systems technology in advanced prototype phases, as an adjunct to mainline activities, is used to evaluate new computational techniques for monitoring and analysis of complex systems. Much of the software described (specifically, HSTORESIS (Hubble Space Telescope Operational Readiness Expert Safemode Investigation System), DRS (Device Reasoning Shell), DART (Design Alternatives Rational Tool), elements of the DRA (Document Retrieval Assistant), and software for the PPS (Peripheral Processing System) and the HSPP (High-Speed Peripheral Processor)) is available with supporting documentation, and may be applicable to other system monitoring and analysis applications.
A Prototype of an Intelligent System for Information Retrieval: IOTA.
ERIC Educational Resources Information Center
Chiaramella, Y.; Defude, B.
1987-01-01
Discusses expert systems and their value as components of information retrieval systems related to semantic inference, and describes IOTA, a model of an intelligent information retrieval system which emphasizes natural language query processing. Experimental results are discussed and current and future developments are highlighted. (Author/LRW)
Debonding of porous coating of a threaded acetabular component: retrieval analysis.
Łapaj, Łukasz; Markuszewski, Jacek; Rybak, Tomasz; Wierusz-Kozłowska, Małgorzata
2013-01-01
This report presents a case of debonding of plasma sprayed porous titanium coating from a threaded acetabular component which caused aseptic loosening of the implant. Weight bearing after delamination caused abrasive damage of the acetabular shell, and particles of the coating embedded in the acetabular liner. Microscopic examination of periprosthetic tissues showed presence of metal particles and macrophage infiltration. Despite microscopic examination of the retrieved component the cause of debonding remains unclear.
A navigation and control system for an autonomous rescue vehicle in the space station environment
NASA Technical Reports Server (NTRS)
Merkel, Lawrence
1991-01-01
A navigation and control system was designed and implemented for an orbital autonomous rescue vehicle envisioned to retrieve astronauts or equipment in the case that they become disengaged from the space station. The rescue vehicle, termed the Extra-Vehicular Activity Retriever (EVAR), has an on-board inertial measurement unit and GPS receivers for self state estimation, a laser range imager (LRI) and cameras for object state estimation, and a data link for reception of space station state information. The states of the retriever and objects (obstacles and the target object) are estimated by inertial state propagation, which is corrected via measurements from the GPS, the LRI system, or the camera system. Kalman filters are utilized to perform sensor fusion and estimate the state propagation errors. Control actuation is performed by a Manned Maneuvering Unit (MMU). Phase plane control techniques are used to control the rotational and translational state of the retriever. The translational controller provides station-keeping or motion along either Clohessy-Wiltshire trajectories or straight line trajectories in the LVLH frame of any sufficiently observed object or of the space station. The software was used to successfully control a prototype EVAR on an air bearing floor facility, and a simulated EVAR operating in a simulated orbital environment. The design of the navigation system and the control system are presented. Also discussed are the hardware systems and the overall software architecture.
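The sensor-fusion scheme sketched in this abstract (inertial propagation corrected by GPS, LRI, or camera fixes through Kalman filters) can be illustrated with a minimal linear Kalman filter. The state model, matrices, and noise values below are illustrative assumptions, not EVAR parameters.

```python
import numpy as np

# Minimal linear Kalman filter: propagate an inertial state estimate, then
# correct it with an external position fix (e.g., GPS or laser range imager).
# All matrices and noise levels are illustrative, not EVAR values.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity propagation model
Q = np.diag([1e-4, 1e-3])               # process noise (inertial drift)
H = np.array([[1.0, 0.0]])              # we measure position only
R = np.array([[0.25]])                  # measurement noise variance

x = np.array([0.0, 1.0])                # state: [position, velocity]
P = np.eye(2)                           # state covariance

def predict(x, P):
    """Propagate the state estimate one step (inertial state propagation)."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the propagated state with an external position measurement."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = predict(x, P)
x, P = update(x, P, z=np.array([0.12])) # hypothetical GPS position fix
print(x, np.diag(P))
```

The same predict/update cycle extends directly to higher-dimensional translational and rotational states, with one update per sensor.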
Encyclopedia of software components
NASA Technical Reports Server (NTRS)
Vanwarren, Lloyd (Inventor); Beckman, Brian C. (Inventor)
1991-01-01
Intelligent browsing through a collection of reusable software components is facilitated with a computer having a video monitor and a user input interface such as a keyboard or a mouse for transmitting user selections, by presenting a picture of encyclopedia volumes with respective visible labels referring to types of software, in accordance with a metaphor in which each volume includes a page having a list of general topics under the software type of the volume and pages having lists of software components for each one of the generic topics, altering the picture to open one of the volumes in response to an initial user selection specifying the one volume to display on the monitor a picture of the page thereof having the list of general topics and altering the picture to display the page thereof having a list of software components under one of the general topics in response to a next user selection specifying the one general topic, and then presenting a picture of a set of different informative plates depicting different types of information about one of the software components in response to a further user selection specifying the one component.
Encyclopedia of Software Components
NASA Technical Reports Server (NTRS)
Warren, Lloyd V. (Inventor); Beckman, Brian C. (Inventor)
1997-01-01
Intelligent browsing through a collection of reusable software components is facilitated with a computer having a video monitor and a user input interface such as a keyboard or a mouse for transmitting user selections, by presenting a picture of encyclopedia volumes with respective visible labels referring to types of software, in accordance with a metaphor in which each volume includes a page having a list of general topics under the software type of the volume and pages having lists of software components for each one of the generic topics, altering the picture to open one of the volumes in response to an initial user selection specifying the one volume to display on the monitor a picture of the page thereof having the list of general topics and altering the picture to display the page thereof having a list of software components under one of the general topics in response to a next user selection specifying the one general topic, and then presenting a picture of a set of different informative plates depicting different types of information about one of the software components in response to a further user selection specifying the one component.
VIDANA: Data Management System for Nano Satellites
NASA Astrophysics Data System (ADS)
Montenegro, Sergio; Walter, Thomas; Dilger, Erik
2013-08-01
A VIDANA data management system is a network of software and hardware components: a software network, a hardware network, and a smooth connection between the two. Our strategy is based on our innovative middleware, a reliable interconnection network (software and hardware) that can interconnect many unreliable redundant components such as sensors, actuators, communication devices, computers, storage elements, and software components. Component failures are detected, the affected device is disabled, and its function is taken over by a redundant component. The middleware does not connect only software; it connects devices and software together, so that software and hardware communicate with each other without having to distinguish which functions are implemented in software and which in hardware. Components may be turned on and off at any time, and the whole system autonomously adapts to its new configuration in order to continue fulfilling its task. In VIDANA we aim at dynamic adaptability (run time), static adaptability (tailoring), and unified hardware/software communication protocols. For many of these aspects we take a "learn from nature" approach, where astonishing reference implementations can be found.
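The failure-detection and takeover behavior described above can be suggested with a toy heartbeat-style sketch; this illustrates the concept only and is not the VIDANA middleware API.

```python
# Toy sketch of the failover idea: redundant components publish heartbeats on
# a "bus"; when the active one goes silent, a standby takes over. An
# illustration of the concept, not the VIDANA middleware.
import time

class Bus:
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_beat = {}          # component name -> last heartbeat time

    def heartbeat(self, name):
        self.last_beat[name] = time.monotonic()

    def alive(self, name):
        t = self.last_beat.get(name)
        return t is not None and time.monotonic() - t < self.timeout

    def select_active(self, candidates):
        """Return the first live component; later entries are cold standbys."""
        for name in candidates:
            if self.alive(name):
                return name
        return None

bus = Bus(timeout=1.0)
bus.heartbeat("gyro_A")
bus.heartbeat("gyro_B")
print(bus.select_active(["gyro_A", "gyro_B"]))   # -> gyro_A
# If gyro_A stops beating, select_active falls through to gyro_B.
```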
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-04
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-769] Certain Handheld Electronic Computing Devices, Related Software, and Components Thereof; Termination of the Investigation Based on... electronic computing devices, related software, and components thereof by reason of infringement of certain...
Code of Federal Regulations, 2014 CFR
2014-04-01
..., parts, firmware, software, and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software, and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...
Code of Federal Regulations, 2012 CFR
2012-04-01
..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...
Code of Federal Regulations, 2013 CFR
2013-04-01
..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...
Code of Federal Regulations, 2010 CFR
2010-04-01
..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...
Code of Federal Regulations, 2011 CFR
2011-04-01
..., parts, firmware, software and systems. 121.8 Section 121.8 Foreign Relations DEPARTMENT OF STATE...-items, components, accessories, attachments, parts, firmware, software and systems. (a) An end-item is.... Firmware includes but is not limited to circuits into which software has been programmed. (f) Software...
ERIC Educational Resources Information Center
Chowdhury, Gobinda G.
2003-01-01
Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…
Post-Flight Data Analysis Tool
NASA Technical Reports Server (NTRS)
George, Marina
2018-01-01
A software tool that facilitates the retrieval and analysis of post-flight data, allowing our team and other teams to analyze and evaluate post-flight data effectively and efficiently in order to certify commercial providers.
Software Reuse Within the Earth Science Community
NASA Technical Reports Server (NTRS)
Marshall, James J.; Olding, Steve; Wolfe, Robert E.; Delnore, Victor E.
2006-01-01
Scientific missions in the Earth sciences frequently require cost-effective, highly reliable, and easy-to-use software, which can be a challenge for software developers to provide. The NASA Earth Science Enterprise (ESE) spends a significant amount of resources developing software components and other software development artifacts that may also be of value if reused in other projects requiring similar functionality. In general, software reuse is often defined as utilizing existing software artifacts. Software reuse can improve productivity and quality while decreasing the cost of software development, as documented by case studies in the literature. Since large software systems are often the results of the integration of many smaller and sometimes reusable components, ensuring reusability of such software components becomes a necessity. Indeed, designing software components with reusability as a requirement can increase the software reuse potential within a community such as the NASA ESE community. The NASA Earth Science Data Systems (ESDS) Software Reuse Working Group is chartered to oversee the development of a process that will maximize the reuse potential of existing software components while recommending strategies for maximizing the reusability potential of yet-to-be-designed components. As part of this work, two surveys of the Earth science community were conducted. The first was performed in 2004 and distributed among government employees and contractors. A follow-up survey was performed in 2005 and distributed among a wider community, to include members of industry and academia. The surveys were designed to collect information on subjects such as the current software reuse practices of Earth science software developers, why they choose to reuse software, and what perceived barriers prevent them from reusing software. In this paper, we compare the results of these surveys, summarize the observed trends, and discuss the findings. The results are very similar, with the second, larger survey confirming the basic results of the first, smaller survey. The results suggest that reuse of ESE software can drive down the cost and time of system development, increase flexibility and responsiveness of these systems to new technologies and requirements, and increase effective and accountable community participation.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-810] Certain Navigation Products, Components Thereof, and Related Software; Determination Not To Review an Initial Determination Granting a... United States after importation of certain navigation products, components thereof, and related software...
Teaching Software Componentization: A Bar Chart Java Bean
ERIC Educational Resources Information Center
Mitri, Michel
2010-01-01
In the current object-oriented paradigm, software construction increasingly involves creating and utilizing "software components". These components can serve a variety of functions, from common algorithmic processes to database connectivity to graphical interfaces. The advantage of component architectures is that programmers can use pre-existing…
Reducing Risk in DoD Software-Intensive Systems Development
2016-03-01
...intensive systems development risk. This research addresses the use of the Technology Readiness Assessment (TRA) using the nine-level software Technology... The software TRLs are ineffective in reducing technical risk for the software component development. Without the software TRLs, there is no... effective method to perform a software TRA or reduce the technical development risk. The software component will behave as a new, untried technology in nearly...
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
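The combination of visual programming and component-based development described above can be suggested with a minimal dataflow sketch in Python; the Node class and the toy pipeline are illustrative inventions, not the paper's actual components.

```python
# Minimal sketch of the component/visual-programming idea: small nodes with
# output->input connections, composed into a pipeline at run time. Names and
# data are illustrative, not the paper's actual Python components.
class Node:
    def __init__(self, func):
        self.func, self.inputs = func, []

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self

    def run(self):
        return self.func(*[n.run() for n in self.inputs])

load   = Node(lambda: [[0.0, 0.0, 0.0], [1.5, 0.2, -0.3]])  # fake atom coords
center = Node(lambda xyz: [sum(c) / len(c) for c in zip(*xyz)]).connect(load)
report = Node(lambda c: f"centroid: {c}").connect(center)
print(report.run())
```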
Software Management Environment (SME): Components and algorithms
NASA Technical Reports Server (NTRS)
Hendrick, Robert; Kistler, David; Valett, Jon
1994-01-01
This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented, experience-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'
Acquisition Handbook - Update. Comprehensive Approach to Reusable Defensive Software (CARDS)
1994-03-25
designs, and implementation components (source code, test plans, procedures and results, and system/software documentation). This handbook provides a...activities where software components are acquired, evaluated, tested and sometimes modified. In addition to serving as a facility for the acquisition and...systems from such components [1]. Implementation components are at the lowest level and consist of: specifications; detailed designs; code, test
De Goede, Maartje; Postma, Albert
2008-04-01
Object-location memory is the only spatial task where female subjects have been shown to outperform males. This result is not consistent across all studies, and may be due to the combination of the multi-component structure of object location memory with the conditions under which different studies were done. Possible gender differences in object location memory and its component object identity memory were assessed in the present study. In order to disentangle these two components, an object location memory task (in which objects had to be relocated in daily environments) and a separate object identity recognition task were carried out. This study also focused on the conditions under which object locations were encoded and retrieved. Only half of the participants were aware of the fact that object locations had to be retrieved later on. Moreover, by applying the 'process dissociation procedure' to the object location memory assessments and the 'remember-know' paradigm to the object identity measure, the amount of explicit (conscious) and implicit (unconscious) retrieval was estimated for each component. In general, females performed better than males on the object location memory task. However, when controlled for object identity memory, females no longer outperformed males, even though they did not obtain a higher general object identity memory score, nor did they have more explicit or implicit recollection of the object identities. These complicated effects might stem from a difference between males and females in the way locations or associations between objects and locations are retrieved. In general, participants had more explicit (conscious) recollection than implicit (unconscious) recollection. No effect of encoding context was found, nor any interaction effect of gender, encoding and retrieval context.
Cohen, Julien G; Goo, Jin Mo; Yoo, Roh-Eul; Park, Chang Min; Lee, Chang Hyun; van Ginneken, Bram; Chung, Doo Hyun; Kim, Young Tae
2016-12-01
To evaluate the performance of software in segmenting ground-glass and solid components of subsolid nodules in pulmonary adenocarcinomas. Seventy-three pulmonary adenocarcinomas manifesting as subsolid nodules were included. Two radiologists measured the maximal axial diameter of the ground-glass components on lung windows and that of the solid components on lung and mediastinal windows. Nodules were segmented using software by applying five (-850 HU to -650 HU) and nine (-500 HU to -130 HU) attenuation thresholds. We compared the manual and software measurements of ground-glass and solid components with pathology measurements of tumour and invasive components. Segmentation of ground-glass components at a threshold of -750 HU yielded mean differences of +0.06 mm (p = 0.83, 95 % limits of agreement, -4.51 to 4.67) and -2.32 mm (p < 0.001, -8.27 to 3.63) when compared with pathology and manual measurements, respectively. For solid components, mean differences between the software (at -350 HU) and pathology measurements and between the manual (lung and mediastinal windows) and pathology measurements were -0.12 mm (p = 0.74, -5.73 to 5.55), 0.15 mm (p = 0.73, -6.92 to 7.22), and -1.14 mm (p < 0.001, -7.93 to 5.64), respectively. Software segmentation of ground-glass and solid components in subsolid nodules showed no significant difference with pathology. • Software can effectively segment ground-glass and solid components in subsolid nodules. • Software measurements show no significant difference with pathology measurements. • Manual measurements are more accurate on lung windows than on mediastinal windows.
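The attenuation-threshold idea underlying the segmentation can be sketched with simple HU cutoffs taken from the abstract (-750 HU for ground-glass, -350 HU for solid); the image array and the diameter proxy below are synthetic stand-ins, not the commercial software's method.

```python
import numpy as np

# Sketch of the attenuation-threshold idea: voxels above a HU cutoff are
# labeled "solid", voxels in a lower band "ground-glass". The -750/-350 HU
# cutoffs follow the abstract; the array here is synthetic, not CT data.
hu = np.random.default_rng(0).uniform(-1000, 100, size=(64, 64))

ground_glass = (hu >= -750) & (hu < -350)    # ground-glass component mask
solid = hu >= -350                           # solid component mask

def max_axial_diameter_mm(mask, pixel_mm=0.7):
    """Crude proxy: longest extent of the mask along either axis, in mm."""
    if not mask.any():
        return 0.0
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return max(rows.ptp(), cols.ptp()) * pixel_mm

print(max_axial_diameter_mm(ground_glass), max_axial_diameter_mm(solid))
```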
Code of Federal Regulations, 2013 CFR
2013-07-01
... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...
NASA Astrophysics Data System (ADS)
Meng, X.; Liu, Y.; Diner, D. J.; Garay, M. J.
2016-12-01
Ambient fine particulate matter (PM2.5) has been positively associated with increased mortality and morbidity worldwide. Recent studies highlight the characteristics and differential toxicity of PM2.5 chemical components, which are important for identifying sources, developing targeted particulate matter (PM) control strategies, and protecting public health. Modelling with satellite-retrieved data has proved to be a cost-effective way to estimate ground-level PM2.5; however, few studies have predicted PM2.5 chemical components with this method. In this study, the experimental MISR 4.4 km aerosol retrievals were used to predict ground-level particulate sulfate, nitrate, organic carbon (OC) and elemental carbon (EC) concentrations in 16 counties of southern California. The PM2.5 chemical component concentrations were obtained from the National Chemical Speciation Network (CSN) and the Interagency Monitoring of Protected Visual Environments (IMPROVE) network. A generalized additive model (GAM) was developed based on 16 years of data (2000-2015) by combining the MISR aerosol retrievals, meteorological variables and geographical indicators. Model performance was assessed by the model-fitted R2, root-mean-square error (RMSE) and 10-fold cross-validation. Spatial patterns of sulfate, nitrate, OC and EC concentrations were also examined with 2-D prediction surfaces. This is the first attempt to develop high-resolution spatial models to predict PM2.5 chemical component concentrations with MISR-retrieved aerosol properties, which will provide valuable population exposure estimates for future studies on the characteristics and differential toxicity of PM2.5 speciation.
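A GAM of the kind described above can be sketched with the pyGAM package; the choice of pyGAM is an assumption (the study does not name its software), and the predictors and data below are synthetic stand-ins for the MISR AOD, meteorological, and geographical inputs.

```python
# Hedged sketch of a GAM relating satellite AOD and covariates to a PM2.5
# component, using pyGAM as an assumed implementation. Data are synthetic.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
n = 500
aod  = rng.uniform(0.0, 0.8, n)            # stand-in for MISR 4.4 km AOD
temp = rng.uniform(5.0, 35.0, n)           # surface temperature (deg C)
rh   = rng.uniform(10.0, 90.0, n)          # relative humidity (%)
sulfate = 2.0 + 6.0 * aod + 0.05 * temp + rng.normal(0, 0.5, n)

X = np.column_stack([aod, temp, rh])
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, sulfate)  # one smooth per predictor
gam.summary()                              # fit statistics

pred = gam.predict(X)
rmse = np.sqrt(np.mean((pred - sulfate) ** 2))
print(f"in-sample RMSE: {rmse:.2f} ug/m3")
```

In the study's setup, the in-sample fit would be complemented by 10-fold cross-validation to guard against overfitting the smooth terms.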
The Oklahoma Geographic Information Retrieval System
NASA Technical Reports Server (NTRS)
Blanchard, W. A.
1982-01-01
The Oklahoma Geographic Information Retrieval System (OGIRS) is a highly interactive data entry, storage, manipulation, and display software system for use with geographically referenced data. Although originally developed for a project concerned with coal strip mine reclamation, OGIRS is capable of handling any geographically referenced data for a variety of natural resource management applications. A special effort has been made to integrate remotely sensed data into the information system. The timeliness and synoptic coverage of satellite data are particularly useful attributes for inclusion into the geographic information system.
NASA Technical Reports Server (NTRS)
Langland, R. A.; Stephens, P. L.; Pihos, G. G.
1980-01-01
The techniques used for ingesting SEASAT-A SASS wind retrievals into the existing operational software are described. The intent is to assess the impact of SEASAT data on the marine wind fields produced by the global marine wind/sea level pressure analysis. This analysis is performed on a 2.5-deg latitude/longitude global grid and executes at three-hourly time increments. Wind fields with and without SASS winds are being compared. The problems of data volume reduction and aliased wind retrieval ambiguity are treated.
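One common way to resolve the aliased wind-direction ambiguity mentioned above is to keep, among the candidate vectors, the one closest to a background analysis wind; this nearest-alias selection is an assumption here, not necessarily the 1980 procedure.

```python
# Sketch of nearest-alias ambiguity removal for scatterometer winds: among
# the aliased candidate (u, v) vectors, keep the one closest to the
# background analysis wind. Values are illustrative.
import numpy as np

def select_alias(candidates, background):
    """candidates: (k, 2) array of (u, v) aliases; background: (2,) vector."""
    d = np.linalg.norm(np.asarray(candidates) - np.asarray(background), axis=1)
    return candidates[int(np.argmin(d))]

aliases = np.array([(8.0, 2.0), (-7.5, -2.5), (2.0, -8.0), (-1.5, 7.8)])
print(select_alias(aliases, background=(6.0, 3.0)))   # -> [8. 2.]
```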
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-16
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-761] Certain Set-Top Boxes, and Hardware and Software Components Thereof; Determination Not To Review Initial Determination Terminating... certain set-top boxes, and hardware and software components thereof by reason of infringement of various...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-783] Certain GPS Navigation Products, Components Thereof, and Related Software; Termination of Investigation on the Basis of Settlement AGENCY: U.S... GPS navigation products, components thereof, and related software, by reason of the infringement of...
NLM microcomputer-based tutorials (for microcomputers). Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkins, M.
1990-04-01
The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN-a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases...Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K RAM memory, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or visual graphics displays.
Lossef, S V; Schwartz, L H
1990-09-01
A computerized reference system for radiology journal articles was developed by using an IBM-compatible personal computer with a hand-held optical scanner and optical character recognition software. This allows direct entry of scanned text from printed material into word processing or data-base files. Additionally, line diagrams and photographs of radiographs can be incorporated into these files. A text search and retrieval software program enables rapid searching for keywords in scanned documents. The hand scanner and software programs are commercially available, relatively inexpensive, and easily used. This permits construction of a personalized radiology literature file of readily accessible text and images requiring minimal typing or keystroke entry.
FreeTure: A Free software to capTure meteors for FRIPON
NASA Astrophysics Data System (ADS)
Audureau, Yoan; Marmo, Chiara; Bouley, Sylvain; Kwon, Min-Kyung; Colas, François; Vaubaillon, Jérémie; Birlan, Mirel; Zanda, Brigitte; Vernazza, Pierre; Caminade, Stephane; Gattecceca, Jérôme
2014-02-01
The Fireball Recovery and Interplanetary Observation Network (FRIPON) is a French project started in 2014 which will monitor the sky, using 100 all-sky cameras to detect meteors and to retrieve related meteorites on the ground. Several meteor detection software packages exist; some are proprietary, and some are hardware dependent. Here we present the open-source meteor detection software to be installed on the FRIPON network's stations. The software runs on Linux with gigabit Ethernet cameras, and we plan to make it cross-platform. This paper focuses on the meteor detection method used in the pipeline and on its present capabilities.
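A basic frame-differencing transient detector of the kind such pipelines build on can be sketched as follows; FreeTure's actual method is more elaborate, and the OpenCV-based code below is only an illustration with a synthetic streak.

```python
# Hedged sketch of frame-differencing transient detection (not FreeTure's
# actual pipeline): bright moving blobs between consecutive gray frames.
import cv2
import numpy as np

def detect_transients(prev_frame, frame, thresh=25, min_area=5):
    """Return bounding boxes of bright moving blobs between two gray frames."""
    diff = cv2.absdiff(frame, prev_frame)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

prev = np.zeros((480, 640), np.uint8)
cur = prev.copy()
cv2.line(cur, (100, 100), (160, 130), 255, 2)   # synthetic meteor streak
print(detect_transients(prev, cur))
```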
RSAT 2018: regulatory sequence analysis tools 20th anniversary.
Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane
2018-05-02
RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.
Glandorf, J.; Thiele, H.; Macht, M.; Vorm, O.; Podtelejnikov, A.
2007-01-01
In the course of a full-scale proteomics experiment, handling the data and retrieving the relevant information from the results is a major challenge due to the massive amount of generated data (gel images, chromatograms, and spectra) as well as the associated result information (sequences, literature, etc.). To obtain meaningful information from these data, one needs an easy way to filter the results, for example on GO terms or on structural features such as transmembrane domains, involvement in certain pathways, etc. In this presentation we will show how combining a software package with workflow-based result organization (Bruker ProteinScape) and protein-centered data-mining software (Proxeon ProteinCenter) can assist in the comparison of results from large projects, such as the comparison of cross-platform results from 2D PAGE/MS with shotgun LC-ESI-MS/MS. We will present differences between the technologies and show how these differences can be easily identified and how they allow us to draw conclusions about the involved technologies.
Pybus -- A Python Software Bus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavrijsen, Wim T.L.P.
2004-10-14
A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design of software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer of component support. This functionality can be presented to the developer in the form of a module, making it very easy to use. This paper describes a Python module, PyBus, with which the concept of a "software bus" can be realized in Python. It demonstrates, within the context of the ATLAS software framework Athena, how PyBus can be used for the installation and (run-time) configuration of software, not necessarily Python modules, from a Python application in a way that is transparent to the end-user.
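The software-bus concept can be suggested with a minimal sketch in Python (this is not the actual PyBus API): components are ordinary modules that a bus object loads, hot-replaces, and channels calls to.

```python
# Minimal sketch of a "software bus" in Python, not the actual PyBus API:
# components are ordinary modules that the bus loads, configures, and can
# replace at run time.
import importlib

class SoftwareBus:
    def __init__(self):
        self.components = {}                  # name -> loaded module

    def load(self, name, module_path):
        """Load (or hot-replace) a component by importing its module."""
        mod = importlib.import_module(module_path)
        if name in self.components:           # hot replacement: re-read source
            mod = importlib.reload(mod)
        self.components[name] = mod
        return mod

    def unload(self, name):
        self.components.pop(name, None)

    def call(self, name, func, *args, **kwargs):
        """Channel a call to a component through the bus."""
        return getattr(self.components[name], func)(*args, **kwargs)

bus = SoftwareBus()
bus.load("mathsvc", "math")                   # stdlib module as a stand-in
print(bus.call("mathsvc", "sqrt", 2.0))       # -> 1.4142...
```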
A conceptual model for megaprogramming
NASA Technical Reports Server (NTRS)
Tracz, Will
1990-01-01
Megaprogramming is component-based software engineering and life-cycle management. Megaprogramming and its relationship to other research initiatives (common prototyping system/common prototyping language, domain specific software architectures, and software understanding) are analyzed. The desirable attributes of megaprogramming software components are identified and a software development model and resulting prototype megaprogramming system (library interconnection language extended by annotated Ada) are described.
Microwave Soil Moisture Retrieval Under Trees
NASA Technical Reports Server (NTRS)
O'Neill, P.; Lang, R.; Kurum, M.; Joseph, A.; Jackson, T.; Cosh, M.
2008-01-01
Soil moisture is recognized as an important component of the water, energy, and carbon cycles at the interface between the Earth's surface and atmosphere. Current baseline soil moisture retrieval algorithms for microwave space missions have been developed and validated only over grasslands, agricultural crops, and generally light to moderate vegetation. Tree areas have commonly been excluded from operational soil moisture retrieval plans due to the large expected impact of trees on masking the microwave response to the underlying soil moisture. Our understanding of the microwave properties of trees of various sizes and their effect on soil moisture retrieval algorithms at L band is presently limited, although research efforts are ongoing in Europe, the United States, and elsewhere to remedy this situation. As part of this research, a coordinated sequence of field measurements involving the ComRAD (for Combined Radar/Radiometer) active/passive microwave truck instrument system has been undertaken. Jointly developed and operated by NASA Goddard Space Flight Center and George Washington University, ComRAD consists of dual-polarized 1.4 GHz total-power radiometers (LH, LV) and a quad-polarized 1.25 GHz L band radar sharing a single parabolic dish antenna with a novel broadband stacked patch dual-polarized feed, a quad-polarized 4.75 GHz C band radar, and a single channel 10 GHz XHH radar. The instruments are deployed on a mobile truck with a 19-m hydraulic boom and share common control software, real-time calibrated signals, and the capability for automated data collection for unattended operation. Most microwave soil moisture retrieval algorithms developed for use at L band frequencies are based on the tau-omega model, a simplified zero-order radiative transfer approach where scattering is largely ignored and vegetation canopies are generally treated as a bulk attenuating layer. In this approach, vegetation effects are parameterized by tau and omega, the microwave vegetation opacity and single scattering albedo. One goal of our current research is to determine whether the tau-omega model can work for tree canopies given the increased scatter from trees compared to grasses and crops and, if so, what effective values of tau and omega apply for trees.
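The tau-omega model named above has a standard zeroth-order form (e.g., Mo et al. 1982); the sketch below implements that form with illustrative inputs, while operational algorithms add dielectric and roughness models to link soil emissivity to moisture.

```python
import numpy as np

# Hedged sketch of the tau-omega (zeroth-order radiative transfer) model.
# Inputs are illustrative; operational retrievals invert this relation for
# soil emissivity (hence moisture) given tau and omega for the canopy.
def tau_omega_tb(e_soil, T_soil, T_canopy, tau, omega, theta_deg):
    """Brightness temperature (K) above a vegetation canopy, one polarization."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
    soil_term = e_soil * T_soil * gamma
    veg_term = T_canopy * (1.0 - omega) * (1.0 - gamma) * (
        1.0 + (1.0 - e_soil) * gamma)                     # upward + reflected
    return soil_term + veg_term

# Wetter soil -> lower emissivity -> lower TB; trees (larger tau) mute the effect.
for tau in (0.1, 0.6, 1.2):                               # grass .. trees
    print(tau, tau_omega_tb(e_soil=0.75, T_soil=295.0, T_canopy=293.0,
                            tau=tau, omega=0.05, theta_deg=40.0))
```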
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ham, Timothy
2008-12-01
The JBEI Registry is software for storing and managing a database of biological parts. It is intended to be used as a web service accessed via a web browser, and it is also capable of running as a desktop program for a single user. The registry software stores, indexes, and categorizes biological parts, and allows users to enter, search, retrieve, and construct biological constructs in silico. It is also able to communicate with other Registries for data sharing and exchange.
Video-Based Systems Research, Analysis, and Applications Opportunities
1981-07-30
as a COM software consultant, marketing its own COMTREVE software; DatagraphiX Inc., San Diego, offers several versions of its COM recorders. AutoCOM... Metropolitan Microforms Ltd. in New York markets its MCAR system, which satisfies the need for a one- or multiple-user information retrieval and input... targeted to the market for high-speed data communications within a single facility, such as a university campus. The first commercial installations were set...
Agile Software Development in Defense Acquisition: A Mission Assurance Perspective
2012-03-23
based information retrieval system, we might say that this program works like a hive of bees, going out for pollen and bringing it back to the hive... developers. Six Sigma is registered in the U.S. Patent and Trademark Office by Motorola. Major Areas in a Typical Software... requirements - Capturing and evaluating quality metrics, identifying common problem areas. Despite its positive impact on quality, pair programming...
Users guide for the Water Resources Division bibliographic retrieval and report generation system
Tamberg, Nora
1983-01-01
The WRDBIB Retrieval and Report-generation system has been developed by applying Multitrieve (CSD 1980, Reston) software to bibliographic data files. The WRDBIB data base includes some 9,000 records containing bibliographic citations and descriptors of WRD reports released for publication during 1968-1982. The data base is resident in the Reston Multics computer and may be accessed by registered Multics users in the field. The WRDBIB Users Guide provides detailed procedures on how to run retrieval programs using WRDBIB library files, and how to prepare custom bibliographic reports and author indexes. Users may search the WRDBIB data base on the following variable fields as described in the Data Dictionary: Authors, organizational source, title, citation, publication year, descriptors, and the WRSIC (accession) number. The Users Guide provides ample examples of program runs illustrating various retrieval and report generation aspects. Appendices include Multics access and file manipulation procedures; a 'Glossary of Selected Terms'; and a complete 'Retrieval Session' with step-by-step outlines. (USGS)
NASA Astrophysics Data System (ADS)
Rodriguez, E.; Kolmonen, P.; Virtanen, T. H.; Sogacheva, L.; Sundstrom, A.-M.; de Leeuw, G.
2015-08-01
The Advanced Along-Track Scanning Radiometer (AATSR) on board the ENVISAT satellite is used to study aerosol properties. The retrieval of aerosol properties from satellite data is based on the optimized fit of simulated and measured reflectances at the top of the atmosphere (TOA). The simulations are made using a radiative transfer model with a variety of representative aerosol properties. The retrieval process utilizes a combination of four aerosol components, each of which is defined by its (lognormal) size distribution and a complex refractive index: a weakly and a strongly absorbing fine-mode component, coarse-mode sea salt aerosol and coarse-mode desert dust aerosol. These components are externally mixed to provide the aerosol model which in turn is used to calculate the aerosol optical depth (AOD). In the AATSR aerosol retrieval algorithm, the mixing of these components is decided by minimizing the error function given by the sum of the differences between measured and calculated path radiances at 3-4 wavelengths, where the path radiances are varied by varying the aerosol component mixing ratios. The continuous variation of the fine-mode components allows for the continuous variation of the fine-mode aerosol absorption. Assuming that the correct aerosol model (i.e. the correct mixing fractions of the four components) is selected during the retrieval process, other aerosol properties can also be computed, such as the single scattering albedo (SSA). Implications of this assumption regarding the ratio of the weakly/strongly absorbing fine-mode fraction are investigated in this paper by evaluating the validity of the SSA thus obtained. The SSA is indirectly estimated for aerosol plumes with moderate-to-high AOD resulting from wildfires in Russia in the summer of 2010. Together with the AOD, the SSA provides the aerosol absorbing optical depth (AAOD). The results are compared with AERONET data, i.e. AOD level 2.0 and SSA and AAOD inversion products. The RMSE (root mean square error) is 0.03 for SSA and 0.02 for AAOD values lower than 0.05. The SSA is further evaluated by comparison with the SSA retrieved from the Ozone Monitoring Instrument (OMI). The SSA retrieved from the two instruments shows similar features, with generally lower AATSR-estimated SSA values over areas affected by wildfires.
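The retrieval step described above, varying component mixing ratios to minimize the measured-versus-modeled reflectance misfit, can be sketched as a small constrained least-squares problem; the 4x4 component-reflectance matrix below is synthetic, standing in for radiative-transfer lookup tables.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of mixing-ratio retrieval: vary the fractions of four aerosol
# components to minimize the misfit between measured and modeled TOA
# reflectances. The matrix is synthetic, not radiative-transfer output.
R_comp = np.array([[0.10, 0.14, 0.08, 0.05],   # rows: wavelengths
                   [0.08, 0.11, 0.07, 0.05],   # cols: weak fine, strong fine,
                   [0.05, 0.07, 0.06, 0.04],   #       sea salt, desert dust
                   [0.03, 0.04, 0.05, 0.04]])
R_meas = R_comp @ np.array([0.5, 0.2, 0.2, 0.1])   # synthetic "measurement"

def misfit(w):
    w = w / w.sum()                            # mixing fractions sum to 1
    return np.sum((R_comp @ w - R_meas) ** 2)

res = minimize(misfit, x0=np.full(4, 0.25),
               bounds=[(1e-6, 1.0)] * 4)       # keep fractions positive
print(res.x / res.x.sum())                     # recovered mixing fractions
```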
NASA Astrophysics Data System (ADS)
Córdoba-Jabonero, Carmen; Sicard, Michaël; Ansmann, Albert; Águila, Ana del; Baars, Holger
2018-04-01
POLIPHON (POlarization-LIdar PHOtometer Networking) retrieval consists of the vertical separation of two or three particle components in aerosol mixtures, highlighting their relative contributions in terms of optical properties and mass concentrations. The method is based on the specific particle linear depolarization ratios given for different types of aerosols, and is applied to the new polarized Micro-Pulse Lidar (P-MPL). Case studies of specific climate-relevant aerosols (dust particles, fire smoke, and pollen aerosols, including a clean case as reference) observed over Barcelona (Spain) are presented in order to evaluate the potential of P-MPL measurements, in combination with POLIPHON, for retrieving the vertical separation of the particle components forming aerosol mixtures and their properties.
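The one-step POLIPHON separation can be sketched following the commonly cited formula of Tesche et al. (2009); the pure-component depolarization ratios used below are typical literature values adopted here as assumptions.

```python
import numpy as np

# Hedged sketch of the one-step POLIPHON separation (after Tesche et al.
# 2009): split the particle backscatter profile into dust and non-dust parts
# using the measured particle depolarization ratio and two pure-component
# ratios. delta_dust=0.31 and delta_nondust=0.05 are assumed typical values.
def poliphon_split(beta, delta, delta_dust=0.31, delta_nondust=0.05):
    """Return (beta_dust, beta_nondust) from total backscatter and depol."""
    frac = ((delta - delta_nondust) * (1.0 + delta_dust) /
            ((delta_dust - delta_nondust) * (1.0 + delta)))
    beta_dust = np.clip(frac, 0.0, 1.0) * beta
    return beta_dust, beta - beta_dust

beta = np.array([2.0, 1.5, 0.8])      # particle backscatter, Mm^-1 sr^-1
delta = np.array([0.25, 0.15, 0.06])  # measured particle depolarization
print(poliphon_split(beta, delta))
```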
Techniques to Access Databases and Integrate Data for Hydrologic Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.
2009-06-17
This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and retrieve the required data, and their ability to integrate the data into environmental models using the FRAMES environment.
The neural basis of involuntary episodic memories.
Hall, Shana A; Rubin, David C; Miles, Amanda; Davis, Simon W; Wing, Erik A; Cabeza, Roberto; Berntsen, Dorthe
2014-10-01
Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that, because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented, panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a postscan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial-temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that, although there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems.
The Neural Basis of Involuntary Episodic Memories
Hall, Shana A.; Rubin, David C.; Miles, Amanda; Davis, Simon W.; Wing, Erik A.; Cabeza, Roberto; Berntsen, Dorthe
2014-01-01
Voluntary episodic memories require an intentional memory search, whereas involuntary episodic memories come to mind spontaneously without conscious effort. Cognitive neuroscience has largely focused on voluntary memory, leaving the neural mechanisms of involuntary memory largely unknown. We hypothesized that because the main difference between voluntary and involuntary memory is the controlled retrieval processes required by the former, there would be greater frontal activity for voluntary than involuntary memories. Conversely, we predicted that other components of the episodic retrieval network would be similarly engaged in the two types of memory. During encoding, all participants heard sounds, half paired with pictures of complex scenes and half presented alone. During retrieval, paired and unpaired sounds were presented panned to the left or to the right. Participants in the involuntary group were instructed to indicate the spatial location of the sound, whereas participants in the voluntary group were asked to additionally recall the pictures that had been paired with the sounds. All participants reported the incidence of their memories in a post-scan session. Consistent with our predictions, voluntary memories elicited greater activity in dorsal frontal regions than involuntary memories, whereas other components of the retrieval network, including medial temporal, ventral occipitotemporal, and ventral parietal regions were similarly engaged by both types of memories. These results clarify the distinct role of dorsal frontal and ventral occipitotemporal regions in predicting strategic retrieval and recalled information, respectively, and suggest that while there are neural differences in retrieval, involuntary memories share neural components with established voluntary memory systems. PMID:24702453
A Physical Validation Program for the GPM Mission
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2003-01-01
The GPM mission is currently planned for start in the late 2007 - early 2008 time frame. Its main scientific goal is to help answer pressing scientific problems arising within the context of global and regional water cycling. These problems cut across a hierarchy of scales and include climate-water cycle interactions, techniques for improving weather and climate predictions, and better methods for combining observed precipitation with hydrometeorological prediction models for applications to hazardous flood-producing storms, seasonal flood and drought conditions, and fresh water resource assessments. The GPM mission will expand the scope of precipitation measurement through the use of a constellation of some 9 satellites, one of which will be an advanced TRMM-like core satellite carrying a dual-frequency Ku-Ka band precipitation radar and an advanced, multifrequency passive microwave radiometer with vertical-horizontal polarization discrimination. The other constellation members will include new dedicated satellites and co-existing operational/research satellites carrying similar (but not identical) passive microwave radiometers. The goal of the constellation is to achieve approximately 3-hour sampling at any spot on the globe -- continuously. The constellation's orbit architecture will consist of a mix of sun-synchronous and non-sun-synchronous satellites with the core satellite providing measurements of cloud-precipitation microphysical processes plus calibration-quality rainrate retrievals to be used with the other retrieval information to ensure bias-free constellation coverage. A major requirement before the retrieved rainfall information generated by the GPM mission can be used effectively by prognostic models to improve weather forecasts, hydrometeorological forecasts, and climate model reanalysis simulations is a capability to quantify the error characteristics of the retrievals. A solution to this problem has been held up in past precipitation missions by the lack of suitable error modeling systems incorporated into the validation programs and data distribution systems. An overview of how NASA intends to overcome this problem for the GPM mission using a physically-based error modeling approach within a multi-faceted validation program is described. The solution is to first identify specific user requirements and then determine the most stringent of these requirements that embodies all essential error characterization information needed by the entire user community. In the context of NASA's scientific agenda for the GPM mission, the most stringent user requirement is found within the data assimilation community. The fundamental theory of data assimilation vis-a-vis ingesting satellite precipitation information into the pre-forecast initializations is based on quantifying the conditional bias and precision errors of individual rain retrievals, and the space-time structure of the precision error (i.e., the spatial-temporal error covariance). By generating the hardware and software capability to produce this information in a near real-time fashion, and by coupling the derived quantitative error properties to the actual retrieved rainrates, all key validation users can be satisfied. The talk will describe the essential components of the hardware and software systems needed to generate such near real-time error properties, as well as the various paradigm shifts needed within the validation community to produce a validation program relevant to the precipitation user community.
NASA Astrophysics Data System (ADS)
Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés
Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real time. In recent times, special interest has been paid to the globalization of information through a common vocabulary (i.e., ontologies) and to standardized ways of retrieving information on the Web (i.e., powerful search engines and intelligent software agents). The same principles of globalization and standardization should also apply to WIS user interfaces, but these are still built on traditional development paradigms. In this paper we present an approach to narrowing this globalization/standardization gap in the generation of WIS user interfaces by using a real-time "bottom-up" composition perspective with COTS interface components (interface widgets) and trading services.
Flow dynamic environment data base development for the SSME
NASA Technical Reports Server (NTRS)
Sundaram, C. V.
1985-01-01
The fluid-flow-induced vibration of Space Shuttle main engine (SSME) components is being studied with a view to correlating the frequency characteristics of the pressure fluctuations in a rocket engine to its operating conditions and geometry. An overview of the data base development for SSME test firing results and the interactive computer software used to access, retrieve, and plot or print the results selectively for given thrust levels, engine numbers, etc., is presented. The various statistical methods available in the computer code for data analysis are discussed. Plots of test data, nondimensionalized using parameters such as fluid flow velocities, densities, and pressures, are presented. Results are compared with those available in the literature. Correlations between the resonant peaks observed at higher frequencies in power spectral density plots and pump geometry and operating conditions are discussed. An overview of the status of the investigation is presented and future directions are discussed.
Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.
2011-01-01
Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
NEOview: Near Earth Object Data Discovery and Query
NASA Astrophysics Data System (ADS)
Tibbetts, M.; Elvis, M.; Galache, J. L.; Harbo, P.; McDowell, J. C.; Rudenko, M.; Van Stone, D.; Zografou, P.
2013-10-01
Missions to Near Earth Objects (NEOs) figure prominently in NASA's Flexible Path approach to human space exploration. NEOs offer insight into both the origins of the Solar System and of life, as well as a source of materials for future missions. With NEOview scientists can locate NEO datasets, explore metadata provided by the archives, and query or combine disparate NEO datasets in the search for NEO candidates for exploration. NEOview is a software system that illustrates how standards-based interfaces facilitate NEO data discovery and research. NEOview software follows a client-server architecture. The server is a configurable implementation of the International Virtual Observatory Alliance (IVOA) Table Access Protocol (TAP), a general interface for tabular data access, that can be deployed as a front end to existing NEO datasets. The TAP client, seleste, is a graphical interface that provides intuitive means of discovering NEO providers, exploring dataset metadata to identify fields of interest, and constructing queries to retrieve or combine data. It features a powerful, graphical query builder capable of easing the user's introduction to table searches. Through science use cases, NEOview demonstrates how potential targets for NEO rendezvous could be identified by combining data from complementary sources. Through deployment and operations, it has been shown that the software components are data independent and configurable to many different data servers. As such, NEOview's TAP server and seleste TAP client can be used to create a seamless environment for data discovery and exploration for tabular data in any astronomical archive.
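Querying a TAP service such as NEOview's can be sketched with the pyvo package; the service URL, table, and column names below are hypothetical, since a real client would discover them from the service's advertised metadata.

```python
# Hedged sketch of a synchronous TAP query using pyvo. The URL and the
# neo.candidates table/columns are hypothetical placeholders.
import pyvo

service = pyvo.dal.TAPService("https://example.org/neo/tap")   # hypothetical
adql = """
    SELECT designation, h_mag, delta_v
    FROM neo.candidates
    WHERE delta_v < 6.0
    ORDER BY delta_v
"""
results = service.search(adql)        # run the ADQL query synchronously
for row in results:
    print(row["designation"], row["delta_v"])
```

Because TAP standardizes both the query language (ADQL) and the metadata tables, the same client code works against any compliant archive, which is the data independence the NEOview deployment demonstrates.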
Beveridge, Allan
2006-01-01
The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... With Image Processing Systems, Components Thereof, and Associated Software; Notice of Investigation..., and associated software by reason of infringement of certain claims of U.S. Patent Nos. 7,043,087... processing systems, components thereof, and associated software that infringe one or more of claims 1, 6, and...
NASA Astrophysics Data System (ADS)
Kamei, A.; Yoshida, Y.; Dupuy, E.; Hiraki, K.; Matsunaga, T.
2015-12-01
The GOSAT-2, which is scheduled for launch in early 2018, is the successor mission to the Greenhouse gases Observing Satellite (GOSAT). The FTS-2 onboard the GOSAT-2 is a Fourier transform spectrometer with three bands in the near to short-wavelength infrared (SWIR) region and two bands in the thermal infrared (TIR) region, which observes infrared light reflected and emitted from the Earth's surface and atmosphere with high-resolution spectra. Column amounts and vertical profiles of major greenhouse gases such as carbon dioxide (CO2) and methane (CH4) are retrieved from the acquired radiance spectra. In addition, the FTS-2 has several improvements over the FTS onboard the GOSAT: 1) added spectral coverage in the SWIR region for carbon monoxide (CO) retrieval, 2) increased signal-to-noise ratio (SNR) for all bands, 3) extended range of along-track pointing angles for sunglint observations, and 4) intelligent pointing to avoid cloud contamination. Since 2012, we have been developing a software tool, called the GOSAT-2 FTS-2 simulator, to simulate the spectral radiance data that will be acquired by the GOSAT-2 FTS-2. Its objective is to analyze and optimize data with respect to the sensor specification, the parameters for Level 1 processing, and the improvement of Level 2 retrieval algorithms. It consists of six components: 1) overall control, 2) sensor-carrying platform, 3) spectral radiance calculation, 4) Fourier transform module, 5) Level 1B (L1B) processing, and 6) L1B data output. More realistic and faster simulations have been made possible by improved modeling of sensor characteristics, more sophisticated data processing and algorithms, the addition of various observation modes, the use of surface and atmospheric ancillary data, and the speed-up and parallelization of the radiative transfer code. The simulator has been confirmed to work properly through its reproduction of GOSAT FTS L1B data, which depends on the ancillary data used. We will summarize the performance verification of the GOSAT-2 FTS-2 simulator and describe future prospects for Level 2 retrieval. In addition, we will present various sensitivity analyses relating to the engineering parameters and the atmospheric conditions in Level 1 processing for greenhouse gas retrieval.
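To make the six-component structure concrete, here is a schematic Python sketch of the control flow; every function name and the toy spectrum are hypothetical illustrations, not the simulator's actual interfaces.

```python
# Schematic of the six simulator components listed above, with toy physics.
import numpy as np

def model_platform(t):                       # 2) sensor-carrying platform
    return {"zenith_deg": 30.0 + 0.1 * t}    # toy viewing geometry

def compute_radiance(geometry, n=1024):      # 3) spectral radiance calculation
    wavenumber = np.linspace(4800.0, 5200.0, n)
    return np.exp(-((wavenumber - 5000.0) / 50.0) ** 2)  # toy absorption feature

def to_interferogram(radiance):              # 4) Fourier transform module
    return np.fft.irfft(radiance)

def level1b(interferogram):                  # 5) Level 1B processing
    return np.abs(np.fft.rfft(interferogram))

for t in range(3):                           # 1) overall control
    spectrum = level1b(to_interferogram(compute_radiance(model_platform(t))))
    print(f"obs {t}: peak radiance {spectrum.max():.3f}")  # 6) L1B data output
```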
Support for life-cycle product reuse in NASA's SSE
NASA Technical Reports Server (NTRS)
Shotton, Charles
1989-01-01
The Software Support Environment (SSE) is a software factory for the production of Space Station Freedom Program operational software. The SSE is to be centrally developed and maintained and used to configure software production facilities in the field. The PRC product TTCQF provides for an automated qualification process and analysis of existing code that can be used for software reuse. The interrogation subsystem permits user queries of the reusable data and components which have been identified by an analyzer and qualified with associated metrics. The concept includes reuse of non-code life-cycle components such as requirements and designs. Possible types of reusable life-cycle components include templates, generics, and as-is items. Qualification of reusable elements requires analysis (separation of candidate components into primitives), qualification (evaluation of primitives for reusability according to reusability criteria) and loading (placing qualified elements into appropriate libraries). There can be different qualifications for different installations, methodologies, applications and components. Identifying reusable software and related components is labor-intensive and is best carried out as an integrated function of an SSE.
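The analysis/qualification/loading flow described above can be pictured with a minimal sketch like the following; the primitive extraction and the reusability criterion are hypothetical placeholders rather than the actual SSE/TTCQF qualification metrics.

```python
# Minimal sketch of the three qualification steps: analysis, qualification,
# loading. The criteria shown are hypothetical, not the real SSE metrics.

def analyze(candidate_source):
    """Separate a candidate component into primitives (here: top-level defs)."""
    return [line for line in candidate_source.splitlines() if line.startswith("def ")]

def qualify(primitive, max_name_len=30):
    """Evaluate a primitive against a (hypothetical) reusability criterion."""
    name = primitive.split("(")[0].removeprefix("def ").strip()
    return len(name) <= max_name_len

library = []  # the reuse library into which qualified elements are loaded
source = "def parse_telemetry(frame):\n    pass\n"
for prim in analyze(source):
    if qualify(prim):
        library.append(prim)  # loading step
print(library)
```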
The software-cycle model for re-engineering and reuse
NASA Technical Reports Server (NTRS)
Bailey, John W.; Basili, Victor R.
1992-01-01
This paper reports on the progress of a study which will contribute to our ability to perform high-level, component-based programming by describing means to obtain useful components, methods for the configuration and integration of those components, and an underlying economic model of the costs and benefits associated with this approach to reuse. One goal of the study is to develop and demonstrate methods to recover reusable components from domain-specific software through a combination of tools, which perform the identification, extraction, and re-engineering of components, and domain experts, who direct the application of those tools. A second goal of the study is to enable the reuse of those components by identifying techniques for configuring and recombining the re-engineered software. This component-recovery or software-cycle model addresses not only the selection and re-engineering of components, but also their recombination into new programs. Once a model of reuse activities has been developed, the quantification of the costs and benefits of various reuse options will enable the development of an adaptable economic model of reuse, which is the principal goal of the overall study. This paper reports on the conception of the software-cycle model and on several supporting techniques of software recovery, measurement, and reuse which will lead to the development of the desired economic model.
The Community Cloud retrieval for CLimate (CC4CL) - Part 2: The optimal estimation approach
NASA Astrophysics Data System (ADS)
McGarragh, Gregory R.; Poulsen, Caroline A.; Thomas, Gareth E.; Povey, Adam C.; Sus, Oliver; Stapelberg, Stefan; Schlundt, Cornelia; Proud, Simon; Christensen, Matthew W.; Stengel, Martin; Hollmann, Rainer; Grainger, Roy G.
2018-06-01
The Community Cloud retrieval for Climate (CC4CL) is a cloud property retrieval system for satellite-based multispectral imagers and is an important component of the Cloud Climate Change Initiative (Cloud_cci) project. In this paper we discuss the optimal estimation retrieval of cloud optical thickness, effective radius and cloud top pressure based on the Optimal Retrieval of Aerosol and Cloud (ORAC) algorithm. Key to this method is the forward model, which includes the clear-sky model, the liquid water and ice cloud models, the surface model including a bidirectional reflectance distribution function (BRDF), and the "fast" radiative transfer solution (which includes a multiple scattering treatment). All of these components and their assumptions and limitations will be discussed in detail. The forward model provides the accuracy appropriate for our retrieval method. The errors are comparable to the instrument noise for cloud optical thicknesses greater than 10. At optical thicknesses less than 10 modeling errors become more significant. The retrieval method is then presented describing optimal estimation in general, the nonlinear inversion method employed, measurement and a priori inputs, the propagation of input uncertainties and the calculation of subsidiary quantities that are derived from the retrieval results. An evaluation of the retrieval was performed using measurements simulated with noise levels appropriate for the MODIS instrument. Results show errors less than 10 % for cloud optical thicknesses greater than 10. Results for clouds of optical thicknesses less than 10 have errors up to 20 %.
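For orientation, a Rodgers-style Gauss-Newton optimal-estimation iteration of the general kind used in such retrievals can be sketched as follows; the linear toy forward model and all numbers are illustrative stand-ins for the actual ORAC forward model and covariances.

```python
# Gauss-Newton optimal estimation sketch (toy linear forward model).
import numpy as np

def forward(x, K):
    return K @ x  # toy linear forward model F(x)

K = np.array([[1.0, 0.2], [0.1, 0.9], [0.3, 0.4]])  # Jacobian (fixed here)
Se = np.diag([0.01, 0.01, 0.01])   # measurement-error covariance
Sa = np.diag([1.0, 1.0])           # a priori covariance
xa = np.array([0.0, 0.0])          # a priori state
y = np.array([1.0, 0.8, 0.5])      # measurements

x = xa.copy()
for _ in range(10):
    Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
    hess = K.T @ Se_inv @ K + Sa_inv
    grad = K.T @ Se_inv @ (y - forward(x, K)) - Sa_inv @ (x - xa)
    x = x + np.linalg.solve(hess, grad)
print("retrieved state:", x)
# The inverse Hessian is the posterior covariance used to propagate
# input uncertainties into the retrieved quantities:
print("posterior covariance:\n", np.linalg.inv(hess))
```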
Component Technology for High-Performance Scientific Simulation Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epperly, T; Kohn, S; Kumfert, G
2000-11-09
We are developing scientific software component technology to manage the complexity of modern, parallel simulation software and increase the interoperability and re-use of scientific software packages. In this paper, we describe a language interoperability tool named Babel that enables the creation and distribution of language-independent software libraries using interface definition language (IDL) techniques. We have created a scientific IDL that focuses on the unique interface description needs of scientific codes, such as complex numbers, dense multidimensional arrays, complicated data types, and parallelism. Preliminary results indicate that in addition to language interoperability, this approach provides useful tools for thinking about the design of modern object-oriented scientific software libraries. Finally, we also describe a web-based component repository called Alexandria that facilitates the distribution, documentation, and re-use of scientific components and libraries.
Managing Written Directives: A Software Solution to Streamline Workflow.
Wagner, Robert H; Savir-Baruch, Bital; Gabriel, Medhat S; Halama, James R; Bova, Davide
2017-06-01
A written directive is required by the U.S. Nuclear Regulatory Commission for any use of 131 I above 1.11 MBq (30 μCi) and for patients receiving radiopharmaceutical therapy. This requirement has also been adopted and must be enforced by the agreement states. As the introduction of new radiopharmaceuticals increases therapeutic options in nuclear medicine, time spent on regulatory paperwork also increases. The pressure of managing these time-consuming regulatory requirements may heighten the potential for inaccurate or incomplete directive data and subsequent regulatory violations. To improve on the paper-trail method of directive management, we created a software tool using a Health Insurance Portability and Accountability Act (HIPAA)-compliant database. This software allows for secure data-sharing among physicians, technologists, and managers while saving time, reducing errors, and eliminating the possibility of loss and duplication. Methods: The software tool was developed using Visual Basic, which is part of the Visual Studio development environment for the Windows platform. Patient data are deposited in an Access database on a local HIPAA-compliant secure server or hard disk. Once a working version had been developed, it was installed at our institution and used to manage directives. Updates and modifications of the software were released regularly until no more significant problems were found with its operation. Results: The software has been used at our institution for over 2 y and has reliably kept track of all directives. All physicians and technologists use the software daily and find it superior to paper directives. They can retrieve active directives at any stage of completion, as well as completed directives. Conclusion: We have developed a software solution for the management of written directives that streamlines and structures the departmental workflow. This solution saves time, centralizes the information for all staff to share, and decreases confusion about the creation, completion, filing, and retrieval of directives. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Software Component Technologies and Space Applications
NASA Technical Reports Server (NTRS)
Batory, Don
1995-01-01
In the near future, software systems will be more reconfigurable than hardware. This will be possible through the advent of software component technologies which have been prototyped in universities and research labs. In this paper, we outline the foundations for those technologies and suggest how they might impact software for space applications.
Challenges of the Open Source Component Marketplace in the Industry
NASA Astrophysics Data System (ADS)
Ayala, Claudia; Hauge, Øyvind; Conradi, Reidar; Franch, Xavier; Li, Jingyue; Velle, Ketil Sandanger
The reuse of Open Source Software components available on the Internet is playing a major role in the development of Component Based Software Systems. Nevertheless, the special nature of the OSS marketplace has taken the “classical” concept of software reuse based on centralized repositories to a completely different arena based on massive reuse over Internet. In this paper we provide an overview of the actual state of the OSS marketplace, and report preliminary findings about how companies interact with this marketplace to reuse OSS components. Such data was gathered from interviews in software companies in Spain and Norway. Based on these results we identify some challenges aimed to improve the industrial reuse of OSS components.
A high throughput geocomputing system for remote sensing quantitative retrieval and a case study
NASA Astrophysics Data System (ADS)
Xue, Yong; Chen, Ziqiang; Xu, Hui; Ai, Jianwen; Jiang, Shuzheng; Li, Yingjie; Wang, Ying; Guang, Jie; Mei, Linlu; Jiao, Xijuan; He, Xingwei; Hou, Tingting
2011-12-01
The quality and accuracy of remote sensing instruments have been improved significantly, however, rapid processing of large-scale remote sensing data becomes the bottleneck for remote sensing quantitative retrieval applications. The remote sensing quantitative retrieval is a data-intensive computation application, which is one of the research issues of high throughput computation. The remote sensing quantitative retrieval Grid workflow is a high-level core component of remote sensing Grid, which is used to support the modeling, reconstruction and implementation of large-scale complex applications of remote sensing science. In this paper, we intend to study middleware components of the remote sensing Grid - the dynamic Grid workflow based on the remote sensing quantitative retrieval application on Grid platform. We designed a novel architecture for the remote sensing Grid workflow. According to this architecture, we constructed the Remote Sensing Information Service Grid Node (RSSN) with Condor. We developed a graphic user interface (GUI) tools to compose remote sensing processing Grid workflows, and took the aerosol optical depth (AOD) retrieval as an example. The case study showed that significant improvement in the system performance could be achieved with this implementation. The results also give a perspective on the potential of applying Grid workflow practices to remote sensing quantitative retrieval problems using commodity class PCs.
Technology for an intelligent, free-flying robot for crew and equipment retrieval in space
NASA Technical Reports Server (NTRS)
Erickson, J. D.; Reuter, G. J.; Healey, Kathleen J.; Phinney, D. E.
1990-01-01
Crew rescue and equipment retrieval is a Space Station Freedom requirement. During Freedom's lifetime, there is a high probability that a number of objects will accidentally become separated. Members of the crew, replacement units, and key tools are examples. Retrieval of these objects within a short time is essential. Systems engineering studies were conducted to identify system requirements and candidate approaches. One such approach, based on a voice-supervised, intelligent, free-flying robot, was selected for further analysis. A ground-based technology demonstration, now in its second phase, was designed to provide an integrated robotic hardware and software testbed supporting design of a space-borne system. The ground system, known as the EVA Retriever, is examining the problem of autonomously planning and executing a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles. The current prototype is an anthropomorphic manipulator unit with dexterous arms and hands attached to a robot body and latched in a manned maneuvering unit. A precision air-bearing floor is used to simulate space. Sensor data include two vision systems and force/proximity/tactile sensors on the hands and arms. Planning for a shuttle flight experiment is underway. A set of scenarios and strawman requirements were defined to support conceptual development. Initial design activities are expected to begin in late 1989 with the flight occurring in 1994. The flight hardware and software will be based on lessons learned from both the ground prototype and computer simulations.
DICOM implementation on online tape library storage system
NASA Astrophysics Data System (ADS)
Komo, Darmadi; Dai, Hailei L.; Elghammer, David; Levine, Betty A.; Mun, Seong K.
1998-07-01
The main purpose of this project is to implement a Digital Imaging and Communications in Medicine (DICOM) compliant online tape library system over the Internet. Once finished, the system will be used to store medical exams generated from the U.S. Army Mobile Army Surgical Hospital (MASH) in Tuzla, Bosnia. A modified UC Davis implementation of the DICOM storage class is used for this project. DICOM storage class user and provider are implemented as the system's interface to the Internet. The DICOM software provides flexible configuration options such as types of modalities and trusted remote DICOM hosts. Metadata is extracted from each exam and indexed in a relational database for query and retrieve purposes. The medical images are stored inside the Wolfcreek-9360 tape library system from StorageTek Corporation. The tape library system has nearline access to more than 1000 tapes. Each tape has a capacity of 800 megabytes, making the total nearline tape capacity around 1 terabyte. The tape library uses the Application Storage Manager (ASM), which provides cost-effective file management, storage, archival, and retrieval services. ASM automatically and transparently copies files from expensive magnetic disk to less expensive nearline tape, and restores the files when they are needed. The ASM also provides a crash recovery tool, which enables an entire file system to be restored in a short time. A graphical user interface (GUI) is used to view the contents of the storage systems. This GUI also allows users to retrieve the stored exams and send them anywhere on the Internet using DICOM protocols. With the integration of the different components of the system, we have implemented a high-capacity online tape library storage system that is flexible and easy to use. Using tape as an alternative storage medium, as opposed to magnetic disk, has great potential for cost savings in terms of dollars per megabyte of storage. As this system matures, Hospital Information Systems/Radiology Information Systems (HIS/RIS) or other components can potentially be developed as interfaces to the outside world, thus widening the usage of the tape library system.
NASA Astrophysics Data System (ADS)
Duan, Y.; Durand, M. T.; Jezek, K. C.; Yardim, C.; Bringer, A.; Aksoy, M.; Johnson, J. T.
2017-12-01
The ultra-wideband software-defined microwave radiometer (UWBRAD) is designed to provide an ice sheet internal temperature product by measuring low-frequency microwave emission. Twelve channels ranging from 0.5 to 2.0 GHz are covered by the instrument. A Greenland airborne demonstration flight in September 2016 provided the first ultra-wideband radiometer observations of geophysical scenes, including ice sheets. Another flight is planned for September 2017 to acquire measurements over the central ice sheet. A Bayesian framework is designed to retrieve the ice sheet internal temperature from simulated UWBRAD brightness temperature (Tb) measurements over the Greenland flight path with limited prior information about the ground. A 1-D heat-flow model, the Robin model, was used to model the ice sheet internal temperature profile from ground information. Synthetic UWBRAD Tb observations were generated via the partially coherent radiation transfer model, which takes the Robin model temperature profile and an exponential fit of ice density from borehole measurements as input, and were corrupted with noise. The effective surface temperature, the geothermal heat flux, the variance of upper-layer ice density, and the variance of fine-scale density variation in the deeper ice sheet were treated as unknown variables within the retrieval framework. Each parameter is defined over its possible range and set to be uniformly distributed. The Markov chain Monte Carlo (MCMC) approach is applied so that the unknown parameters perform a random walk through the parameter space. We investigate whether the variables can be improved over their priors using the MCMC approach and contribute to the temperature retrieval theoretically. UWBRAD measurements acquired near Camp Century in 2016 were also processed with the MCMC approach to examine the framework in the presence of scattering effects. The fine-scale density fluctuation is an important parameter: it is the most sensitive yet most poorly known parameter in the estimation framework, and including it greatly improved the retrieval results. The ice sheet vertical temperature profile, especially the 10 m temperature, can be well retrieved via the MCMC process. Future retrieval work will apply the Bayesian approach to UWBRAD airborne measurements.
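A minimal Metropolis random-walk MCMC loop of the general kind described above might look like the following sketch; the Gaussian toy likelihood and the single uniform-prior parameter stand in for the radiative-transfer likelihood and the four unknowns of the actual framework.

```python
# Metropolis random-walk MCMC sketch with a toy Gaussian likelihood.
import numpy as np

rng = np.random.default_rng(0)
true_param, sigma = 2.0, 0.5
obs = true_param + sigma * rng.standard_normal(50)   # synthetic Tb-like data

def log_likelihood(theta):
    return -0.5 * np.sum((obs - theta) ** 2) / sigma**2

lo, hi = 0.0, 5.0            # uniform prior range for the parameter
theta = rng.uniform(lo, hi)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()       # random-walk proposal
    if lo <= prop <= hi and np.log(rng.uniform()) < log_likelihood(prop) - log_likelihood(theta):
        theta = prop                                 # accept the proposal
    samples.append(theta)
print("posterior mean:", np.mean(samples[1000:]))    # discard burn-in
```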
ResidPlots-2: Computer Software for IRT Graphical Residual Analyses
ERIC Educational Resources Information Center
Liang, Tie; Han, Kyung T.; Hambleton, Ronald K.
2009-01-01
This article discusses the ResidPlots-2, a computer software that provides a powerful tool for IRT graphical residual analyses. ResidPlots-2 consists of two components: a component for computing residual statistics and another component for communicating with users and for plotting the residual graphs. The features of the ResidPlots-2 software are…
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
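The table-driven idea can be illustrated with a small sketch in which the grouping rule is read from a database table at run time; the table and column names are hypothetical, and SQLite stands in for the Informix database and SPL stored procedures actually used.

```python
# Sketch of table-driven ensemble creation: the rule lives in a table, so it
# can be changed without modifying code. Schema and data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE images (image_id INTEGER, channel INTEGER, campaign TEXT);
CREATE TABLE ensemble_rules (rule_id INTEGER, group_by_column TEXT);
INSERT INTO images VALUES (1, 1, 'cal-A'), (2, 1, 'cal-A'), (3, 2, 'cal-B');
INSERT INTO ensemble_rules VALUES (1, 'campaign');
""")
# Read the rule, then group images into ensembles accordingly.
(group_col,) = db.execute("SELECT group_by_column FROM ensemble_rules").fetchone()
rows = db.execute(
    f"SELECT {group_col}, GROUP_CONCAT(image_id) FROM images GROUP BY {group_col}"
).fetchall()
for ensemble, members in rows:
    print(f"ensemble {ensemble}: images {members}")
```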
A Unified Framework for Periodic, On-Demand, and User-Specified Software Information
NASA Technical Reports Server (NTRS)
Kolano, Paul Z.
2004-01-01
Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
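As a rough illustration of XPath-style queries over a unified software-information namespace, consider the sketch below; the XML layout is a hypothetical example, not the actual IPG schema.

```python
# Querying software information with the limited XPath support in the
# standard library; the document structure is a hypothetical example.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<grid>
  <resource name="nodeA"><software name="gcc" version="2.95"/></resource>
  <resource name="nodeB"><software name="gcc" version="3.0"/></resource>
</grid>
""")
# Find every resource that has gcc installed, regardless of whether the
# information came from a periodic, on-demand, or user-specified source.
for res in doc.findall(".//resource"):
    sw = res.find("software[@name='gcc']")
    if sw is not None:
        print(res.get("name"), "has gcc", sw.get("version"))
```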
Development of an Automated Security Incident Reporting System (SIRS) for Bus Transit
DOT National Transportation Integrated Search
1986-12-01
The Security Incident Reporting System (SIRS) is a microcomputer-based software program demonstrated at the Metropolitan Transit Commission (MTC) in Minneapolis, MN. SIRS is designed to provide convenient storage, update and retrieval of security inc...
A Four-Component Model of Age-Related Memory Change
Healey, M. Karl; Kahana, Michael J.
2015-01-01
We develop a novel, computationally explicit, theory of age-related memory change within the framework of the context maintenance and retrieval (CMR2) model of memory search. We introduce a set of benchmark findings from the free recall and recognition tasks that includes aspects of memory performance that show both age-related stability and decline. We test aging theories by lesioning the corresponding mechanisms in a model fit to younger adult free recall data. When effects are considered in isolation, many theories provide an adequate account, but when all effects are considered simultaneously, the existing theories fail. We develop a novel theory by fitting the full model (i.e., allowing all parameters to vary) to individual participants and comparing the distributions of parameter values for older and younger adults. This theory implicates four components: 1) the ability to sustain attention across an encoding episode, 2) the ability to retrieve contextual representations for use as retrieval cues, 3) the ability to monitor retrievals and reject intrusions, and 4) the level of noise in retrieval competitions. We extend CMR2 to simulate a recognition memory task using the same mechanisms the free recall model uses to reject intrusions. Without fitting any additional parameters, the four-component theory that accounts for age differences in free recall predicts the magnitude of age differences in recognition memory accuracy. Confirming a prediction of the model, free recall intrusion rates correlate positively with recognition false alarm rates. Thus we provide a four-component theory of a complex pattern of age differences across two key laboratory tasks. PMID:26501233
Systems and Methods for Decoy Routing and Covert Channel Bonding
2013-11-26
Methods, Software and Tools for Three Numerical Applications. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. R. Jessup
2000-03-01
This is a report of the results of the authors' work supported by DOE contract DE-FG03-97ER25325. They proposed to study three numerical problems: (1) the extension of the PMESC parallel programming library; (2) the development of algorithms and software for certain generalized eigenvalue and singular value decomposition (SVD) problems; and (3) the application of techniques of linear algebra to an information retrieval technique known as latent semantic indexing (LSI).
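Item (3), latent semantic indexing, rests on a truncated SVD of a term-document matrix; a minimal sketch over a toy corpus follows, with all data purely illustrative.

```python
# Minimal LSI sketch: factor a term-document matrix with the SVD and compare
# a query to documents in the reduced "semantic" space. Toy data only.
import numpy as np

terms = ["orbit", "satellite", "tether", "memory"]
# Columns are documents; entries are term counts.
A = np.array([[2, 0, 1],
              [1, 0, 2],
              [0, 0, 1],
              [0, 3, 0]], dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # rank of the reduced semantic space
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents projected into LSI space

query = np.array([1.0, 1.0, 0.0, 0.0])   # query: "orbit satellite"
q_vec = np.linalg.pinv(np.diag(s[:k])) @ U[:, :k].T @ query
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
print("document similarities:", np.round(sims, 3))
```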
Tautomerism in chemical information management systems
NASA Astrophysics Data System (ADS)
Warr, Wendy A.
2010-06-01
Tautomerism has an impact on many of the processes in chemical information management systems including novelty checking during registration into chemical structure databases; storage of structures; exact and substructure searching in chemical structure databases; and depiction of structures retrieved by a search. The approaches taken by 27 different software vendors and database producers are compared. It is hoped that this comparison will act as a discussion document that could ultimately improve databases and software for researchers in the future.
Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola
2015-01-01
One of the most crucial aspects of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is essential for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but in most cases this requires a considerable investment of money, time and technical effort. There is a clear need for a lightweight open-source system that can easily be managed on local servers and handled by individual researchers. Here we present software named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies. PMID:26047146
Wang, Lin; Liu, Simin; Niu, Tianhua; Xu, Xin
2005-03-18
Single nucleotide polymorphisms (SNPs) provide an important tool in pinpointing susceptibility genes for complex diseases and in unveiling human molecular evolution. Selection and retrieval of an optimal SNP set from publicly available databases have emerged as the foremost bottlenecks in designing large-scale linkage disequilibrium studies, particularly in case-control settings. We describe the architectural structure and implementations of a novel software program, SNPHunter, which allows for both ad hoc-mode and batch-mode SNP search, automatic SNP filtering, and retrieval of SNP data, including physical position, function class, flanking sequences at user-defined lengths, and heterozygosity from NCBI dbSNP. The SNP data extracted from dbSNP via SNPHunter can be exported and saved in plain text format for further down-stream analyses. As an illustration, we applied SNPHunter for selecting SNPs for 10 major candidate genes for type 2 diabetes, including CAPN10, FABP4, IL6, NOS3, PPARG, TNF, UCP2, CRP, ESR1, and AR. SNPHunter constitutes an efficient and user-friendly tool for SNP screening, selection, and acquisition. The executable and user's manual are available at http://www.hsph.harvard.edu/ppg/software.htm
NASA Technical Reports Server (NTRS)
Jamsek, Damir A.
1993-01-01
A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for a formal methods approach by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Space Station Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English-version Software Requirements Specification (SRS) is reproduced in Appendix A.
An overview of the National Space Science data Center Standard Information Retrieval System (SIRS)
NASA Technical Reports Server (NTRS)
Shapiro, A.; Blecher, S.; Verson, E. E.; King, M. L. (Editor)
1974-01-01
A general overview is given of the National Space Science Data Center (NSSDC) Standard Information Retrieval System. It describes, in general terms, the information system that contains the data files and the software system that processes and manipulates the files maintained at the Data Center. Emphasis is placed on providing users with an overview of the capabilities and uses of the NSSDC Standard Information Retrieval System (SIRS). Examples given are taken from the files at the Data Center. Detailed information about NSSDC data files is documented in a set of File Users Guides, with one user's guide prepared for each file processed by SIRS. Detailed information about SIRS is presented in the SIRS Users Guide.
PharmARTS: terminology web services for drug safety data coding and retrieval.
Alecu, Iulian; Bousquet, Cédric; Degoulet, Patrice; Jaulent, Marie-Christine
2007-01-01
MedDRA and WHO-ART are the terminologies used to encode drug safety reports. The standardisation achieved with these terminologies facilitates: 1) The sharing of safety databases; 2) Data mining for the continuous reassessment of benefit-risk ratio at national or international level or in the pharmaceutical industry. There is some debate about the capacity of these terminologies for retrieving case reports related to similar medical conditions. We have developed a resource that allows grouping similar medical conditions more effectively than WHO-ART and MedDRA. We describe here a software tool facilitating the use of this terminological resource thanks to an RDF framework with support for RDF Schema inferencing and querying. This tool eases coding and data retrieval in drug safety.
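A rough sketch of how such an RDF-based terminological resource can be queried to group related medical conditions is shown below; the vocabulary URIs and triples are hypothetical, with rdflib supplying the RDF store and SPARQL engine.

```python
# Sketch of grouping related terms in an RDF terminology with SPARQL;
# the example vocabulary is hypothetical, not the actual PharmARTS resource.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/pharmarts/")
g = Graph()
g.add((EX.HepaticFailure, RDF.type, EX.MedicalCondition))
g.add((EX.HepaticFailure, RDFS.label, Literal("Hepatic failure")))
g.add((EX.HepaticNecrosis, RDFS.subClassOf, EX.HepaticFailure))
g.add((EX.HepaticNecrosis, RDFS.label, Literal("Hepatic necrosis")))

# Retrieve the narrower terms grouped under one medical condition.
q = """
SELECT ?narrow ?label WHERE {
  ?narrow rdfs:subClassOf ex:HepaticFailure ;
          rdfs:label ?label .
}"""
for row in g.query(q, initNs={"rdfs": RDFS, "ex": EX}):
    print(row.narrow, row.label)
```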
An introduction to information retrieval: applications in genomics
Nadkarni, P M
2011-01-01
Information retrieval (IR) is the field of computer science that deals with the processing of documents containing free text, so that they can be rapidly retrieved based on keywords specified in a user’s query. IR technology is the basis of Web-based search engines, and plays a vital role in biomedical research, because it is the foundation of software that supports literature search. Documents can be indexed by both the words they contain, as well as the concepts that can be matched to domain-specific thesauri; concept matching, however, poses several practical difficulties that make it unsuitable for use by itself. This article provides an introduction to IR and summarizes various applications of IR and related technologies to genomics. PMID:12049181
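The core indexing idea the article introduces, documents indexed by the words they contain, can be sketched with a toy inverted index:

```python
# Minimal inverted index: map each word to the set of documents containing
# it, then answer a keyword query without scanning full texts. Toy corpus.
from collections import defaultdict

docs = {
    1: "retrieval of genomic sequence data",
    2: "software for literature search and retrieval",
    3: "protein structure prediction software",
}
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

query = ["retrieval", "software"]
hits = set.intersection(*(index[w] for w in query))  # AND semantics
print("matching documents:", sorted(hits))  # -> [2]
```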
NASA Technical Reports Server (NTRS)
Stiller, G. P.; Gunson, M. R.; Lowes, L. L.; Abrams, M. C.; Raper, O. F.; Farmer, C. B.; Zander, R.; Rinsland, C. P.
1995-01-01
A simple, classical, and expedient method for the retrieval of atmospheric pressure-temperature profiles has been applied to the high-resolution infrared solar absorption spectra obtained with the atmospheric trace molecule spectroscopy (ATMOS) instrument. The basis for this method is a rotational analysis of retrieved apparent abundances from CO2 rovibrational absorption lines, employing existing constituent concentration retrieval software used in the analysis of data returned by ATMOS. Pressure-temperature profiles derived from spectra acquired during the ATLAS 1 space shuttle mission of March-April 1992 are quantitatively evaluated and compared with climatological and meteorological data as a means of assessing the validity of this approach.
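The rotational analysis rests on the Boltzmann distribution of lower-state populations: n_J is proportional to g_J exp(-E_J/kT), so a straight-line fit of ln(n_J/g_J) against lower-state energy E_J yields the temperature from the slope. The sketch below uses synthetic line data as an illustration, not actual ATMOS retrievals.

```python
# Boltzmann rotational-analysis sketch: recover temperature from the slope
# of ln(n_J / g_J) versus lower-state energy. Line data are synthetic.
import numpy as np

k_cm = 0.6950       # Boltzmann constant in cm^-1 / K
T_true = 230.0      # synthetic stratospheric temperature, K
E_J = np.array([10.0, 50.0, 120.0, 250.0, 400.0])  # lower-state energies, cm^-1
g_J = 2 * np.arange(1, 6) + 1                       # toy degeneracies
n_J = g_J * np.exp(-E_J / (k_cm * T_true))          # simulated populations

slope, intercept = np.polyfit(E_J, np.log(n_J / g_J), 1)
print("retrieved temperature: %.1f K" % (-1.0 / (k_cm * slope)))
```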
A single center experience with retrievable IVC filters.
Renno, Anas; Khateeb, Faisal; Kazan, Viviane; Qu, Weikai; Gollapudi, Anurekha; Aplin, Brett; Abbas, Jihad; Zelenock, Gerald; Nazzal, Munier
2015-08-01
To evaluate retrievable IVC filters in our institution and assess their retrieval following a well-structured follow-up program. Retrospective cohort study. The medical records of patients implanted with retrievable IVC filters were reviewed. All retrievable filter insertions between July 2007 and August 2011 at our institution were reviewed. Data were analyzed for age, gender, indication, complications, retrieval rate, and brand of filter inserted. Statistical analysis was done using SPSS software v19. Chi-square was used to compare discrete data and the t-test for continuous data. P < 0.05 was significant. A total of 484 patients were reviewed, of which 258 (53.1%) had a complete medical record. Of these, 96 (37.2%) filters were placed as permanent at the time of insertion. An additional 40 (15.5%) filters were converted to permanent (total permanent filters 136; 52.7%). Death was reported in 26 (10%) patients, and 96 (37.2%) of the remaining 232 patients presented for potential retrieval. An attempt to retrieve the filter was made in 73 cases (28.2%); 69 (94.5%) were successful and 4 (5.4%) failed. The remaining 23 (8.9%) patients declined retrieval. Filters studied include Celect (38%), Bard (31.4%), Option (26.2%), Tulip (4.1%), and Recovery (0.2%). Bard was more commonly used as a retrievable filter (80.9%). Retrieval on the first attempt was successful in 90.4% (n = 66) of cases. Of the remaining seven filters, three were successfully retrieved on a second attempt, and four failed to retrieve due to filter tilt. The success rates of retrieval for Celect and Tulip were significantly lower than for Bard (p = 0.04 and 0.023, respectively). Our study showed that a variety of IVC filters can be retrieved successfully with minimal complication rates. In more than half of our patients, IVC filters were used as permanent. Failure of retrieval was most frequently due to filter tilting. © The Author(s) 2014.
Lo, Ming; Hue, Chih-Wei
2008-11-01
The Character-Component Analysis Toolkit (C-CAT) software was designed to assist researchers in constructing experimental materials using traditional Chinese characters. The software package contains two sets of character stocks: one suitable for research using literate adults as subjects and one suitable for research using schoolchildren as subjects. The software can identify linguistic properties, such as the number of strokes contained, the character-component pronunciation regularity, and the arrangement of character components within a character. Moreover, it can compute a character's linguistic frequency, neighborhood size, and phonetic validity with respect to a user-selected character stock. It can also search the selected character stock for similar characters or for character components with user-specified linguistic properties.
NASA Technical Reports Server (NTRS)
Blankenship, Clay; Case, Jonathan L.; Zavodsky, Bradley
2015-01-01
Land surface models are important components of numerical weather prediction (NWP) models, partitioning incoming energy into latent and sensible heat fluxes that affect boundary layer growth and destabilization. During warm-season months, diurnal heating and convective initiation depend strongly on evapotranspiration and available boundary layer moisture, which are substantially affected by soil moisture content. Therefore, to properly simulate warm-season processes in NWP models, an accurate initialization of the land surface state is important for accurately depicting the exchange of heat and moisture between the surface and boundary layer. In this study, soil moisture retrievals from the Soil Moisture and Ocean Salinity (SMOS) satellite radiometer are assimilated into the Noah Land Surface Model via an Ensemble Kalman Filter embedded within the NASA Land Information System (LIS) software framework. The output from LIS-Noah is subsequently used to initialize runs of the Weather Research and Forecasting (WRF) NWP model. The impact of assimilating SMOS retrievals is assessed by initializing the WRF model with LIS-Noah output obtained with and without SMOS data assimilation. The southeastern United States is used as the domain for a preliminary case study. During the summer months, there is extensive irrigation in the lower Mississippi Valley for rice and other crops. The irrigation is not represented in the meteorological forcing used to drive the LIS-Noah integration, but the irrigated areas show up clearly in the SMOS soil moisture retrievals, resulting in a case with a large difference in initial soil moisture conditions. The impact of SMOS data assimilation on both Noah soil moisture fields and on short-term (0-48 hour) WRF weather forecasts will be presented.
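A minimal perturbed-observation Ensemble Kalman Filter update of the general kind used here can be sketched as follows; the two-layer state, toy numbers, and observation operator are illustrative assumptions, not the LIS-Noah configuration.

```python
# Perturbed-observation EnKF update sketch for a soil moisture state;
# all values are toy numbers, and H observes only the surface layer.
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_state = 20, 2                        # ensemble size; [surface, root-zone]
X = rng.normal([0.20, 0.30], 0.05, (n_ens, n_state)).T  # forecast ensemble
H = np.array([[1.0, 0.0]])                    # observation operator
R = np.array([[0.04**2]])                     # retrieval error covariance
y = np.array([0.28])                          # SMOS-like soil moisture observation

P = np.cov(X)                                 # ensemble forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
for i in range(n_ens):                        # perturbed-observation update
    y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]))
    X[:, i] += K @ (y_pert - H @ X[:, i])
print("analysis mean state:", X.mean(axis=1))
```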
User's operating procedures. Volume 1: Scout project information programs
NASA Technical Reports Server (NTRS)
Harris, C. G.; Harris, D. K.
1985-01-01
A review of the user's operating procedures for the Scout Project Automatic Data System, called SPADS is given. SPADS is the result of the past seven years of software development on a Prime minicomputer located at the Scout Project Office. SPADS was developed as a single entry, multiple cross reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. The instructions to operate the Scout Project Information programs in data retrieval and file maintenance via the user friendly menu drivers is presented.
System Description for Tank 241-AZ-101 Waste Retrieval Data Acquisition System
DOE Office of Scientific and Technical Information (OSTI.GOV)
ROMERO, S.G.
2000-01-10
Describes the hardware and software for the AZ-101 Mixer Pump Data Acquisition System. The purpose of the tank 241-AZ-101 retrieval system Data Acquisition System (DAS) is to provide monitoring and data acquisition of key parameters in order to confirm the effectiveness of the mixer pumps utilized for suspending solids in the tank. The suspension of solids in Tank 241-AZ-101 is necessary for pretreatment of the neutralized current acid waste (NCAW), and eventual disposal as glass via the Hanford Waste Vitrification Plant.
Computer programs: Information retrieval and data analysis, a compilation
NASA Technical Reports Server (NTRS)
1972-01-01
The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.
Open-source Software for Exoplanet Atmospheric Modeling
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph
2018-01-01
I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.
Learning characteristics of a space-time neural network as a tether skiprope observer
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles
1993-01-01
The Software Technology Laboratory at the Johnson Space Center is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.
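The learning-rate/momentum interaction such a table summarizes can be illustrated with a plain gradient-descent-with-momentum update on a toy quadratic loss; the sketch below does not reproduce the STNN architecture itself.

```python
# Gradient descent with momentum on a toy quadratic loss, swept over the
# learning rate and momentum values; purely illustrative of the trade-off.
import numpy as np

def train(lr, momentum, steps=200):
    w = np.array([4.0, -3.0])   # toy weights
    v = np.zeros_like(w)        # velocity term
    for _ in range(steps):
        grad = 2.0 * w          # gradient of the quadratic loss ||w||^2
        v = momentum * v - lr * grad
        w = w + v
    return float(w @ w)         # final loss

for lr in (0.01, 0.1):
    for m in (0.0, 0.5, 0.9):
        print(f"lr={lr}, momentum={m}: final loss {train(lr, m):.2e}")
```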
Learning characteristics of a space-time neural network as a tether skiprope observer
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Villarreal, James A.; Jani, Yashvant; Copeland, Charles
1992-01-01
The Software Technology Laboratory at JSC is testing a Space Time Neural Network (STNN) for observing tether oscillations present during retrieval of a tethered satellite. Proper identification of tether oscillations, known as 'skiprope' motion, is vital to safe retrieval of the tethered satellite. Our studies indicate that STNN has certain learning characteristics that must be understood properly to utilize this type of neural network for the tethered satellite problem. We present our findings on the learning characteristics including a learning rate versus momentum performance table.
User's operating procedures. Volume 3: Projects directorate information programs
NASA Technical Reports Server (NTRS)
Harris, C. G.; Harris, D. K.
1985-01-01
A review of the user's operating procedures for the Scout Project Automatic Data System, called SPADS, is presented. SPADS is the result of the past seven years of software development on a Prime minicomputer. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, three of three, provides the instructions to operate the projects directorate information programs in data retrieval and file maintenance via the user-friendly menu drivers.
Information Retrieval Research and ESPRIT.
ERIC Educational Resources Information Center
Smeaton, Alan F.
1987-01-01
Describes the European Strategic Programme of Research and Development in Information Technology (ESPRIT), and its five programs: advanced microelectronics, software technology, advanced information processing, office systems, and computer integrated manufacturing. The emphasis on logic programming and ESPRIT as the European response to the…
Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)
NASA Technical Reports Server (NTRS)
Wherry, D. B.
1981-01-01
The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.
Hutchinson, J Benjamin; Uncapher, Melina R; Wagner, Anthony D
2015-01-01
Retrieval of episodic memories is a multi-component act that relies on numerous operations ranging from processing the retrieval cue, evaluating retrieved information, and selecting the appropriate response given the demands of the task. Motivated by a rich functional neuroimaging literature, recent theorizing about various computations at retrieval has focused on the role of posterior parietal cortex (PPC). In a potentially promising line of research, recent neuroimaging findings suggest that different subregions of dorsal PPC respond distinctly to different aspects of retrieval decisions, suggesting that better understanding of their contributions might shed light on the component processes of retrieval. In an attempt to understand the basic operations performed by dorsal PPC, we used functional MRI and functional connectivity analyses to examine how activation in, and connectivity between, dorsal PPC and ventral temporal regions representing retrieval cues varies as a function of retrieval decision uncertainty. Specifically, participants made a five-point recognition confidence judgment for a series of old and new visually presented words. Consistent with prior studies, memory-related activity patterns dissociated across left dorsal PPC subregions, with activity in the lateral IPS tracking the degree to which participants perceived an item to be old, whereas activity in the SPL increased as a function of decision uncertainty. Importantly, whole-brain functional connectivity analyses further revealed that SPL activity was more strongly correlated with that in the visual word-form area during uncertain relative to certain decisions. These data suggest that the involvement of SPL during episodic retrieval reflects, at least in part, the processing of the retrieval cue, perhaps in service of attempts to increase the mnemonic evidence elicited by the cue. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.
2000-01-01
The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology, using a Commercial-Off-The-Shelf (COTS)-based object-oriented component approach to open, interoperable software development and software reuse.
Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring
NASA Technical Reports Server (NTRS)
Han, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John
2014-01-01
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input/output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture/NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
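The benchmarking statistic, a lagged rank (Spearman) cross-correlation between soil moisture and NDVI, can be sketched with synthetic series as follows; in the paper the comparison is repeated for each assimilation-system component and its linear analog.

```python
# Lagged Spearman rank cross-correlation sketch with synthetic series.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n, lag = 200, 4
soil = rng.standard_normal(n).cumsum()                    # synthetic soil moisture
ndvi = np.roll(soil, lag) + 0.5 * rng.standard_normal(n)  # NDVI lags soil moisture

for test_lag in range(0, 9, 2):
    rho, _ = spearmanr(soil[: n - test_lag], ndvi[test_lag:])
    print(f"lag {test_lag}: rank correlation {rho:+.2f}")  # peaks near lag 4
```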
Novel and Advanced Techniques for Complex IVC Filter Retrieval.
Daye, Dania; Walker, T Gregory
2017-04-01
Inferior vena cava (IVC) filter placement is indicated for the treatment of venous thromboembolism (VTE) in patients with a contraindication to or a failure of anticoagulation. With the advent of retrievable IVC filters and their ease of placement, an increasing number of such filters are being inserted for prophylaxis in patients at high risk for VTE. Available data show that only a small number of these filters are retrieved within the recommended period, if at all, prompting the FDA to issue a statement on the need for their timely removal. With prolonged dwell times, advanced techniques may be needed for filter retrieval in up to 60% of the cases. In this article, we review standard and advanced IVC filter retrieval techniques including single-access, dual-access, and dissection techniques. Complicated filter retrievals carry a non-negligible risk for complications such as filter fragmentation and resultant embolization of filter components, venous pseudoaneurysms or stenoses, and breach of the integrity of the caval wall. Careful pre-retrieval assessment of IVC filter position, any significant degree of filter tilting or of hook, and/or strut epithelialization and caval wall penetration by filter components should be considered using dedicated cross-sectional imaging for procedural planning. In complex cases, the risk for retrieval complications should be carefully weighed against the risks of leaving the filter permanently indwelling. The decision to remove an embedded IVC filter using advanced techniques should be individualized to each patient and made with caution, based on the patient's age and existing comorbidities.
@Note: a workbench for biomedical text mining.
Lourenço, Anália; Carreira, Rafael; Carneiro, Sónia; Maia, Paulo; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Ferreira, Eugénio C; Rocha, Isabel; Rocha, Miguel
2009-08-01
Biomedical Text Mining (BioTM) is providing valuable approaches to the automated curation of scientific literature. However, most efforts have addressed the benchmarking of new algorithms rather than user operational needs. Bridging the gap between BioTM researchers and biologists' needs is crucial to solve real-world problems and promote further research. We present @Note, a platform for BioTM that aims at the effective translation of the advances between three distinct classes of users: biologists, text miners and software developers. Its main functional contributions are the ability to process abstracts and full texts; an information retrieval module enabling PubMed search and journal crawling; a pre-processing module with PDF-to-text conversion, tokenisation and stopword removal; a semantic annotation schema; a lexicon-based annotator; a user-friendly annotation view that allows users to correct annotations; and a text mining module supporting dataset preparation and algorithm evaluation. @Note improves interoperability, modularity and flexibility when integrating in-house and open-source third-party components. Its component-based architecture allows the rapid development of new applications, emphasizing the principles of transparency and simplicity of use. Although development is still ongoing, it has already allowed the development of applications that are currently being used.
NASA Technical Reports Server (NTRS)
Sundermier, Amy (Inventor)
2002-01-01
A method for acquiring and assembling software components at execution time into a client program, where the components may be acquired from remote networked servers is disclosed. The acquired components are assembled according to knowledge represented within one or more acquired mediating components. A mediating component implements knowledge of an object model. A mediating component uses its implemented object model knowledge, acquired component class information and polymorphism to assemble components into an interacting program at execution time. The interactions or abstract relationships between components in the object model may be implemented by the mediating component as direct invocations or indirect events or software bus exchanges. The acquired components may establish communications with remote servers. The acquired components may also present a user interface representing data to be exchanged with the remote servers. The mediating components may be assembled into layers, allowing arbitrarily complex programs to be constructed at execution time.
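A toy rendition of the mediating-component idea follows, with importlib standing in for acquisition from remote networked servers and standard-library classes as stand-in components; the wiring scheme is a hypothetical illustration of the patent's concept, not its actual mechanism.

```python
# Sketch: components are acquired (imported) at execution time and wired
# together by a mediator that holds the object-model knowledge.
import importlib

class Mediator:
    """Assembles acquired components according to object-model knowledge."""
    def __init__(self, wiring):
        self.wiring = wiring  # {role: (module_name, class_name)}

    def assemble(self):
        parts = {}
        for role, (mod_name, cls_name) in self.wiring.items():
            cls = getattr(importlib.import_module(mod_name), cls_name)
            parts[role] = cls()  # instantiate the acquired component
        return parts

# Example wiring using standard-library classes as stand-in components:
mediator = Mediator({"queue": ("collections", "deque"), "rng": ("random", "Random")})
program = mediator.assemble()
program["queue"].append(program["rng"].randint(0, 9))
print(program["queue"])
```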
AdaNET Dynamic Software Inventory (DSI) prototype component acquisition plan
NASA Technical Reports Server (NTRS)
Hanley, Lionel
1989-01-01
A component acquisition plan contains the information needed to evaluate, select, and acquire software and hardware components necessary for successful completion of the AdaNET Dynamic Software Inventory (DSI) Management System Prototype. This plan will evolve and is applicable to all phases of the DSI prototype development. Resources, budgets, schedules, and organizations related to component acquisition activities are provided. The purpose and description of each software or hardware component to be acquired are presented; since this is a plan for the acquisition of all components, this section is not applicable. The procurement activities and events conducted by the acquirer are described, identifying who is responsible, where the activity will be performed, and when the activities will occur for each planned procurement. Acquisition requirements describe the specific requirements and standards to be followed during component acquisition. The activities which will take place during component acquisition are described. A list of abbreviations and acronyms and a glossary are included.
1986-05-01
offering the course is a company. Name and Address of offeror: Tachyon Corporation, 2725 Congress Street, Suite 2H, San Diego, CA 92110. Offeror's...Background: Tachyon Corporation specializes in Ada software quality assurance, computer-hosted instruction, and information retrieval systems, authoring tools...easy to use (on-line help) and can look up or search for terms. Tachyon Corporation 20 COURSE OFFERINGS 2.2. Lecture/Seminar Courses 2.2.1. Company
NASA Technical Reports Server (NTRS)
Papitashvili, N. E.; Papitashvili, V. O.; Allen, J. H.; Morris, L. D.
1995-01-01
The National Geophysical Data Center has the largest collection of geomagnetic data from the worldwide network of magnetic observatories. A data base management system and retrieval/display software have been developed for the archived geomagnetic data (annual means and monthly, daily, hourly, and 1-minute values) and placed on the center's CD-ROMs to provide users with 'user-oriented' and 'user-friendly' support. This paper describes the system, with a brief outline of the options provided.
Mushu, a free- and open source BCI signal acquisition, written in Python.
Venthur, Bastian; Blankertz, Benjamin
2012-01-01
This paper describes Mushu, signal acquisition software for the retrieval and online streaming of electroencephalography (EEG) data. It is written for, but not limited to, the needs of brain-computer interfacing (BCI). Its main goal is to provide a unified interface to EEG data regardless of the amplifier used. It runs under all major operating systems, including Windows, Mac OS, and Linux; is written in Python; and is free and open source software licensed under the terms of the GNU General Public License.
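For illustration, a minimal sketch of the amplifier-agnostic interface such a package implies: client code acquires data through one abstract interface regardless of the concrete amplifier behind it. The class and method names are hypothetical, not Mushu's actual API.

```python
# Hypothetical sketch of an amplifier-agnostic acquisition interface;
# these class and method names illustrate the idea, not Mushu's real API.
import abc
import random

class Amplifier(abc.ABC):
    @abc.abstractmethod
    def get_data(self):
        """Return (samples, markers) accumulated since the last call."""

class DummyAmp(Amplifier):
    def get_data(self):
        samples = [[random.gauss(0, 1) for _ in range(8)]]  # 8 channels
        return samples, []

def acquire(amp: Amplifier, n_blocks: int):
    # Client code never cares which amplifier is behind the interface.
    for _ in range(n_blocks):
        samples, markers = amp.get_data()
        yield samples, markers

for block, _markers in acquire(DummyAmp(), 3):
    print(len(block[0]), "channels")
```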
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonior, Jason D; Hu, Zhen; Guo, Terry N.
This letter presents an experimental demonstration of software-defined-radio-based wireless tomography using computer-hosted radio devices called Universal Software Radio Peripheral (USRP). This experimental brief follows our vision and previous theoretical study of wireless tomography that combines wireless communication and RF tomography to provide a novel approach to remote sensing. Automatic data acquisition is performed inside an RF anechoic chamber. Semidefinite relaxation is used for phase retrieval, and the Born iterative method is utilized for imaging the target. Experimental results are presented, validating our vision of wireless tomography.
ERIC Educational Resources Information Center
Kahle, Brewster; Prelinger, Rick; Jackson, Mary E.; Boyack, Kevin W.; Wylie, Brian N.; Davidson, George S.; Witten, Ian H.; Bainbridge, David; Boddie, Stefan J.; Garrison, William A.; Cunningham, Sally Jo; Borgman, Christine L.; Hessel, Heather
2001-01-01
These six articles discuss various issues relating to digital libraries. Highlights include public access to digital materials; intellectual property concerns; the need for collaboration across disciplines; Greenstone software for construction and presentation of digital information collections; the Colorado Digitization Project; and conferences…
Vegetation Phenology Metrics Derived from Temporally Smoothed and Gap-filled MODIS Data
NASA Technical Reports Server (NTRS)
Tan, Bin; Morisette, Jeff; Wolfe, Robert; Esaias, Wayne; Gao, Feng; Ederer, Greg; Nightingale, Joanne; Nickeson, Jamie E.; Ma, Pete; Pedely, Jeff
2012-01-01
Temporally smoothed and gap-filled vegetation index (VI) data provide a good basis for estimating vegetation phenology metrics. The TIMESAT software was improved by incorporating ancillary information from MODIS products. A simple assessment of the association between retrieved green-up dates and ground observations indicates satisfactory results from the improved TIMESAT software. One application example shows that mapping nectar flow phenology on a continental scale is tractable using hive weight and satellite vegetation data. The phenology data product supports further research in ecology and climate change.
A data-management system for detailed areal interpretive data
Ferrigno, C.F.
1986-01-01
A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
Atlas - a data warehouse for integrative bioinformatics.
Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis
2005-02-21
We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First, Atlas stores data of similar types using common data models, enforcing the relationships between data types. Second, integration is achieved through a combination of APIs, ontology, and tools. The Atlas software is freely available under the GNU General Public License at: http://bioinformatics.ubc.ca/atlas/
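For illustration, a minimal sketch of the relational-warehouse access pattern Atlas describes: source data loaded into common relational models and retrieved through SQL by toolbox-style applications. The schema, table names, and rows below are hypothetical, not Atlas's actual data models.

```python
# Illustration of the relational-warehouse access pattern described:
# entities stored in common data models and queried through SQL.
# The schema and rows here are hypothetical, not Atlas's actual models.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE gene (id INTEGER PRIMARY KEY, symbol TEXT);
CREATE TABLE interaction (gene_a INTEGER, gene_b INTEGER,
                          source TEXT);  -- e.g. 'BIND', 'DIP', 'MINT'
INSERT INTO gene VALUES (1, 'TP53'), (2, 'MDM2');
INSERT INTO interaction VALUES (1, 2, 'BIND');
""")

# Toolbox-style retrieval: all interaction partners of a gene symbol.
rows = db.execute("""
    SELECT g2.symbol, i.source FROM interaction i
    JOIN gene g1 ON g1.id = i.gene_a
    JOIN gene g2 ON g2.id = i.gene_b
    WHERE g1.symbol = ?""", ("TP53",)).fetchall()
print(rows)  # [('MDM2', 'BIND')]
```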
Managing Scientific Software Complexity with Bocca and CCA
Allan, Benjamin A.; Norris, Boyana; Elwasif, Wael R.; ...
2008-01-01
In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC) applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.
ERIC Educational Resources Information Center
Mattenella, L. E.; Velazco, J. W.
1992-01-01
This article briefly describes the development of bibliographic retrieval systems in the Instituto de Beneficio de Minerales (IN BE MI) in Salta, Argentina, using the Mini-micro CDS/ISIS software developed by Unesco. (LRW)
Testing, Testing...Managing Electronic Access in Disparate Times.
ERIC Educational Resources Information Center
Carrington, Bessie M.
1996-01-01
Duke University's Perkins Library (North Carolina) tests electronic resources and services for remote accessibility by examining capabilities on various platforms, operating systems, communications software, and World Wide Web browsers. Problems occur in establishing connections, screen display, navigation or retrieval, keyboard variations, and in…
Bibliographies without Tears: Bibliography-Managers Round-Up.
ERIC Educational Resources Information Center
Science Software Quarterly, 1984
1984-01-01
Reviews and compares "Sci-Mate," "Reference Manager," and "BIBLIOPHILE" software packages used for storage and retrieval tasks involving bibliographic data. Each program handles search tasks well; major differences are in the amount of flexibility in customizing the database structure, their import and export…
Two retrievals from a single cue: A bottleneck persists across episodic and semantic memory.
Orscheschek, Franziska; Strobach, Tilo; Schubert, Torsten; Rickard, Timothy
2018-05-01
There is evidence in the literature that two retrievals from long-term memory cannot occur in parallel. To date, however, that work has explored only the case of two retrievals from newly acquired episodic memory. These studies demonstrated a retrieval bottleneck even after dual-retrieval practice. That retrieval bottleneck may be a global property of long-term memory retrieval, or it may apply only to the case of two retrievals from episodic memory. In the current experiments, we explored whether that apparent dual-retrieval bottleneck applies to the case of one retrieval from episodic memory and one retrieval from highly overlearned semantic memory. Across three experiments, subjects learned to retrieve a left or right keypress response from a set of 14 unique word cues (e.g., black-right keypress). In addition, they learned a verbal response that involved retrieving the antonym of the presented cue (e.g., black-"white"). In the dual-retrieval condition, subjects had to retrieve both the keypress response and the antonym word. The results suggest that the retrieval bottleneck is superordinate to specific long-term memory systems and holds across different memory components. In addition, the results support the assumption of a cue-level response chunking account of learned retrieval parallelism.
NASA Technical Reports Server (NTRS)
Fymat, A. L.; Mease, K. D.
1978-01-01
The technique proposed by Fymat (1976) for retrieving the complex refractive index of atmospheric aerosols using narrowband spectral transmission ratios, taken within an overall narrow spectral interval, is investigated in the case of modelled polydispersions of rural, maritime-continental, maritime-sea spray, and meteoric dust aerosols. It is confirmed that, for size distributions that are not too broad, most of the information comes from a narrow size range of 'active' aerosols, so that under these circumstances the refractive index components can indeed be retrieved essentially independently of the size distribution. For 0.1% accurate data in three colors, the technique can provide the real and imaginary components of the index within 0.07% and 0.3% accuracy, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arhatari, B. D.; ARC Centre of Excellence for Coherent X-ray Science, Melbourne; Harris, A. R.
Phase retrieval tomography has been successfully used to enhance imaging in systems that exhibit poor absorption contrast. However, when highly absorbing regions are present in a sample, so-called metal artefacts can appear in the tomographic reconstruction. We demonstrate that straightforward approaches for metal artefact reconstruction, developed in absorption contrast tomography, can be applied when using phase retrieval. Using a prototype thin film cochlear implant that has high and low absorption components made from iridium (or platinum) and plastic, respectively, we show that segmentation of the various components is possible and hence measurement of the electrode geometry and relative location to other regions of interest can be achieved.
NASA Technical Reports Server (NTRS)
Jefferys, S.; Johnson, W.; Lewis, R.; Rich, R.
1981-01-01
This specification establishes the requirements, concepts, and preliminary design for a set of software known as the IGDS/TRAP Interface Program (ITIP). This software provides the capability to develop at an Interactive Graphics Design System (IGDS) design station process flow diagrams for use by the NASA Coal Gasification Task Team. In addition, ITIP will use the Data Management and Retrieval System (DMRS) to maintain a data base from which a properly formatted input file to the Time-Line and Resources Analysis Program (TRAP) can be extracted. This set of software will reside on the PDP-11/70 and will become the primary interface between the Coal Gasification Task Team and IGDS, DMRS, and TRAP. The user manual for the computer program is presented.
Redundant array of independent disks: practical on-line archiving of nuclear medicine image data.
Lear, J L; Pratt, J P; Trujillo, N
1996-02-01
While various methods for long-term archiving of nuclear medicine image data exist, none support rapid on-line search and retrieval of information. We assembled a 90-Gbyte redundant array of independent disks (RAID) system using ten 9-Gbyte disk drives. The system was connected to a personal computer, and software was used to partition the array into 4-Gbyte sections. All studies (50,000) acquired over a 7-year period were archived in the system. Based on patient name/number and study date, information could be located within 20 seconds and retrieved for display and analysis in less than 5 seconds. RAID offers a practical, redundant method for long-term archiving of nuclear medicine studies that supports rapid on-line retrieval.
Component Prioritization Schema for Achieving Maximum Time and Cost Benefits from Software Testing
NASA Astrophysics Data System (ADS)
Srivastava, Praveen Ranjan; Pareek, Deepak
Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Defining the end of software testing is a crucial feature of any software development project. A premature release involves risks such as undetected bugs, the cost of fixing faults later, and discontented customers. Any software organization would want to achieve the maximum possible benefit from software testing with minimum resources. Testing time and cost need to be optimized to achieve a competitive edge in the market. In this paper, we propose a schema, called the Component Prioritization Schema (CPS), to achieve an effective and uniform prioritization of software components. This schema serves as an extension to the Non-Homogeneous Poisson Process based Cumulative Priority Model. We also introduce an approach for handling time-intensive versus cost-intensive projects.
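For illustration, a minimal sketch of prioritizing components for testing by a weighted score, where shifting weight between usage and cost models time-intensive versus cost-intensive projects. The fields and weights are generic assumptions, not the paper's actual Component Prioritization Schema.

```python
# Generic illustration of prioritising components for testing by a
# weighted score; fields and weights are illustrative assumptions,
# not the paper's actual Component Prioritization Schema.
components = [
    {"name": "auth",    "fault_density": 0.8, "usage": 0.9, "test_cost": 0.3},
    {"name": "reports", "fault_density": 0.4, "usage": 0.5, "test_cost": 0.6},
]

def priority(c, w_fault=0.5, w_usage=0.3, w_cost=0.2):
    # Higher fault density and usage raise priority; higher cost lowers it.
    # Shifting weight between w_usage and w_cost models time- versus
    # cost-intensive projects.
    return (w_fault * c["fault_density"]
            + w_usage * c["usage"]
            - w_cost * c["test_cost"])

for c in sorted(components, key=priority, reverse=True):
    print(c["name"], round(priority(c), 2))
```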
Estimating Software-Development Costs With Greater Accuracy
NASA Technical Reports Server (NTRS)
Baker, Dan; Hihn, Jairus; Lum, Karen
2008-01-01
COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve their estimates of the costs of developing software for their projects. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development-effort-estimation techniques is used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performance of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
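For background, the classic COCOMO relationship that COCOMO-family estimators build on computes effort as a power law of code size scaled by effort multipliers. The constants below are the published basic-COCOMO 'organic mode' values; this is a generic illustration, not COCOMOST's '2cee' methodology.

```python
# Classic COCOMO relationship underlying COCOMO-family estimators:
# effort (person-months) = a * KLOC^b, scaled by effort multipliers.
# a=2.4, b=1.05 are the published basic-COCOMO 'organic mode' values;
# this is background illustration, not COCOMOST's '2cee' methodology.
def cocomo_effort(kloc, a=2.4, b=1.05, multipliers=()):
    effort = a * kloc ** b
    for m in multipliers:      # e.g. cost drivers for complexity, tools
        effort *= m
    return effort

print(round(cocomo_effort(32, multipliers=(1.15,)), 1), "person-months")
```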
15 CFR Supplement No. 6 to Part 742 - Technical Questionnaire for Encryption Items
Code of Federal Regulations, 2012 CFR
2012-01-01
... software, provide the following information: (1) Description of all the symmetric and asymmetric encryption... third-party hardware or software encryption components (if any). Identify the manufacturers of the hardware or software components, including specific part numbers and version information as needed to...
15 CFR Supplement No. 6 to Part 742 - Technical Questionnaire for Encryption Items
Code of Federal Regulations, 2013 CFR
2013-01-01
... software, provide the following information: (1) Description of all the symmetric and asymmetric encryption... third-party hardware or software encryption components (if any). Identify the manufacturers of the hardware or software components, including specific part numbers and version information as needed to...
15 CFR Supplement No. 6 to Part 742 - Technical Questionnaire for Encryption Items
Code of Federal Regulations, 2014 CFR
2014-01-01
... software, provide the following information: (1) Description of all the symmetric and asymmetric encryption... third-party hardware or software encryption components (if any). Identify the manufacturers of the hardware or software components, including specific part numbers and version information as needed to...
System Testing of Ground Cooling System Components
NASA Technical Reports Server (NTRS)
Ensey, Tyler Steven
2014-01-01
This internship focused primarily upon software unit testing of Ground Cooling System (GCS) components, one of the three types of tests (unit, integrated, and COTS/regression) utilized in software verification. Unit tests are used to test the software of necessary components before it is implemented into the hardware. A unit test exercises the control data, usage procedures, and operating procedures of a particular component to determine whether the program is fit for use. Three different files are used to build and run an efficient unit test: a Model Test file (.mdl), a Simulink SystemTest (.test), and an autotest (.m). The Model Test file includes the component being tested with the appropriate Discrete Physical Interface (DPI) for testing. The Simulink SystemTest is a program used to test all of the requirements of the component. The autotest verifies that the component passes Model Advisor and system testing, and puts the results into the proper files. Once unit testing of the GCS components is complete, they can be implemented into the GCS schematic, and the software of the GCS model as a whole can be tested using integrated testing. Unit testing is a critical part of software verification; it allows for the testing of more basic components before a model of higher fidelity is tested, making the process of testing flow in an orderly manner.
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)
2001-01-01
A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is applied to measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in two different O2 absorption bands (gamma and B), and the retrieved products are the tangent-point line-of-sight (LOS) wind component (level 2 retrieval) and the horizontal (u, v) winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line of sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.
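For illustration, a minimal sketch of the binned bias-correction idea: average the observed-minus-forecast (O-F) values within each measurement-parameter bin, then subtract the bin mean from every observation in that bin. The synthetic data and the 16-bin layout are illustrative assumptions, not the HRDI processing code.

```python
# Sketch of binned bias correction: average observed-minus-forecast
# (O-F) values per measurement-parameter bin, then subtract the bin
# mean from each observation. Data and bin layout are illustrative.
import numpy as np

rng = np.random.default_rng(0)
o_minus_f = rng.normal(2.0, 5.0, 1000)     # synthetic LOS wind O-F (m/s)
param_bin = rng.integers(0, 16, 1000)      # 16 measurement-parameter values

bias = np.array([o_minus_f[param_bin == b].mean() for b in range(16)])
corrected = o_minus_f - bias[param_bin]    # remove the systematic part

print(round(o_minus_f.mean(), 2), "->", round(corrected.mean(), 2))
```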
Generic trending and analysis system
NASA Technical Reports Server (NTRS)
Keehan, Lori; Reese, Jay
1994-01-01
The Generic Trending and Analysis System (GTAS) is a generic spacecraft performance monitoring tool developed by NASA Code 511 and Loral Aerosys. It is designed to facilitate quick anomaly resolution and trend analysis. Traditionally, the job of off-line analysis has been performed using hardware and software systems developed for real-time spacecraft contacts, supplemented with a collection of tools developed by Flight Operations Team (FOT) members. Since the number of upcoming missions is increasing, NASA can no longer afford to operate in this manner. GTAS improves control center productivity and effectiveness because it provides a generic solution across multiple missions, eliminating the need for each individual mission to develop duplicate capabilities. It also allows more sophisticated tools to be developed because it draws resources from several projects. In addition, the GTAS software system incorporates commercial off-the-shelf (COTS) software packages and reuses components of other NASA-developed systems wherever possible. GTAS has incorporated lessons learned from previous missions by involving users early in the development process. GTAS users took a proactive role in requirements analysis, design, development, and testing. Because of user involvement, several special tools were designed and are now being developed. GTAS users expressed considerable interest in facilitating data collection for long-term trending and analysis. As a result, GTAS provides easy access to large volumes of processed telemetry data directly in the control center. The GTAS archival and retrieval capabilities are supported by the integration of optical disk technology and a COTS relational database management system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guanjing; Granderson, J.; Brambley, Michael R.
2015-07-01
In the United States, small commercial buildings represent 51% of the total floor space of all commercial buildings and consume nearly 3 quadrillion Btu (3.2 quintillion joules) of site energy annually, presenting an enormous opportunity for energy savings. Retro-commissioning (RCx), the process through which professional energy service providers identify and correct operational problems, has proven to be a cost-effective means to achieve median energy savings of 16%. However, retro-commissioning is not typically conducted at scale throughout the commercial stock. Very few small commercial buildings are retro-commissioned because utility expenses are relatively modest, margins are tighter, and capital for improvements is limited. In addition, small buildings do not have in-house staff with the expertise to identify improvement opportunities. In response, a turnkey hardware-software solution was developed to enable cost-effective, monitoring-based RCx of small commercial buildings. This highly tailored solution enables non-commissioning providers to identify energy and comfort problems, as well as associated cost impacts and remedies. It also facilitates scale by offering energy service providers the means to streamline their existing processes and reduce costs by more than half. The turnkey RCx sensor suitcase consists of two primary components: a suitcase of sensors for short-term building data collection that guides users through the process of deploying and retrieving their data, and a software application that automates analysis of sensor data, identifies problems, and generates recommendations. This paper presents the design and testing of prototype models, including descriptions of the hardware design, analysis algorithms, performance testing, and plans for dissemination.
NASA Technical Reports Server (NTRS)
2012-01-01
Topics include: Bioreactors Drive Advances in Tissue Engineering; Tooling Techniques Enhance Medical Imaging; Ventilator Technologies Sustain Critically Injured Patients; Protein Innovations Advance Drug Treatments, Skin Care; Mass Analyzers Facilitate Research on Addiction; Frameworks Coordinate Scientific Data Management; Cameras Improve Navigation for Pilots, Drivers; Integrated Design Tools Reduce Risk, Cost; Advisory Systems Save Time, Fuel for Airlines; Modeling Programs Increase Aircraft Design Safety; Fly-by-Wire Systems Enable Safer, More Efficient Flight; Modified Fittings Enhance Industrial Safety; Simulation Tools Model Icing for Aircraft Design; Information Systems Coordinate Emergency Management; Imaging Systems Provide Maps for U.S. Soldiers; High-Pressure Systems Suppress Fires in Seconds; Alloy-Enhanced Fans Maintain Fresh Air in Tunnels; Control Algorithms Charge Batteries Faster; Software Programs Derive Measurements from Photographs; Retrofits Convert Gas Vehicles into Hybrids; NASA Missions Inspire Online Video Games; Monitors Track Vital Signs for Fitness and Safety; Thermal Components Boost Performance of HVAC Systems; World Wind Tools Reveal Environmental Change; Analyzers Measure Greenhouse Gasses, Airborne Pollutants; Remediation Technologies Eliminate Contaminants; Receivers Gather Data for Climate, Weather Prediction; Coating Processes Boost Performance of Solar Cells; Analyzers Provide Water Security in Space and on Earth; Catalyst Substrates Remove Contaminants, Produce Fuel; Rocket Engine Innovations Advance Clean Energy; Technologies Render Views of Earth for Virtual Navigation; Content Platforms Meet Data Storage, Retrieval Needs; Tools Ensure Reliability of Critical Software; Electronic Handbooks Simplify Process Management; Software Innovations Speed Scientific Computing; Controller Chips Preserve Microprocessor Function; Nanotube Production Devices Expand Research Capabilities; Custom Machines Advance Composite Manufacturing; Polyimide Foams Offer Superior Insulation; Beam Steering Devices Reduce Payload Weight; Models Support Energy-Saving Microwave Technologies; Materials Advance Chemical Propulsion Technology; and High-Temperature Coatings Offer Energy Savings.
Wireless data collection retrievals of bridge inspection/management information.
DOT National Transportation Integrated Search
2017-02-28
To increase the efficiency and reliability of bridge inspections, MDOT contracted to have a 3D-model-based data entry application for mobile tablets developed to aid inspectors in the field. The 3D Bridge App is a mobile software tool designed to fac...
Collecting and Animating Online Satellite Images.
ERIC Educational Resources Information Center
Irons, Ralph
1995-01-01
Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…
Database in Artificial Intelligence.
ERIC Educational Resources Information Center
Wilkinson, Julia
1986-01-01
Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…
DAS: A Data Management System for Instrument Tests and Operations
NASA Astrophysics Data System (ADS)
Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.
2014-05-01
The Data Access System (DAS) is a data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing, and quick-look analysis of data acquired from scientific instruments. The DAS provides a data access layer mainly targeted to software applications: quick-look displays, pre-processing pipelines, and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types, and providing an object-oriented query language; and a data management component, which maps the metadata of the DDL data types into a relational Data Base Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query, and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++, and Python. The mapping of metadata to the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle, and PostgreSQL.
Development and implementation of an Integrated Water Resources Management System (IWRMS)
NASA Astrophysics Data System (ADS)
Flügel, W.-A.; Busch, C.
2011-04-01
One of the innovative objectives in the EC project BRAHMATWINN was the development of a stakeholder-oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents them in a user-friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework which integrates different types of basin information and which supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web-based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three-tier software framework which uses (i) HTML/JavaScript at the client tier, (ii) the PHP programming language to realize the application tier, and (iii) a PostgreSQL/PostGIS database tier to manage and store all data, except the DANUBIA modelling raw data, which are file based and registered in the database tier. All three tiers can reside on one or different computers and are adapted to the local hardware infrastructure. IWRMS as well as RBIS are based on Open Source Software (OSS) components, and flexible, time-saving access to the database is guaranteed by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage: http://www.brahmatwinn.uni-jena.de and a user manual for the RBIS is available for download as well.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-09
..., Associated Software, and Products Containing the Same AGENCY: U.S. International Trade Commission. ACTION..., components thereof, associated software, and products containing the same by reason of infringement of..., components thereof, associated software, and products containing the same that infringe one or more of claims...
Achieving Better Buying Power through Acquisition of Open Architecture Software Systems: Volume 1
2016-01-06
Supporting "Bring Your Own Devices" (BYOD). New business models for OA software components: franchising, enterprise licensing, metered usage. IP and cybersecurity requirements will need continuous attention.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-06
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-795] Certain Video Analytics Software... filed by ObjectVideo, Inc. of Reston, Virginia. 76 FR 45859 (Aug. 1, 2011). The complaint, as amended... certain video analytics software, systems, components thereof, and products containing same by reason of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-852] Certain Video Analytics Software..., 2012, based on a complaint filed by ObjectVideo, Inc. (``ObjectVideo'') of Reston, Virginia. 77 FR... United States after importation of certain video analytics software systems, components thereof, and...
Retrieving background surface reflectance of Himawari-8/AHI using BRDF modeling
NASA Astrophysics Data System (ADS)
Choi, Sungwon; Seo, Minji; Lee, Kyeong-sang; Han, Kyung-soo
2017-04-01
Remote sensing is more important today than ever, and retrieving surface reflectance is a central task within it; many agencies retrieve surface reflectance using both polar-orbiting and geostationary satellites. We studied the Bidirectional Reflectance Distribution Function (BRDF), which is used to retrieve surface reflectance. In the BRDF equation, surface reflectance is calculated from the BRDF components and angular data. The BRDF components quantify three kinds of scattering: isotropic, geometric, and volumetric. To produce the Background Surface Reflectance (BSR) of Himawari-8/AHI, we applied the BRDF model to five bands (bands 1-5) and generated a BSR for each of the five channels. For validation, we compared the BSR with the top-of-canopy (TOC) reflectance of AHI. The biases range from -0.00223 to 0.008328, and the root mean square errors (RMSE) range from 0.045 to 0.049. We conclude that the BSR can be used in place of TOC reflectance in remote sensing, mitigating the weaknesses of TOC reflectance.
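For illustration, a minimal sketch of the linear kernel-driven BRDF form alluded to above, in which reflectance is modelled as an isotropic weight plus volumetric and geometric kernel terms. The RossThick volumetric kernel below is the standard published form; the geometric kernel value and the kernel weights are treated as given inputs, so this is an illustration of the method, not the authors' retrieval code.

```python
# Linear kernel-driven BRDF sketch: reflectance modelled as
# f_iso + f_vol*K_vol + f_geo*K_geo. The RossThick volumetric kernel is
# the standard published form; kernel weights and the geometric kernel
# value are assumed inputs here (illustration, not the authors' code).
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel; all angles in radians."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def modelled_reflectance(f_iso, f_vol, f_geo, k_vol, k_geo):
    return f_iso + f_vol * k_vol + f_geo * k_geo

k_vol = ross_thick(np.radians(30), np.radians(10), np.radians(90))
print(round(modelled_reflectance(0.05, 0.02, 0.01, k_vol, -1.2), 4))
```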
Techniques for Soundscape Retrieval and Synthesis
NASA Astrophysics Data System (ADS)
Mechtley, Brandon Michael
The study of acoustic ecology is concerned with the manner in which life interacts with its environment as mediated through sound. As such, a central focus is that of the soundscape: the acoustic environment as perceived by a listener. This dissertation examines the application of several computational tools in the realms of digital signal processing, multimedia information retrieval, and computer music synthesis to the analysis of the soundscape. Namely, these tools include a) an open source software library, Sirens, which can be used for the segmentation of long environmental field recordings into individual sonic events and compare these events in terms of acoustic content, b) a graph-based retrieval system that can use these measures of acoustic similarity and measures of semantic similarity using the lexical database WordNet to perform both text-based retrieval and automatic annotation of environmental sounds, and c) new techniques for the dynamic, realtime parametric morphing of multiple field recordings, informed by the geographic paths along which they were recorded.
Mineral Resources Data System (MRDS)
Mason, G.T.; Arndt, R.E.
1996-01-01
The U.S. Geological Survey (USGS) operates the Mineral Resources Data System (MRDS), a digital system that contained 111,955 records on Sept. 1, 1995. Records describe metallic and industrial commodity deposits, mines, prospects, and occurrences in the United States and selected other countries. These records have been created over the years by USGS commodity specialists and through cooperative agreements with geological surveys of U.S. States and other countries. This CD-ROM contains the complete MRDS data base, several subsets of it, and software to allow data retrieval and display. Data retrievals are made by using GSSEARCH, a program that is included on this CD-ROM. Retrievals are made by specifying fields or any combination of the fields that provide information on deposit name, location, commodity, deposit model type, geology, mineral production, reserves, and references. A tutorial is included. Retrieved records may be printed or written to a hard disk file in four different formats: ascii, fixed, comma delimited, and DBASE compatible.
NASA Technical Reports Server (NTRS)
Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.
1991-01-01
The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.
Giraldo, N A; Amariles, P; Monsalve, M; Faus, M J
Highly active antiretroviral therapy has extended the expected lifespan of patients with HIV/AIDS. However, the therapeutic benefits of some drugs used simultaneously with highly active antiretroviral therapy may be adversely affected by drug interactions. The goal was to design and develop free software to facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS. A comprehensive Medline/PubMed database search of drug interactions was performed. Articles that recognized any drug interactions in HIV disease were selected. The publications accessed were limited to human studies in English or Spanish, with full texts retrieved. Drug interactions were analyzed, assessed, and grouped into four levels of clinical relevance according to gravity and probability. Software to systematize the information regarding drug interactions and their clinical relevance was designed and developed. Overall, 952 different references were retrieved and 446 selected; in addition, 67 articles were selected from the citation lists of identified articles. A total of 2119 pairs of drug interactions were identified; of this group, 2006 (94.7%) were drug-drug interactions, 1982 (93.5%) had an identified pharmacokinetic mechanism, and 1409 (66.5%) were mediated by enzyme inhibition. In terms of clinical relevance, 1285 (60.6%) drug interactions were clinically significant in patients with HIV (levels 1 and 2). With this information, a software program that facilitates identification and assessment of the clinical relevance of antiretroviral drug interactions (SIMARV®) was developed. A free software package with information on 2119 pairs of antiretroviral drug interactions was designed and developed that could facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS.
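For illustration, a minimal sketch of grading an interaction into one of four levels of clinical relevance from gravity and probability scores. The scoring scale and cut-offs are hypothetical; the authors' actual criteria are not reproduced here.

```python
# Illustrative grading of a drug interaction into four clinical-relevance
# levels from gravity and probability; the scale and cut-offs here are
# hypothetical, not the criteria used in the study.
def relevance_level(gravity, probability):
    """gravity and probability each scored 1 (low) to 3 (high)."""
    score = gravity * probability
    if score >= 7:
        return 1   # very high relevance: avoid the combination
    if score >= 5:
        return 2   # high relevance: usually requires intervention
    if score >= 3:
        return 3   # medium relevance: monitor the patient
    return 4       # low relevance: no action generally needed

print(relevance_level(gravity=3, probability=3))  # 1
```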
New Methods for Air Quality Model Evaluation with Satellite Data
NASA Astrophysics Data System (ADS)
Holloway, T.; Harkey, M.
2015-12-01
Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI HCHO (NASA retrieval); MOPITT CO (NASA retrieval); and MODIS AOD (NASA retrieval). More information at http://nelson.wisc.edu/sage/data-and-models/software.php.
NASA Astrophysics Data System (ADS)
Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan
2010-04-01
We show how the ATLAS offline software is ported to the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform, Scientific Linux 4 (SL4). An instance of the SL4 AMI is then started on EC2, and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI; starting, monitoring, and stopping jobs; and retrieving job output from S3 is controlled from a client machine using Python scripts that implement the Amazon EC2/S3 API via the boto library, working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
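For illustration, a minimal sketch of the client-side control flow described: launch an instance of the prepared AMI with the boto library, export job output to S3, and terminate the instance. The AMI ID, bucket name, and key names are placeholders, and credentials are assumed to come from the environment; this is not the authors' actual script.

```python
# Sketch of the control flow described: launch an instance of the
# prepared AMI with boto, push job output to S3, then terminate.
# AMI ID, bucket, and key names are placeholders, not the authors' values.
import boto

ec2 = boto.connect_ec2()                     # credentials from environment
res = ec2.run_instances("ami-00000000",      # placeholder SL4 AMI ID
                        instance_type="m1.large")
instance = res.instances[0]

# ... the job runs inside the instance; afterwards export its output ...
s3 = boto.connect_s3()
bucket = s3.get_bucket("atlas-job-output")   # placeholder bucket name
key = bucket.new_key("job-1234/results.root")
key.set_contents_from_filename("results.root")

instance.terminate()
```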
LD2SNPing: linkage disequilibrium plotter and RFLP enzyme mining for tag SNPs
Chang, Hsueh-Wei; Chuang, Li-Yeh; Chang, Yan-Jhu; Cheng, Yu-Huei; Hung, Yu-Chen; Chen, Hsiang-Chi; Yang, Cheng-Hong
2009-01-01
Background Linkage disequilibrium (LD) mapping is commonly used to evaluate markers for genome-wide association studies. Most types of LD software focus strictly on LD analysis and visualization, but lack supporting services for genotyping. Results We developed a freeware called LD2SNPing, which provides a complete package of mining tools for genotyping and LD analysis environments. The software provides SNP ID- and gene-centric online retrievals for SNP information and tag SNP selection from dbSNP/NCBI and HapMap, respectively. Restriction fragment length polymorphism (RFLP) enzyme information for SNP genotype is available to all SNP IDs and tag SNPs. Single and multiple SNP inputs are possible in order to perform LD analysis by online retrieval from HapMap and NCBI. An LD statistics section provides D, D', r2, δQ, ρ, and the P values of the Hardy-Weinberg Equilibrium for each SNP marker, and Chi-square and likelihood-ratio tests for the pair-wise association of two SNPs in LD calculation. Finally, 2D and 3D plots, as well as plain-text output of the results, can be selected. Conclusion LD2SNPing thus provides a novel visualization environment for multiple SNP input, which facilitates SNP association studies. The software, user manual, and tutorial are freely available at . PMID:19500380
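For illustration, the standard textbook definitions of the pair-wise LD statistics D, D′, and r² that the tool reports, computed from haplotype and allele frequencies (a minimal sketch; the input frequencies are assumed given):

```python
# Standard definitions of pair-wise LD statistics D, D', and r^2,
# computed from haplotype and allele frequencies (textbook formulas).
def ld_stats(p_ab, p_a, p_b):
    """p_ab: freq of haplotype AB; p_a, p_b: freqs of alleles A and B."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = d / d_max if d_max else 0.0
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

print(ld_stats(p_ab=0.4, p_a=0.5, p_b=0.6))  # ≈ (0.1, 0.5, 0.167)
```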
A semi-automated workflow for biodiversity data retrieval, cleaning, and quality control
Mathew, Cherian; Obst, Matthias; Vicario, Saverio; Haines, Robert; Williams, Alan R.; de Jong, Yde; Goble, Carole
2014-01-01
The compilation and cleaning of data needed for analyses and prediction of species distributions is a time-consuming process requiring a solid understanding of data formats and service APIs provided by biodiversity informatics infrastructures. We designed and implemented a Taverna-based Data Refinement Workflow which integrates taxonomic data retrieval, data cleaning, and data selection into a consistent, standards-based, and effective system hiding the complexity of underlying service infrastructures. The workflow can be freely used both locally and through a web portal which does not require additional software installations by users. PMID:25535486
Retrieving and Indexing Spatial Data in the Cloud Computing Environment
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Wang, Sheng; Zhou, Daliang
In order to address the drawbacks of spatial data storage in common Cloud Computing platforms, we design and present a framework for retrieving, indexing, accessing, and managing spatial data in the Cloud environment. An interoperable spatial data object model is provided based on the Simple Feature coding rules from the OGC, such as Well Known Binary (WKB) and Well Known Text (WKT). The classic spatial indexing algorithms, such as the Quad-Tree and R-Tree, are re-designed for the Cloud Computing environment. Finally, we develop prototype software based on Google App Engine to implement the proposed model.
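For illustration, a toy point quad-tree with insert logic, the kind of structure the paper re-designs for the Cloud environment; this in-memory version is purely illustrative and omits any cloud storage backend.

```python
# Toy in-memory point quad-tree (insert only), illustrating the kind of
# structure re-designed for cloud storage; no cloud backend here.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds, self.capacity = (x0, y0, x1, y1), capacity
        self.points, self.children = [], None

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x < x1 and y0 <= y < y1):
            return False                      # point outside this node
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()                     # node full: subdivide
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for p in self.points:                 # push points down a level
            any(c.insert(*p) for c in self.children)
        self.points = []

qt = QuadTree(0, 0, 100, 100)
for pt in [(10, 10), (80, 20), (50, 50), (30, 70), (90, 90)]:
    qt.insert(*pt)
```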
Retrieval techniques: LVLH and inertially stabilized payloads
NASA Technical Reports Server (NTRS)
Yglesias, J. A.
1980-01-01
Procedures and techniques are discussed for retrieving payloads that are inertially or local vertical/local horizontal (LVLH) stabilized. Selection of the retrieval profile to be used depends on several factors: (1) control authority of the payload, (2) payload sensitivity to primary reaction control system (PRCS) plumes, (3) whether the payload is inertially or LVLH stabilized, (4) location of the grapple fixture, and (5) orbiter propellant consumption. The general retrieval profiles recommended are a V-bar approach for payloads that are LVLH or gravity-gradient stabilized, and the V-bar approach with one or two phase flyaround for inertially stabilized payloads. Once the general type of profile has been selected, the detailed retrieval profile and timeline should consider the various guidelines, groundrules, and constraints associated with a particular payload or flight. Reaction control system (RCS) propellant requirements for the recommended profiles range from 200 to 1500 pounds, depending on such factors as braking techniques, flyaround maneuvers (if necessary), and stationkeeping operations. The time required to perform a retrieval (starting from 1000 feet) varies from 20 to 130 minutes, depending on the complexity of the profile. The goals of this project are to develop a profile which ensures mission success; to make the retrieval profiles simple; and to keep the pilot workload to a minimum by making use of the automatic features of the orbiter flight software whenever possible.
Dynamic Neural Networks Supporting Memory Retrieval
St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.
2011-01-01
How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407
NASA Astrophysics Data System (ADS)
Zhao, Hong; Li, Changjun; Li, Hongping; Lv, Kebo; Zhao, Qinghui
2016-06-01
The sea surface salinity (SSS) is a key parameter in monitoring ocean states, and observing SSS can promote understanding of the global water cycle. This paper provides a new approach for retrieving sea surface salinity from Soil Moisture and Ocean Salinity (SMOS) satellite data. Based on a principal component regression (PCR) model, SSS can be retrieved from the brightness temperature data of SMOS L2 measurements and auxiliary data. Twenty-six pairs of matchup data are used in model validation for the South China Sea (in the area 4°-25°N, 105°-125°E). Compared with in-situ SSS, the RMSE of the PCR-retrieved SSS is 0.37 psu (practical salinity units), versus 1.65 psu for SMOS SSS1. The corresponding Argo daily salinity data from April to June 2013 are also used in our validation, giving an RMSE of 0.46 psu, compared with 1.82 psu for the daily averaged SMOS L2 products. This indicates that the PCR model is valid and may provide a good approach for retrieving SSS from SMOS satellite data.
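For illustration, a minimal principal component regression sketch of the retrieval idea: project brightness-temperature predictors onto leading principal components and regress salinity on them. The data below are synthetic stand-ins, not SMOS measurements.

```python
# Minimal principal component regression (PCR) sketch: project
# brightness-temperature predictors onto leading principal components,
# then regress salinity on them. Synthetic data, not SMOS measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
tb = rng.normal(100, 5, (26, 6))            # 26 matchups, 6 TB channels
sss = 33 + 0.02 * tb.sum(axis=1) + rng.normal(0, 0.3, 26)

pcr = make_pipeline(PCA(n_components=3), LinearRegression())
pcr.fit(tb, sss)
rmse = np.sqrt(np.mean((pcr.predict(tb) - sss) ** 2))
print(f"training RMSE: {rmse:.2f} psu")
```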
IRIT at TREC 2012 Contextual Suggestion Track
2012-11-01
DOT National Transportation Integrated Search
2017-04-15
In this 3-year project, the research team developed the Hydrologic Disaster Forecast and Response (HDFR) system, a set of integrated software tools for end users that streamlines hydrologic prediction workflows involving automated retrieval of hetero...
ERIC Educational Resources Information Center
Burton, Adrian P.
1995-01-01
Discusses accessing online electronic documents at the European Telecommunications Satellite Organization (EUTELSAT). Highlights include off-site paper document storage, the document management system, benefits, the EUTELSAT Standard IBM Access software, implementation, the development process, and future enhancements. (AEF)
Imaging Technology in Libraries: Photo CD Offers New Possibilities.
ERIC Educational Resources Information Center
Beiser, Karl
1993-01-01
Describes Kodak's Photo CD technology, a format for the storage and retrieval of photographic images in electronic form. Highlights include current and future Photo CD formats; computer imaging technology; ownership issues; hardware for using Photo CD; software; library and information center applications, including image collections and…
14 CFR 1214.205 - Revisit and/or retrieval services.
Code of Federal Regulations, 2012 CFR
2012-01-01
... a scheduled Shuttle flight, he will only pay for added mission planning, unique hardware or software... Section 1214.205 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT... priced on the basis of estimated costs. If a special dedicated Shuttle flight is required, the full...
Teaching Information Retrieval: Lessons from Cornell.
ERIC Educational Resources Information Center
Stewart, Linda Guyotte; Markiewicz, James
1986-01-01
This article describes two separate workshops offered to college faculty during the spring 1984 semester: one in online bibliographic searching, one in using computers to manage personal files of bibliographic references (examination of existing systems, types of software available). A telephone survey evaluation conducted 3 months after sessions…
The Use of Microcomputers for Information Retrieval.
ERIC Educational Resources Information Center
Scharff, L.; And Others
1983-01-01
Provides a detailed description of the microcomputer hardware and software used by the Euronet-Diane Launch Team for automating a number of functions associated with online searching. Background, login procedures, program architecture, maintenance problems, comments on lessons learned, and suggestions for avoiding difficulties with this type of…
DigiSeis—A software component for digitizing seismic signals using the PC sound card
NASA Astrophysics Data System (ADS)
Amin Khan, Khalid; Akhter, Gulraiz; Ahmad, Zulfiqar
2012-06-01
An innovative software-based approach to developing an inexpensive experimental seismic recorder is presented. This approach requires no additional hardware, as the built-in PC sound card is used for digitization of seismic signals. DigiSeis, an ActiveX component, is developed to capture the digitized seismic signals from the sound card and deliver them to applications for processing and display. A seismic recorder application, SeisWave, is built on top of this component; it provides real-time monitoring and display of seismic events picked up by a pair of external geophones. This recorder can be used as an educational aid for conducting seismic experiments. It can also be connected to suitable seismic sensors to record earthquakes. The software application and the ActiveX component are available for download. This component can be used to develop seismic recording applications according to user-specific requirements.
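The sound-card-as-digitizer concept can be illustrated with a short capture script; this sketch assumes the third-party `sounddevice` package and a stereo line-in, and stands in for the (ActiveX, Windows-specific) DigiSeis component rather than reproducing it:

```python
# Record two geophone-like channels through the PC sound card and report
# the strongest arrival on each channel.
import numpy as np
import sounddevice as sd

fs = 44100                       # sound cards digitize at audio rates
duration = 5.0                   # seconds of "seismic" recording
frames = int(fs * duration)

data = sd.rec(frames, samplerate=fs, channels=2, dtype="float32")
sd.wait()                        # block until the buffer is full

for ch in range(data.shape[1]):
    peak = np.argmax(np.abs(data[:, ch]))
    print(f"channel {ch}: peak {data[peak, ch]:+.4f} at t={peak/fs:.3f}s")
```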
Applying Query Structuring in Cross-language Retrieval.
ERIC Educational Resources Information Center
Pirkola, Ari; Puolamaki, Deniz; Jarvelin, Kalervo
2003-01-01
Explores ways to apply query structuring in cross-language information retrieval. Tested were: English queries translated into Finnish using an electronic dictionary, and run in a Finnish newspaper databases; effects of compound-based structuring using a proximity operator for translation equivalents of query language compound components; and a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha
2012-10-19
The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic structure theory based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.
Building Your Own Web Course: The Case for Off-the-Shelf Component Software.
ERIC Educational Resources Information Center
Kaplan, Howard
1998-01-01
Compares the features, advantages, and disadvantages of two major software options available for designing web courses: (1) component, off-the shelf software that allows for creation of audio slide lectures, course materials, discussion forums, animations, synchronous chat groups, quiz creators, and electronic mail, and (2) integrated packages…
Improved parallel data partitioning by nested dissection with applications to information retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show that partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
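To make the combinatorial objective concrete, the following sketch computes the total communication volume of a sparse matrix-vector multiply under a simple 1D row partition (the paper's 2D nested dissection method generalizes this); the matrix and partition are synthetic:

```python
# Communication volume of y = A @ x when rows of A (and the matching
# entries of x) are owned by processes given in `part`: each x[j] must be
# sent once to every remote process whose rows touch column j.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
A = sp.random(8, 8, density=0.3, random_state=rng, format="csc")
part = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # owner of each row / x entry

volume = 0
for j in range(A.shape[1]):
    rows = A.indices[A.indptr[j]:A.indptr[j + 1]]
    needers = set(part[rows]) - {part[j]}   # processes needing x[j] remotely
    volume += len(needers)
print("total communication volume:", volume)
```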
Poor readers' retrieval mechanism: efficient access is not dependent on reading skill
Johns, Clinton L.; Matsuki, Kazunaga; Van Dyke, Julie A.
2015-01-01
A substantial body of evidence points to a cue-based direct-access retrieval mechanism as a crucial component of skilled adult reading. We report two experiments aimed at examining whether poor readers are able to make use of the same retrieval mechanism. This is significant in light of findings that poor readers have difficulty retrieving linguistic information (e.g., Perfetti, 1985). Our experiments are based on a previous demonstration of direct-access retrieval in language processing, presented in McElree et al. (2003). Experiment 1 replicates the original result using an auditory implementation of the Speed-Accuracy Tradeoff (SAT) method. This finding represents a significant methodological advance, as it opens up the possibility of exploring retrieval speeds in non-reading populations. Experiment 2 provides evidence that poor readers do use a direct-access retrieval mechanism during listening comprehension, despite overall poorer accuracy and slower retrieval speeds relative to skilled readers. The findings are discussed with respect to hypotheses about the source of poor reading comprehension. PMID:26528212
Racsmány, Mihály; Szőllősi, Ágnes; Bencze, Dorottya
2018-01-01
The "testing effect" refers to the striking phenomenon that repeated retrieval practice is one of the most effective learning strategies, and certainly more advantageous for long-term learning, than additional restudying of the same information. How retrieval can boost the retention of memories is still without unanimous explanation. In 3 experiments, focusing on the reaction time (RT) of retrieval, we showed that RT of retrieval during retrieval practice followed a power function speed up that typically characterizes automaticity and skill learning. More important, it was found that the measure of goodness of fit to this power function was associated with long-term recall success. Here we suggest that the automatization of retrieval is an explanatory component of the testing effect. As a consequence, retrieval-based learning has the properties characteristic of skill learning: diminishing involvement of attentional processes, faster processing, resistance to interference effects, and lower forgetting rate. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Retrieval Enhances Route Knowledge Acquisition, but Only When Movement Errors Are Prevented
ERIC Educational Resources Information Center
Kelly, Jonathan W.; Carpenter, Shana K.; Sjolund, Lori A.
2015-01-01
Studies of the "testing effect" have shown that retrieval significantly improves learning. However, most of these studies have been restricted to simple types of declarative verbal knowledge. Five experiments were designed to explore whether testing improves acquisition of route knowledge, which has a procedural component consisting of…
SLIMMER--A UNIX System-Based Information Retrieval System.
ERIC Educational Resources Information Center
Waldstein, Robert K.
1988-01-01
Describes an information retrieval system developed at Bell Laboratories to create and maintain a variety of different but interrelated databases, and to provide controlled access to these databases. The components discussed include the interfaces, indexing rules, display languages, response time, and updating procedures of the system. (6 notes…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, David D.; Clough, Shepard A.; Liljegren, James C.
2007-11-01
Ground-based two-channel microwave radiometers have been used for over 15 years by the Atmospheric Radiation Measurement (ARM) program to provide observations of downwelling emitted radiance from which precipitable water vapor (PWV) and liquid water path (LWP), two geophysical parameters critical for many areas of atmospheric research, are retrieved. An algorithm that utilizes two advanced retrieval techniques, a computationally expensive physical-iterative approach and an efficient statistical method, has been developed to retrieve these parameters. An important component of this Microwave Retrieval (MWRRET) algorithm is the determination of small (< 1 K) offsets that are subtracted from the observed brightness temperatures before the retrievals are performed. Accounting for these offsets removes systematic biases from the observations and/or the model spectroscopy necessary for the retrieval, significantly reducing the systematic biases in the retrieved LWP. The MWRRET algorithm provides significantly more accurate retrievals than the original ARM statistical retrieval, which uses monthly retrieval coefficients. By combining the two retrieval methods with the application of brightness temperature offsets to reduce the spurious LWP bias in clear skies, the MWRRET algorithm provides significantly better retrievals of PWV and LWP from the ARM two-channel microwave radiometers compared to the original ARM product.
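A toy sketch of the two steps described above, subtracting small brightness-temperature offsets and then applying a statistical (regression) retrieval of PWV and LWP; the offsets and coefficients below are invented for illustration:

```python
# Two-channel statistical retrieval after bias-offset removal.
import numpy as np

tb_obs = np.array([25.4, 14.8])        # observed Tb at 23.8 / 31.4 GHz (K)
tb_offset = np.array([0.4, -0.2])      # small (< 1 K) offsets, assumed known
tb = tb_obs - tb_offset                # remove systematic bias first

# statistical retrieval: linear model x = c0 + C @ Tb (illustrative values)
c0 = np.array([-0.5, -0.05])
C = np.array([[0.11, -0.04],           # PWV (cm) sensitivities
              [-0.002, 0.009]])        # LWP (mm) sensitivities
pwv, lwp = c0 + C @ tb
print(f"PWV ~ {pwv:.2f} cm, LWP ~ {lwp * 1000:.0f} g/m^2")  # 1 mm = 1000 g/m^2
```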
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-08-01
We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with various instrumental disturbances are rejected, and full-waveform inversion in a space-time grid around a provided hypocentre. A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied to a data set from the Swiss seismic network and the results are compared with the existing high-quality MT catalogue. The software package, programmed in Python, is designed to be as versatile as possible in order to be applicable in networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large pre-existing earthquake catalogues and data sets.
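The covariance-weighting idea can be sketched in a few lines: estimate a stationary data covariance from pre-event noise and use it to weight a linear least-squares inversion. Everything below (noise, Green's functions, source parameters) is synthetic:

```python
# Noise-covariance-weighted least squares, the linear core of a CMT inversion.
import numpy as np
from scipy.linalg import toeplitz, cho_factor, cho_solve

rng = np.random.default_rng(2)
noise = rng.normal(size=2000)                    # pre-event noise record
n = 100                                          # data window length
acf = np.correlate(noise, noise, "full")[len(noise) - 1:len(noise) - 1 + n]
acf /= len(noise)
C = toeplitz(acf) + 1e-6 * np.eye(n)             # stationary data covariance

G = rng.normal(size=(n, 6))                      # Green's function matrix
m_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])
d = G @ m_true + rng.normal(scale=0.1, size=n)   # noisy "waveform" data

cf = cho_factor(C)
GtCi = G.T @ cho_solve(cf, np.eye(n))            # G^T C^-1
m_est = np.linalg.solve(GtCi @ G, GtCi @ d)      # weighted normal equations
print("recovered moment-tensor-like parameters:", np.round(m_est, 2))
```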
Data Interpretation in the Digital Age
Leonelli, Sabina
2014-01-01
The consultation of internet databases and the related use of computer software to retrieve, visualise and model data have become key components of many areas of scientific research. This paper focuses on the relation of these developments to understanding the biology of organisms, and examines the conditions under which the evidential value of data posted online is assessed and interpreted by the researchers who access them, in ways that underpin and guide the use of those data to foster discovery. I consider the types of knowledge required to interpret data as evidence for claims about organisms, and in particular the relevance of knowledge acquired through physical interaction with actual organisms to assessing the evidential value of data found online. I conclude that familiarity with research in vivo is crucial to assessing the quality and significance of data visualised in silico; and that studying how biological data are disseminated, visualised, assessed and interpreted in the digital age provides a strong rationale for viewing scientific understanding as a social and distributed, rather than individual and localised, achievement. PMID:25729262
StreptomycesInforSys: A web-enabled information repository
Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P
2012-01-01
Members of Streptomyces produce 70% of natural bioactive products. There is considerable amount of information available based on polyphasic approach for classification of Streptomyces. However, this information based on phenotypic, genotypic and bioactive component production profiles is crucial for pharmacological screening programmes. This is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information using combinations of search options to aid in efficient screening of new isolates. This will help in the preliminary categorization of appropriate groups. It is a free relational database compatible with existing operating systems. A cross platform technology with XAMPP Web server has been used to develop, manage, and facilitate the user query effectively with database support. Employment of PHP, a platform-independent scripting language, embedded in HTML and the database management software MySQL will facilitate dynamic information storage and retrieval. The user-friendly, open and flexible freeware (PHP, MySQL and Apache) is foreseen to reduce running and maintenance cost. Availability www.sis.biowaves.org PMID:23275736
Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.
Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay
2016-11-01
Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.
BioModels.net Web Services, a free and integrated toolkit for computational modelling software.
Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille
2010-05-01
Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding biological systems in their entirety, by allowing them to retrieve biological models into their own tools, combine queries into workflows, and analyse models efficiently.
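A hedged sketch of programmatic SOAP access from Python using the `zeep` client; the WSDL location and the `getModelSBMLById` operation name are quoted from memory of the legacy BioModels Web Services and should be checked against the current documentation:

```python
# Query a SOAP service whose operations are described in a WSDL file.
from zeep import Client

# assumed legacy endpoint; verify before use
wsdl = "https://www.ebi.ac.uk/biomodels-main/services/BioModelsWebServices?wsdl"
client = Client(wsdl)

# retrieve the SBML document of a curated model by its identifier
sbml = client.service.getModelSBMLById("BIOMD0000000001")
print(sbml[:200])
```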
A Community Data Model for Hydrologic Observations
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.
2006-12-01
The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database, designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross-dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from, and ingestion of data into, the database. These web services, hosted by the San Diego Supercomputer Center, make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Simple Object Access Protocol (SOAP) capability. This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; and (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.
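The "relational database at the single observation level" premise can be sketched with SQLite; the table and column names below are illustrative, not the actual CUAHSI ODM schema:

```python
# One row per observation, with sites and variables factored into
# reference tables so queries can cut across any dimension.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sites (site_id INTEGER PRIMARY KEY, name TEXT,
                    latitude REAL, longitude REAL);
CREATE TABLE variables (variable_id INTEGER PRIMARY KEY, name TEXT, units TEXT);
CREATE TABLE observations (
    obs_id      INTEGER PRIMARY KEY,
    site_id     INTEGER REFERENCES sites(site_id),
    variable_id INTEGER REFERENCES variables(variable_id),
    obs_time    TEXT,                 -- ISO 8601 timestamp
    value       REAL,
    quality     TEXT                  -- metadata for traceable heritage
);
""")
con.execute("INSERT INTO sites VALUES (1, 'Logan River', 41.74, -111.83)")
con.execute("INSERT INTO variables VALUES (1, 'discharge', 'm^3/s')")
con.execute("INSERT INTO observations VALUES (1, 1, 1, '2006-10-01T00:00', 2.4, 'raw')")
for row in con.execute("""SELECT s.name, v.name, o.obs_time, o.value
                          FROM observations o
                          JOIN sites s USING (site_id)
                          JOIN variables v USING (variable_id)"""):
    print(row)
```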
EARLINET Single Calculus Chain - technical - Part 2: Calculation of optical products
NASA Astrophysics Data System (ADS)
Mattis, Ina; D'Amico, Giuseppe; Baars, Holger; Amodeo, Aldo; Madonna, Fabio; Iarlori, Marco
2016-07-01
In this paper we present the automated software tool ELDA (EARLINET Lidar Data Analyzer) for the retrieval of profiles of optical particle properties from lidar signals. This tool is one of the calculus modules of the EARLINET Single Calculus Chain (SCC), which allows for the analysis of the data of many different lidar systems of EARLINET in an automated, unsupervised way. ELDA delivers profiles of particle extinction coefficients from Raman signals as well as profiles of particle backscatter coefficients from combinations of Raman and elastic signals or from elastic signals only. Those analyses start from pre-processed signals which have already been corrected for background, range dependency and hardware-specific effects. An expert group reviewed all algorithms and solutions for critical calculus subsystems which are used within EARLINET with respect to their applicability for automated retrievals. Those methods have been implemented in ELDA. Since the software was designed in a modular way, it is possible to add new or alternative methods in the future. Most of the implemented algorithms are well known and well documented, but some methods have been developed especially for ELDA, e.g., automated vertical smoothing and temporal averaging, the handling of effective vertical resolution in the case of lidar ratio retrievals, or the merging of near-range and far-range products. The accuracy of the retrieved profiles was tested following the procedure of the EARLINET-ASOS algorithm inter-comparison exercise, which is based on the analysis of synthetic signals. Mean deviations, mean relative deviations, and normalized root-mean-square deviations were calculated for all possible products and three height layers. In all cases, the deviations were clearly below the maximum allowed values according to the EARLINET quality requirements.
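For orientation, the Ansmann-style Raman extinction retrieval that automated chains of this kind typically implement can be written as follows (this is the standard textbook form, not a quotation from the paper):

```latex
\alpha^{\mathrm{par}}_{\lambda_0}(z) =
  \frac{\dfrac{d}{dz}\ln\!\left[\dfrac{N_{\mathrm{Ra}}(z)}{z^{2}\,P_{\lambda_{\mathrm{Ra}}}(z)}\right]
        - \alpha^{\mathrm{mol}}_{\lambda_0}(z) - \alpha^{\mathrm{mol}}_{\lambda_{\mathrm{Ra}}}(z)}
       {1 + \left(\lambda_0/\lambda_{\mathrm{Ra}}\right)^{k}}
```

where P is the detected Raman signal, N the molecular number density, lambda_0 and lambda_Ra the emitted and Raman-shifted wavelengths, and k the assumed Ångström exponent.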
USER'S GUIDE FOR GLOED VERSION 1.0 - THE GLOBAL EMISSIONS DATABASE
The document is a user's guide for the EPA-developed, powerful software package, Global Emissions Database (GloED). GloED is a user-friendly, menu-driven tool for storing and retrieving emissions factors and activity data on a country-specific basis. Data can be selected from dat...
Computer Courseware Evaluations. January 1988 to December 1988. Volume VIII.
ERIC Educational Resources Information Center
Riome, Carol-Anne, Comp.
The eighth in a series, this report reviews microcomputer software authorized by the Alberta (Canada) Department of Education from January 1988 through December 1988. This edition provides detailed evaluations of 40 authorized programs for teaching business education, computer literacy, databases, file management, French, information retrieval,…
A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text
ERIC Educational Resources Information Center
Nguyen, Bao-An; Yang, Don-Lin
2012-01-01
An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…
DOT National Transportation Integrated Search
1987-11-01
Performance requirements are developed which define the kinematic and kinetic response of the head for a seated subject exposed to frontal, lateral or oblique impact. Response is expressed in terms of variables which are readily measured in an anthro...
Emerging Uses of Computer Technology in Qualitative Research.
ERIC Educational Resources Information Center
Parker, D. Randall
The application of computer technology in qualitative research and evaluation ranges from simple word processing to doing sophisticated data sorting and retrieval. How computer software can be used for qualitative research is discussed. Researchers should consider the use of computers in data analysis in light of their own familiarity and comfort…
Semi-Automatic Determination of Citation Relevancy: User Evaluation.
ERIC Educational Resources Information Center
Huffman, G. David
1990-01-01
Discussion of online bibliographic database searches focuses on a software system, SORT-AID/SABRE, that ranks retrieved citations in terms of relevance. Results of a comprehensive user evaluation of the relevance ranking procedure to determine its effectiveness are presented, and implications for future work are suggested. (10 references) (LRW)
A Management Information System for Bare Base Civil Engineering Commanders
1988-09-01
initial beddown stage. The purpose of this research was to determine the feasibility of developing a microcomputer-based management information system (MIS)...the software best suited to synthesize four of the categories into a prototype field MIS. Keywords: Management information system, Bare bases, Civil engineering, Data bases, Information retrieval.
A Computerized Interactive Vocabulary Development System for Advanced Learners.
ERIC Educational Resources Information Center
Kukulska-Hulme, Agnes
1988-01-01
Argues that the process of recording newly encountered vocabulary items in a typical language learning situation can be improved through a computerized system of vocabulary storage based on database management software that improves the discovery and recording of meaning, subsequent retrieval of items for productive use, and memory retention.…
NASA Technical Reports Server (NTRS)
Fisher, Brad; Wolff, David B.
2010-01-01
Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
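A toy decomposition in the same spirit: compare the true monthly mean with the mean of true rain sampled at overpass times (sampling error alone) and with the mean of retrieved rain at those times (adding retrieval error). All numbers are synthetic:

```python
# Separate sampling error from retrieval error in a monthly rain estimate.
import numpy as np

rng = np.random.default_rng(3)
hours = 24 * 30
true_rain = rng.gamma(0.3, 2.0, hours)          # synthetic hourly rain (mm/h)
overpass = np.arange(0, hours, 12)              # ~2 overpasses per day

retrieved = true_rain[overpass] * 0.85 + rng.normal(0, 0.2, overpass.size)

monthly_true = true_rain.mean()
sampled_true = true_rain[overpass].mean()       # isolates sampling error
sampled_retr = retrieved.mean()

print(f"sampling error : {sampled_true - monthly_true:+.3f} mm/h")
print(f"retrieval error: {sampled_retr - sampled_true:+.3f} mm/h")
print(f"total error    : {sampled_retr - monthly_true:+.3f} mm/h")
```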
NASA Astrophysics Data System (ADS)
Kozanis, S.; Christofides, A.; Efstratiadis, A.; Koukouvinos, A.; Karavokiros, G.; Mamassis, N.; Koutsoyiannis, D.; Nikolopoulos, D.
2012-04-01
The water supply of Athens, Greece, is implemented through a complex water resource system, extending over an area of around 4000 km² and including surface water and groundwater resources. It incorporates four reservoirs, 350 km of main aqueducts, 15 pumping stations, more than 100 boreholes and 5 small hydropower plants. The system is run by the Athens Water Supply and Sewerage Company (EYDAP). Over more than 10 years, we have developed information technology tools, such as GIS, database and decision support systems, to assist the management of the system. Among the software components, "Enhydris", a web application for the visualization and management of geographical and hydrometeorological data, and "Hydrognomon", a data analysis and processing tool, are now free software. Enhydris is entirely based on free software technologies such as Python, Django, PostgreSQL, and JQuery. We also created http://openmeteo.org/, a web site hosting our free software products as well as a free database system devoted to the dissemination of free data. In particular, "Enhydris" is used for the management of the hydrometeorological stations and the major hydraulic structures (aqueducts, reservoirs, boreholes, etc.), as well as for the retrieval of time series, online graphs etc. For the specific needs of EYDAP, additional GIS functionality was introduced for the display and monitoring of the water supply network. This functionality is also implemented as free software and can be reused in similar projects. Besides "Hydrognomon" and "Enhydris", we have developed a number of advanced modeling applications, which are also generic-purpose tools that have been used for a long time to provide decision support for the water resource system of Athens. These are "Hydronomeas", which optimizes the operation of complex water resource systems, based on a stochastic simulation framework, "Castalia", which implements the generation of synthetic time series, and "Hydrogeios", which employs conjunctive hydrological and hydrogeological simulation, with emphasis on human-modified river basins. These tools are currently available as executable files that are free for download through the ITIA web site (http://itia.ntua.gr/). Currently, we are working towards releasing their source code as well, making them free software, after some licensing issues are resolved.
NASA Technical Reports Server (NTRS)
Lo, P. S.; Card, D.
1983-01-01
The Software Engineering Laboratory (SEL) Data Base Maintenance System (DBAM) is explained. The various software facilities of the SEL, DBAM operating procedures, and DBAM system information are described. The relationships among DBAM components (baseline diagrams), component descriptions, overlay descriptions, indirect command file listings, file definitions, and sample data collection forms are provided.
Application of Design Patterns in Refactoring Software Design
NASA Technical Reports Server (NTRS)
Baggs, Rhoda; Shaykhian, Gholam Ali
2007-01-01
Refactoring software design is a method of changing software design while explicitly preserving its unique design functionalities. The presented approach is to utilize design patterns as the basis for refactoring software design. A comparison of design solutions is made through C++ programming language examples to exploit this approach. Developing reusable components is also discussed; the paper shows that the construction of such components can diminish the added burden of both refactoring and the use of design patterns.
The Chandra Source Catalog: Processing and Infrastructure
NASA Astrophysics Data System (ADS)
Evans, Janet; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Hall, Diane M.; Miller, Joseph B.; Plummer, David A.; Zografou, Panagoula; Primini, Francis A.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.
2009-09-01
Chandra Source Catalog processing recalibrates each observation using the latest available calibration data, and employs a wavelet-based source detection algorithm to identify all the X-ray sources in the field of view. Source properties are then extracted from each detected source that is a candidate for inclusion in the catalog. Catalog processing is completed by matching sources across multiple observations, merging common detections, and applying quality assurance checks. The Chandra Source Catalog processing system shares a common processing infrastructure and utilizes much of the functionality that is built into the Standard Data Processing (SDP) pipeline system that provides calibrated Chandra data to end-users. Other key components of the catalog processing system have been assembled from the portable CIAO data analysis package. Minimal new software tool development has been required to support the science algorithms needed for catalog production. Since processing pipelines must be instantiated for each detected source, the number of pipelines that are run during catalog construction is a factor of order 100 times larger than for SDP. The increased computational load, and inherent parallel nature of the processing, is handled by distributing the workload across a multi-node Beowulf cluster. Modifications to the SDP automated processing application to support catalog processing, and extensions to Chandra Data Archive software to ingest and retrieve catalog products, complete the upgrades to the infrastructure to support catalog processing.
The NASA Exoplanet Science Institute Archives: KOA and NStED
NASA Astrophysics Data System (ADS)
Berriman, G. B.; Ciardi, D.; Abajian, M.; Barlow, T.; Bryden, G.; von Braun, K.; Good, J.; Kane, S.; Kong, M.; Laity, A.; Lynn, M.; Elroy, D. M.; Plavchan, P.; Ramirez, S.; Schmitz, M.; Stauffer, J.; Wyatt, P.; Zhang, A.; Goodrich, R.; Mader, J.; Tran, H.; Tsubota, M.; Beekley, A.; Berukoff, S.; Chan, B.; Lau, C.; Regelson, M.; Saucedo, M.; Swain, M.
2010-12-01
The NASA Exoplanet Science Institute (NExScI) maintains a series of archival services in support of NASA’s planet finding and characterization goals. Two of the larger archival services at NExScI are the Keck Observatory Archive (KOA) and the NASA Star and Exoplanet Database (NStED). KOA, a collaboration between the W. M. Keck Observatory and NExScI, serves raw data from the High Resolution Echelle Spectrograph (HIRES) and extracted spectral browse products. As of June 2009, KOA hosts over 28 million files (4.7 TB) from over 2,000 nights. In Spring 2010, it will begin to serve data from the Near-Infrared Echelle Spectrograph (NIRSPEC). NStED is a general purpose archive with the aim of providing support for NASA’s planet finding and characterization goals, and stellar astrophysics. There are two principal components of NStED: a database of (currently) all known exoplanets, and images; and an archive dedicated to high precision photometric surveys for transiting exoplanets. NStED is the US portal to the CNES mission CoRoT, the first space mission dedicated to the discovery and characterization of exoplanets. These archives share a common software and hardware architecture with the NASA/IPAC Infrared Science Archive (IRSA). The software architecture consists of standalone utilities that perform generic query and retrieval functions. They are called through program interfaces and plugged together to form applications through a simple executive library.
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Situational Awareness from a Low-Cost Camera System; Data Acquisition System for Multi-Frequency Radar Flight Operations Preparation; Mercury Toolset for Spatiotemporal Metadata; Social Tagging of Mission Data; Integrating Radar Image Data with Google Maps; Demonstration of a Submillimeter-Wave HEMT Oscillator Module at 330 GHz; Flexible Peripheral Component Interconnect Input/Output Card; Interface Supports Lightweight Subsystem Routing for Flight Applications; MMIC Amplifiers and Wafer Probes for 350 to 500 GHz; Public Risk Assessment Program; Particle Swarm Optimization Toolbox; Telescience Support Center Data System Software; Update on PISCES; Ground and Space Radar Volume Matching and Comparison Software; Web-Based Interface for Command and Control of Network Sensors; Orbit Determination Toolbox; Distributed Observer Network; Computer-Automated Evolution of Spacecraft X-Band Antennas; Practical Loop-Shaping Design of Feedback Control Systems; Fully Printed High-Frequency Phased-Array Antenna on Flexible Substrate; Formula for the Removal and Remediation of Polychlorinated Biphenyls in Painted Structures; Integrated Solar Concentrator and Shielded Radiator; Water Membrane Evaporator; Modeling of Failure for Analysis of Triaxial Braided Carbon Fiber Composites; Catalyst for Carbon Monoxide Oxidation; Titanium Hydroxide - a Volatile Species at High Temperature; Selective Functionalization of Carbon Nanotubes: Part II; Steerable Hopping Six-Legged Robot; Launchable and Retrievable Tetherobot; Hybrid Heat Exchangers; Orbital Winch for High-Strength, Space-Survivable Tethers; Parameterized Linear Longitudinal Airship Model; and Physics of Life: A Model for Non-Newtonian Properties of Living Systems.
Web-Based Environment for Maintaining Legacy Software
NASA Technical Reports Server (NTRS)
Tigges, Michael; Thompson, Nelson; Orr, Mark; Fox, Richard
2007-01-01
Advanced Tool Integration Environment (ATIE) is the name of both a software system and a Web-based environment created by the system for maintaining an archive of legacy software and the expertise involved in developing the legacy software. ATIE can also be used in modifying legacy software and developing new software. The information that can be encapsulated in ATIE includes experts' documentation, input and output data of test cases, source code, and compilation scripts. All of this information is available within a common environment and retained in a database for ease of access and recovery by use of powerful search engines. ATIE also accommodates the embedment of supporting software that users require for their work, and even enables access to supporting commercial-off-the-shelf (COTS) software within the flow of the expert's work. The flow of work can be captured by saving the sequence of computer programs that the expert uses. A user gains access to ATIE via a Web browser. A modern Web-based graphical user interface promotes efficiency in the retrieval, execution, and modification of legacy code. Thus, ATIE saves time and money in the support of new and pre-existing programs.
High wear resistance of femoral components coated with titanium nitride: a retrieval analysis.
Fabry, Christian; Zietz, Carmen; Baumann, Axel; Ehall, Reinhard; Bader, Rainer
2017-05-20
The objective of this study was to evaluate the in vivo wear resistance of cobalt-chromium femoral components coated with titanium nitride (TiN). Our null hypothesis was that the surface damage and the thickness of the TiN coating do not correlate with the time in vivo. Twenty-five TiN-coated bicondylar femoral retrievals with a mean implantation period of 30.7 ± 11.7 months were subjected to an objective surface damage analysis with a semi-quantitative assessment method. A visual examination of scratches, indentations, notches and coating breakthroughs of the surfaces was performed. The roughness and the coating thickness of the TiN coating were evaluated in the main articulation regions. Narrow scratches and indentations in the range of low flexion angles on the retrieval surfaces were the most common modes of damage. There was no evidence of delamination on the articulation surface but rather at the bottom of isolated severe indentations or notches. An analysis of three retrievals revealed a coating breakthrough in the patellofemoral joint region, resulting from patella maltracking and a dislocation. The arithmetical mean roughness of the TiN surface increased slightly with the implantation period. In contrast, the maximum peak height of the roughness profile was reduced at the condyles of the retrieved components in comparison with new, unused surfaces. No significant association between the coating thickness and implantation period was determined. Moreover, the measured values remained in the range of the initial coating thickness even after several years of in vivo service. As demonstrated by the results of this study, the surface damage to the TiN coating did not worsen with the implantation period. The calculated damage scores and, in particular, the measured coating thickness both confirmed that the TiN coating provides low wear rates. Our findings support the use of wear-resistant TiN-coated components in total knee arthroplasty with the objective of reducing the risk of aseptic loosening. However, for TiN-coated femoral components, particular attention should be paid to correct patellar tracking in order to avoid wear propagation at the implant.
2010-01-01
Background: Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description: BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions: BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Two-component wind fields over ocean waves using atmospheric lidar and motion estimation algorithms
NASA Astrophysics Data System (ADS)
Mayor, S. D.
2016-02-01
Numerical models, such as large eddy simulations, are capable of providing stunning visualizations of the air-sea interface. One reason for this is the inherent spatial nature of such models. As compute power grows, models are able to provide higher resolution visualizations over larger domains, revealing intricate details of the interactions of ocean waves and the airflow over them. Spatial observations, on the other hand, which are necessary to validate the simulations, appear to lag behind models. The rough ocean environment of the real world is an additional challenge. One method of providing spatial observations of fluid flow is that of particle image velocimetry (PIV). PIV has been successfully applied to many problems in engineering and the geosciences. This presentation will show recent research results that demonstrate that a PIV-style approach using pulsed-fiber atmospheric elastic backscatter lidar hardware and wavelet-based optical flow motion estimation software can reveal two-component wind fields over rough ocean surfaces. Namely, a recently-developed compact lidar was deployed for 10 days in March of 2015 in the Eureka, California area. It scanned over the ocean. The imagery reveals that breaking ocean waves provide copious amounts of particulate matter for the lidar to detect and for the motion estimation algorithms to retrieve wind vectors from. The image below shows two examples of results from the experiment. The left panel shows the elastic backscatter intensity (copper shades) under a field of vectors that was retrieved by the wavelet-based optical flow algorithm from two scans that took about 15 s each to acquire. The vectors, which reveal offshore flow toward the NW, were decimated for clarity. The bright aerosol features along the right edge of the sector scan were caused by ocean waves breaking on the beach. The right panel is the result of scanning over the ocean on a day when wave amplitudes ranged from 8-12 feet and whitecaps offshore beyond the surf zone appeared to be rare and fleeting. Nonetheless, faint coherent aerosol structures are observable in the backscatter field as long, streaky, wind-parallel filaments, and a wind field was retrieved. During the 10-day deployment, the seas were not as rough as expected. A current goal is to find collaborators and return to map airflow in rougher conditions.
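A compact stand-in for the retrieval step, using OpenCV's Farneback dense optical flow instead of the wavelet-based estimator mentioned above: two consecutive backscatter scans in, a two-component vector field out (the scans here are synthetic):

```python
# Dense optical flow between two "scans": the second is the first advected
# by a known shift, so the retrieved flow should match that displacement.
import numpy as np
import cv2

rng = np.random.default_rng(4)
scan1 = (rng.random((128, 128)) * 255).astype(np.uint8)   # aerosol backscatter
scan2 = np.roll(scan1, shift=(2, 3), axis=(0, 1))         # advected by "wind"

flow = cv2.calcOpticalFlowFarneback(scan1, scan2, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
u, v = flow[..., 0], flow[..., 1]          # pixels per scan interval
print(f"median flow: u={np.median(u):+.2f}, v={np.median(v):+.2f} px/frame")
```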
Ambient Field Analysis at Groningen Gas Field
NASA Astrophysics Data System (ADS)
Spica, Z.; Nakata, N.; Beroza, G. C.
2016-12-01
We analyze continuous ambient-field data at the Groningen gas field (Netherlands) through cross-correlation processing. The Groningen array is composed of 75 shallow boreholes with 6 km spacing, each of which contains a 3C surface accelerometer and four 5-Hz 3C borehole geophones spaced at 50 m depth intervals. We successfully retrieve coherent waves from the ambient seismic field on the 9 components between stations. Results show high-SNR signals in the frequency range of 0.125-1 Hz, and the ZZ, ZR, RZ, RR and TT components show much stronger wave energy than the other components, as expected. This poster discusses the different types of waves retrieved, the utility of the combination of borehole and surface observations, and future development, as well as the importance of computing the 9 components of the Green's tensor to better understand the properties of the wave field retrieved from ambient noise.
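The core cross-correlation processing can be sketched with a one-component toy: correlate synchronous noise records at two stations and stack windows, so a peak emerges at the inter-station travel time. The geometry and noise below are synthetic:

```python
# Ambient-noise interferometry toy: the stacked cross-correlation of two
# stations recording the same wavefield peaks at the propagation delay.
import numpy as np

rng = np.random.default_rng(5)
fs, nwin, wlen = 100, 50, 1000          # Hz, windows, samples per window
lag = 30                                # samples of propagation delay

stack = np.zeros(2 * wlen - 1)
for _ in range(nwin):
    src = rng.normal(size=wlen + lag)
    sta_a = src[lag:]                   # station A sees the source first
    sta_b = src[:wlen]                  # station B sees it `lag` samples later
    stack += np.correlate(sta_a, sta_b, "full")
stack /= nwin

lags = np.arange(-wlen + 1, wlen) / fs
peak = lags[np.argmax(stack)]           # sign depends on correlation convention
print(f"peak at |lag| = {abs(peak):.2f} s (expected {lag/fs:.2f} s)")
```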
Radiation Products based on a constellation of Geostationary Satellites
NASA Astrophysics Data System (ADS)
Trigo, I. F.; Freitas, S. C.; Barroso, C.; Macedo, J.; Perdigão, R.; Silva, R.; Viterbo, P.
2012-04-01
The various components of the surface radiation budget present high variability in time and space, particularly over land surfaces, where the spatial heterogeneity of the upward fluxes is high. Geostationary satellites are well-suited to describe the daily cycle of downward and upward radiation fluxes and present spatial resolutions of the order of 3-to-5 km at the sub-satellite point, acceptable for many applications. The work presented here is being carried out within the framework of the Geoland-2 project, and aims at using data from geostationary platforms to generate, archive and distribute in near real time four components of the surface radiation budget: land surface albedo, land surface temperature (LST), and downward short- and long-wave fluxes at the surface. All four components are retrieved from the following satellites: GOES-W, covering North and South America; Meteosat Second Generation (MSG), covering essentially Europe and Africa; and MTSAT, covering part of Asia and Australia. The variables are retrieved independently from each satellite and then merged into a single field with a 5 km spatial resolution. Data are generated hourly in the case of the downward fluxes and LST, and 10-daily in the case of albedo. In regions covered by both the GOES and MSG disks, the interpolated field makes use of both retrievals, giving more weight to those with lower uncertainty. The four components of the surface radiation budget described above are assessed through comparisons with similar parameters retrieved from other sensors (e.g., MODIS, CERES) or from models (e.g., ECMWF forecasts), as well as with in situ observations when available. The presentation will focus on a brief description of the algorithms and auxiliary data used in product estimation. The results of inter-comparisons with other data sources, along with the identification of the retrieval conditions that allow optimal / sub-optimal estimation of these surface radiation parameters, will also be analysed. The radiation products generated within the Geoland-2 project are freely available to the user community.
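The uncertainty-weighted merging step admits a one-line form, standard inverse-variance weighting; the numbers below are made up:

```python
# Merge overlapping GOES and MSG retrievals, weighting each by the inverse
# of its error variance so the lower-uncertainty retrieval dominates.
import numpy as np

lst_goes, sigma_goes = 301.2, 2.0      # LST (K) and its uncertainty
lst_msg, sigma_msg = 299.8, 1.2

w_goes, w_msg = 1 / sigma_goes**2, 1 / sigma_msg**2
lst_merged = (w_goes * lst_goes + w_msg * lst_msg) / (w_goes + w_msg)
sigma_merged = np.sqrt(1 / (w_goes + w_msg))
print(f"merged LST = {lst_merged:.2f} K +/- {sigma_merged:.2f} K")
```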
Asteroid retrieval missions enabled by invariant manifold dynamics
NASA Astrophysics Data System (ADS)
Sánchez, Joan Pau; García Yárnoz, Daniel
2016-10-01
Near Earth Asteroids are attractive targets for new space missions, firstly because of their scientific importance, but also because of their impact threat and prospective resources. The asteroid retrieval mission concept has thus arisen as a synergistic approach to tackle these three facets of interest in one single mission. This paper reviews the methodology used by the authors (2013) in a previous search for objects that could be transported from accessible heliocentric orbits into the Earth's neighbourhood at affordable costs (or Easily Retrievable Objects, a.k.a. EROs). This methodology consisted of a heuristic pruning and an impulsive manoeuvre trajectory optimisation. Low thrust propulsion, on the other hand, clearly enables the transportation of much larger objects due to its higher specific impulse. Hence, in this paper, low thrust retrieval transfers are sought using impulsive trajectories as first guesses to solve the optimal control problem. GPOPS-II is used to transcribe the continuous-time optimal control problem to a nonlinear programming problem (NLP). The latter is solved by IPOPT, an open source software package for large-scale NLPs. Finally, a natural continuation procedure that increases the asteroid mass allows finding the largest objects that could be retrieved from a given asteroid orbit. If this retrievable mass is larger than the actual mass of the asteroid, the asteroid retrieval mission for this particular object is said to be feasible. The paper concludes with an updated list of 17 EROs, as of April 2016, with their maximum retrievable masses by means of low thrust propulsion. These range from 2000 tons for the easiest object to be retrieved to 300 tons for the least accessible of them.
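A schematic of the natural-continuation loop; `solve_low_thrust_transfer` is a hypothetical stand-in for the GPOPS-II/IPOPT optimal-control solve, with a toy feasibility limit so the sketch runs:

```python
# Step the asteroid mass upward, warm-starting each solve from the last
# solution, and shrink the step when the solver stops converging.
def solve_low_thrust_transfer(mass_t, prev_solution=None):
    # stand-in feasibility model: pretend transfers above 2000 t are infeasible
    if mass_t > 2000.0:
        raise RuntimeError("NLP failed to converge")
    return {"mass_t": mass_t}          # would hold the optimal trajectory

def max_retrievable_mass(m0=100.0, step=100.0, tol=1.0):
    solution, mass = solve_low_thrust_transfer(m0), m0
    while step > tol:
        try:                            # warm-start from the last solution
            solution = solve_low_thrust_transfer(mass + step, solution)
            mass += step
        except RuntimeError:
            step /= 2                   # shrink the continuation step
    return mass, solution

mass, _ = max_retrievable_mass()
print(f"largest retrievable mass ~ {mass:.0f} t")
```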
Future missions for observing Earth's changing gravity field: a closed-loop simulation tool
NASA Astrophysics Data System (ADS)
Visser, P. N.
2008-12-01
The GRACE mission has successfully demonstrated the observation from space of the changing Earth's gravity field at length and time scales of typically 1000 km and 10-30 days, respectively. Many scientific communities strongly advertise the need for continuity of observing Earth's gravity field from space. Moreover, a strong interest is being expressed in gravity missions that allow a more detailed sampling of the Earth's gravity field both in time and in space. Designing a gravity field mission for the future is a complicated process that involves many trade-offs, such as those between spatial resolution, temporal resolution and financial budget. Moreover, it involves the optimization of many parameters, such as orbital parameters (height, inclination), the distinction between which gravity sources to observe or correct for (for example, are gravity changes due to ocean currents a nuisance or a signal to be retrieved?), observation techniques (low-low satellite-to-satellite tracking, satellite gravity gradiometry, accelerometers), and satellite control systems (drag-free?). A comprehensive tool has been developed and implemented that allows the closed-loop simulation of gravity field retrievals for different satellite mission scenarios. This paper provides a description of this tool. Moreover, its capabilities are demonstrated by a few case studies. Acknowledgments: The research that is being done with the closed-loop simulation tool is partially funded by the European Space Agency (ESA). An important component of the tool is the GEODYN software, kindly provided by NASA Goddard Space Flight Center in Greenbelt, Maryland.
RHCV Telescope System Operations Manual
2018-01-05
hardware and software components. Several of the components are closely coupled and rely on one another, while others are largely independent. This... attendant training. The use cases are briefly described in separate sections, and step-by-step instructions are presented. Each section begins on a new
GERICOS: A Generic Framework for the Development of On-Board Software
NASA Astrophysics Data System (ADS)
Plasson, P.; Cuomo, C.; Gabriel, G.; Gauthier, N.; Gueguen, L.; Malac-Allain, L.
2016-08-01
This paper presents an overview of the GERICOS framework (GEneRIC Onboard Software), its architecture, its various layers and its future evolutions. The GERICOS framework, developed and qualified by LESIA, offers a set of generic, reusable and customizable software components for the rapid development of payload flight software. The GERICOS framework has a layered structure. The first layer (GERICOS::CORE) implements the concept of active objects and forms an abstraction layer over the top of real-time kernels. The second layer (GERICOS::BLOCKS) offers a set of reusable software components for building flight software based on generic solutions to recurrent functionalities. The third layer (GERICOS::DRIVERS) implements software drivers for several COTS IP cores of the LEON processor ecosystem.
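The active-object concept underlying GERICOS::CORE can be illustrated with a short sketch. Python is used here purely for brevity (the framework itself targets LEON flight processors), and the class is illustrative, not the framework's API: each active object owns a private thread and a message queue, so callers never block on, or share state with, its internals.

import queue
import threading
import time

class ActiveObject:
    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        # Asynchronous, non-blocking request to the object.
        self._inbox.put(message)

    def _run(self):
        # The object's single private execution context.
        while True:
            self.handle(self._inbox.get())

    def handle(self, message):
        # Overridden by concrete components (instrument managers, etc.).
        print("handling:", message)

obj = ActiveObject()
obj.send("telemetry-request")
time.sleep(0.1)   # give the worker thread time to drain the queue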
Sleep Improves Prospective Remembering by Facilitating Spontaneous-Associative Retrieval Processes
Diekelmann, Susanne; Wilhelm, Ines; Wagner, Ullrich; Born, Jan
2013-01-01
Memories are of the past but for the future, enabling individuals to implement intended plans and actions at the appropriate time. Prospective memory is the specific ability to remember and execute an intended behavior at some designated point in the future. Although sleep is well-known to benefit the consolidation of memories for past events, its role for prospective memory is still not well understood. Here, we show that sleep as compared to wakefulness after prospective memory instruction enhanced the successful execution of prospective memories two days later. We further show that sleep benefited both components of prospective memory, i.e. to remember that something has to be done (prospective component) and to remember what has to be done (retrospective component). Finally, sleep enhanced prospective remembering particularly when attentional resources were reduced during task execution, suggesting that subjects after sleep were able to recruit additional spontaneous-associative retrieval processes to remember intentions successfully. Our findings indicate that sleep supports the maintenance of prospective memory over time by strengthening intentional memory representations, thus favoring the spontaneous retrieval of the intended action at the appropriate time. PMID:24143246
A distributed data component for the open modeling interface
USDA-ARS?s Scientific Manuscript database
As the volume of collected data continues to increase in the environmental sciences, so does the need for effective means for accessing those data. We have developed an Open Modeling Interface (OpenMI) data component that retrieves input data for model components from environmental information systems...
User's operating procedures. Volume 2: Scout project financial analysis program
NASA Technical Reports Server (NTRS)
Harris, C. G.; Haris, D. K.
1985-01-01
A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime minicomputer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.
Tip of the Tongue States Increase under Evaluative Observation
ERIC Educational Resources Information Center
James, Lori E.; Schmank, Christopher J.; Castro, Nichol; Buchanan, Tony W.
2018-01-01
We tested the frequent assumption that the difficulty of word retrieval increases when a speaker is being observed and evaluated. We modified the Trier Social Stress Test (TSST) so that participants believed that its evaluative observation components continued throughout the duration of a subsequent word retrieval task, and measured participants'…
Data retrieval system provides unlimited hardware design information
NASA Technical Reports Server (NTRS)
Rawson, R. D.; Swanson, R. L.
1967-01-01
Data are input to magnetic tape on a single-format card that specifies the system, location, and component, the test point identification number, the operator's initials, the date, a data code, and the data itself. This method is efficient for large-volume data storage and retrieval, and permits output variations without continuous program modifications.
Loss of Retrieval Information in Prose Recall.
ERIC Educational Resources Information Center
Sehulster, Jerome R.; And Others
The purpose of this research was to experimentally manipulate input and output orders of information and separate storage and retrieval components of prose free recall. The cued partial recall method, used in word list recall, was adapted to a prose learning task. Four short biographical stories of about 55 words each were systematically combined…
An Optical Disk-Based Information Retrieval System.
ERIC Educational Resources Information Center
Bender, Avi
1988-01-01
Discusses a pilot project by the Nuclear Regulatory Commission to apply optical disk technology to the storage and retrieval of documents related to its high-level waste management program. Components and features of the microcomputer-based system, which provides full-text and image access to documents, are described. A sample search is included.…
ViPAR: a software platform for the Virtual Pooling and Analysis of Research Data.
Carter, Kim W; Francis, Richard W; Bresnahan, M; Gissler, M; Grønborg, T K; Gross, R; Gunnes, N; Hammond, G; Hornig, M; Hultman, C M; Huttunen, J; Langridge, A; Leonard, H; Newman, S; Parner, E T; Petersson, G; Reichenberg, A; Sandin, S; Schendel, D E; Schalkwyk, L; Sourander, A; Steadman, C; Stoltenberg, C; Suominen, A; Surén, P; Susser, E; Sylvester Vethanayagam, A; Yusof, Z
2016-04-01
Research studies exploring the determinants of disease require sufficient statistical power to detect meaningful effects. Sample size is often increased through centralized pooling of disparately located datasets, though ethical, privacy and data ownership issues can hamper this process. Methods that facilitate the sharing of research data, that are sympathetic to these issues, and that allow flexible and detailed statistical analyses are therefore critically needed. We have created a software platform for the Virtual Pooling and Analysis of Research data (ViPAR), which employs free and open source methods to provide researchers with a web-based platform to analyse datasets housed in disparate locations. Database federation permits controlled access to remotely located datasets from a central location. The Secure Shell protocol allows data to be securely exchanged between devices over an insecure network. ViPAR combines these free technologies into a solution that facilitates 'virtual pooling', where data can be temporarily pooled into computer memory and made available for analysis without the need for permanent central storage. Within the ViPAR infrastructure, remote sites manage their own harmonized research dataset in a database hosted at their site, while a central server hosts the data federation component and a secure analysis portal. When an analysis is initiated, the requested data are retrieved from each remote site and virtually pooled at the central site. The data are then analysed by statistical software and, on completion, the results are returned to the user and the virtually pooled data are removed from memory. ViPAR is a secure, flexible and powerful analysis platform built on open source technology that is currently in use by large international consortia, and is made publicly available at [http://bioinformatics.childhealthresearch.org.au/software/vipar/]. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
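Schematically, one virtual-pooling round trip looks like the sketch below. fetch_remote is a hypothetical stand-in for ViPAR's SSH-backed federation layer (here it just fabricates a tiny frame so the sketch runs); the key property illustrated is that pooled data exist only in memory for the duration of one analysis.

import pandas as pd

def fetch_remote(site, columns):
    # Hypothetical: would securely query the remote site's database.
    return pd.DataFrame({c: [len(site)] for c in columns})

def run_pooled_analysis(sites, columns, analysis):
    frames = [fetch_remote(s, columns) for s in sites]
    pooled = pd.concat(frames, ignore_index=True)   # virtual pool, memory only
    try:
        return analysis(pooled)                     # e.g. a model fit
    finally:
        del pooled, frames                          # no permanent central copy

print(run_pooled_analysis(["oslo", "perth"], ["age"], lambda df: df["age"].mean()))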
Xiao, Xin; Zhao, Di; Zhang, Qin; Guo, Chun-yan
2012-03-01
The current study used the directed forgetting paradigm in implicit and explicit memory to investigate the concreteness effect. Event-related potentials (ERPs) were recorded to explore the neural basis of this phenomenon. The behavioral results showed a clear concreteness effect in both implicit and explicit memory tests; participants responded significantly faster to concrete words than to abstract words. The ERP results revealed a concreteness effect (N400) in both the encoding and retrieval phases. In addition, behavioral and ERP results showed an interaction between word concreteness and memory instruction (to-be-forgotten vs. to-be-remembered) in the late epoch of the explicit retrieval phase, revealing a significant concreteness effect only under the to-be-remembered instruction condition. This concreteness effect was realized as an increased P600-like component in response to concrete words relative to abstract words, likely reflecting retrieval of contextual details. The time course of the concreteness effect suggests advantages of concrete words over abstract words due to greater contextual information. Copyright © 2011 Elsevier Inc. All rights reserved.
Cockpit Ocular Recording System (CORS)
NASA Technical Reports Server (NTRS)
Rothenheber, Edward; Stokes, James; Lagrossa, Charles; Arnold, William; Dick, A. O.
1990-01-01
The overall goal was the development of a Cockpit Ocular Recording System (CORS). Four tasks were involved: (1) development of the system; (2) experimentation with and improvement of the system; (3) demonstrations of the working system; and (4) system documentation. Overall, the prototype represents a workable and flexibly designed CORS. For the most part, the hardware used for the prototype system is off-the-shelf. All of the following software was developed specifically: (1) setup software with which the user specifies the cockpit configuration and identifies possible areas in which the pilot will look; (2) sensing software which integrates the 60 Hz data from the oculometer and head orientation sensing unit; (3) processing software which applies a spatiotemporal filter to the lookpoint data to determine fixation/dwell positions; (4) data recording output routines; and (5) playback software which allows the user to retrieve and analyze the data. Several experiments were performed to verify the system accuracy and quantify system deficiencies. These tests resulted in recommendations for any future system that might be constructed.
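Item (3), the spatiotemporal filter, is in spirit a dispersion-based fixation detector. The sketch below shows a generic detector of that kind (an illustration of the idea, not the CORS code): consecutive 60 Hz lookpoints are grouped into one fixation while they stay inside a small spatial window for a minimum duration.

def detect_fixations(samples, max_dispersion=1.0, min_samples=6):
    """samples: list of (x, y) lookpoints at 60 Hz; returns fixation centroids."""
    fixations, window = [], []
    for point in samples:
        window.append(point)
        xs, ys = zip(*window)
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # The new point broke the window: finalize the previous samples.
            if len(window) - 1 >= min_samples:
                fixations.append((sum(xs[:-1]) / (len(xs) - 1),
                                  sum(ys[:-1]) / (len(ys) - 1)))
            window = [point]            # start a new candidate fixation
    if len(window) >= min_samples:      # flush the last window
        fixations.append((sum(x for x, _ in window) / len(window),
                          sum(y for _, y in window) / len(window)))
    return fixations

pts = [(0, 0)] * 8 + [(5, 5)] * 8
print(detect_fixations(pts))            # two fixations: (0, 0) and (5, 5)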
Advanced software development workstation project ACCESS user's guide
NASA Technical Reports Server (NTRS)
1990-01-01
ACCESS is a knowledge-based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application-specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application-specific data entry forms and by specification of the display order for the taxonomy and object attributes. These customization options are described.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
... Graphics Data Processing Systems, Components Thereof, and Associated Software; Institution of Investigation... associated software by reason of infringement of certain claims of U.S. Patent No. 5,945,997 (``the `997... software that infringe one or more of claims 1, 3-5, 9, and 16 of the `997 patent; claims 1, 5, and 9 of...
Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)
1999-01-01
Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.
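The first ingredient, discrete component operational and failure modes, amounts to a mode machine per component. The toy sketch below shows only that skeleton; CONFIG itself integrates it with continuous dynamics, flow configuration and scenario models, so names here are illustrative.

class Component:
    # Discrete operational and failure modes: (current mode, event) -> new mode.
    TRANSITIONS = {
        ("off", "power_on"): "standby",
        ("standby", "start"): "running",
        ("running", "fault"): "failed",
        ("failed", "reset"): "standby",
    }

    def __init__(self, name):
        self.name, self.mode = name, "off"

    def event(self, ev):
        # Unknown events leave the mode unchanged.
        self.mode = self.TRANSITIONS.get((self.mode, ev), self.mode)
        return self.mode

scrubber = Component("co2-scrubber")
for ev in ("power_on", "start", "fault"):
    print(scrubber.name, "->", scrubber.event(ev))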
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of a programming model is just one building block in a comprehensive approach to large-scale collaborative development, which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges, and highlighting the advantages which the CCA approach offers for collaborative development.
Pinal, Diego; Zurrón, Montserrat; Díaz, Fernando
2014-01-01
information encoding, maintenance, and retrieval; these are supported by brain activity in a network of frontal, parietal and temporal regions. Manipulation of WM load and of the duration of the maintenance period can modulate this activity. Although such modulations have been widely studied using the event-related potentials (ERP) technique, a precise description of the time course of brain activity during encoding and retrieval is still required. Here, we used this technique and principal component analysis to assess the time course of brain activity during encoding and retrieval in a delayed match-to-sample task. We also investigated the effects of memory load and of the duration of the maintenance period on ERP activity. Brain activity was similar during information encoding and retrieval and comprised six temporal factors, which closely matched the latency and scalp distribution of some ERP components: P1, N1, P2, N2, P300, and a slow wave. Changes in memory load modulated task performance and yielded variations in frontal lobe activation. Moreover, the P300 amplitude was smaller in the high- than in the low-load condition during encoding and retrieval. Conversely, the slow wave amplitude was higher in the high- than in the low-load condition during encoding, and the same was true for the N2 amplitude during retrieval. Thus, during encoding, memory load appears to modulate the processing resources for context updating and post-categorization processes, whereas during retrieval it modulates resources for stimulus classification and context updating. Moreover, despite the lack of differences in task performance related to the duration of the maintenance period, larger N2 amplitude and stronger activation of the left temporal lobe were found during information retrieval after long than after short maintenance periods. Results regarding the duration of the maintenance period were thus complex, and future work is required to test the predictions of time-based decay theory.
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large across the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
Software development environments: Status and trends
NASA Technical Reports Server (NTRS)
Duffel, Larry E.
1988-01-01
Currently, software engineers are the essential integrating factor tying several components together; the components consist of process, methods, computers, tools, support environments, and the software engineers themselves. Today the engineers empower the tools, rather than the tools empowering the engineers. Among the central issues in software engineering are quality, management of the software engineering process, and productivity. A strategy for addressing these issues is to promote the evolution of software engineering from an ad hoc, labor-intensive activity to a managed, technology-supported discipline. This strategy may be implemented by putting the process under management control, adopting appropriate methods, inserting the technology that provides automated support for the process and methods, collecting automated tools into an integrated environment, and educating the personnel.
Merritt, M.L.
1977-01-01
A computerized index of water-data collection activities, together with retrieval software to generate publication lists from it, was developed for Florida. This system serves a vital need in the administration of the many and diverse water-data collection activities; previously, the needed data were very difficult to assemble for program planning or project implementation. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data collection activity. Entries include information such as identification number, station name, location, type of site, county, information about data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. Updating of the index is done routinely. (Woodard-USGS)
Next-Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software deals with any current mirror manufacturing technique, single substrates or multiple arrays of substrates, and can merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS, which makes integration of these models into large telescope or satellite models possible.
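The archive/retrieval side of such a trade study can be pictured as storing one small parameter record per generated model and recalling records by attribute. The sketch below uses illustrative parameter names, not the modeler's actual input schema.

import itertools
import json
import pathlib

ARCHIVE = pathlib.Path("mirror_trades.json")

def archive_runs(runs):
    # Persist every parameter record so a trade can be recalled later.
    ARCHIVE.write_text(json.dumps(runs, indent=2))

def recall(**match):
    # Recall the records whose parameters match the given attributes.
    runs = json.loads(ARCHIVE.read_text())
    return [r for r in runs if all(r.get(k) == v for k, v in match.items())]

# Enumerate a small trade space: cell size x core depth x petal count.
runs = [{"cell_mm": c, "depth_mm": d, "petals": p}
        for c, d, p in itertools.product((50, 75), (100, 150), (6, 12))]
archive_runs(runs)
print(recall(petals=6, cell_mm=50))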
The effect of late posterior negativity in retrieving the color of Chinese characters.
Nie, Aiqing; Guo, Chunyan; Liang, Junying; Shen, Mowei
2013-02-08
Previous event-related potentials (ERPs) research has suggested that retrieval tasks for many types of source information engage the frontal regions, but Cycowicz et al. [2-4,6] recorded the late posterior negativity (LPN), a component over the posterior cortex, in retrieving the associative color sources of pictures. To examine whether the LPN could also be observed in retrieving both the associative and the organizational color sources of verbal stimuli, two experiments were designed using Chinese nouns as stimuli. Both experiments revealed a significant LPN related to the tasks of color source retrieval. These findings demonstrate that the association between the LPN and the search for and/or retrieval/evaluation of the colors of objects is fairly strong, and that this association is insensitive both to the attributes of the stimulus materials and to those of the color sources. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
PASSIM--an open source software system for managing information in biomedical studies.
Viksna, Juris; Celms, Edgars; Opmanis, Martins; Podnieks, Karlis; Rucevskis, Peteris; Zarins, Andris; Barrett, Amy; Neogi, Sudeshna Guha; Krestyaninova, Maria; McCarthy, Mark I; Brazma, Alvis; Sarkans, Ugis
2007-02-09
One of the crucial aspects of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and biomedical samples. An efficient link between sample data and experiment results is imperative for a successful outcome of a biomedical study. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but often implies a substantial investment of time, effort and funds, which are not always available. There is a clear need for lightweight open source systems for patient and sample information management. We present a web-based tool for submission, management and retrieval of sample and research subject data. The system secures confidentiality by separating anonymized sample information from individuals' records. It is simple and generic, and can be customised for various biomedical studies. Information can be both entered and accessed using the same web interface. User groups and their privileges can be defined. The system is open source and is supplied with an on-line tutorial and the necessary documentation. It has proven successful in a large international collaborative project. The presented system closes the gap between the need for and the availability of lightweight software solutions for managing information in biomedical studies involving human research subjects.
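The confidentiality mechanism described, separating anonymized sample information from individuals' records, can be sketched as two stores linked only by a keyed one-way pseudonym. The snippet below illustrates the idea and is not PASSIM's actual schema.

import hashlib
import os

SECRET = os.urandom(16)          # key held by the restricted identity store only

def pseudonym(subject_id: str) -> str:
    # Keyed one-way hash: samples cannot be traced back without the key.
    return hashlib.sha256(SECRET + subject_id.encode()).hexdigest()[:12]

identities = {"S001": {"name": "redacted", "contact": "redacted"}}      # restricted
samples = {pseudonym("S001"): {"tissue": "blood", "assay": "GWAS"}}     # shareable
print(samples)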
NASA Astrophysics Data System (ADS)
Eakins, John P.; Edwards, Jonathan D.; Riley, K. Jonathan; Rosin, Paul L.
2001-01-01
Many different kinds of features have been used as the basis for shape retrieval from image databases. This paper investigates the relative effectiveness of several types of global shape feature, both singly and in combination. The features compared include well-established descriptors such as Fourier coefficients and moment invariants, as well as recently-proposed measures of triangularity and ellipticity. Experiments were conducted within the framework of the ARTISAN shape retrieval system, and retrieval effectiveness assessed on a database of over 10,000 images, using 24 queries and associated ground truth supplied by the UK Patent Office. Our experiments revealed only minor differences in retrieval effectiveness between different measures, suggesting that a wide variety of shape feature combinations can provide adequate discriminating power for effective shape retrieval in multi-component image collections such as trademark registries. Marked differences between measures were observed for some individual queries, suggesting that there could be considerable scope for improving retrieval effectiveness by providing users with an improved framework for searching multi-dimensional feature space.
Leveraging Terminologies for Retrieval of Radiology Reports with Critical Imaging Findings
Warden, Graham I.; Lacson, Ronilda; Khorasani, Ramin
2011-01-01
Introduction: Communication of critical imaging findings is an important component of medical quality and safety. A fundamental challenge is the retrieval of radiology reports that contain these findings. This study describes the expressiveness and coverage of existing medical terminologies for critical imaging findings and evaluates radiology report retrieval using each terminology. Methods: Four terminologies were evaluated: National Cancer Institute Thesaurus (NCIT), Radiology Lexicon (RadLex), Systematized Nomenclature of Medicine (SNOMED-CT), and International Classification of Diseases (ICD-9-CM). Concepts in each terminology were identified for 10 critical imaging findings. Three findings were subsequently selected to evaluate document retrieval. Results: SNOMED-CT consistently demonstrated the highest number of overall terms (mean=22) for each of the ten critical findings. However, retrieval rate and precision varied between terminologies for the three findings evaluated. Conclusion: No single terminology is optimal for retrieving radiology reports with critical findings. The expressiveness of a terminology does not consistently correlate with radiology report retrieval. PMID:22195212
Image/text automatic indexing and retrieval system using context vector approach
NASA Astrophysics Data System (ADS)
Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick
1995-11-01
Thousands of documents and images are generated daily, both on line and off, on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique, demonstrated here, is based on the concept of 'context vectors', which encode a succinct representation of the associated text and of sub-image features. In this paper, we describe the Automated Librarian System, which was designed for free-text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as queries, without performing segmentation or object recognition.
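The automatic indexing step can be pictured as nearest-neighbour lookup in the context-vector space: a new document or image inherits the index terms of its most similar neighbours. The sketch below uses random vectors as stand-ins for HNC's learned encodings.

import numpy as np

rng = np.random.default_rng(0)
dim = 64
index = {"turbine blade": rng.normal(size=dim),     # already-indexed items
         "circuit board": rng.normal(size=dim)}

def auto_index(item_vector, k=1):
    """Assign index terms from the most similar items already in the database."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(index, key=lambda term: cosine(index[term], item_vector),
                    reverse=True)
    return ranked[:k]

print(auto_index(rng.normal(size=dim)))             # nearest existing term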
Retrieval of land cover information under thin fog in Landsat TM image
NASA Astrophysics Data System (ADS)
Wei, Yuchun
2008-04-01
Thin fog, which often appears in remote sensing images of subtropical climate regions, lowers image quality and degrades image mapping. Therefore, it is necessary to develop an image processing method to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, in the subtropical climate zone of China, was used as an example, and a workflow for retrieving land cover information under thin fog was built using ENVI software and a single TM image. The procedure comprises three steps: 1) isolating the thin fog area in the image according to the spectral differences between bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between near-infrared and visible bands of the different land cover types in the fog-free area; 3) post-processing the image. The results showed that the method is simple and practical, and can be used to improve the quality of TM image mapping effectively.
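Step 2 is essentially a per-class regression from the near-infrared bands to a visible band, fitted on fog-free pixels and applied under fog. A minimal sketch for one class, with synthetic stand-ins for the TM band values:

import numpy as np

rng = np.random.default_rng(3)
nir_clear = rng.uniform(0.2, 0.5, 500)                    # fog-free NIR samples
vis_clear = 0.6 * nir_clear + 0.05 + rng.normal(0, 0.01, 500)

slope, intercept = np.polyfit(nir_clear, vis_clear, 1)    # per-class regression

nir_foggy = rng.uniform(0.2, 0.5, 10)                     # NIR is less fog-affected
vis_retrieved = slope * nir_foggy + intercept             # retrieved visible values
print(vis_retrieved[:3])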
Integrated Software Health Management for Aircraft GN and C
NASA Technical Reports Server (NTRS)
Schumann, Johann; Mengshoel, Ole
2011-01-01
Modern aircraft rely heavily on the dependable operation of many safety-critical software components. Despite careful design, verification and validation (V&V), on-board software can fail with disastrous consequences if it encounters problematic software/hardware interactions or must operate in an unexpected environment. We are using a Bayesian approach to monitor the software and its behavior during operation and to provide up-to-date information about the health of the software and its components. The powerful reasoning mechanism provided by our model-based Bayesian approach makes reliable diagnosis of root causes possible and minimizes the number of false alarms. Compilation of the Bayesian model into compact arithmetic circuits makes software health management (SWHM) feasible even on platforms with limited CPU power. We show initial results of SWHM on a small simulator of an embedded aircraft software system, where software and sensor faults can be injected.
Orbiter/payload proximity operations: Lateral approach technique
NASA Technical Reports Server (NTRS)
Bell, J. A.; Jones, H. L.; Mcadoo, S. F.
1977-01-01
The lateral approach is presented for proximity operations associated with the retrieval of free-flying payloads. An out-of-plane final approach emphasizing onboard software support is recommended for all but the last segment of the final approach, in which manual control is considered mandatory. An overall assessment of various candidate proximity operations techniques is made.
Beyond the Quantitative and Qualitative Divide: Research in Art Education as Border Skirmish.
ERIC Educational Resources Information Center
Sullivan, Graeme
1996-01-01
Analyzes a research project that utilizes a coherent conceptual model of art education research incorporating the demand for empirical rigor and providing for diverse interpretive frameworks. Briefly profiles the NUD*IST (Non-numerical Unstructured Data Indexing Searching and Theorizing) software system that can organize and retrieve complex…
Recommender System for Learning SQL Using Hints
ERIC Educational Resources Information Center
Lavbic, Dejan; Matek, Tadej; Zrnec, Aljaž
2017-01-01
Today's software industry requires individuals who are proficient in as many programming languages as possible. Structured query language (SQL), as an adopted standard, is no exception, as it is the most widely used query language to retrieve and manipulate data. However, the process of learning SQL turns out to be challenging. The need for a…
ERIC Educational Resources Information Center
Girill, T. R.
1991-01-01
This article continues the description of DFT (Document, Find, Theseus), an online documentation system that provides computer-managed on-demand printing of software manuals as well as the interactive retrieval of reference passages. Document boundaries in the hypertext database are discussed, search vocabulary complexities are described, and text…
Keyless Entry: Building a Text Database Using OCR Technology.
ERIC Educational Resources Information Center
Grotophorst, Clyde W.
1989-01-01
Discusses the use of optical character recognition (OCR) technology to produce an ASCII text database. A tutorial on digital scanning and OCR is provided, and a systems integration project which used the Calera CDP-3000XF scanner and text retrieval software to construct a database of dissertations at George Mason University is described. (four…
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 2 2014-07-01 2014-07-01 false Definitions. 290.5 Section 290.5 National... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...
BIBLIO: A Computer System Designed to Support the Near-Library User Model of Information Retrieval.
ERIC Educational Resources Information Center
Belew, Richard K.; Holland, Maurita Peterson
1988-01-01
Description of the development of the Information Exchange Facility, a prototype microcomputer-based personal bibliographic facility, covers software selection, user selection, overview of the system, and evaluation. The plan for an integrated system, BIBLIO, and the future role of libraries are discussed. (eight references) (MES)
Using an Intelligent Tutor and Math Fluency Training to Improve Math Performance
ERIC Educational Resources Information Center
Arroyo, Ivon; Royer, James M.; Woolf, Beverly P.
2011-01-01
This article integrates research in intelligent tutors with psychology studies of memory and math fluency (the speed to retrieve or calculate answers to basic math operations). It describes the impact of computer software designed to improve either strategic behavior or math fluency. Both competencies are key to improved performance and both…
Installing an Integrated Information System in a Centralized Network.
ERIC Educational Resources Information Center
Mendelson, Andrew D.
1992-01-01
Many schools are looking at ways to centralize the distribution and retrieval of video, voice, and data transmissions in an integrated information system (IIS). A centralized system offers greater control of hardware and software. Describes media network planning to retrofit an Illinois high school with a fiber-optic-based IIS. (MLF)
ERIC Educational Resources Information Center
Maier, Caroline Alexandra
2001-01-01
Presents an activity in which students seek answers to questions about evolutionary relationships by using genetic databases and bioinformatics software. Students build genetic distance matrices and phylogenetic trees based on molecular sequence data using web-based resources. Provides a flowchart of steps involved in accessing, retrieving, and…
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... software and hardware used: (1) To introduce, control and account for data items in providing public... undertaken, and the resources required to complete the project; (2) The preparation of an APD; (3) The preparation of a detailed project plan describing when and how the computer system will be designed and...
Visits, Hits, Caching and Counting on the World Wide Web: Old Wine in New Bottles?
ERIC Educational Resources Information Center
Berthon, Pierre; Pitt, Leyland; Prendergast, Gerard
1997-01-01
Although web browser caching speeds up retrieval, reduces network traffic, and decreases the load on servers and browser's computers, an unintended consequence for marketing research is that Web servers undercount hits. This article explores counting problems, caching, proxy servers, trawler software and presents a series of correction factors…
A Multi-Language System for Knowledge Extraction in E-Learning Videos
ERIC Educational Resources Information Center
Sood, Aparesh; Mittal, Ankush; Sarthi, Divya
2006-01-01
Existing multimedia software in E-Learning does not provide first-rate multimedia data services to the common user; hence, E-Learning services are still short of intelligence and of sophisticated end-user tools for visualization and retrieval. An efficient approach to achieve tasks such as regional language narration, regional language…
NASA Technical Reports Server (NTRS)
Didlake, Anthony C., Jr.; Heymsfield, Gerald M.; Tian, Lin; Guimond, Stephen R.
2015-01-01
The coplane analysis technique for mapping the three-dimensional wind field of precipitating systems is applied to the NASA High Altitude Wind and Rain Airborne Profiler (HIWRAP). HIWRAP is a dual-frequency Doppler radar system with two downward pointing and conically scanning beams. The coplane technique interpolates radar measurements to a natural coordinate frame, directly solves for two wind components, and integrates the mass continuity equation to retrieve the unobserved third wind component. This technique is tested using a model simulation of a hurricane and compared to a global optimization retrieval. The coplane method produced lower errors for the cross-track and vertical wind components, while the global optimization method produced lower errors for the along-track wind component. Cross-track and vertical wind errors were dependent upon the accuracy of the estimated boundary condition winds near the surface and at nadir, which were derived by making certain assumptions about the vertical velocity field. The coplane technique was then applied successfully to HIWRAP observations of Hurricane Ingrid (2013). Unlike the global optimization method, the coplane analysis allows for a transparent connection between the radar observations and specific analysis results. With this ability, small-scale features can be analyzed more adequately and erroneous radar measurements can be identified more easily.
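The final retrieval step, recovering the unobserved wind component from mass continuity, reduces to a vertical integration of the horizontal divergence from a boundary condition. The sketch below is a toy Cartesian, incompressible analogue of that integration (the actual coplane analysis works in the radar's natural coordinates); the divergence profile is synthetic.

import numpy as np

nz, dz = 50, 200.0                       # 50 levels, 200 m spacing
div = np.full(nz, -1e-4)                 # horizontal divergence du/dx + dv/dy (1/s)

w = np.zeros(nz)                         # boundary condition: w = 0 at the surface
for k in range(1, nz):                   # integrate dw/dz = -(du/dx + dv/dy) upward
    w[k] = w[k - 1] - div[k] * dz
print("vertical wind at top (m/s):", w[-1])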
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-10
... Certain GPS Navigation Products, Components Thereof, and Related Software, DN 2814; the Commission is... importation of certain GPS navigation products, components thereof, and related software. The complaint names...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.
This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross-language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.
A general UNIX interface for biocomputing and network information retrieval software.
Kiong, B K; Tan, T W
1993-10-01
We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).
U.S. Participation in the GOME and SCIAMACHY Projects
NASA Technical Reports Server (NTRS)
Chance, K. V.
1996-01-01
This report summarizes research done under NASA Grant NAGW-2541 from April 1, 1996 through March 31, 1997. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, and development of infrared line-by-line atmospheric modeling and retrieval capability for SCIAMACHY. SAO also continues to participate in GOME validation studies, to the limit that can be accomplished at the present level of funding. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and remains working in normal fashion. SCIAMACHY is currently in instrument characterization. The first two European ozone monitoring instruments (OMI), to fly on the Metop series of operational meteorological satellites being planned by Eumetsat, have been selected to be GOME-type instruments (the first, in fact, will be the refurbished GOME flight spare). K. Chance is the U.S. member of the OMI Users Advisory Group.
NASA Astrophysics Data System (ADS)
Nemuc, A.; Vasilescu, J.; Talianu, C.; Belegante, L.; Nicolae, D.
2013-11-01
Multi-wavelength depolarization Raman lidar measurements from Magurele, Romania, are used in this study, along with simulated mass-extinction efficiencies, to calculate the mass concentration profiles of different atmospheric components according to their different depolarization contributions to the 532 nm backscatter coefficient. The linear particle depolarization ratio (δpart) was computed using the relative amplification factor and the system-dependent molecular depolarization. The low-depolarizing component was considered as urban/smoke, with a mean δpart of 3%, while for the high-depolarizing component (mineral dust) a mean δpart of 35% was assumed. For this study, 11 months of lidar measurements were analysed. Two case studies are presented in detail: one for a typical Saharan dust intrusion on 10 June 2012, and one for 12 July 2012, when a lofted layer consisting of biomass burning smoke extended from 3 to 4.5 km height. The Optical Properties of Aerosols and Clouds (OPAC) software package classification and conversion factors were used to calculate mass concentrations. We found that calibrated depolarization measurements are critical in distinguishing between smoke-rich aerosol during the winter and dust-rich aerosol during the summer, as well as between elevated aerosol layers having different origins. Good agreement was found between the lidar retrievals and DREAM (Dust REgional Atmospheric Model) forecasts in cases of Saharan dust. Our method was also compared against LIRIC (the Lidar/Radiometer Inversion Code) and very small differences were observed.
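With the two assumed depolarization ratios, the measured particle depolarization fixes each component's share of the 532 nm backscatter via the standard two-component separation formula (e.g. Tesche et al.); multiplying a component's backscatter by a lidar ratio and dividing by the OPAC mass-extinction efficiency then yields its mass concentration. A sketch of the separation step, with illustrative values:

D_SMOKE, D_DUST = 0.03, 0.35      # assumed component depolarization ratios

def dust_backscatter_fraction(delta_meas):
    # Standard two-component separation of the 532 nm backscatter coefficient.
    f = ((delta_meas - D_SMOKE) * (1 + D_DUST)
         / ((D_DUST - D_SMOKE) * (1 + delta_meas)))
    return min(max(f, 0.0), 1.0)  # clip to the physical range [0, 1]

print(dust_backscatter_fraction(0.20))   # mixed smoke/dust layer, ~0.6 dust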
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low- from high-fault-frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high-risk system components. We present experimental results obtained by classifying Ada components into two classes: likely or not likely to generate faults during system and acceptance test. We also evaluate the accuracy of the model and the insights it provides into the error-making process.
Hyperspectral Data Processing and Mapping of Soil Parameters: Preliminary Data from Tuscany (Italy)
NASA Astrophysics Data System (ADS)
Garfagnoli, F.; Moretti, S.; Catani, F.; Innocenti, L.; Chiarantini, L.
2010-12-01
Hyperspectral imaging has become a very powerful remote sensing tool for its capability of performing chemical and physical analysis of the observed areas. The objective of this study is to retrieve and characterize the clay mineral content of the cultivated layer of soils, from both airborne hyperspectral and field spectrometry surveys in the 400-2500 nm spectral range. Correlation analysis is used to examine the possibility of predicting the selected property using high-resolution reflectance spectra and images. The study area is located in the Mugello basin, about 30 km north of Firenze (Tuscany, Italy). Agriculturally suitable terrains are assigned mainly to annual crops and marginally to olive groves, vineyards and orchards. Soils mostly belong to the Regosols and Cambisols orders. About 80 topsoil samples scattered all over the area were collected simultaneously with the flight of the SIM.GA hyperspectral camera from Selex Galileo. The quantitative determination of clay mineral content in the soil samples was performed by means of XRD and Rietveld refinement. An ASD FieldSpec spectroradiometer was used to obtain reflectance spectra from dried, crushed and sieved samples under controlled laboratory conditions. Different chemometric techniques (multiple linear regression, vertex component analysis, partial least squares regression and band-depth analysis) were preliminarily tested to correlate the mineralogical records with the reflectance data. A one-component partial least squares regression model yielded a preliminary R2 value of 0.65. A similar result was achieved by plotting the depth of the absorption peak at 2210 nm against total clay mineral content (band-depth analysis). A complete hyperspectral geocoded reflectance dataset was collected using the SIM.GA hyperspectral image sensor from Selex Galileo, mounted on board the University of Firenze ultralight aircraft. The approximate pixel resolution was 0.6 m (VNIR) and 1.2 m (SWIR). Airborne SIM.GA raw data were first transformed into at-sensor radiance values by applying calibration coefficients and parameters from laboratory measurements to the non-georeferenced VNIR/SWIR DN values. Geocoded products were then retrieved for each flight line using a procedure developed in IDL and the PARGE (PARametric Geocoding) software. Once all compensation parameters are applied to the hyperspectral data or to the final thematic map, orthorectified, georeferenced and coregistered VNIR-to-SWIR images or maps are available for GIS applications and 3D viewing. Airborne imagery also has to be corrected for the influence of the atmosphere, solar illumination, sensor viewing geometry and terrain geometry in order to retrieve inherent surface reflectance properties. Different geophysical parameters can then be investigated and retrieved by means of inversion algorithms. The experimental fit of the laboratory data on mineral content is used for airborne data inversion, whose results agree with the laboratory records, demonstrating the possibility of using this methodology for digital mapping of soil properties.
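The one-component PLS step reported above can be reproduced in outline with scikit-learn; the arrays below are synthetic stand-ins for the roughly 80 measured spectra and XRD clay contents, so the printed score is illustrative rather than the study's R2 of 0.65.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
spectra = rng.normal(size=(80, 210))                   # 80 samples x 210 bands
clay = spectra[:, 120] * 5 + rng.normal(0, 0.5, 80)    # toy clay-content response

pls = PLSRegression(n_components=1).fit(spectra, clay)  # one-component model
print("R^2 on training data:", round(pls.score(spectra, clay), 2))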
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier information systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency-theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory, the relationship between task uncertainty and organizational information processing from information processing theory, and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.