Sample records for retrieval tool development

  1. EVA Retriever Demonstration

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The EVA retriever is demonstrated in the Manipulator Development Facility (MDF). The retriever moves on the air bearing table 'searching' for its target, in this case tools 'dropped' by astronauts on orbit.

  2. Evaluation of a simple method for the automatic assignment of MeSH descriptors to health resources in a French online catalogue.

    PubMed

    Névéol, Aurélie; Pereira, Suzanne; Kerdelhué, Gaetan; Dahamna, Badisse; Joubert, Michel; Darmoni, Stéfan J

    2007-01-01

    The growing number of resources to be indexed in the catalogue of online health resources in French (CISMeF) calls for curating strategies involving automatic indexing tools while maintaining the catalogue's high indexing quality standards. The objective was to develop a simple automatic tool that retrieves MeSH descriptors from document titles. In parallel to research on advanced indexing methods, a bag-of-words tool was developed for timely inclusion in CISMeF's maintenance system. An evaluation was carried out on a corpus of 99 documents. The indexing sets retrieved by the automatic tool were compared to manual indexing based on the title and on the full text of resources. 58% of the major main headings were retrieved by the bag-of-words algorithm, and the precision of main heading retrieval was 69%. Bag-of-words indexing has effectively been used on selected resources to be included in CISMeF since August 2006. Meanwhile, ongoing work aims at improving the current version of the tool.
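
    For illustration, a bag-of-words title indexer in the spirit of this record might look like the following minimal sketch; the four-entry French-to-MeSH dictionary is invented for the example, whereas the real CISMeF terminology is far larger.

    ```python
    # Minimal sketch of bag-of-words title indexing, assuming a tiny
    # hand-made French-term -> MeSH-descriptor sample (hypothetical data).
    import re
    import unicodedata

    MESH_SAMPLE = {"asthme": "Asthma", "grossesse": "Pregnancy",
                   "vaccination": "Vaccination", "diabete": "Diabetes Mellitus"}

    def tokenize(text):
        # Strip accents and lowercase so 'Diabète' matches 'diabete'.
        text = unicodedata.normalize("NFD", text)
        text = "".join(c for c in text if unicodedata.category(c) != "Mn")
        return re.findall(r"[a-z]+", text.lower())

    def index_title(title):
        """Return MeSH descriptors whose French form appears in the title."""
        words = set(tokenize(title))
        return sorted({mesh for fr, mesh in MESH_SAMPLE.items() if fr in words})

    print(index_title("Vaccination et grossesse : recommandations"))
    # -> ['Pregnancy', 'Vaccination']
    ```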

  3. EM-31 RETRIEVAL KNOWLEDGE CENTER MEETING REPORT: MOBILIZE AND DISLODGE TANK WASTE HEELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellinger, A.

    2010-02-16

    The Retrieval Knowledge Center sponsored a meeting in June 2009 to review challenges and gaps to retrieval of tank waste heels. The facilitated meeting was held at the Savannah River Research Campus with personnel broadly representing tank waste retrieval knowledge at Hanford, Savannah River, Idaho, and Oak Ridge. This document captures the results of this meeting. In summary, it was agreed that the challenges to retrieval of tank waste heels fell into two broad categories: (1) mechanical heel waste retrieval methodologies and equipment and (2) understanding and manipulating the heel waste (physical, radiological, and chemical characteristics) to support retrieval options and subsequent processing. Recent successes and lessons from deployments of the Sand and Salt Mantis vehicles as well as retrieval of C-Area tanks at Hanford were reviewed. Suggestions to address existing retrieval approaches that utilize a limited set of tools and techniques are included in this report. The meeting found that there had been very little effort to improve or integrate the multiple proven or new techniques and tools available into a menu of available methods for rapid insertion into baselines. It is recommended that focused developmental efforts continue in the two areas underway (low-level mixing evaluation and pumping slurries with large solid materials) and that projects to demonstrate new/improved tools be launched to outfit tank farm operators with the needed tools to complete tank heel retrievals effectively and efficiently. This document describes the results of a meeting held on June 3, 2009 at the Savannah River Site in South Carolina to identify technology gaps and potential technology solutions to retrieving high-level waste (HLW) heels from waste tanks within the complex of sites run by the U.S. Department of Energy (DOE). The meeting brought together personnel with extensive tank waste retrieval knowledge from DOE's four major waste sites - Hanford, Savannah River, Idaho, and Oak Ridge. The meeting was arranged by the Retrieval Knowledge Center (RKC), which is a technology development project sponsored by the Office of Technology Innovation & Development - formerly the Office of Engineering and Technology - within the DOE Office of Environmental Management (EM).

  4. Exploiting LCSH, LCC, and DDC To Retrieve Networked Resources: Issues and Challenges.

    ERIC Educational Resources Information Center

    Chan, Lois Mai

    This paper examines how the nature of the World Wide Web and characteristics of networked resources affect subject access and analyzes the requirements of effective indexing and retrieval tools. The current and potential uses of existing tools and possible courses of future development are explored in the context of recent research. The first…

  5. Enhanced Information Retrieval Using AJAX

    NASA Astrophysics Data System (ADS)

    Kachhwaha, Rajendra; Rajvanshi, Nitin

    2010-11-01

    Information Retrieval deals with the representation, storage, organization of, and access to information items. The representation and organization of information items should provide the user with easy access to the information. With the rapid development of the Internet, large amounts of digitally stored information are readily available on the World Wide Web. This information is so vast that it becomes increasingly difficult and time consuming for users to find the information relevant to their needs. The explosive growth of information on the Internet has greatly increased the need for information retrieval systems. However, most search engines use conventional information retrieval systems. An information system needs to implement sophisticated pattern matching tools to determine contents at a faster rate. AJAX has recently emerged as a new tool with which the process of information retrieval can become fast, so that information reaches the user at a faster pace compared to conventional retrieval systems.

  6. [Useful tools and methods for literature retrieval in pubmed: step-by-step guide for physicians].

    PubMed

    Hevia M, Joaquín; Huete G, Álvaro; Alfaro F, Sandra; Palominos V, Verónica

    2017-12-01

    Developing skills to search the medical literature has potential benefits for patient care and allows physicians to better orient their efforts when answering daily clinical questions. The objective of this paper is to share useful tools for optimizing medical literature retrieval in MEDLINE using PubMed, including MeSH terms, filters, and connectors.
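
    Such searches can also be scripted against NCBI's public E-utilities; in the sketch below, the MeSH-tagged query string is only an illustrative example.

    ```python
    # Sketch of a PubMed query built from MeSH terms, filters and Boolean
    # connectors, sent to NCBI's E-utilities esearch endpoint.
    import requests

    query = "asthma[MeSH Terms] AND review[Publication Type]"
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": query, "retmax": 20, "retmode": "json"},
        timeout=30,
    )
    pmids = resp.json()["esearchresult"]["idlist"]  # PMIDs of matching citations
    print(len(pmids), "citations:", pmids[:5])
    ```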

  7. 49 CFR 563.12 - Data retrieval tools.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 6 2011-10-01 2011-10-01 false Data retrieval tools. 563.12 Section 563.12... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EVENT DATA RECORDERS § 563.12 Data retrieval tools. Each... tool(s) is commercially available that is capable of accessing and retrieving the data stored in the...

  8. Retrieval of radiology reports citing critical findings with disease-specific customization.

    PubMed

    Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, Ip; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin

    2012-01-01

    Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. This paper: 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications - an open-source toolkit, A Nearly New Information Extraction system (ANNIE); and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) - to illustrate the varying levels of customization required for different disease entities; and 2) evaluates each application's performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases: pulmonary nodule, pneumothorax, and pulmonary embolus. Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90 and 0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks.
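
    The evaluation arithmetic reported here is standard; a minimal sketch with made-up report identifiers:

    ```python
    # Precision/recall of a retrieved report set against a manually curated
    # reference set. Report identifiers are invented for illustration.
    def precision_recall(retrieved, relevant):
        retrieved, relevant = set(retrieved), set(relevant)
        tp = len(retrieved & relevant)                 # true positives
        precision = tp / len(retrieved) if retrieved else 0.0
        recall = tp / len(relevant) if relevant else 0.0
        return precision, recall

    retrieved = ["r01", "r02", "r03", "r07", "r09"]    # reports the tool flagged
    relevant  = ["r01", "r02", "r05", "r07"]           # reports citing the finding
    print(precision_recall(retrieved, relevant))       # (0.6, 0.75)
    ```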

  9. Retrieval of Radiology Reports Citing Critical Findings with Disease-Specific Customization

    PubMed Central

    Lacson, Ronilda; Sugarbaker, Nathanael; Prevedello, Luciano M; Ivan, IP; Mar, Wendy; Andriole, Katherine P; Khorasani, Ramin

    2012-01-01

    Background: Communication of critical results from diagnostic procedures between caregivers is a Joint Commission national patient safety goal. Evaluating critical result communication often requires manual analysis of voluminous data, especially when reviewing unstructured textual results of radiologic findings. Information retrieval (IR) tools can facilitate this process by enabling automated retrieval of radiology reports that cite critical imaging findings. However, IR tools that have been developed for one disease or imaging modality often need substantial reconfiguration before they can be utilized for another disease entity. Purpose: This paper: 1) describes the process of customizing two Natural Language Processing (NLP) and Information Retrieval/Extraction applications – an open-source toolkit, A Nearly New Information Extraction system (ANNIE); and an application developed in-house, Information for Searching Content with an Ontology-Utilizing Toolkit (iSCOUT) – to illustrate the varying levels of customization required for different disease entities; and 2) evaluates each application’s performance in identifying and retrieving radiology reports citing critical imaging findings for three distinct diseases: pulmonary nodule, pneumothorax, and pulmonary embolus. Results: Both applications can be utilized for retrieval. iSCOUT and ANNIE had precision values between 0.90 and 0.98 and recall values between 0.79 and 0.94. ANNIE had consistently higher precision but required more customization. Conclusion: Understanding the customizations involved in utilizing NLP applications for various diseases will enable users to select the most suitable tool for specific tasks. PMID:22934127

  10. Figure mining for biomedical research.

    PubMed

    Rodriguez-Esteban, Raul; Iossifov, Ivan

    2009-08-15

    Figures from biomedical articles contain valuable information difficult to reach without specialized tools. Currently, there is no search engine that can retrieve specific figure types. This study describes a retrieval method that takes advantage of principles in image understanding, text mining and optical character recognition (OCR) to retrieve figure types defined conceptually. A search engine was developed to retrieve tables and figure types to aid computational and experimental research. http://iossifovlab.cshl.edu/figurome/.
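
    Conceptually, figure-type retrieval can combine caption text with OCR'd in-image text; the sketch below assumes the pytesseract/Tesseract OCR stack and an invented keyword table, whereas the published system uses trained image-understanding components rather than keyword rules.

    ```python
    # Toy figure-type classifier over caption + OCR text (keyword table is
    # hypothetical; requires pytesseract and a local Tesseract install).
    import pytesseract
    from PIL import Image

    FIGURE_TYPES = {
        "gel": ["western blot", "lane", "kda"],
        "plot": ["fold change", "p <", "error bars"],
        "micrograph": ["scale bar", "magnification", "stained"],
    }

    def classify_figure(image_path, caption):
        text = caption.lower() + " " + \
            pytesseract.image_to_string(Image.open(image_path)).lower()
        scores = {ftype: sum(kw in text for kw in kws)
                  for ftype, kws in FIGURE_TYPES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"
    ```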

  11. Design Package for Fuel Retrieval System Fuel Handling Tool Modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TEDESCHI, D.J.

    This design package documents the design, fabrication, and testing of a new stinger tool design. Future revisions will document further development of the stinger tool, incorporating the various developmental stages and final test results.

  12. Intelligent Information Retrieval: Diagnosing Information Need. Part I. The Theoretical Framework for Developing an Intelligent IR Tool.

    ERIC Educational Resources Information Center

    Cole, Charles

    1998-01-01

    Suggests that the principles underlying the procedure used by doctors to diagnose a patient's disease are useful in the design of intelligent information-retrieval systems because the task of the doctor is conceptually similar to the computer or human intermediary's task in information retrieval: to draw out the user's query/information need.…

  13. Cry-Bt identifier: a biological database for PCR detection of Cry genes present in transgenic plants.

    PubMed

    Singh, Vinay Kumar; Ambwani, Sonu; Marla, Soma; Kumar, Anil

    2009-10-23

    We describe the development of a user-friendly tool that assists in the retrieval of information relating to Cry genes in transgenic crops. The tool also helps in the detection of transformed Cry genes from Bacillus thuringiensis present in transgenic plants by providing suitably designed primers for PCR identification of these genes. The tool, designed on a relational database model, enables easy retrieval of information from the database with simple user queries. The tool also enables users to access related information about Cry genes present in various databases by interacting with different sources (nucleotide sequences, protein sequences, sequence comparison tools, published literature, conserved domains, evolutionary and structural data). http://insilicogenomics.in/Cry-btIdentifier/welcome.html.
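
    The relational lookup this record describes reduces to a simple parameterized query; the schema and the primer sequences below are invented for illustration.

    ```python
    # In-memory stand-in for the Cry gene/primer table (hypothetical data).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE cry_genes (
        gene TEXT, forward_primer TEXT, reverse_primer TEXT, amplicon_bp INTEGER)""")
    con.execute("INSERT INTO cry_genes VALUES (?, ?, ?, ?)",
                ("cry1Ac", "GACAGGAATGGAAGAGTGGG", "CCATTCTCGGTTTCAGGTCA", 312))

    # Retrieve the primer pair for PCR identification of a queried gene.
    for row in con.execute("SELECT forward_primer, reverse_primer, amplicon_bp"
                           " FROM cry_genes WHERE gene = ?", ("cry1Ac",)):
        print(row)
    ```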

  14. Selected aspects of microelectronics technology and applications: Numerically controlled machine tools. Technology trends series no. 2

    NASA Astrophysics Data System (ADS)

    Sigurdson, J.; Tagerud, J.

    1986-05-01

    A UNIDO publication about machine tools with automatic control discusses the following: (1) numerical control (NC) machine tool perspectives, the definition of NC, flexible manufacturing systems, robots and their industrial application, research and development, and sensors; (2) experience in developing a capability in NC machine tools; (3) policy issues; (4) procedures for retrieval of relevant documentation from data bases. Diagrams, statistics, and a bibliography are included.

  15. Information Discovery and Retrieval Tools

    DTIC Science & Technology

    2004-12-01

    information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
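
    One common meta-search technique of the kind this session covers is rank fusion across engines; a minimal reciprocal rank fusion (RRF) sketch with placeholder result URLs:

    ```python
    # Merge ranked result lists from several engines with RRF: each URL is
    # scored by the sum of 1/(k + rank) over the engines that returned it.
    from collections import defaultdict

    def rrf_merge(ranked_lists, k=60):
        scores = defaultdict(float)
        for results in ranked_lists:
            for rank, url in enumerate(results, start=1):
                scores[url] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    engine_a = ["u1", "u2", "u3"]           # placeholder result lists
    engine_b = ["u3", "u1", "u4"]
    print(rrf_merge([engine_a, engine_b]))  # -> ['u1', 'u3', 'u2', 'u4']
    ```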

  16. Information Discovery and Retrieval Tools

    DTIC Science & Technology

    2003-04-01

    information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.

  17. JPL Developments in Retrieval Algorithms for Geostationary Observations - Applications to H2CO

    NASA Technical Reports Server (NTRS)

    Kurosu, Thomas P.; Kulawik, Susan; Natraj, Vijay

    2012-01-01

    JPL has strong expertise in atmospheric retrievals from the UV and thermal IR, and a wide range of tools (radiative transfer, AMF, inversion, fitting, assimilation) to apply to observations and instrument characterization. These tools were applied in a preliminary study of H2CO sensitivities from GEO. Results show promise for moderate/strong H2CO loading, but also that low background conditions will prove a challenge. H2CO DOF are not too strongly dependent on FWHM. The GEMS (Geostationary Environmental Monitoring Spectrometer) choice of 0.6 nm FWHM (?) spectral resolution is adequate for H2CO retrievals. The case study can easily be adapted to GEMS observations/instrument model for more in-depth sensitivity characterization.

  18. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  19. Dynamics, control and sensor issues pertinent to robotic hands for the EVA retriever system

    NASA Technical Reports Server (NTRS)

    Mclauchlan, Robert A.

    1987-01-01

    Basic dynamics, sensor, control, and related artificial intelligence issues pertinent to smart robotic hands for the Extra Vehicular Activity (EVA) Retriever system are summarized and discussed. These smart hands are to be used as end effectors on arms attached to manned maneuvering units (MMU). The Retriever robotic systems, comprised of MMU, arm, and smart hands, are being developed to aid crewmen in the performance of routine EVA tasks, including tool and object retrieval. The ultimate goal is to enhance the effectiveness of EVA crewmen.

  20. BioUSeR: a semantic-based tool for retrieving Life Science web resources driven by text-rich user requirements

    PubMed Central

    2013-01-01

    Background: Open metadata registries are a fundamental tool for researchers in the Life Sciences trying to locate resources. While most current registries assume that resources are annotated with well-structured metadata, evidence shows that most resource annotations simply consist of informal free text. This reality must be taken into account in order to develop effective techniques for resource discovery in Life Sciences. Results: BioUSeR is a semantic-based tool aimed at retrieving Life Sciences resources described in free text. The retrieval process is driven by the user requirements, which consist of a target task and a set of facets of interest, both expressed in free text. BioUSeR is able to effectively exploit the available textual descriptions to find relevant resources by using semantic-aware techniques. Conclusions: BioUSeR overcomes the limitations of the current registries thanks to: (i) rich specification of user information needs, (ii) use of semantics to manage textual descriptions, (iii) retrieval and ranking of resources based on user requirements. PMID:23635042

  1. Task Oriented Tools for Information Retrieval

    ERIC Educational Resources Information Center

    Yang, Peilin

    2017-01-01

    Information Retrieval (IR) is one of the most evolving research fields and has drawn extensive attention in recent years. Because of its empirical nature, the advance of the IR field is closely related to the development of various toolkits. While the traditional IR toolkit mainly provides a platform to evaluate the effectiveness of retrieval…

  2. IPAT: a freely accessible software tool for analyzing multiple patent documents with inbuilt landscape visualizer.

    PubMed

    Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T

    2015-01-01

    Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool based on a text-mining algorithm that extracts specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patents and simultaneously studies the various technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization will act as an excellent technology assessment tool in competitive intelligence and due diligence for predicting the future R&D forecast.

  3. An advanced search engine for patent analytics in medicinal chemistry.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Patent collections contain an important amount of medical-related knowledge, but existing tools were reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search to retrieve relevant patents given a short query, and a related-patent search to retrieve similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is of prime importance to cope with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced fairly contrasted results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition and experts appreciated its usability.

  4. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    PubMed

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query; no matching case was left out of the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.

  5. PharmARTS: terminology web services for drug safety data coding and retrieval.

    PubMed

    Alecu, Iulian; Bousquet, Cédric; Degoulet, Patrice; Jaulent, Marie-Christine

    2007-01-01

    MedDRA and WHO-ART are the terminologies used to encode drug safety reports. The standardisation achieved with these terminologies facilitates: 1) The sharing of safety databases; 2) Data mining for the continuous reassessment of benefit-risk ratio at national or international level or in the pharmaceutical industry. There is some debate about the capacity of these terminologies for retrieving case reports related to similar medical conditions. We have developed a resource that allows grouping similar medical conditions more effectively than WHO-ART and MedDRA. We describe here a software tool facilitating the use of this terminological resource thanks to an RDF framework with support for RDF Schema inferencing and querying. This tool eases coding and data retrieval in drug safety.
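
    A toy illustration of the grouping idea using rdflib, with an invented mini-vocabulary standing in for the actual WHO-ART/MedDRA resource:

    ```python
    # Group terms under a shared parent concept via a SPARQL property path.
    # The ex: vocabulary and triples are invented for illustration.
    from rdflib import Graph

    TTL = """
    @prefix ex: <http://example.org/adr#> .
    ex:hepatitis        ex:broader ex:liver_disorder .
    ex:hepatic_necrosis ex:broader ex:liver_disorder .
    ex:rash             ex:broader ex:skin_disorder .
    """
    g = Graph().parse(data=TTL, format="turtle")
    rows = g.query("""
        PREFIX ex: <http://example.org/adr#>
        SELECT ?term WHERE { ?term ex:broader+ ex:liver_disorder . }
    """)
    print([str(r.term) for r in rows])  # groups the two liver terms together
    ```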

  6. Bio-TDS: bioscience query tool discovery system.

    PubMed

    Gnimpieba, Etienne Z; VanDiermen, Menno S; Gustafson, Shayla M; Conn, Bill; Lushbough, Carol M

    2017-01-04

    Bioinformatics and computational biology play a critical role in bioscience and biomedical research. As researchers design their experimental projects, one major challenge is to find the most relevant bioinformatics toolkits that will lead to new knowledge discovery from their data. The Bio-TDS (Bioscience Query Tool Discovery Systems, http://biotds.org/) has been developed to assist researchers in retrieving the most applicable analytic tools by allowing them to formulate their questions as free text. The Bio-TDS is a flexible retrieval system that affords users from multiple bioscience domains (e.g. genomic, proteomic, bio-imaging) the ability to query over 12 000 analytic tool descriptions integrated from well-established, community repositories. One of the primary components of the Bio-TDS is the ontology and natural language processing workflow for annotation, curation, query processing, and evaluation. The Bio-TDS's scientific impact was evaluated using sample questions posed by researchers retrieved from Biostars, a site focusing on biological data analysis. The Bio-TDS was compared to five similar bioscience analytic tool retrieval systems, with the Bio-TDS outperforming the others in terms of relevance and completeness. The Bio-TDS offers researchers the capacity to associate their bioscience question with the most relevant computational toolsets required for the data analysis in their knowledge discovery process. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
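
    At its core, free-text tool retrieval of this kind can be approximated by TF-IDF cosine ranking of tool descriptions against the question; the three descriptions below are toy stand-ins for the 12 000+ real ones.

    ```python
    # Rank tool descriptions against a free-text question (toy corpus).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    tools = {
        "bowtie2": "fast gapped alignment of sequencing reads to genomes",
        "samtools": "utilities for manipulating SAM/BAM alignment files",
        "cellprofiler": "measure cell phenotypes from microscopy images",
    }
    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(tools.values())

    query = "which tool aligns my sequencing reads to a reference genome"
    sims = cosine_similarity(vec.transform([query]), doc_matrix).ravel()
    for name, score in sorted(zip(tools, sims), key=lambda x: -x[1]):
        print(f"{score:.2f}  {name}")
    ```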

  7. CDAPubMed: a browser extension to retrieve EHR-based biomedical literature.

    PubMed

    Perez-Rey, David; Jimenez-Castellanos, Ana; Garcia-Remesal, Miguel; Crespo, Jose; Maojo, Victor

    2012-04-05

    Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems.
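
    The query-building step lends itself to a compact sketch; the MeSH terms below are illustrative stand-ins for terms the extension would extract from an HL7-CDA document.

    ```python
    # Combine MeSH terms extracted from an EHR into one PubMed query string.
    def build_pubmed_query(mesh_terms, connector="AND"):
        return f" {connector} ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)

    terms = ["Breast Neoplasms", "Tamoxifen", "Postmenopause"]  # hypothetical
    print(build_pubmed_query(terms))
    # "Breast Neoplasms"[MeSH Terms] AND "Tamoxifen"[MeSH Terms] AND ...
    ```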

  8. CDAPubMed: a browser extension to retrieve EHR-based biomedical literature

    PubMed Central

    2012-01-01

    Background: Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. Results: We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. Conclusions: CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems. PMID:22480327

  9. What Can Graph Theory Tell Us about Word Learning and Lexical Retrieval?

    ERIC Educational Resources Information Center

    Vitevitch, Michael S.

    2008-01-01

    Purpose: Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of…

  10. NSWC-NADC interactive communication links for AN/UYS-1 loadtape creation and retrieval

    NASA Astrophysics Data System (ADS)

    Greathouse, D. M.

    1984-09-01

    This report contains an alternative method of communication (interactive vs. remote batch) with the Naval Air Development Center for the creation and retrieval of AN/UYS-1 Advanced Signal Processor (ASP) operational software loadtapes. Operational software for the Digital Acoustic Sensor Simulator (DASS) program is developed and maintained at the Naval Air Development Center (NADC). The Facility for Automated Software Production (FASP), an NADC-resident software generation facility, provides the support tools necessary for data base creation, software development and maintenance, and loadtape generation. Once a loadtape file is generated at NADC, it must be retrieved via telephone transmission and placed in a format suitable for loading into the AN/UYS-1 Advanced Signal Processor (ASP).

  11. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  12. Memory Retrieval in Mice and Men

    PubMed Central

    Ben-Yakov, Aya; Dudai, Yadin; Mayford, Mark R.

    2015-01-01

    Retrieval, the use of learned information, was until recently mostly terra incognita in the neurobiology of memory, owing to shortage of research methods with the spatiotemporal resolution required to identify and dissect fast reactivation or reconstruction of complex memories in the mammalian brain. The development of novel paradigms, model systems, and new tools in molecular genetics, electrophysiology, optogenetics, in situ microscopy, and functional imaging, have contributed markedly in recent years to our ability to investigate brain mechanisms of retrieval. We review selected developments in the study of explicit retrieval in the rodent and human brain. The picture that emerges is that retrieval involves coordinated fast interplay of sparse and distributed corticohippocampal and neocortical networks that may permit permutational binding of representational elements to yield specific representations. These representations are driven largely by the activity patterns shaped during encoding, but are malleable, subject to the influence of time and interaction of the existing memory with novel information. PMID:26438596

  13. Wireline system for multiple direct push tool usage

    DOEpatents

    Bratton, Wesley L.; Farrington, Stephen P.; Shinn, II, James D.; Nolet, Darren C.

    2003-11-11

    A tool latching and retrieval system allows the deployment and retrieval of a variety of direct push subsurface characterization tools through an embedded rod string during a single penetration without requiring withdrawal of the string from the ground. This enables the in situ interchange of different tools, as well as the rapid retrieval of soil core samples from multiple depths during a single direct push penetration. The system includes specialized rods that make up the rod string, a tool housing which is integral to the rod string, a lock assembly, and several tools which mate to the lock assembly.

  14. Automated MeSH indexing of the World-Wide Web.

    PubMed Central

    Fowler, J.; Kouramajian, V.; Maram, S.; Devadhar, V.

    1995-01-01

    To facilitate networked discovery and information retrieval in the biomedical domain, we have designed a system for automatic assignment of Medical Subject Headings to documents retrieved from the World-Wide Web. Our prototype implementations show significant promise. We describe our methods and discuss the further development of a completely automated indexing tool called the "Web-MeSH Medibot." PMID:8563421

  15. Tools Developed to Prepare and Stabilize Reactor Spent Fuel for Retrieval from Tile Holes - 12251

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, Michael; Clough, Malcolm

    Spent fuel from the Chalk River Laboratories (CRL) nuclear reactors is stored in the waste management areas on site. This fuel is contained within carbon steel spent fuel cans that are stored inside vertical carbon steel lined concrete pipes in the ground known as tile holes. The fuel cans have been stored in the tile holes for greater than 30 years. Some of the fuel cans have experienced corrosion which may have affected their structural integrity as well as the potential to form hydrogen gas. In addition to these potential hazards, there was a need to clean contaminated surfaces inside of and around the exposed upper surface of the tile holes. As part of the site waste management remediation plan, spent fuel will be retrieved from degraded tile holes, dried, and relocated to a new purpose built above ground storage facility. A number of tools were required to be developed to ensure spent fuel cans are in a safe condition prior to retrieval and re-location. A series of special purpose tools have been designed and constructed to stabilize the contents of the tile holes, to determine the integrity of the fuel containers, and to decontaminate inside and around the tile holes. Described herein are the methods and types of tools used. Tools that have been presented here have been used, or will be used in the near future, in the waste management areas of the CRL Site in preparation for storage of spent fuel in a new above ground facility. The stabilization tools have been demonstrated on mock-up facilities prior to successful use in the field to remove hydrogen gas and uranium hydrides from the fuel cans. A lifting tool has been developed and used successfully in the field to confirm the integrity of the fuel cans for future relocation. A tool using a commercial dry ice blaster has been developed and is ready to start mock-up trials and is scheduled to be used in the field during the summer of 2012. (authors)

  16. Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.

    PubMed

    André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2011-01-01

    Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
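
    A toy numpy sketch of the margin-based weight learning described above, with random signatures and invented similarity labels standing in for the expert annotations:

    ```python
    # Learn a non-negative per-visual-word weight vector so that weighted
    # distances separate perceived-similar from perceived-dissimilar pairs.
    import numpy as np

    rng = np.random.default_rng(0)
    n_words = 50
    sigs = rng.random((20, n_words))       # visual-word signatures (toy data)
    similar = [(0, 1), (2, 3)]             # pairs observers rated similar
    dissimilar = [(0, 4), (1, 5)]          # pairs observers rated dissimilar

    w = np.ones(n_words)
    margin, lr = 0.5, 0.05
    for _ in range(200):
        for (i, j), (k, l) in zip(similar, dissimilar):
            d_sim = (sigs[i] - sigs[j]) ** 2          # per-word squared diffs
            d_dis = (sigs[k] - sigs[l]) ** 2
            if w @ d_dis - w @ d_sim < margin:        # hinge-style violation
                w -= lr * (d_sim - d_dis)             # gradient step
                w = np.clip(w, 0.0, None)             # keep weights >= 0
    print("most informative visual words:", np.argsort(w)[-5:])
    ```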

  17. The Emergence of Tool Use during the Second Year of Life

    ERIC Educational Resources Information Center

    Rat-Fischer, Lauriane; O'Regan, J. Kevin; Fagard, Jacqueline

    2012-01-01

    Despite a growing interest in the question of tool-use development in infants, no study so far has systematically investigated how learning to use a tool to retrieve an out-of-reach object progresses with age. This was the first aim of this study, in which 60 infants, aged 14, 16, 18, 20, and 22 months, were presented with an attractive toy and a…

  18. Web tools for effective retrieval, visualization, and evaluation of cardiology medical images and records

    NASA Astrophysics Data System (ADS)

    Masseroli, Marco; Pinciroli, Francesco

    2000-12-01

    To provide easy retrieval, integration, and evaluation of multimodal cardiology images and data in a web browser environment, distributed application technologies and Java programming were used to implement a client-server architecture based on software agents. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. The client side is a Java applet running in a web browser that provides a friendly medical user interface to perform queries on patient and medical test data and to integrate and visualize properly the various query results. A set of tools based on the Java Advanced Imaging API enables processing and analysis of the retrieved cardiology images and quantification of their features in different regions of interest. The platform independence of Java technology makes the developed prototype easy to manage in a centralized form and to provide at each site where an intranet or internet connection is available. By giving healthcare providers effective tools for querying, visualizing, and evaluating comprehensively cardiology medical images and records in all the locations where they may need them, i.e. emergency, operating theaters, wards, or even outpatient clinics, the developed prototype represents an important aid in providing more efficient diagnoses and medical treatments.

  19. IMIRSEL: a secure music retrieval testing environment

    NASA Astrophysics Data System (ADS)

    Downie, John S.

    2004-10-01

    The Music Information Retrieval (MIR) and Music Digital Library (MDL) research communities have long noted the need for formal evaluation mechanisms. Issues concerning the unavailability of freely-available music materials have greatly hindered the creation of standardized test collections with which these communities could scientifically assess the strengths and weaknesses of their various music retrieval techniques. The International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) is being developed at the University of Illinois at Urbana-Champaign (UIUC) specifically to overcome this hindrance to the scientific evaluation of MIR/MDL systems. Together with its subsidiary Human Use of Music Information Retrieval Systems (HUMIRS) project, IMIRSEL will allow MIR/MDL researchers access to the standardized large-scale collection of copyright-sensitive music materials and standardized test queries being housed at UIUC's National Center for Supercomputing Applications (NCSA). Virtual Research Labs (VRL), based upon NCSA's Data-to-Knowledge (D2K) tool set, are being developed through which MIR/MDL researchers will interact with the music materials under a "trusted code" security model.

  20. FIESTA—An R estimation tool for FIA analysts

    Treesearch

    Tracey S. Frescino; Paul L. Patterson; Gretchen G. Moisen; Elizabeth A. Freeman

    2015-01-01

    FIESTA (Forest Inventory ESTimation for Analysis) is a user-friendly R package that was originally developed to support the production of estimates consistent with current tools available for the Forest Inventory and Analysis (FIA) National Program, such as FIDO (Forest Inventory Data Online) and EVALIDator. FIESTA provides an alternative data retrieval and reporting...

  1. Supervised learning of tools for content-based search of image databases

    NASA Astrophysics Data System (ADS)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
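
    The interaction loop described (classify, let the user point at mistakes, retrain) can be sketched generically; the k-NN classifier and the synthetic "truth" rule below are stand-ins for TIM's functional templates and the human in the loop.

    ```python
    # Supervised-learning-from-mistakes loop with a generic classifier.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X = rng.random((10, 3))             # feature vectors (e.g. spectral bands)
    y = (X[:, 0] > 0.5).astype(int)     # labels from the user's first examples

    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    for round_ in range(3):             # each pass = one round of feedback
        test = rng.random((100, 3))
        pred = clf.predict(test)
        truth = (test[:, 0] > 0.5).astype(int)  # stand-in for the user
        mistakes = pred != truth                # "pointing at mistakes"
        if not mistakes.any():
            break
        X = np.vstack([X, test[mistakes]])      # add corrected examples
        y = np.concatenate([y, truth[mistakes]])
        clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
        print(f"round {round_}: corrected {mistakes.sum()} mistakes")
    ```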

  2. Clinical guideline representation in a CDS: a human information processing method.

    PubMed

    Kilsdonk, Ellen; Riezebos, Rinke; Kremer, Leontien; Peute, Linda; Jaspers, Monique

    2012-01-01

    The Dutch Childhood Oncology Group (DCOG) has developed evidence-based guidelines for screening childhood cancer survivors for possible late complications of treatment. These paper-based guidelines appeared not to suit clinicians' information retrieval strategies; it was thus decided to communicate the guidelines through a Computerized Decision Support (CDS) tool. To ensure high usability of this tool, an analysis of clinicians' cognitive strategies in retrieving information from the paper-based guidelines was used as a requirements elicitation method. An information processing model was developed through an analysis of think-aloud protocols and used as input for the design of the CDS user interface. Usability analysis of the user interface showed that the navigational structure of the CDS tool fitted well with the clinicians' mental strategies employed in deciding on survivor screening protocols. Clinicians were more efficient and more complete in deciding on patient-tailored screening procedures when supported by the CDS tool than by the paper-based guideline booklet. The think-aloud method provided detailed insight into users' clinical work patterns that supported the design of a highly usable CDS system.

  3. Retrieval and management of medical information from heterogeneous sources, for its integration in a medical record visualisation tool.

    PubMed

    Cabarcos, Alba; Sanchez, Tamara; Seoane, Jose A; Aguiar-Pulido, Vanessa; Freire, Ana; Dorado, Julian; Pazos, Alejandro

    2010-01-01

    Nowadays, medical practice requires, at the patient point of care (POC), personalised knowledge that can be adjusted at any moment to the clinical needs of each patient, in order to support decision-making processes on the basis of personalised information. To achieve this, adapting hospital information systems is necessary. Thus, there is a need for computational developments capable of retrieving and integrating the large amount of biomedical information available today, managing the complexity and diversity of these systems. Hence, this paper describes a prototype which retrieves biomedical information from different sources, manages it to improve the results obtained and to reduce response time, and, finally, integrates it so that it is useful for the clinician, providing all the information available about the patient at the POC. Moreover, it also uses tools which allow medical staff to communicate and share knowledge.

  4. A user-friendly tool for medical-related patent retrieval.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different ways to name the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules, such as chemical query normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search yielded fairly contrasted results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts. This result should be balanced against the limited evaluation sample. We can also assume that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.

  5. Automated reuseable components system study results

    NASA Technical Reports Server (NTRS)

    Gilroy, Kathy

    1989-01-01

    The Automated Reusable Components System (ARCS) was developed under a Phase 1 Small Business Innovative Research (SBIR) contract for the U.S. Army CECOM. The objectives of the ARCS program were: (1) to investigate issues associated with automated reuse of software components, identify alternative approaches, and select promising technologies, and (2) to develop tools that support component classification and retrieval. The approach followed was to research emerging techniques and experimental applications associated with reusable software libraries, to investigate the more mature information retrieval technologies for applicability, and to investigate the applicability of specialized technologies to improve the effectiveness of a reusable component library. Various classification schemes and retrieval techniques were identified and evaluated for potential application in an automated library system for reusable components. Strategies for library organization and management, component submittal and storage, and component search and retrieval were developed. A prototype ARCS was built to demonstrate the feasibility of automating the reuse process. The prototype was created using a subset of the classification and retrieval techniques that were investigated. The demonstration system was exercised and evaluated using reusable Ada components selected from the public domain. A requirements specification for a production-quality ARCS was also developed.

  6. Is there a need for biomedical CBIR systems in clinical practice? Outcomes from a usability study

    NASA Astrophysics Data System (ADS)

    Antani, Sameer; Xue, Zhiyun; Long, L. Rodney; Bennett, Deborah; Ward, Sarah; Thoma, George R.

    2011-03-01

    Articles in the literature routinely describe advances in Content Based Image Retrieval (CBIR) and its potential for improving clinical practice, biomedical research and education. Several systems have been developed to address particular needs, however, surprisingly few are found to be in routine practical use. Our collaboration with the National Cancer Institute (NCI) has identified a need to develop tools to annotate and search a collection of over 100,000 cervigrams and related, anonymized patient data. One such tool developed for a projected need for retrieving similar patient images is the prototype CBIR system, called CervigramFinder, which retrieves images based on the visual similarity of particular regions on the cervix. In this article we report the outcomes from a usability study conducted at a primary meeting of practicing experts. We used the study to not only evaluate the system for software errors and ease of use, but also to explore its "user readiness", and to identify obstacles that hamper practical use of such systems, in general. Overall, the participants in the study found the technology interesting and bearing great potential; however, several challenges need to be addressed before the technology can be adopted.

  7. Investigation into the computerized data bases of the Employment and Training Administration. Regional Management Information System Project (RMIS) report on second-year activities, 1975--1976

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Postle, W.; Heckman, B.

    1977-01-01

    The research and development project discussed was aimed at creating the necessary computer system for the rapid retrieval, analysis, and display of information to meet the individual and nonroutine needs of the Department of Labor's Employment and Training Administration and the general public. The major objective was to demonstrate that it was both feasible and practical to organize data that are currently available and to provide planning and management information in a much more usable and timely fashion than previously possible. Fast access to data with a system which is easy to use was an important project goal. Programs were written to analyze and display data by means of bar, pie, and line charts, etc. Although prototypical interactive retrieval, analysis, and report formation tools have been developed, further research and development of interactive tools is required. (RWR)

  8. Gorillas (Gorilla gorilla) and orangutans (Pongo pygmaeus) encode relevant problem features in a tool-using task.

    PubMed

    Mulcahy, Nicholas J; Call, Josep; Dunbar, Robin I M

    2005-02-01

    Two important elements in problem solving are the abilities to encode relevant task features and to combine multiple actions to achieve the goal. The authors investigated these 2 elements in a task in which gorillas (Gorilla gorilla) and orangutans (Pongo pygmaeus) had to use a tool to retrieve an out-of-reach reward. Subjects were able to select tools of an appropriate length to reach the reward even when the position of the reward and tools were not simultaneously visible. When presented with tools that were too short to retrieve the reward, subjects were more likely to refuse to use them than when tools were the appropriate length. Subjects were proficient at using tools in sequence to retrieve the reward.

  9. Sex differences in young children's use of tools in a problem-solving task: The role of object-oriented play.

    PubMed

    Gredlein, Jeffrey M; Bjorklund, David F

    2005-06-01

    Three-year-old children were observed in two free-play sessions and participated in a toy-retrieval task, in which only one of six tools could be used to retrieve an out-of-reach toy. Boys engaged in more object-oriented play than girls and were more likely to use tools to retrieve the toy during the baseline tool-use task. All children who did not retrieve the toy during the baseline trials did so after being given a hint, and performance on a transfer-of-training tool-use task approached ceiling levels. This suggests that the sex difference in tool use observed during the baseline phase does not reflect a difference in competency, but rather a sex difference in motivation to interact with objects. Amount of time boys, but not girls, spent in object-oriented play during the free-play sessions predicted performance on the tool-use task. The findings are interpreted in terms of evolutionary theory, consistent with the idea that boys' and girls' play styles evolved to prepare them for adult life in traditional environments.

  10. Recent Developments in Cultural Heritage Image Databases: Directions for User-Centered Design.

    ERIC Educational Resources Information Center

    Stephenson, Christie

    1999-01-01

    Examines the Museum Educational Site Licensing (MESL) Project--a cooperative project between seven cultural heritage repositories and seven universities--as well as other developments of cultural heritage image databases for academic use. Reviews recent literature on image indexing and retrieval, interface design, and tool development, urging a…

  11. The astronaut and the banana peel: An EVA retriever scenario

    NASA Technical Reports Server (NTRS)

    Shapiro, Daniel G.

    1989-01-01

    To prepare for the problem of accidents in Space Station activities, the Extravehicular Activity Retriever (EVAR) robot is being constructed, whose purpose is to retrieve astronauts and tools that float free of the Space Station. Advanced Decision Systems is at the beginning of a project to develop research software capable of guiding EVAR through the retrieval process. This involves addressing problems in machine vision, dexterous manipulation, real time construction of programs via speech input, and reactive execution of plans despite the mishaps and unexpected conditions that arise in uncontrolled domains. The problem analysis phase of this work is presented. An EVAR scenario is used to elucidate major domain and technical problems. An overview of the technical approach to prototyping an EVAR system is also presented.

  12. Multispectral information for gas and aerosol retrieval from TANSO-FTS instrument

    NASA Astrophysics Data System (ADS)

    Herbin, H.; Labonnote, L. C.; Dubuisson, P.

    2012-11-01

    The Greenhouse gases Observing SATellite (GOSAT) mission, and in particular the TANSO-FTS instrument, has the advantage of measuring the same field of view simultaneously in different spectral ranges at high spectral resolution. These features promise to improve not only gaseous retrievals in clear-sky or scattering atmospheres but also the retrieval of aerosol parameters. This paper is therefore dedicated to an Information Content (IC) analysis of the potential synergy between the thermal infrared, shortwave infrared, and visible ranges, aimed at more accurate retrieval of gases and aerosols. The analysis is based on Shannon theory and uses a sophisticated radiative transfer algorithm developed at the Laboratoire d'Optique Atmosphérique that handles multiple scattering. This forward model can be coupled to an optimal estimation method, which allows gas profiles and aerosol granulometry and concentration to be retrieved simultaneously. The analysis of the information provided by the spectral synergy is based on climatologies of dust, volcanic ash, and biomass-burning aerosols. This work was conducted in order to develop a powerful tool that retrieves not only gas concentrations but also aerosol characteristics by selecting the so-called "best channels", i.e., the channels that carry most of the information on gases and aerosols. The methodology developed in this paper could also be used to define the specifications of future high-spectral-resolution missions needed to reach a given accuracy on retrieved parameters.
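
    To make the "best channels" idea concrete, the sketch below greedily selects channels by their incremental Shannon information content, in the standard optimal-estimation notation (K the Jacobian, S_a the a priori covariance, per-channel noise variances). The greedy strategy and all names are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def select_best_channels(K, S_a, noise_var, n_select):
    """Greedy 'best channel' selection by incremental Shannon information
    content: 0.5*ln(1 + k S k^T / sigma^2) for a single channel with row
    Jacobian k, current state covariance S, and noise variance sigma^2."""
    selected = []
    S = S_a.copy()
    for _ in range(n_select):
        gains = np.full(K.shape[0], -np.inf)
        for i in range(K.shape[0]):
            if i in selected:
                continue
            k = K[i:i + 1, :]                     # 1 x n row Jacobian
            gains[i] = 0.5 * np.log(1.0 + (k @ S @ k.T)[0, 0] / noise_var[i])
        best = int(np.argmax(gains))
        selected.append(best)
        k = K[best:best + 1, :]
        # Rank-1 covariance update after assimilating the chosen channel.
        S = S - (S @ k.T @ k @ S) / (noise_var[best] + (k @ S @ k.T)[0, 0])
    return selected                               # channel indices, best first
```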

  13. Natural Language Processing.

    ERIC Educational Resources Information Center

    Chowdhury, Gobinda G.

    2003-01-01

    Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…

  14. GEM-TREND: a web tool for gene expression data mining toward relevant network discovery

    PubMed Central

    Feng, Chunlai; Araki, Michihiro; Kunimoto, Ryo; Tamon, Akiko; Makiguchi, Hiroki; Niijima, Satoshi; Tsujimoto, Gozoh; Okuno, Yasushi

    2009-01-01

    Background: DNA microarray technology provides us with a first step toward the goal of uncovering gene functions on a genomic scale. In recent years, vast amounts of gene expression data have been collected, much of which are available in public databases, such as the Gene Expression Omnibus (GEO). To date, most researchers have been manually retrieving data from databases through web browsers using accession numbers (IDs) or keywords, but gene-expression patterns are not considered when retrieving such data. The Connectivity Map was recently introduced to compare gene expression data by introducing gene-expression signatures (represented by a set of genes with up- or down-regulated labels according to their biological states) and is available as a web tool for detecting similar gene-expression signatures from a limited data set (approximately 7,000 expression profiles representing 1,309 compounds). To help researchers utilize public gene expression data more effectively, we developed a web tool for finding similar gene expression data and generating its co-expression networks from a publicly available database. Results: GEM-TREND, a web tool for searching gene expression data, allows users to search data from GEO using gene-expression signatures or gene expression ratio data as a query, retrieving gene expression data by comparing gene-expression patterns between the query and GEO gene expression data. The comparison methods are based on the nonparametric, rank-based pattern matching approach of Lamb et al. (Science 2006), with the additional calculation of statistical significance. The web tool was tested using gene expression ratio data randomly extracted from GEO and with in-house microarray data. The results validated the ability of GEM-TREND to retrieve gene expression entries biologically related to a query from GEO. For further analysis, a network visualization interface is also provided, whereby genes and gene annotations are dynamically linked to external data repositories. Conclusion: GEM-TREND was developed to retrieve gene expression data by comparing a query gene-expression pattern with those of GEO gene expression data. It could be a very useful resource for finding similar gene expression profiles and constructing gene co-expression networks from a publicly available database. GEM-TREND was designed to be user-friendly and is expected to support knowledge discovery. GEM-TREND is freely available at http://cgs.pharm.kyoto-u.ac.jp/services/network. PMID:19728865

  15. GEM-TREND: a web tool for gene expression data mining toward relevant network discovery.

    PubMed

    Feng, Chunlai; Araki, Michihiro; Kunimoto, Ryo; Tamon, Akiko; Makiguchi, Hiroki; Niijima, Satoshi; Tsujimoto, Gozoh; Okuno, Yasushi

    2009-09-03

    DNA microarray technology provides us with a first step toward the goal of uncovering gene functions on a genomic scale. In recent years, vast amounts of gene expression data have been collected, much of which are available in public databases, such as the Gene Expression Omnibus (GEO). To date, most researchers have been manually retrieving data from databases through web browsers using accession numbers (IDs) or keywords, but gene-expression patterns are not considered when retrieving such data. The Connectivity Map was recently introduced to compare gene expression data by introducing gene-expression signatures (represented by a set of genes with up- or down-regulated labels according to their biological states) and is available as a web tool for detecting similar gene-expression signatures from a limited data set (approximately 7,000 expression profiles representing 1,309 compounds). To help researchers utilize public gene expression data more effectively, we developed a web tool for finding similar gene expression data and generating its co-expression networks from a publicly available database. GEM-TREND, a web tool for searching gene expression data, allows users to search data from GEO using gene-expression signatures or gene expression ratio data as a query, retrieving gene expression data by comparing gene-expression patterns between the query and GEO gene expression data. The comparison methods are based on the nonparametric, rank-based pattern matching approach of Lamb et al. (Science 2006), with the additional calculation of statistical significance. The web tool was tested using gene expression ratio data randomly extracted from GEO and with in-house microarray data. The results validated the ability of GEM-TREND to retrieve gene expression entries biologically related to a query from GEO. For further analysis, a network visualization interface is also provided, whereby genes and gene annotations are dynamically linked to external data repositories. GEM-TREND was developed to retrieve gene expression data by comparing a query gene-expression pattern with those of GEO gene expression data. It could be a very useful resource for finding similar gene expression profiles and constructing gene co-expression networks from a publicly available database. GEM-TREND was designed to be user-friendly and is expected to support knowledge discovery. GEM-TREND is freely available at http://cgs.pharm.kyoto-u.ac.jp/services/network.
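
    The rank-based pattern matching of Lamb et al. that both GEM-TREND records cite is a Kolmogorov-Smirnov-style enrichment of the up- and down-regulated gene sets within a ranked gene list. Below is a compact sketch of that published scoring scheme, assuming every signature gene is present in the ranked list; it is an illustration, not GEM-TREND's source code.

```python
import numpy as np

def ks_enrichment(ranked_genes, gene_set):
    """One-sided KS-style enrichment of a gene set within a ranked gene list
    (Lamb et al., Science 2006). Assumes all set genes appear in the list."""
    n, t = len(ranked_genes), len(gene_set)
    positions = np.where(np.isin(ranked_genes, list(gene_set)))[0] + 1  # 1-based
    j = np.arange(1, t + 1)
    a = np.max(j / t - positions / n)          # deviation above the diagonal
    b = np.max(positions / n - (j - 1) / t)    # deviation below the diagonal
    return a if a > b else -b

def connectivity_score(ranked_genes, up_set, down_set):
    """Positive when up-genes rank near the top and down-genes near the bottom;
    zero when both sets deviate in the same direction."""
    ks_up = ks_enrichment(ranked_genes, up_set)
    ks_down = ks_enrichment(ranked_genes, down_set)
    return 0.0 if np.sign(ks_up) == np.sign(ks_down) else ks_up - ks_down
```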

  16. Biological data integration: wrapping data and tools.

    PubMed

    Lacroix, Zoé

    2002-06-01

    Nowadays, scientific data are inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views, which maps the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces.
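
    A toy sketch of the two wrapper tasks described above: a retrieval component that maps view attributes to a source's native query capabilities, and an XML engine that rebuilds the raw records into the virtual structure. The endpoint, the tab-separated record format, and all names are assumptions for illustration, not the paper's system.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

class SearchViewWrapper:
    """Minimal two-task wrapper: (1) retrieval via a 'search view' mapping of
    view attributes to source fields, (2) an XML engine for the virtual view."""

    def __init__(self, base_url, attr_map):
        self.base_url = base_url    # hypothetical CGI-style search endpoint
        self.attr_map = attr_map    # view attribute -> source field name

    def query(self, **view_attrs):
        # Task 1: translate view attributes into the source's native query.
        params = {self.attr_map[k]: v for k, v in view_attrs.items()}
        url = self.base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            raw = resp.read().decode()
        return self.to_xml(self.parse(raw))

    def parse(self, raw):
        # Assumed source format: one record per line, tab-separated fields
        # in attr_map order (source-specific in a real wrapper).
        keys = list(self.attr_map)
        return [dict(zip(keys, line.split("\t")))
                for line in raw.splitlines() if line]

    def to_xml(self, records):
        # Task 2: build the expected output w.r.t. the virtual structure.
        root = ET.Element("records")
        for rec in records:
            node = ET.SubElement(root, "record")
            for k, v in rec.items():
                ET.SubElement(node, k).text = v
        return ET.tostring(root, encoding="unicode")
```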

  17. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature.

    PubMed

    Song, Michael M; Simonsen, Cheryl K; Wilson, Joanna D; Jenkins, Marjorie R

    2016-02-01

    An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH.

  18. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature

    PubMed Central

    Song, Michael M.; Simonsen, Cheryl K.; Wilson, Joanna D.

    2016-01-01

    Background: An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. Methods: PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Results: Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. Conclusions: The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH. PMID:26555409
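
    The general mechanics of such a bundled PubMed search can be sketched against the NCBI E-utilities esearch endpoint. The SGSH filter string below is a hypothetical stand-in, since neither record reproduces the actual TTUHSC term bundle.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical sex/gender bundle -- illustrative only, not the TTUHSC filter.
SGSH_FILTER = ('("sex characteristics"[MeSH Terms] OR "sex factors"[MeSH Terms]'
               ' OR "sex difference*"[Title/Abstract]'
               ' OR "gender difference*"[Title/Abstract])')

def sgsh_search(topic, retmax=100):
    """Combine a clinical topic with the SGSH bundle and run an esearch query,
    returning the matching PMIDs."""
    term = f"({topic}) AND {SGSH_FILTER}"
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                     "retmax": retmax, "retmode": "json"}))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# Example: sgsh_search("stroke")
```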

  19. Development of an information retrieval tool for biomedical patents.

    PubMed

    Alves, Tiago; Rodrigues, Rúben; Costa, Hugo; Rocha, Miguel

    2018-06-01

    The volume of biomedical literature has been increasing in recent years. Patent documents have followed this trend, being important sources of biomedical knowledge, technical details and curated data, which are put together along the granting process. The field of Biomedical Text Mining (BioTM) has been creating solutions for the problems posed by the unstructured nature of natural language, which makes the search for information a challenging task. Several BioTM techniques can be applied to patents. Among these, Information Retrieval (IR) includes processes by which relevant data are obtained from collections of documents. In this work, the main goal was to build a patent pipeline addressing IR tasks over patent repositories to make these documents amenable to BioTM tasks. The pipeline was developed within @Note2, an open-source computational framework for BioTM, adding a number of modules to the core libraries, including patent metadata and full-text retrieval, PDF-to-text conversion and optical character recognition. Also, user interfaces were developed for the main operations, materialized in a new @Note2 plug-in. The integration of these tools in @Note2 opens opportunities to run BioTM tools over patent texts, including tasks from Information Extraction, such as Named Entity Recognition or Relation Extraction. We demonstrated the pipeline's main functions with a case study, using an available benchmark dataset from the BioCreative challenges. We also show the use of the plug-in with a user query related to the production of vanillin. This work makes all the relevant content from patents available to the scientific community, drastically decreasing the time required for this task, and provides graphical interfaces to ease the use of these tools.
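
    The PDF-to-text conversion with OCR fallback that such a pipeline needs can be sketched as follows, using pdfminer.six, pdf2image and pytesseract as stand-in libraries rather than the actual @Note2 modules.

```python
# Stand-in PDF-to-text step with OCR fallback for scanned patent pages.
from pdfminer.high_level import extract_text   # pip install pdfminer.six
from pdf2image import convert_from_path        # pip install pdf2image
import pytesseract                             # pip install pytesseract

def patent_pdf_to_text(path, min_chars=200):
    """Try the embedded text layer first; fall back to OCR if the PDF
    appears to be a scan (heuristic: too little extractable text)."""
    text = extract_text(path)
    if len(text.strip()) >= min_chars:
        return text
    pages = convert_from_path(path, dpi=300)   # rasterize each page
    return "\n".join(pytesseract.image_to_string(p) for p in pages)
```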

  20. Voss retrieves a small tool from a tool kit in ISS Node 1/Unity

    NASA Image and Video Library

    2001-08-13

    STS105-E-5175 (13 August 2001) --- Astronaut James S. Voss retrieves a small tool from a tool case in the U.S.-built Unity node aboard the International Space Station (ISS). The Expedition Two flight engineer is only days away from returning to Earth following five months aboard the orbital outpost. The image was recorded with a digital still camera.

  1. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.

    1988-01-01

    The ground-based demonstration of EVA Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out, (2) searches for and acquires the target, (3) plans and executes a rendezvous while continuously tracking the target, (4) avoids stationary and moving obstacles, (5) reaches for and grapples the target, (6) returns to transfer the object, and (7) returns to base.

  2. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.; Phinney, Dale E.

    1989-01-01

    The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.

  3. DynGO: a tool for visualizing and mining of Gene Ontology and its associations

    PubMed Central

    Liu, Hongfang; Hu, Zhang-Zhi; Wu, Cathy H

    2005-01-01

    Background: A large volume of data and information about genes and gene products has been stored in various molecular biology databases. A major challenge for knowledge discovery using these databases is to identify related genes and gene products in disparate databases. The development of Gene Ontology (GO) as a common vocabulary for annotation allows integrated queries across multiple databases and identification of semantically related genes and gene products (i.e., genes and gene products that have similar GO annotations). Meanwhile, dozens of tools have been developed for browsing, mining or editing GO terms, their hierarchical relationships, or their "associated" genes and gene products (i.e., genes and gene products annotated with GO terms). Tools that allow users to directly search and inspect relations among all GO terms and their associated genes and gene products from multiple databases are needed. Results: We present a standalone package called DynGO, which provides several advanced functionalities in addition to the standard browsing capability of the official GO browsing tool (AmiGO). DynGO allows users to conduct batch retrieval of GO annotations for a list of genes and gene products, and semantic retrieval of genes and gene products sharing similar GO annotations. The results are shown in an association tree organized according to GO hierarchies and supported by many dynamic display options such as sorting tree nodes or changing the orientation of the tree. For GO curators and frequent GO users, DynGO provides fast and convenient access to GO annotation data. DynGO is generally applicable to any data set where the records are annotated with GO terms, as illustrated by two examples. Conclusion: We have presented a standalone package, DynGO, that provides functionalities to search and browse GO and its association databases as well as several additional functions such as batch retrieval and semantic retrieval. The complete documentation and software are freely available for download from the project website. PMID:16091147
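
    As a rough illustration of what "semantic retrieval of genes sharing similar GO annotations" can mean in practice, the sketch below scores gene products by the overlap of their GO term sets. The Jaccard measure and all names here are stand-ins; the abstract does not specify DynGO's actual similarity metric.

```python
def go_jaccard(annots_a, annots_b):
    """Similarity of two gene products as Jaccard overlap of their GO term
    sets (a simple stand-in for an unspecified semantic similarity)."""
    a, b = set(annots_a), set(annots_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def semantic_retrieve(query_annots, annotation_db, threshold=0.5):
    """Return (gene_id, score) pairs whose GO annotations resemble the
    query's, best match first. annotation_db: gene_id -> iterable of GO ids."""
    scored = ((gid, go_jaccard(query_annots, terms))
              for gid, terms in annotation_db.items())
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: -s[1])
```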

  4. LandEx - Fast, FOSS-Based Application for Query and Retrieval of Land Cover Patterns

    NASA Astrophysics Data System (ADS)

    Netzel, P.; Stepinski, T.

    2012-12-01

    The amount of satellite-based spatial data is continuously increasing, making the development of efficient data search tools a priority. The bulk of existing research on searching satellite-gathered data concentrates on images and is based on the concept of Content-Based Image Retrieval (CBIR); however, available solutions are not efficient and robust enough to be put to use as deployable web-based search tools. Here we report on the development of a practical, deployable tool that searches classified, rather than raw, images. LandEx (Landscape Explorer) is a GeoWeb-based tool for Content-Based Pattern Retrieval (CBPR) within the National Land Cover Dataset 2006 (NLCD2006). The USGS-developed NLCD2006 is derived from Landsat multispectral images; it covers the entire conterminous U.S. at a resolution of 30 meters/pixel and depicts 16 land cover classes. The size of NLCD2006 is about 10 Gpixels (161,000 x 100,000 pixels). LandEx is a multi-tier GeoWeb application based on Open Source Software. Its main components are GeoExt/OpenLayers (user interface), GeoServer (OGC WMS, WCS and WPS server), and GRASS (calculation engine). LandEx performs search using a query-by-example approach: the user selects a reference scene (exhibiting a chosen pattern of land cover classes) and the tool produces, in real time, a map indicating the degree of similarity between the reference pattern and all local patterns across the U.S. A scene pattern is encapsulated by a 2D histogram of classes and sizes of single-class clumps. Pattern similarity is based on the notion of mutual information. The resultant similarity map can be viewed and navigated in a web browser, or downloaded as a GeoTIFF file for more in-depth analysis. LandEx is available at http://sil.uc.edu
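
    The pattern encoding and comparison just described (2D class/clump-size histograms compared via mutual information) can be sketched as follows. The Jensen-Shannon form used here is one plausible mutual-information-based reading of the abstract, not LandEx's verified implementation; histograms are assumed to be numpy arrays.

```python
import numpy as np

def jsd_similarity(h1, h2):
    """Similarity of two 2D (class x clump-size) histograms via the
    Jensen-Shannon divergence, which equals the mutual information between
    the histogram mixture and a binary scene indicator. Returns a value in
    [0, 1], with 1 meaning identical normalized patterns."""
    p = h1 / h1.sum()
    q = h2 / h2.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0              # m > 0 wherever a > 0, so this is safe
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)   # bounded by 1 with log base 2
    return 1.0 - jsd
```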

  5. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  6. A manipulator arm for zero-g simulations

    NASA Technical Reports Server (NTRS)

    Brodie, S. B.; Grant, C.; Lazar, J. J.

    1975-01-01

    A 12-ft counterbalanced Slave Manipulator Arm (SMA) was designed and fabricated to be used for resolving the questions of operational applications, capabilities, and limitations for such remote manned systems as the Payload Deployment and Retrieval Mechanism (PDRM) for the shuttle, the Free-Flying Teleoperator System, the Advanced Space Tug, and Planetary Rovers. As a developmental tool for the shuttle manipulator system (or PDRM), the SMA represents an approximate one-quarter scale working model for simulating and demonstrating payload handling, docking assistance, and satellite servicing. For the Free-Flying Teleoperator System and the Advanced Tug, the SMA provides a near full-scale developmental tool for satellite servicing, docking, and deployment/retrieval procedures, techniques, and support equipment requirements. For the Planetary Rovers, it provides an oversize developmental tool for sample handling and soil mechanics investigations. The design of the SMA was based on concepts developed for a 40-ft NASA technology arm to be used for zero-g shuttle manipulator simulations.

  7. Space resources. Overview

    NASA Technical Reports Server (NTRS)

    Mckay, Mary Fae (Editor); Mckay, David S. (Editor); Duke, Michael B. (Editor)

    1992-01-01

    Space resources must be used to support life on the Moon and in the exploration of Mars. Just as the pioneers applied the tools they brought with them to resources they found along the way rather than trying to haul all their needs over a long supply line, so too must space travelers apply their high technology tools to local resources. This overview describes the findings of a study on the use of space resources in the development of future space activities and defines the necessary research and development that must precede the practical utilization of these resources. Space resources considered included lunar soil, oxygen derived from lunar soil, material retrieved from near-Earth asteroids, abundant sunlight, low gravity, and high vacuum. The study participants analyzed the direct use of these resources, the potential demand for products from them, the techniques for retrieving and processing space resources, the necessary infrastructure, and the economic tradeoffs.

  8. Toward a coherent set of radiative transfer tools for the analysis of planetary atmospheres .

    NASA Astrophysics Data System (ADS)

    Grassi, D.; Ignatiev, N. I.; Zasova, L. V.; Piccioni, G.; Adriani, A.; Moriconi, M. L.; Sindoni, G.; D'Aversa, E.; Snels, M.; Altieri, F.; Migliorini, A.; Stefani, S.; Politi, R.; Dinelli, B. M.; Geminale, A.; Rinaldi, G.

    The IAPS experience in the analysis of planetary atmospheres from visible and infrared measurements dates back to the early '90s, in the framework of the IFSI participation in the Mars96 program. Since then, the forward models as well as the retrieval schemes have been constantly updated and have seen extensive use in the analysis of data from the Mars Express, Venus Express and Cassini missions. At the eve of a new series of missions (Juno, ExoMars, JUICE), we review the tools currently available to the Italian community, the latest developments and future perspectives. Notably, recent reanalyses of PFS-MEX and VIRTIS-VEX data (Grassi et al., 2014) led to a full convergence of complete Bayesian retrieval schemes and approximate forward models, achieving a degree of maturity and flexibility quite close to the state-of-the-art NEMESIS package (Irwin et al., 2007). As a test case, the retrieval code for the JIRAM observations of hot spots will be discussed, with extensive validation against simulated observations.

  9. Developing and Evaluating a Chinese Collocation Retrieval Tool for CFL Students and Teachers

    ERIC Educational Resources Information Center

    Chen, Howard Hao-Jan; Wu, Jian-Cheng; Yang, Christine Ting-Yu; Pan, Iting

    2016-01-01

    The development of collocational knowledge is important for foreign language learners; unfortunately, learners often have difficulties producing proper collocations in the target language. Among the various ways of collocation learning, the DDL (data-driven learning) approach encourages the independent learning of collocations and allows learners…

  10. Engineering Analysis Using a Web-based Protocol

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.; Claus, Russell W.

    2002-01-01

    This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
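
    A minimal sketch of what encapsulating an engineering run in XML might look like; the element names and record structure are hypothetical, not the paper's actual LAPIN schema.

```python
import xml.etree.ElementTree as ET

def encode_run(case_id, inputs, results):
    """Serialize one analysis run (inputs and results) as an XML fragment.
    Names like 'analysisRun', 'input' and 'result' are illustrative."""
    run = ET.Element("analysisRun", attrib={"case": case_id, "code": "LAPIN"})
    for name, value in inputs.items():
        ET.SubElement(run, "input", attrib={"name": name}).text = str(value)
    for name, value in results.items():
        ET.SubElement(run, "result", attrib={"name": name}).text = str(value)
    return ET.tostring(run, encoding="unicode")

# Example: encode_run("inlet-01", {"mach": 2.4}, {"pressureRecovery": 0.92})
```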

  11. Regional trace gas monitoring simplified - A linear retrieval scheme for carbon monoxide from hyperspectral soundings

    NASA Astrophysics Data System (ADS)

    Smith, N.; Huang, A.; Weisz, E.; Annegarn, H. J.

    2011-12-01

    The Fast Linear Inversion Trace gas System (FLITS) is designed to retrieve tropospheric total-column trace gas densities [molec cm^-2] from space-borne hyperspectral infrared soundings. The development of a new retrieval scheme was motivated by the need for near real-time air quality monitoring at high spatial resolution. We present a case study of FLITS carbon monoxide (CO) retrievals from daytime (descending orbit) Infrared Atmospheric Sounding Interferometer (IASI) measurements, which have a 0.5 cm^-1 spectral resolution and a 12 km footprint at nadir. The standard Level 2 IASI CO retrieval product (COL2) is available in near real-time but is spatially averaged over 2 x 2 pixels, or 50 x 50 km, and is thus more suitable for global analysis. The study region is Southern Africa (south of the equator) for the period 28-31 August 2008. An atmospheric background estimate is obtained from a chemical transport model, emissivity from regional measurements and surface temperature (ST) from space-borne retrievals. The CO background error is set to 10%. FLITS retrieves CO by assuming a simple linear relationship between the IASI measurements and the background estimate of the atmosphere and surface parameters. This differs from the COL2 algorithm, which treats CO retrieval as a moderately non-linear problem. When compared to COL2, the FLITS retrievals display similar trends in the distribution and transport of CO over time, with the advantage of improved (single-pixel) spatial resolution. The value of the averaging kernel (A) is consistently above 0.5, indicating that FLITS retrievals have a stable dependence on the measurement. This stability is achieved through careful channel selection in the strongest CO absorption lines (2050-2225 cm^-1) and joint retrieval with skin temperature (IASI sensitivity to CO is highly correlated with ST), so no spatial averaging is necessary. We conclude that the simplicity and stability of FLITS make it useful first as a research tool (the algorithm is easy to understand and computationally simple enough to run on most desktop computers) and second as an operational tool that can produce near real-time CO retrievals at instrument resolution for regional monitoring.
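
    The linear relationship the abstract describes corresponds to a one-step optimal-estimation inversion about the background state. The sketch below shows that general form, including the averaging kernel A whose values the abstract reports; it is illustrative, not the operational FLITS code.

```python
import numpy as np

def linear_retrieval(y, F_xb, x_b, K, S_a, S_e):
    """One-step linear inversion about a background state x_b.
    y: measurement vector; F_xb: forward model evaluated at x_b;
    K: Jacobian; S_a, S_e: background and measurement error covariances."""
    G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)   # gain matrix
    x_hat = x_b + G @ (y - F_xb)                          # retrieved state
    A = G @ K                                             # averaging kernel
    return x_hat, A   # diag(A) near 1 => retrieval driven by the measurement
```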

  12. An overview of the Office of Space Flight satellite servicing program plan

    NASA Technical Reports Server (NTRS)

    Levin, George M.; Erwin, Harry O., Jr.

    1987-01-01

    A comprehensive program for the development of satellite servicing tools and techniques is being currently carried out by the Office of Space Flight. The program is based on a satellite servicing infrastructure formulated by analyzing satellite servicing requirements; the program is Shuttle-based and compatible with the Orbital Maneuvering Vehicle and Space Station. The content of the satellite servicing program is reviewed with reference to the tools, techniques, and procedures being developed for refueling (or consumables resupply), repairing, and retrieving.

  13. The GLOBAL Learning and Observations to Benefit the Environment (GLOBE) Data Visualization and Retrieval System. Building a robust system for scientists and students.

    NASA Astrophysics Data System (ADS)

    Overoye, D.; Lewis, C.; Butler, D. M.; Andersen, T. J.

    2016-12-01

    The Global Learning and Observations to Benefit the Environment (GLOBE) Program is a worldwide hands-on, primary and secondary school-based science and education program founded on Earth Day 1995. Implemented in 117 countries, GLOBE promotes the teaching and learning of science, supporting students, teachers and scientists worldwide to collaborate with each other on inquiry-based investigations of the Earth system. The GLOBE Data Information System (DIS) currently supports users with the ability to enter data from over 50 different science protocols. GLOBE's Data Access and Visualization tools have been developed to accommodate the need to display and retrieve data from this large number of protocols. The community of users is also diverse, including NASA scientists, citizen scientists and grade school students. The challenge for GLOBE is to meet the needs from this diverse set of users with protocol specific displays that are simple enough for a GLOBE school to use, but also provide enough features for a NASA Scientist to retrieve data sets they are interested in. During the last 3 years, the GLOBE visualization system has evolved to meet the needs of these various users, leveraging user feedback and technological advances. Further refinements and enhancements continue. In this session we review the design and capabilities of the GLOBE visualization and data retrieval tool set, discuss the evolution of these tools, and discuss coming directions.

  14. Development of the Aboriginal Communication Assessment After Brain Injury (ACAABI): A screening tool for identifying acquired communication disorders in Aboriginal Australians.

    PubMed

    Armstrong, Elizabeth M; Ciccone, Natalie; Hersh, Deborah; Katzenellebogen, Judith; Coffin, Juli; Thompson, Sandra; Flicker, Leon; Hayward, Colleen; Woods, Deborah; McAllister, Meaghan

    2017-06-01

    Acquired communication disorders (ACD) following stroke and traumatic brain injury may not be correctly identified in Aboriginal Australians due to a lack of linguistically and culturally appropriate assessment tools. In this paper we explore key issues that were considered in the development of the Aboriginal Communication Assessment After Brain Injury (ACAABI) - a screening tool designed to assess the presence of ACD in Aboriginal populations. A literature review and consultation with key stakeholders were undertaken to explore the directions needed to develop a new tool, based on existing tools and recommendations for future developments. The literature searches revealed no existing screening tool for ACD in these populations, but identified tools in the areas of cognition and social-emotional wellbeing. The articles retrieved described details of the content and style of these tools, with recommendations for the development and administration of a new tool. The findings from the interviews and focus groups were consistent with the approach recommended in the literature. There is a need for a screening tool for ACD to be developed, but any tool must be informed by knowledge of Aboriginal language and culture and by community input in order to be acceptable and valid.

  15. Data Compression in Full-Text Retrieval Systems.

    ERIC Educational Resources Information Center

    Bell, Timothy C.; And Others

    1993-01-01

    Describes compression methods for components of full-text systems such as text databases on CD-ROM. Topics discussed include storage media; structures for full-text retrieval, including indexes, inverted files, and bitmaps; compression tools; memory requirements during retrieval; and ranking and information retrieval. (Contains 53 references.)…
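
    As a concrete taste of the compression techniques such systems apply to inverted files, the sketch below delta-encodes a sorted posting list and packs the gaps with variable-byte coding, one classic method from this literature (the article's own methods are not reproduced here).

```python
def vbyte_encode(doc_ids):
    """Compress a sorted posting list: delta gaps + variable-byte coding,
    7 payload bits per byte, high bit flags the final byte of each gap."""
    out, prev = bytearray(), 0
    for d in doc_ids:
        gap = d - prev
        prev = d
        chunk = [gap & 0x7F]
        while gap >= 0x80:
            gap >>= 7
            chunk.append(gap & 0x7F)
        chunk[0] |= 0x80                  # mark the low-order (last) byte
        out.extend(reversed(chunk))       # emit high-order bytes first
    return bytes(out)

def vbyte_decode(data):
    """Invert vbyte_encode, rebuilding absolute document ids."""
    ids, n, prev = [], 0, 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if b & 0x80:                      # terminator byte: gap complete
            prev += n
            ids.append(prev)
            n = 0
    return ids

# Example: vbyte_decode(vbyte_encode([3, 7, 300])) == [3, 7, 300]
```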

  16. Vega-Constellation Tools to Analyze Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Savorskiy, V.; Loupian, E.; Balashov, I.; Kashnitskii, A.; Konstantinova, A.; Tolpin, V.; Uvarov, I.; Kuznetsov, O.; Maklakov, S.; Panova, O.; Savchenko, E.

    2016-06-01

    Creating high-performance means of managing massive hyperspectral data (HSD) arrays is a pressing challenge, particularly when dealing with disparate information resources. To address this problem, the present work develops tools for working with HSD in a distributed information infrastructure, i.e., primarily for use in remote access mode. The main feature of the presented approach is the development of remotely accessed services that allow users both to conduct search and retrieval procedures on HSD sets and to analyze and process HSD in remote mode. These services were implemented within the VEGA-Constellation family of information systems, which were extended by adding tools oriented to support the study of certain classes of natural objects through their HSD. The developed tools provide capabilities for analyzing such objects as vegetation canopies (forest and agriculture), open soils, forest fires, and areas of thermal anomalies. The developed software tools were successfully tested on Hyperion data sets.

  17. New Knowledge Management Systems: The Implications for Data Discovery, Collection Development, and the Changing Role of the Librarian.

    ERIC Educational Resources Information Center

    Stern, David

    2003-01-01

    Discusses questions to consider as chemistry libraries develop new information storage and retrieval systems. Addresses new integrated tools for data manipulation that will guarantee access to information; differential pricing and package plans and effects on libraries' budgeting; and the changing role of the librarian. (LRW)

  18. Earth Observation oriented teaching materials development based on OGC Web services and Bashyt generated reports

    NASA Astrophysics Data System (ADS)

    Stefanut, T.; Gorgan, D.; Giuliani, G.; Cau, P.

    2012-04-01

    Creating e-Learning materials in the Earth Observation domain is a difficult task, especially for non-technical specialists who have to deal with distributed repositories, large amounts of information and intensive processing requirements. Furthermore, due to the lack of specialized applications for developing teaching resources, technical knowledge is required even for defining data presentation structures or for developing and customizing user interaction techniques for better teaching results. As a response to these issues, during the GiSHEO FP7 project [1] and later in the EnviroGRIDS FP7 project [2], we have developed the eGLE e-Learning Platform [3], a tool-based application that provides dedicated functionalities to Earth Observation specialists for developing teaching materials. The proposed architecture is built around a client-server design that provides the core functionalities (e.g., user management, tools integration, teaching material settings) and has been extended with a distributed component implemented through the tools that are integrated into the platform, as described further. Our approach to dealing with multiple transfer protocol types, heterogeneous data formats and various user interaction techniques involves the development and integration of very specialized elements (tools) that can be customized by trainers in a visual manner through simple user interfaces. In our concept, each tool is dedicated to a specific data type, implementing optimized mechanisms for searching, retrieving, visualizing and interacting with it. At the same time, any number of tools can be integrated into each learning resource through drag-and-drop interaction, allowing the teacher to retrieve pieces of data of various types (e.g., images, charts, tables, text, videos) from different sources (e.g., OGC web services, charts created through the Bashyt application) through different protocols (e.g., WMS, the Bashyt API, FTP, HTTP) and to display them all together in a unitary manner using the same visual structure [4]. Addressing the high-performance computing requirements met when processing environmental data, our platform can be easily extended through tools that connect to GRID infrastructures, WCS web services, the Bashyt API (for creating specialized hydrological reports) or any other specialized services (e.g., graphics cluster visualization) that can be reached over the Internet. At run time, each tool is launched on the trainee's computer in an asynchronous running mode and connects to the data source established by the teacher, retrieving and displaying the information to the user. The data transfer is accomplished directly between the trainee's computer and the corresponding services (e.g., OGC, Bashyt API) without passing through the core platform server. In this manner, the eGLE application can provide better and more responsive connections to a large number of users.

  19. Near-Real Time Cloud Retrievals from Operational and Research Meteorological Satellites

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Nguyen, Louis; Palikonda, Rabindra; Heck, Patrick W.; Spangenberg, Douglas A.; Doelling, David R.; Ayers, J. Kirk; Smith, William L., Jr.; Khaiyer, Mandana M.; Trepte, Qing Z.; et al.

    2008-01-01

    A set of cloud retrieval algorithms developed for CERES and applied to MODIS data have been adapted to analyze other satellite imager data in near-real time. The cloud products, including single-layer cloud amount, top and base height, optical depth, phase, effective particle size, and liquid and ice water paths, are being retrieved from GOES-10/11/12, MTSAT-1R, FY-2C, and Meteosat imager data as well as from MODIS. A comprehensive system to normalize the calibrations to MODIS has been implemented to maximize consistency in the products across platforms. Estimates of surface and top-of-atmosphere broadband radiative fluxes are also provided. Multilayered cloud properties are retrieved from GOES-12, Meteosat, and MODIS data. Native pixel resolution analyses are performed over selected domains, while reduced sampling is used for full-disk retrievals. Tools have been developed for matching the pixel-level results with instrumented surface sites and active sensor satellites. The calibrations, methods, examples of the products, and comparisons with the ICESat GLAS lidar are discussed. These products are currently being used for aircraft icing diagnoses, numerical weather modeling assimilation, and atmospheric radiation research and have potential for use in many other applications.

  20. Visual Based Retrieval Systems and Web Mining--Introduction.

    ERIC Educational Resources Information Center

    Iyengar, S. S.

    2001-01-01

    Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)

  1. Fishing tool retrieves MWD nuclear source from deep well

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    A new wire line tool has successfully retrieved the nuclear sources and formation data from a measurement-while-drilling (MWD) tool stuck in a deep, highly deviated well in the Gulf of Mexico. On Nov. 8, 1993, Schlumberger Wireline and Testing and Anadrill ran a logging-while-drilling inductive coupling (LINC) tool on conventional wire line to fish the gamma ray and neutron sources from a compensated density neutron (CDN) tool stuck in a well at 19,855 ft with an inclination greater than 80°. The paper briefly describes the operation and equipment.

  2. ASIS '99 Knowledge: Creation, Organization and Use, Part II: SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1999

    1999-01-01

    Abstracts and descriptions of Special Interest Group (SIG) sessions include such topics as: knowledge management tools, knowledge organization, information retrieval, information seeking behavior, metadata, indexing, library service for distance education, electronic books, future information workforce needs, technological developments, and…

  3. Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Aggarwal, Preeti; Sardana, H. K.

    2010-02-01

    Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool to help radiologists check medical images and diagnoses. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular context of diagnosis. The challenge in medical informatics is to develop tools for analyzing the content of medical images and to represent them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner as to provide effective and efficient pre-diagnosed cases with proven pathology for the current case at the right time. In this paper, it is suggested that the integration of content-based image retrieval (CBIR) in CAD can bring enormous results in medicine, especially in diagnosis. This approach is also compared with other approaches by highlighting its advantages over them.

  4. Comparison of Gaussian and non-Gaussian Atmospheric Profile Retrievals from Satellite Microwave Data

    NASA Astrophysics Data System (ADS)

    Kliewer, A.; Forsythe, J. M.; Fletcher, S. J.; Jones, A. S.

    2017-12-01

    The Cooperative Institute for Research in the Atmosphere at Colorado State University has recently developed two different versions of a mixed-distribution (lognormal combined with Gaussian) microwave temperature and mixing ratio retrieval system, in addition to the original Gaussian-based approach. These retrieval systems are based upon 1DVAR theory but have been adapted to use different descriptive statistics of the lognormal distribution to minimize the background errors. The input radiance data are from the AMSU-A and MHS instruments on the NOAA series of spacecraft. To help illustrate how the three retrievals are affected by the change in distribution, we are in the process of creating a new website to show the output from the different retrievals. Here we present initial results from different dynamical situations to show how the tool could be used by forecasters as well as educators. However, because the new retrieved values come from a non-Gaussian-based 1DVAR, they display non-Gaussian behaviors that need to pass a quality control measure consistent with this distribution; these new measures are presented here along with initial results for checking the retrievals.
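
    A minimal sketch of the idea behind a mixed-distribution background term: Gaussian in temperature but lognormal (i.e., Gaussian in ln q) in mixing ratio. The function and variable names are assumptions for illustration, not CIRA's retrieval code, and S_inv is assumed to be expressed in the transformed variables.

```python
import numpy as np

def background_cost(x, x_b, S_inv, lognormal_mask):
    """Background term of a mixed Gaussian/lognormal 1DVAR cost function.
    x, x_b: state and background vectors (all entries positive where the
    lognormal transform applies); lognormal_mask: True for mixing-ratio
    elements, False for temperature elements; S_inv: inverse background
    error covariance in the transformed space."""
    d = np.where(lognormal_mask, np.log(x) - np.log(x_b), x - x_b)
    return 0.5 * d @ S_inv @ d
```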

  5. A qualitative study on personal information management (PIM) in clinical and basic sciences faculty members of a medical university in Iran

    PubMed Central

    Sedghi, Shahram; Abdolahi, Nida; Azimi, Ali; Tahamtan, Iman; Abdollahi, Leila

    2015-01-01

    Background: Personal Information Management (PIM) refers to the tools and activities used to save and retrieve personal information for future use. This study examined the PIM activities of faculty members of Iran University of Medical Sciences (IUMS) regarding their preferred PIM tools and the four aspects of acquiring, organizing, storing and retrieving personal information. Methods: The qualitative design was based on a phenomenology approach, and we carried out 37 interviews with clinical and basic sciences faculty members of IUMS in 2014. The participants were selected using a random sampling method. All interviews were recorded by a digital voice recorder, and then transcribed, codified and finally analyzed using NVivo 8 software. Results: The use of PIM electronic tools (e-tools) was below expectation among the studied sample, and just 37% had reasonable knowledge of PIM e-tools such as external hard drives, flash memories, etc. However, all participants used both paper and electronic devices to store and access information. Internal mass memories (in laptops) and flash memories were the most used e-tools to save information. Most participants used "subject" (41.0%) and "file name" (33.7%) to save, organize and retrieve their stored information. Most users preferred paper-based rather than electronic tools to keep their personal information. Conclusion: Faculty members had little knowledge of PIM techniques and tools. Those who organized personal information could more easily retrieve the stored information for future use. Training courses to enhance familiarity with PIM tools and techniques are suggested. PMID:26793648

  6. A vector-product information retrieval system adapted to heterogeneous, distributed computing environments

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library Systems (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X-Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
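
    For readers unfamiliar with the vector-product family of methods, the sketch below ranks documents by the cosine of plain term-frequency vectors against a natural-language query. NELS's actual weighting scheme is not described in the abstract, so this is a generic illustration.

```python
import math
from collections import Counter

def rank_documents(query, docs):
    """Vector-space ranking: score each document by the cosine similarity of
    its term-frequency vector against the query's. docs: id -> text."""
    q = Counter(query.lower().split())
    q_norm = math.sqrt(sum(v * v for v in q.values()))
    scores = []
    for doc_id, text in docs.items():
        d = Counter(text.lower().split())
        dot = sum(q[t] * d[t] for t in q)
        norm = q_norm * math.sqrt(sum(v * v for v in d.values()))
        scores.append((doc_id, dot / norm if norm else 0.0))
    return sorted(scores, key=lambda s: -s[1])   # most relevant first
```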

  7. Current Standardization and Cooperative Efforts Related to Industrial Information Infrastructures.

    DTIC Science & Technology

    1993-05-01

    Data Management Systems: components used to store, manage, and retrieve data. Data management includes knowledge bases, database management... Application Development Tools and Methods: X/Open and POSIX APIs, Integrated Design Support System (IDS), Knowledge-Based Systems (KBS), Application... (IDEF1x), Yourdon, Jackson System Design (JSD), Knowledge-Based Systems (KBSs), Structured Systems Development (SSD), Semantic Unification Meta-Model.

  8. Wood transportation systems-a spin-off of a computerized information and mapping technique

    Treesearch

    William W. Phillips; Thomas J. Corcoran

    1978-01-01

    A computerized mapping system originally developed for planning the control of the spruce budworm in Maine has been extended into a tool for planning road network development and optimizing transportation costs. A budgetary process and a mathematical linear programming routine are used interactively with the mapping and information retrieval capabilities of the system...

  9. Overview of the EarthCARE simulator and its applications

    NASA Astrophysics Data System (ADS)

    van Zadelhoff, G.; Donovan, D. P.; Lajas, D.

    2011-12-01

    The EarthCARE Simulator (ECSIM) was initially developed in 2004 as a scientific tool to simulate atmospheric scenes, radiative transfer and instrument models for the four instruments of the EarthCARE mission. ECSIM has subsequently been significantly enhanced and is evolving into a tool for both mission performance assessment and L2 retrieval development. It is an ESA requirement that all L2 retrieval algorithms foreseen for the ground segment be integrated and tested in ECSIM. It is furthermore envisaged that the retrieval part of ECSIM will be the tool scientists work with on updates and new L2 algorithms during the EarthCARE commissioning phase and beyond. ECSIM is capable of performing end-to-end simulations of single, or any combination of, the EarthCARE instruments. That is, ECSIM starts with an input atmospheric "scene", then uses various radiative transfer and instrument models to generate synthetic observations, which can subsequently be inverted. The results of the inversions may then be compared to the input "truth". ECSIM consists of a modular general framework populated by various models, grouped according to the following scheme: 1) scene creation models (3D atmospheric scene definition); 2) orbit models (orbit and orientation of the platform as it overflies the scene); 3) forward models (calculate the signal impinging on the telescope/antenna of the instrument(s) in question); 4) instrument models (calculate the instrument response to the signals calculated by the forward models); 5) retrieval models (invert the instrument signals to recover relevant geophysical information). Within the default ECSIM models, crude instrument-specific parameterizations (e.g., empirically based radar reflectivity vs. IWC relationships) are avoided. Instead, the radiative transfer forward models are kept as separate as possible from the instrument models. To accomplish this, the atmospheric scenes are specified in high detail (i.e., bin-resolved [cloud] size distributions) and the relevant wavelength-dependent optical properties are specified in a separate database. This helps ensure that all the instruments involved in the simulation are treated consistently and that the physical relationships between the various measurements are realistically captured. ECSIM is mainly used as an algorithm development platform for EarthCARE. However, it has also been used to simulate Calipso, CloudSAT, future multi-wavelength HSRL satellite missions and airborne HSRL data, showing the versatility of the tool. Validating L2 retrieval algorithms requires the creation of atmospheric scenes ranging in complexity from very simple (blocky) to realistic (high-resolution) scenes. Recent work on the evaluation of aerosol retrieval algorithms from satellite lidar data (e.g., ATLID) required these latter scenes, which were created based on HSRL and in-situ measurements from the DLR FALCON aircraft. The synthetic signals were subsequently evaluated by comparison with the original measured signals. In this presentation, an overview of the EarthCARE Simulator, its philosophy and the construction of realistic "scenes" based on actual campaign observations is presented.

  10. Automated extraction of radiation dose information from CT dose report images.

    PubMed

    Li, Xinhua; Zhang, Da; Liu, Bob

    2011-06-01

    The purpose of this article is to describe the development of an automated tool for retrieving text from CT dose report images. Optical character recognition was adopted to perform text recognition of CT dose report images. The developed tool is able to automate the process of analyzing multiple CT examinations, including text recognition, parsing, error correction, and exporting data to spreadsheets. The results were precise for total dose-length product (DLP) and were about 95% accurate for CT dose index and the DLP of scanned series.
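
    A minimal sketch of such an OCR-and-parse step, using pytesseract as a stand-in OCR engine; the regular expressions and layout assumptions are illustrative, since dose-report formats vary by scanner vendor.

```python
import re
import pytesseract                 # pip install pytesseract (needs Tesseract)
from PIL import Image

def parse_dose_report(image_path):
    """OCR a CT dose report image and extract DLP and CTDIvol values.
    The regexes assume labels like 'DLP' and 'CTDIvol' precede the numbers."""
    text = pytesseract.image_to_string(Image.open(image_path))
    dlp = [float(m) for m in
           re.findall(r"DLP[^0-9]*([0-9]+(?:\.[0-9]+)?)", text)]
    ctdi = [float(m) for m in
            re.findall(r"CTDIvol[^0-9]*([0-9]+(?:\.[0-9]+)?)", text)]
    return {"DLP_mGycm": dlp, "CTDIvol_mGy": ctdi}
```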

  11. AncestrySNPminer: A bioinformatics tool to retrieve and develop ancestry informative SNP panels

    PubMed Central

    Amirisetty, Sushil; Khurana Hershey, Gurjit K.; Baye, Tesfaye M.

    2012-01-01

    A wealth of genomic information is available in public and private databases. However, this information is underutilized for uncovering population-specific and functionally relevant markers underlying complex human traits. Given the huge amount of SNP data available from the annotation of human genetic variation, data mining is a faster and more cost-effective approach for investigating the number of SNPs that are informative for ancestry. In this study, we present AncestrySNPminer, the first web-based bioinformatics tool specifically designed to retrieve Ancestry Informative Markers (AIMs) from genomic data sets and link these informative markers to genes and ontological annotation classes. The tool includes an automated and simple "scripting at the click of a button" functionality that enables researchers to perform various population genomics statistical analysis methods, with user-friendly querying and filtering of data sets across various populations, through a single web interface. AncestrySNPminer can be freely accessed at https://research.cchmc.org/mershalab/AncestrySNPminer/login.php. PMID:22584067
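
    One simple statistic by which a tool like AncestrySNPminer can rank candidate AIMs is delta, the absolute allele-frequency difference between two populations. The sketch below is a generic illustration under that assumption, not the tool's source code.

```python
def aim_delta(freqs_pop1, freqs_pop2, top_n=50):
    """Rank SNPs by delta = |f1 - f2|, the allele-frequency difference
    between two populations. freqs_pop*: SNP id -> reference-allele
    frequency. Returns the top_n most ancestry-informative SNPs."""
    shared = freqs_pop1.keys() & freqs_pop2.keys()
    deltas = {snp: abs(freqs_pop1[snp] - freqs_pop2[snp]) for snp in shared}
    return sorted(deltas.items(), key=lambda kv: -kv[1])[:top_n]

# Example: aim_delta({"rs1": 0.9, "rs2": 0.5}, {"rs1": 0.1, "rs2": 0.45})
```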

  12. Wireless data collection retrievals of bridge inspection/management information.

    DOT National Transportation Integrated Search

    2017-02-28

    To increase the efficiency and reliability of bridge inspections, MDOT contracted to have a 3D-model-based data entry application for mobile tablets developed to aid inspectors in the field. The 3D Bridge App is a mobile software tool designed to fac...

  13. Intelligent Information Retrieval: Diagnosing Information Need. Part II. Uncertainty Expansion in a Prototype of a Diagnostic IR Tool.

    ERIC Educational Resources Information Center

    Cole, Charles; Cantero, Pablo; Sauve, Diane

    1998-01-01

    Outlines a prototype of an intelligent information-retrieval tool to facilitate information access for an undergraduate seeking information for a term paper. Topics include diagnosing the information need, Kuhlthau's information-search-process model, Shannon's mathematical theory of communication, and principles of uncertainty expansion and…

  14. Assessment and application of AirMSPI high-resolution multiangle imaging photo-polarimetric observations for atmospheric correction

    NASA Astrophysics Data System (ADS)

    Kalashnikova, O. V.; Xu, F.; Garay, M. J.; Seidel, F. C.; Diner, D. J.

    2016-02-01

    Water-leaving radiance comprises less than 10% of the signal measured from space, making correction for absorption and scattering by the intervening atmosphere imperative. Ocean color retrieval algorithms have recently been improved to handle absorbing aerosols such as urban particulates in coastal areas and transported desert dust over the open ocean. In addition, imperfect knowledge of the absorbing aerosol optical properties or their height distribution results in well-documented sources of error in the retrieved water-leaving radiance. Multi-angle spectro-polarimetric measurements have been advocated as an additional tool to better understand and retrieve the aerosol properties needed for atmospheric correction of ocean color retrievals. The Airborne Multiangle SpectroPolarimetric Imager-1 (AirMSPI-1) has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. AirMSPI typically acquires observations of a target area at 9 view angles between ±67° at 10 m resolution. AirMSPI spectral channels are centered at 355, 380, 445, 470, 555, 660, and 865 nm, with the 470, 660, and 865 nm channels reporting linear polarization. We have developed a retrieval code that employs a coupled Markov Chain (MC) and adding/doubling radiative transfer method for the joint retrieval of aerosol properties and water-leaving radiance from AirMSPI polarimetric observations. We tested prototype retrievals by comparing the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentrations to values reported by the USC SeaPRISM AERONET-OC site off the coast of California. The retrieval was then applied to a variety of coastal regions in California to evaluate variability in the water-leaving radiance under different atmospheric conditions. We will present these results and discuss algorithm sensitivity and potential applications for future space-borne coastal monitoring.

  15. The Healthcare Administrator's Associate: an experiment in distributed healthcare information systems.

    PubMed Central

    Fowler, J.; Martin, G.

    1997-01-01

    The Healthcare Administrator's Associate is a collection of portable tools designed to support analysis of data retrieved via the Internet from diverse distributed healthcare information systems by means of the InfoSleuth system of distributed software agents. Development of these tools is part of an effort to enhance access to diverse and geographically distributed healthcare data in order to improve the basis upon which administrative and clinical decisions are made. PMID:9357686

  16. Tropospheric nitrogen dioxide column retrieval based on ground-based zenith-sky DOAS observations

    NASA Astrophysics Data System (ADS)

    Tack, F. M.; Hendrick, F.; Pinardi, G.; Fayt, C.; Van Roozendael, M.

    2013-12-01

    A retrieval approach has been developed to derive tropospheric NO2 vertical column amounts from ground-based zenith-sky measurements of scattered sunlight. Zenith radiance spectra are observed in the visible range by the BIRA-IASB Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) instrument and analyzed by the DOAS technique, based on a least-squares spectral fitting. In recent years, this technique has proven to be a well-suited remote sensing tool for monitoring atmospheric trace gases. The retrieval algorithm is developed and validated based on a two-month dataset acquired from June to July 2009 in the framework of the Cabauw (51.97° N, 4.93° E) Intercomparison campaign for Nitrogen Dioxide measuring Instruments (CINDI). Once fully operational, the retrieval approach can be applied to observations from stations of the Network for the Detection of Atmospheric Composition Change (NDACC). The obtained tropospheric vertical column amounts are compared with the multi-axis retrieval from the BIRA-IASB MAX-DOAS instrument and the retrieval from a zenith-viewing-only SAOZ instrument (Système d'Analyse par Observations Zénithales), owned by Laboratoire Atmosphères, Milieux, Observations Spatiales (LATMOS). First results show a good agreement for the whole time series with the multi-axis retrieval (R = 0.82; y = 0.88x + 0.30) as well as with the SAOZ retrieval (R = 0.85; y = 0.76x + 0.28). The main error sources arise from the uncertainties in the determination of tropospheric and stratospheric air mass factors, the stratospheric NO2 abundances and the residual amount in the reference spectrum. Although zenith-sky measurements have commonly been used for stratospheric monitoring over recent decades, this study also illustrates their suitability for the retrieval of tropospheric column amounts. As long time series of zenith-sky acquisitions are available, the developed approach offers new perspectives with regard to the use of observations from the NDACC stations.

  17. RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.

    PubMed

    Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor

    2013-07-01

    This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
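    For readers unfamiliar with SPARQL endpoints, a query of the kind RDFBuilder enables could be issued as follows. The endpoint URL and the query pattern are hypothetical placeholders, and SPARQLWrapper is an assumed client library, not part of RDFBuilder itself.

```python
# Sketch of querying a SPARQL endpoint such as the one RDFBuilder exposes.
# The endpoint URL and the triple pattern are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/rdfbuilder/sparql")  # placeholder
sparql.setQuery("""
    SELECT ?experiment ?name WHERE {
        ?experiment ?predicate ?name .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["experiment"]["value"], row["name"]["value"])
```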

  18. Improving hydrologic disaster forecasting and response for transportation by assimilating and fusing NASA and other data sets : final report.

    DOT National Transportation Integrated Search

    2017-04-15

    In this 3-year project, the research team developed the Hydrologic Disaster Forecast and Response (HDFR) system, a set of integrated software tools for end users that streamlines hydrologic prediction workflows involving automated retrieval of hetero...

  19. Using Metadata To Improve Organization and Information Retrieval on the WWW.

    ERIC Educational Resources Information Center

    Doan, Bich-Lien; Beigbeder, Michel; Girardot, Jean-Jacques; Jaillon, Philippe

    The growing volume of heterogeneous and distributed information on the World Wide Web has made it increasingly difficult for existing tools to retrieve relevant information. To improve the performance of these tools, this paper suggests how to handle two aspects of the problem. The first aspect concerns a better representation and description of…

  20. Identify, Organize, and Retrieve Items Using Zotero

    ERIC Educational Resources Information Center

    Clark, Brian; Stierman, John

    2009-01-01

    Librarians build collections. To do this they use tools that help them identify, organize, and retrieve items for the collection. Zotero (zoh-TAIR-oh) is such a tool that helps the user build a library of useful books, articles, web sites, blogs, etc., discovered while surfing online. A visit to Zotero's homepage, www.zotero.org, shows a number of…

  1. GARNET--gene set analysis with exploration of annotation relations.

    PubMed

    Rho, Kyoohyoung; Kim, Bumjin; Jang, Youngjun; Lee, Sanghyun; Bae, Taejeong; Seo, Jihae; Seo, Chaehwa; Lee, Jihyun; Kang, Hyunjung; Yu, Ungsik; Kim, Sunghoon; Lee, Sanghyuk; Kim, Wan Kyu

    2011-02-15

    Gene set analysis is a powerful method of deducing biological meaning for an a priori defined set of genes. Numerous tools have been developed to test statistical enrichment or depletion in specific pathways or gene ontology (GO) terms. Major obstacles to biological interpretation are integrating diverse types of annotation categories and exploring the relationships between annotation terms that carry similar information. GARNET (Gene Annotation Relationship NEtwork Tools) is an integrative platform for gene set analysis with many novel features. It includes tools for retrieving genes from annotation databases, statistical analysis and visualization of annotation relationships, and managing gene sets. In an effort to allow access to a full spectrum of amassed biological knowledge, we have integrated a variety of annotation data that include GO, domain, disease, drug, chromosomal location, and custom-defined annotations. Diverse types of molecular networks (pathways, transcriptional and microRNA regulation, protein-protein interaction) are also included. The pair-wise relationship between annotation gene sets was calculated using kappa statistics. GARNET consists of three modules, gene set manager, gene set analysis and gene set retrieval, which are tightly integrated to provide virtually automatic analysis for gene sets. A dedicated viewer for the annotation network has been developed to facilitate exploration of the related annotations. GARNET (gene annotation relationship network tools) is an integrative platform for diverse types of gene set analysis, where complex relationships among gene annotations can be easily explored with an intuitive network visualization tool (http://garnet.isysbio.org/ or http://ercsb.ewha.ac.kr/garnet/).
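    The pair-wise kappa computation mentioned above can be sketched directly: treat each annotation term as a binary membership indicator over a gene universe and apply Cohen's kappa. The exact GARNET procedure may differ; this is an illustrative sketch.

```python
# Sketch of the pair-wise kappa statistic between two annotation gene sets,
# computed over a fixed gene universe (the exact GARNET procedure may differ).

def kappa(set_a, set_b, universe):
    n = len(universe)
    a = len(set_a & set_b)                    # genes in both terms
    b = len(set_a - set_b)                    # only in A
    c = len(set_b - set_a)                    # only in B
    d = n - a - b - c                         # in neither
    po = (a + d) / n                          # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

genes = {f"g{i}" for i in range(1000)}        # toy gene universe
go_term = {f"g{i}" for i in range(100)}       # genes under a GO term
disease = {f"g{i}" for i in range(50, 150)}   # genes under a disease term
print(kappa(go_term, disease, genes))         # ~0.44: partial overlap
```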

  2. Sequential Tool Use in Great Apes

    PubMed Central

    Martin-Ordas, Gema; Schumacher, Lena; Call, Josep

    2012-01-01

    Sequential tool use is defined as using a tool to obtain another non-food object which subsequently will itself serve as a tool to act upon a further (sub)goal. Previous studies have shown that birds and great apes succeed in such tasks. However, the inclusion of a training phase for each of the sequential steps and the low cost associated with retrieving the longest tools limit the scope of the conclusions. The goal of the experiments presented here was, first, to replicate a previous study on sequential tool use conducted on New Caledonian crows and, second, to extend this work by increasing the cost of retrieving a tool in order to test the tool selectivity of apes. In Experiment 1, we presented chimpanzees, orangutans and bonobos with an out-of-reach reward, two tools that were available but too short to reach the food, and four out-of-reach tools differing in functionality. Similar to crows, apes spontaneously used up to 3 tools in sequence to get the reward and also showed a strong preference for the longest out-of-reach tool regardless of the distance of the food. In Experiment 2, we increased the cost of reaching for the longest out-of-reach tool. Now apes used up to 5 tools in sequence to get the reward and became more selective in their choice of the longest tool as the costs of its retrieval increased. The findings of the studies presented here contribute to the growing body of comparative research on tool use. PMID:23300592

  3. An information retrieval system for computerized patient records in the context of a daily hospital practice: the example of the Léon Bérard Cancer Center (France).

    PubMed

    Biron, P; Metzger, M H; Pezet, C; Sebban, C; Barthuet, E; Durand, T

    2014-01-01

    A full-text search tool was introduced into the daily practice of the Léon Bérard Center (France), a health care facility devoted to the treatment of cancer. The tool was integrated into the hospital information system by the IT department, which had been granted full autonomy to improve the system. To describe the development and various uses of a tool for full-text search of computerized patient records. The technology is based on Solr, an open-source search engine. It is a web-based application that processes HTTP requests and returns HTTP responses. A data processing pipeline that retrieves data from different repositories, then normalizes, cleans and publishes it to Solr, was integrated into the information system of the Léon Bérard Center. The IT department also developed user interfaces that allow users to access the search engine from within a patient's computerized medical record. From January to May 2013, 500 queries were launched per month by an average of 140 different users. Several uses of the tool were identified: medical management of patients, medical research, and improving the traceability of medical care in medical records. The sensitivity of the tool for detecting the medical records of patients diagnosed with both breast cancer and diabetes was 83.0%, and its positive predictive value was 48.7% (gold standard: manual screening by a clinical research assistant). The project demonstrates that the introduction of full-text search tools allows practitioners to use unstructured medical information for various purposes.
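    A Solr deployment of this kind is queried over HTTP. A minimal sketch of such a full-text query follows, with the host, core name, and field names as hypothetical placeholders rather than the hospital's actual configuration.

```python
# Sketch of a full-text query against a Solr core over HTTP, in the spirit of
# the search service described. Host, core and field names are placeholders.
import requests

def search_records(query, rows=20):
    resp = requests.get(
        "http://localhost:8983/solr/patient_records/select",
        params={"q": query, "rows": rows, "wt": "json"},
    )
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

# e.g. records mentioning both conditions, as in the evaluation above:
# search_records('text:"breast cancer" AND text:diabetes')
```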

  4. Protein structural similarity search by Ramachandran codes

    PubMed Central

    Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang

    2007-01-01

    Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structure similarity searching. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed still cannot match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. Such search tools should be applicable to automated, high-throughput functional annotation and prediction for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
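    The core of the linear-encoding step, assigning each residue's (phi, psi) pair to the letter of the nearest cluster centre on the Ramachandran map, can be sketched as follows. The centres and letters below are invented for illustration; SARST derives its own alphabet by nearest-neighbor clustering.

```python
# Sketch of Ramachandran encoding: map each residue's backbone dihedral
# angles to the letter of the nearest cluster centre, turning a 3D structure
# into a text string. Centres are illustrative, not SARST's actual alphabet.
import math

CENTRES = {  # letter -> (phi, psi) in degrees (illustrative values only)
    "A": (-60.0, -45.0),   # roughly alpha-helical region
    "B": (-120.0, 130.0),  # roughly beta-sheet region
    "L": (60.0, 45.0),     # left-handed helix region
}

def angdiff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def encode(phi_psi_list):
    letters = []
    for phi, psi in phi_psi_list:
        nearest = min(
            CENTRES,
            key=lambda k: math.hypot(angdiff(phi, CENTRES[k][0]),
                                     angdiff(psi, CENTRES[k][1])),
        )
        letters.append(nearest)
    return "".join(letters)

print(encode([(-57, -47), (-119, 125), (61, 40)]))  # -> "ABL"
```

    The resulting strings can then be compared with any classical sequence aligner, which is what makes the approach so fast.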

  5. USER'S GUIDE FOR GLOED VERSION 1.0 - THE GLOBAL EMISSIONS DATABASE

    EPA Science Inventory

    The document is a user's guide for the EPA-developed, powerful software package, Global Emissions Database (GloED). GloED is a user-friendly, menu-driven tool for storing and retrieving emissions factors and activity data on a country-specific basis. Data can be selected from dat...

  6. Can Visualizing Document Space Improve Users' Information Foraging?

    ERIC Educational Resources Information Center

    Song, Min

    1998-01-01

    This study shows how users access relevant information in a visualized document space and determines whether BiblioMapper, a visualization tool, strengthens an information retrieval (IR) system and makes it more usable. BiblioMapper, developed for a CISI collection, was evaluated by accuracy, time, and user satisfaction. Users' navigation…

  7. A Hypermedia Information System for Aviation.

    ERIC Educational Resources Information Center

    Hartzell, Karin M.

    The Hypermedia Information System (HIS) is being developed under the auspices of the Federal Aviation Administration (FAA) Office of Aviation Medicine's (AAM) Human Factors in Aviation Maintenance (HFAM) research program. The goal of the hypermedia project is to create new tools and methods for aviation-related information storage and retrieval.…

  8. First Toronto Conference on Database Users. Systems that Enhance User Performance.

    ERIC Educational Resources Information Center

    Doszkocs, Tamas E.; Toliver, David

    1987-01-01

    The first of two papers discusses natural language searching as a user performance enhancement tool, focusing on artificial intelligence applications for information retrieval and problems with natural language processing. The second presents a conceptual framework for further development and future design of front ends to online bibliographic…

  9. A high throughput geocomputing system for remote sensing quantitative retrieval and a case study

    NASA Astrophysics Data System (ADS)

    Xue, Yong; Chen, Ziqiang; Xu, Hui; Ai, Jianwen; Jiang, Shuzheng; Li, Yingjie; Wang, Ying; Guang, Jie; Mei, Linlu; Jiao, Xijuan; He, Xingwei; Hou, Tingting

    2011-12-01

    The quality and accuracy of remote sensing instruments have improved significantly; however, rapid processing of large-scale remote sensing data has become the bottleneck for remote sensing quantitative retrieval applications. Remote sensing quantitative retrieval is a data-intensive computation application, and thus a research issue for high-throughput computation. The remote sensing quantitative retrieval Grid workflow is a high-level core component of the remote sensing Grid, used to support the modeling, reconstruction and implementation of large-scale complex applications of remote sensing science. In this paper, we study a middleware component of the remote sensing Grid: the dynamic Grid workflow, based on a remote sensing quantitative retrieval application on a Grid platform. We designed a novel architecture for the remote sensing Grid workflow. Following this architecture, we constructed the Remote Sensing Information Service Grid Node (RSSN) with Condor. We developed graphical user interface (GUI) tools to compose remote sensing processing Grid workflows, taking aerosol optical depth (AOD) retrieval as an example. The case study showed that significant improvements in system performance could be achieved with this implementation. The results also give a perspective on the potential of applying Grid workflow practices to remote sensing quantitative retrieval problems using commodity-class PCs.

  10. Retrieval with Clustering in a Case-Based Reasoning System for Radiotherapy Treatment Planning

    NASA Astrophysics Data System (ADS)

    Khussainova, Gulmira; Petrovic, Sanja; Jagannathan, Rupa

    2015-05-01

    Radiotherapy treatment planning aims to deliver a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the area surrounding the tumour. This is a trial-and-error process highly dependent on the medical staff's experience and knowledge. Case-Based Reasoning (CBR) is an artificial intelligence technique that uses past experiences to solve new problems. A CBR system has been developed to facilitate radiotherapy treatment planning for brain cancer. Given a new patient case, the existing CBR system retrieves a similar case, together with its suggested treatment plan, from an archive of successfully treated patient cases. The next step requires adaptation of the retrieved treatment plan to meet the specific demands of the new case. The CBR system was tested by medical physicists on new patient cases. It was discovered that some of the retrieved cases were not suitable and could not be adapted for the new cases. This motivated us to revise the retrieval mechanism of the existing CBR system by adding a clustering stage that groups cases by tumour position. A number of well-known clustering methods were investigated and employed in the retrieval mechanism. Results on real-world brain cancer patient cases have shown that the success rate of the new CBR retrieval is higher than that of the original system.
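    A minimal sketch of the revised cluster-then-retrieve mechanism, assuming cases are represented by 3D tumour-position vectors and using k-means (one of several clustering methods the study could have employed); the feature layout and cluster count are illustrative assumptions.

```python
# Sketch of cluster-then-retrieve: cluster archived cases by tumour position,
# then search for the most similar case only within the new patient's cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
case_positions = rng.random((40, 3))        # archived tumour positions (x,y,z)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(case_positions)

def retrieve(new_position):
    cluster = km.predict(new_position.reshape(1, -1))[0]
    members = np.where(km.labels_ == cluster)[0]
    dists = np.linalg.norm(case_positions[members] - new_position, axis=1)
    return members[np.argmin(dists)]        # index of most similar past case

print(retrieve(np.array([0.5, 0.5, 0.5])))
```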

  11. Comparative evaluation of polarimetric and bi-spectral cloud microphysics retrievals: Retrieval closure experiments and comparisons based on idealized and LES case studies

    NASA Astrophysics Data System (ADS)

    Miller, D. J.; Zhang, Z.; Ackerman, A. S.; Platnick, S. E.; Cornet, C.

    2016-12-01

    A remote sensing cloud retrieval simulator, created by coupling an LES cloud model with vector radiative transfer (RT) models, is an ideal framework for assessing cloud remote sensing techniques. This simulator serves as a tool for understanding bi-spectral and polarimetric retrievals by comparing them directly to LES cloud properties (retrieval closure comparison) and for comparing the retrieval techniques to one another. Our simulator utilizes the DHARMA LES [Ackerman et al., 2004] with cloud properties based on marine boundary layer (MBL) clouds observed during the DYCOMS-II and ATEX field campaigns. The cloud reflectances are produced by vectorized RT models based on polarized doubling-adding and Monte Carlo techniques (PDA, MCPOL). Retrievals are performed using techniques as similar as possible to those implemented on the corresponding well-known instruments: polarimetric retrievals are based on techniques implemented for polarimeters (POLDER, AirMSPI, and RSP), and bi-spectral retrievals are performed using the Nakajima-King LUT method utilized on a number of spectral instruments (MODIS and VIIRS). Retrieval comparisons focus on cloud droplet effective radius (re), effective variance (ve), and cloud optical thickness (τ). This work explores the sensitivities of these two retrieval techniques to various observational limitations, such as spatial resolution/cloud inhomogeneity, the impact of 3D radiative effects, and angular resolution requirements. With future remote sensing missions like NASA's Aerosols/Clouds/Ecosystems (ACE) planning to feature advanced polarimetric instruments, it is important to understand how these retrieval techniques compare to one another. The cloud retrieval simulator we have developed allows us to probe these important questions in a realistic test bed.

  12. Challenges and methodology for indexing the computerized patient record.

    PubMed

    Ehrler, Frédéric; Ruch, Patrick; Geissbuhler, Antoine; Lovis, Christian

    2007-01-01

    Patient records contain the most crucial documents for managing the treatment and healthcare of patients in the hospital. Retrieving information from these records in an easy, quick and safe way helps care providers save time and find important facts about their patients' health. This paper presents the scalability issues raised by the indexing and retrieval of the information contained in patient records. For this study, EasyIR, an information retrieval tool that performs full-text queries and retrieves the related documents, was used. An evaluation of the performance reveals that the indexing process suffers from overhead as a consequence of the particular structure of patient records. Most IR tools are designed to manage very large numbers of documents in a single index, whereas our hypothesis imposed one index per record, which usually implies few documents. As the number of modifications and creations of patient records in a day is significant, a specialized and efficient indexing tool is required.

  13. Endobronchial Forceps-Assisted and Excimer Laser-Assisted Inferior Vena Cava Filter Removal: The Data, Where We Are, and How It Is Done.

    PubMed

    Chen, James X; Montgomery, Jennifer; McLennan, Gordon; Stavropoulos, S William

    2018-06-01

    The recognition of inferior vena cava filter-related complications has motivated increased attentiveness in the clinical follow-up of patients with inferior vena cava filters and has led to the development of multiple approaches for retrieving filters that are challenging or impossible to remove using conventional techniques. Endobronchial forceps and excimer lasers are tools designed to aid in complex inferior vena cava filter removals. This article discusses endobronchial forceps-assisted and excimer laser-assisted inferior vena cava filter retrievals. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Use of microwave satellite data to study variations in rainfall over the Indian Ocean

    NASA Technical Reports Server (NTRS)

    Hinton, Barry B.; Martin, David W.; Auvine, Brian; Olson, William S.

    1990-01-01

    The University of Wisconsin Space Science and Engineering Center mapped rainfall over the Indian Ocean using a newly developed Scanning Multichannel Microwave Radiometer (SMMR) rain-retrieval algorithm. The short-range objective was to characterize the distribution and variability of Indian Ocean rainfall on seasonal and annual scales. In the long-range, the objective is to clarify differences between land and marine regimes of monsoon rain. Researchers developed a semi-empirical algorithm for retrieving Indian Ocean rainfall. Tools for this development have come from radiative transfer and cloud liquid water models. Where possible, ground truth information from available radars was used in development and testing. SMMR rainfalls were also compared with Indian Ocean gauge rainfalls. Final Indian Ocean maps were produced for months, seasons, and years and interpreted in terms of historical analysis over the sub-continent.

  15. Data-driven information retrieval in heterogeneous collections of transcriptomics data links SIM2s to malignant pleural mesothelioma.

    PubMed

    Caldas, José; Gehlenborg, Nils; Kettunen, Eeva; Faisal, Ali; Rönty, Mikko; Nicholson, Andrew G; Knuutila, Sakari; Brazma, Alvis; Kaski, Samuel

    2012-01-15

    Genome-wide measurement of transcript levels is a ubiquitous tool in biomedical research. As experimental data continue to be deposited in public databases, it is becoming important to develop search engines that enable the retrieval of relevant studies given a query study. While retrieval systems based on meta-data already exist, data-driven approaches that retrieve studies based on similarities in the expression data itself have a greater potential of uncovering novel biological insights. We propose an information retrieval method based on differential expression. Our method handles arbitrary experimental designs and performs competitively with alternative approaches, while making the search results interpretable in terms of differential expression patterns. We show that our model yields meaningful connections between biological conditions from different studies. Finally, we validate a previously unknown connection between malignant pleural mesothelioma and SIM2s suggested by our method, via real-time polymerase chain reaction in an independent set of mesothelioma samples. Supplementary data and source code are available from http://www.ebi.ac.uk/fg/research/rex.

  16. The Digital electronic Guideline Library (DeGeL): a hybrid framework for representation and use of clinical guidelines.

    PubMed

    Shahar, Yuval; Young, Ohad; Shalom, Erez; Mayaffit, Alon; Moskovitch, Robert; Hessing, Alon; Galperin, Maya

    2004-01-01

    We propose to present a poster (and potentially also a demonstration of the implemented system) summarizing the current state of our work on a hybrid, multiple-format representation of clinical guidelines that facilitates conversion of guidelines from free text to a formal representation. We describe a distributed Web-based architecture (DeGeL) and a set of tools using the hybrid representation. The tools enable performing tasks such as guideline specification, semantic markup, search, retrieval, visualization, eligibility determination, runtime application and retrospective quality assessment. The representation includes four parallel formats: Free text (one or more original sources); semistructured text (labeled by the target guideline-ontology semantic labels); semiformal text (which includes some control specification); and a formal, machine-executable representation. The specification, indexing, search, retrieval, and browsing tools are essentially independent of the ontology chosen for guideline representation, but editing the semi-formal and formal formats requires ontology-specific tools, which we have developed in the case of the Asbru guideline-specification language. The four formats support increasingly sophisticated computational tasks. The hybrid guidelines are stored in a Web-based library. All tools, such as for runtime guideline application or retrospective quality assessment, are designed to operate on all representations. We demonstrate the hybrid framework by providing examples from the semantic markup and search tools.

  17. Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.

    PubMed

    Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C

    2014-05-01

    Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.

  18. Supporting information retrieval from electronic health records: A report of University of Michigan's nine-year experience in developing and using the Electronic Medical Record Search Engine (EMERSE).

    PubMed

    Hanauer, David A; Mei, Qiaozhu; Law, James; Khanna, Ritu; Zheng, Kai

    2015-06-01

    This paper describes the University of Michigan's nine-year experience in developing and using a full-text search engine designed to facilitate information retrieval (IR) from narrative documents stored in electronic health records (EHRs). The system, called the Electronic Medical Record Search Engine (EMERSE), functions similarly to Google but is equipped with special functionalities for handling challenges unique to retrieving information from medical text. Key features that distinguish EMERSE from general-purpose search engines are discussed, with an emphasis on functions crucial to (1) improving medical IR performance and (2) assuring search quality and results consistency regardless of users' medical background, stage of training, or level of technical expertise. Since its initial deployment, EMERSE has been enthusiastically embraced by clinicians, administrators, and clinical and translational researchers. To date, the system has been used in supporting more than 750 research projects yielding 80 peer-reviewed publications. In several evaluation studies, EMERSE demonstrated very high levels of sensitivity and specificity in addition to greatly improved chart review efficiency. Increased availability of electronic data in healthcare does not automatically warrant increased availability of information. The success of EMERSE at our institution illustrates that free-text EHR search engines can be a valuable tool to help practitioners and researchers retrieve information from EHRs more effectively and efficiently, enabling critical tasks such as patient case synthesis and research data abstraction. EMERSE, available free of charge for academic use, represents a state-of-the-art medical IR tool with proven effectiveness and user acceptance. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Virtual Resources Centers and Their Role in Small Rural Schools.

    ERIC Educational Resources Information Center

    Freitas, Candido Varela de; Silva, Antonio Pedro da

    Virtual resources centers have been considered a pedagogical tool since the increasing development of electronic means allowed for the storage of huge amounts of information and its easy retrieval. Bearing in mind the need for enhancing the appearance of those centers, a discipline of "Management of Resources Centers" was included in a…

  20. Policy Compliance of Queries for Private Information Retrieval

    DTIC Science & Technology

    2010-11-01

    SPARQL, unfortunately, is not in RDF and so we had to develop tools to translate SPARQL queries into RDF to be used by our policy compliance prototype...policy-assurance/sparql2n3.py) that accepts SPARQL queries and returns the translated query in our simplified ontology. An example of a translated

  1. Interconnectedness and Contingencies: A Study of Context in Collaborative Information Seeking

    ERIC Educational Resources Information Center

    Spence, Patricia Ruma

    2013-01-01

    Collaborative information seeking (CIS) is an important aspect of work in organizational settings. Researchers are developing a more detailed understanding of CIS activities and the tools to support them; however, most studies of CIS focus on how people find and retrieve information collaboratively, while overlooking the important question of how…

  2. A Bibliography of the Literature on Optical Storage Technology. Final Report.

    ERIC Educational Resources Information Center

    Park, James R.

    Compiled to serve as a working tool for those involved in optical storage research, planning, and development, this bibliography contains nearly 700 references related to the optical storage and retrieval of digital computer data. Citations are divided into two major groups covering the general and patent literatures. Each citation includes the…

  3. The Influence of Retrieval Practice on Memory and Comprehension of Science Texts

    ERIC Educational Resources Information Center

    Hinze, Scott R.

    2010-01-01

    The testing effect, where retrieval practice aids performance on later tests, may be a powerful tool for improving learning and retention. Three experiments test the potentials and limitations of retrieval practice for retention and comprehension of the content of science texts. Experiment 1 demonstrated that cued recall of paragraphs, but not…

  4. WWW Entrez: A Hypertext Retrieval Tool for Molecular Biology.

    ERIC Educational Resources Information Center

    Epstein, Jonathan A.; Kans, Jonathan A.; Schuler, Gregory D.

    This article describes the World Wide Web (WWW) Entrez server which is based upon the National Center for Biotechnology Information's (NCBI) Entrez retrieval database and software. Entrez is a molecular sequence retrieval system that contains an integrated view of portions of Medline and all publicly available nucleotide and protein databases,…

  5. Practical quantum retrieval games

    NASA Astrophysics Data System (ADS)

    Arrazola, Juan Miguel; Karasamanis, Markos; Lütkenhaus, Norbert

    2016-06-01

    Complex cryptographic protocols are often constructed from simpler building blocks. In order to advance quantum cryptography, it is important to study practical building blocks that can be used to develop new protocols. An example is quantum retrieval games (QRGs), which have broad applicability and have already been used to construct quantum money schemes. In this work, we introduce a general construction of quantum retrieval games based on the hidden matching problem and show how they can be implemented in practice using available technology. More precisely, we provide a general method to construct (1-out-of-k) QRGs, proving that their cheating probabilities decrease exponentially in k. In particular, we define QRGs based on coherent states of light, which can be implemented even in the presence of experimental imperfections. Our results constitute a tool in the arsenal of the practical quantum cryptographer.

  6. While drilling system and method

    DOEpatents

    Mayes, James C.; Araya, Mario A.; Thorp, Richard Edward

    2007-02-20

    A while drilling system and method for determining downhole parameters is provided. The system includes a retrievable while drilling tool positionable in a downhole drilling tool, a sensor chassis and at least one sensor. The while drilling tool is positionable in the downhole drilling tool and has a first communication coupler at an end thereof. The sensor chassis is supported in the drilling tool. The sensor chassis has a second communication coupler at an end thereof for operative connection with the first communication coupler. The sensor is positioned in the chassis and is adapted to measure internal and/or external parameters of the drilling tool. The sensor is operatively connected to the while drilling tool via the communication coupler for communication therebetween. The sensor may be positioned in the while drilling tool and retrievable with the drilling tool. Preferably, the system is operable in high temperature and high pressure conditions.

  7. Videofile for Law Enforcement

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Components of a videotape storage and retrieval system originally developed for NASA have been adapted as a tool for law enforcement agencies. Ampex Corp., Redwood City, Cal., built a unique system for NASA-Marshall. The first application of professional broadcast technology to computerized record-keeping, it incorporates new equipment for transporting tapes within the system. After completing the NASA system, Ampex continued development, primarily to improve image resolution. The resulting advanced system, known as the Ampex Videofile, offers advantages over microfilm for filing, storing, retrieving, and distributing large volumes of information. The system's computer stores information in digital code rather than in pictorial form. While microfilm allows visual storage of whole documents, it requires a step before usage--developing the film. With Videofile, the actual document is recorded, complete with photos and graphic material, and a picture of the document is available instantly.

  8. EARLINET Single Calculus Chain - overview on methodology and strategy

    NASA Astrophysics Data System (ADS)

    D'Amico, G.; Amodeo, A.; Baars, H.; Binietoglou, I.; Freudenthaler, V.; Mattis, I.; Wandinger, U.; Pappalardo, G.

    2015-11-01

    In this paper we describe the EARLINET Single Calculus Chain (SCC), a tool for the automatic analysis of lidar measurements. The development of this tool started in the framework of EARLINET-ASOS (European Aerosol Research Lidar Network - Advanced Sustainable Observation System); it was extended within ACTRIS (Aerosol, Clouds and Trace gases Research InfraStructure Network), and it is continuing within ACTRIS-2. The main idea was to develop a data processing chain that allows all EARLINET stations to retrieve, in a fully automatic way, aerosol backscatter and extinction profiles starting from the raw data of the lidar systems they operate. The calculus subsystem of the SCC is composed of two modules: a pre-processor module, which handles the raw lidar data and corrects them for instrumental effects, and an optical processing module for the retrieval of aerosol optical products from the pre-processed data. All input parameters needed to perform the lidar analysis are stored in a database, to keep track of any changes that may occur for any EARLINET lidar system over time. The two calculus modules are coordinated and synchronized by an additional module (the daemon), which makes the whole analysis process fully automatic. The end user can interact with the SCC via a user-friendly web interface. All SCC modules are developed using open-source and freely available software packages. The final products retrieved by the SCC fulfill all requirements of the EARLINET quality assurance programs on both the instrumental and algorithm levels. Moreover, the manpower needed to provide aerosol optical products is greatly reduced and thus the near-real-time availability of lidar data is improved. The high quality of the SCC products is proven by the good agreement between the SCC analysis and the corresponding independent manual retrievals. Finally, the ability of the SCC to provide high-quality aerosol optical products is demonstrated for an EARLINET intense observation period.
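    As a flavour of what the pre-processor module does, the sketch below applies one standard correction, background subtraction followed by range correction, to a raw lidar profile. The real SCC applies many more instrumental corrections; all numbers here are illustrative.

```python
# Sketch of a typical lidar pre-processing step: subtract the background
# (estimated from far-range bins) and range-correct the raw signal,
# P(r) -> (P(r) - B) * r^2. Illustrative only; the SCC does much more.
import numpy as np

def range_corrected_signal(raw, r_m, bg_bins=500):
    """raw: counts per range bin; r_m: bin ranges in metres."""
    background = raw[-bg_bins:].mean()      # far bins assumed signal-free
    return (raw - background) * r_m ** 2

r = np.arange(1, 4001) * 7.5                # 7.5 m range bins (illustrative)
raw = 1e6 / r ** 2 + 10.0                   # toy signal + constant background
rcs = range_corrected_signal(raw, r)        # roughly flat for this toy case
```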

  9. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  10. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  11. Document retrieval on repetitive string collections.

    PubMed

    Gagie, Travis; Hartikainen, Aleksi; Karhu, Kalle; Kärkkäinen, Juha; Navarro, Gonzalo; Puglisi, Simon J; Sirén, Jouni

    2017-01-01

    Most of the fastest-growing string collections today are repetitive, that is, most of the constituent documents are similar to many others. As these collections keep growing, a key approach to handling them is to exploit their repetitiveness, which can reduce their space usage by orders of magnitude. We study the problem of indexing repetitive string collections in order to perform efficient document retrieval operations on them. Document retrieval problems are routinely solved by search engines on large natural language collections, but the techniques are less developed for generic string collections. The case of repetitive string collections is even less understood, and there are very few existing solutions. We develop two novel ideas, interleaved LCPs and precomputed document lists, that yield highly compressed indexes solving the problems of document listing (find all the documents where a string appears), top-k document retrieval (find the k documents where a string appears most often), and document counting (count the number of documents where a string appears). We also show that a classical data structure supporting the latter query becomes highly compressible on repetitive data. Finally, we show how the tools we developed can be combined to solve ranked conjunctive and disjunctive multi-term queries under the simple tf-idf model of relevance. We thoroughly evaluate the resulting techniques in various real-life repetitiveness scenarios, and recommend the best choices for each case.
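    For orientation, the three queries have simple linear-scan baselines, which the paper's compressed indexes (interleaved LCPs, precomputed document lists) are designed to improve upon dramatically on repetitive data. A naive reference implementation:

```python
# Naive baselines for the three document retrieval queries, for contrast with
# the compressed indexes in the paper. Each scan is linear in collection size.

def document_listing(docs, pattern):
    """All documents in which the pattern occurs at least once."""
    return [i for i, d in enumerate(docs) if pattern in d]

def document_counting(docs, pattern):
    """Number of documents in which the pattern occurs."""
    return len(document_listing(docs, pattern))

def top_k(docs, pattern, k):
    """The k documents where the pattern appears most often."""
    freq = [(d.count(pattern), i) for i, d in enumerate(docs)]
    return sorted(((i, f) for f, i in freq if f > 0),
                  key=lambda t: -t[1])[:k]

docs = ["abracadabra", "abracadabra!", "banana"]   # tiny repetitive collection
print(document_listing(docs, "bra"), top_k(docs, "a", 2))
```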

  12. Video content analysis of surgical procedures.

    PubMed

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for purposes such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The reviewed articles were obtained from PubMed and Google Scholar searches on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  13. Physical Examination Tools Used to Identify Swollen and Tender Lower Limb Joints in Juvenile Idiopathic Arthritis: A Scoping Review.

    PubMed

    Fellas, Antoni; Singh-Grewal, Davinder; Santos, Derek; Coda, Andrea

    2018-01-01

    Juvenile idiopathic arthritis (JIA) is the most common form of rheumatic disease in childhood and adolescence, affecting between 16 and 150 per 100,000 young persons below the age of 16. The lower limb is commonly affected in JIA, with joint swelling and tenderness often observed as a result of active synovitis. The objective of this scoping review is to identify existing physical examination (PE) tools for identifying and recording swollen and tender lower limb joints in children with JIA. Two reviewers individually screened the eligibility of titles and abstracts retrieved from the following online databases: MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and CINAHL. Studies that proposed and validated a comprehensive lower limb PE tool were included in this scoping review. After removal of duplicates, 1232 citations were retrieved, of which twelve were identified as potentially eligible. No studies met the set criteria for inclusion. Further research is needed in developing and validating specific PE tools for clinicians such as podiatrists and other allied health professionals involved in the management of pathological lower limb joints in children diagnosed with JIA. These lower limb PE tools may be useful in conjunction with existing disease activity scores to optimise screening of the lower extremity and monitoring the efficacy of targeted interventions.

  14. A Term Project for a Course on Computer Forensics

    ERIC Educational Resources Information Center

    Harrison, Warren

    2006-01-01

    The typical approach to creating an examination disk for exercises and projects in a course on computer forensics is for the instructor to populate a piece of media with evidence to be retrieved. While such an approach supports the simple use of forensic tools, in many cases the use of an instructor-developed examination disk avoids utilizing some…

  15. How To Succeed in Promoting Your Web Site: The Impact of Search Engine Registration on Retrieval of a World Wide Web Site.

    ERIC Educational Resources Information Center

    Tunender, Heather; Ervin, Jane

    1998-01-01

    Character strings were planted in a World Wide Web site (Project Whistlestop) to test indexing and retrieval rates of five Web search tools (Lycos, Infoseek, AltaVista, Yahoo, Excite). It was found that search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…

  16. Logging-while-coring method and apparatus

    DOEpatents

    Goldberg, David S.; Myers, Gregory J.

    2007-11-13

    A method and apparatus for downhole coring while receiving logging-while-drilling tool data. The apparatus includes a core collar and a retrievable core barrel. The retrievable core barrel receives core from a borehole, which is sent to the surface for analysis via wireline and a latching tool. The core collar includes logging-while-drilling tools for the simultaneous measurement of formation properties during the core excavation process. Examples of logging-while-drilling tools include nuclear sensors, resistivity sensors, gamma ray sensors, and bit resistivity sensors. The disclosed method allows for precise core-log depth calibration and core orientation within a single borehole, and without a pipe trip, providing both time savings and unique scientific advantages.

  17. Logging-while-coring method and apparatus

    DOEpatents

    Goldberg, David S.; Myers, Gregory J.

    2007-01-30

    A method and apparatus for downhole coring while receiving logging-while-drilling tool data. The apparatus includes a core collar and a retrievable core barrel. The retrievable core barrel receives core from a borehole, which is sent to the surface for analysis via wireline and a latching tool. The core collar includes logging-while-drilling tools for the simultaneous measurement of formation properties during the core excavation process. Examples of logging-while-drilling tools include nuclear sensors, resistivity sensors, gamma ray sensors, and bit resistivity sensors. The disclosed method allows for precise core-log depth calibration and core orientation within a single borehole, and without a pipe trip, providing both time savings and unique scientific advantages.

  18. Recent developments in software tools for high-throughput in vitro ADME support with high-resolution MS.

    PubMed

    Paiva, Anthony; Shou, Wilson Z

    2016-08-01

    The last several years have seen the rapid adoption of high-resolution MS (HRMS) for bioanalytical support of high-throughput in vitro ADME profiling. Many capable software tools have been developed and refined to process quantitative HRMS bioanalysis data for ADME samples with excellent performance. Additionally, new software applications specifically designed for quan/qual soft-spot identification workflows using HRMS have greatly enhanced the quality and efficiency of the structure elucidation process for high-throughput metabolite ID in early in vitro ADME profiling. Finally, novel approaches to data acquisition and compression, as well as tools for transferring, archiving and retrieving HRMS data, are being continuously refined to tackle the large data file sizes typical of HRMS analyses.

  19. Impact and User Satisfaction of a Clinical Information Portal Embedded in an Electronic Health Record

    PubMed Central

    Tannery, Nancy H; Epstein, Barbara A; Wessel, Charles B; Yarger, Frances; LaDue, John; Klem, Mary Lou

    2011-01-01

    In 2008, a clinical information tool was developed and embedded in the electronic health record system of an academic medical center. In 2009, the initial information tool, Clinical-e, was superseded by a portal called Clinical Focus, with a single search box enabling a federated search of selected online information resources. To measure the usefulness and impact of Clinical Focus, a survey was used to gather feedback about users' experience with this clinical resource. The survey determined what type of clinicians were using this tool and assessed user satisfaction and perceived impact on patient care decision making. Initial survey results suggest the majority of respondents found Clinical Focus easy to navigate, the content easy to read, and the retrieved information relevant and complete. The majority would recommend Clinical Focus to their colleagues. Results indicate that this tool is a promising area for future development. PMID:22016670

  20. System engineering approach to GPM retrieval algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, C. R.; Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With N0 and D0 calculated at each bin, the rain rate can then be calculated based on a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.
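    The per-bin estimation of N0 and D0 from two measured reflectivities is, abstractly, a two-equation inverse problem. The sketch below illustrates only that structure; the power-law forward model is an invented placeholder, not the GPM algorithm's microphysical model.

```python
# Heavily simplified sketch of a per-bin dual-wavelength inversion: given
# reflectivities at two wavelengths and a forward model Z(N0, D0), solve for
# the two DSD parameters. The forward model is an illustrative placeholder.
import numpy as np
from scipy.optimize import least_squares

def forward(n0, d0, wavelength_mm):
    # Placeholder power law: the wavelength-dependent exponent stands in
    # for non-Rayleigh scattering effects, which is what makes the two
    # measurements independent enough to constrain both parameters.
    b = 6.0 if wavelength_mm > 10 else 4.5
    return 10 * np.log10(n0 * d0 ** b)

def invert_bin(z_ku, z_ka):
    def residuals(p):
        n0, d0 = np.exp(p)  # log-parameterize to keep both positive
        return [forward(n0, d0, 22.0) - z_ku,   # ~Ku-band wavelength
                forward(n0, d0, 8.5) - z_ka]    # ~Ka-band wavelength
    sol = least_squares(residuals, x0=[np.log(1e3), np.log(1.5)])
    return np.exp(sol.x)  # (N0, D0) estimate for this range bin

print(invert_bin(z_ku=30.0, z_ka=25.0))
```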

  1. Music to knowledge: A visual programming environment for the development and evaluation of music information retrieval techniques

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Downie, J. Stephen

    2005-09-01

    The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform JAVA based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-JAVA based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.

  2. Coherent Evaluation of Aerosol Data Products from Multiple Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles

    2011-01-01

    Aerosol retrieval from satellite has practically become routine, especially during the last decade. However, there is often disagreement between similar aerosol parameters retrieved from different sensors, thereby leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. As long as there is no consensus, and the inconsistencies are not well characterized and understood, there will be no way of developing reliable model inputs and climate data records from satellite aerosol measurements. Fortunately, the Aerosol Robotic Network (AERONET) is providing well-calibrated globally representative ground-based aerosol measurements corresponding to the satellite-retrieved products. Through a recently developed web-based Multi-sensor Aerosol Products Sampling System (MAPSS), we are utilizing the advantages offered by collocated AERONET and satellite products to characterize and evaluate aerosol retrieval from multiple sensors. Indeed, MAPSS and its companion statistical tool AeroStat are facilitating detailed comparative uncertainty analysis of satellite aerosol measurements from Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP. In this presentation, we will describe the strategy of the MAPSS system, its potential advantages for the aerosol community, and the preliminary results of an integrated comparative uncertainly analysis of aerosol products from multiple satellite sensors.

  3. An object-oriented programming system for the integration of internet-based bioinformatics resources.

    PubMed

    Beveridge, Allan

    2006-01-01

    The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.

  4. Jupiter Europa Orbiter Architecture Definition Process

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Shishko, Robert

    2011-01-01

    The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.

  5. RSAT 2018: regulatory sequence analysis tools 20th anniversary.

    PubMed

    Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane

    2018-05-02

    RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.

  6. Using action research to design bereavement software: engaging people with intellectual disabilities for effective development.

    PubMed

    Read, Sue; Nte, Sol; Corcoran, Patsy; Stephens, Richard

    2013-05-01

    Loss is a universal experience and death is perceived as the ultimate loss. The overarching aim of this research is to produce a qualitative, flexible, interactive, computerised tool to support the facilitation of emotional expressions around loss for people with intellectual disabilities. This paper explores the process of using Participatory Action Research (PAR) to develop this tool. PAR provided the indicative framework for the process of developing a software tool that is likely to be used in practice. People with intellectual disabilities worked alongside researchers to produce an accessible, flexible piece of software that can facilitate storytelling around loss and bereavement and promote spontaneous expression that can be shared with others. This tool has the capacity to enable individuals to capture experiences in a storyboard format that can be stored, is easily retrievable, can be printed out, and could feasibly be personalised by the insertion of photographs. © 2012 Blackwell Publishing Ltd.

  7. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and managing data storage and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway.

  8. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T.

    2015-01-01

    SUMMARY The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and are suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and managing data storage and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway. PMID:26523124

  9. Sequence Search and Comparative Genomic Analysis of SUMO-Activating Enzymes Using CoGe.

    PubMed

    Carretero-Paulet, Lorenzo; Albert, Victor A

    2016-01-01

    The growing number of genome sequences completed during the last few years has made necessary the development of bioinformatics tools for the easy access and retrieval of sequence data, as well as for downstream comparative genomic analyses. Some of these are implemented as online platforms that integrate genomic data produced by different genome sequencing initiatives with data mining tools as well as various comparative genomic and evolutionary analysis possibilities. Here, we use the online comparative genomics platform CoGe (http://www.genomevolution.org/coge/) (Lyons and Freeling. Plant J 53:661-673, 2008; Tang and Lyons. Front Plant Sci 3:172, 2012) (1) to retrieve the entire complement of orthologous and paralogous genes belonging to the SUMO-Activating Enzymes 1 (SAE1) gene family from a set of species representative of the Brassicaceae plant eudicot family with genomes fully sequenced, and (2) to investigate the history, timing, and molecular mechanisms of the gene duplications driving the evolutionary expansion and functional diversification of the SAE1 family in Brassicaceae.

  10. TRMM Common Microphysics Products: A Tool for Evaluating Spaceborne Precipitation Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Kingsmill, David E.; Yuter, Sandra E.; Hobbs, Peter V.; Rangno, Arthur L.; Heymsfield, Andrew J.; Stith, Jeffrey L.; Bansemer, Aaron; Haggerty, Julie A.; Korolev, Alexei V.

    2004-01-01

    A customized product for analysis of microphysics data collected from aircraft during field campaigns in support of the TRMM program is described. These Common Microphysics Products (CMPs) are designed to aid in evaluation of TRMM spaceborne precipitation retrieval algorithms. Information needed for this purpose (e.g., particle size spectra and habit, liquid and ice water content) was derived using a common processing strategy on the wide variety of microphysical instruments and raw native data formats employed in the field campaigns. The CMPs are organized into an ASCII structure to allow easy access to the data for those less familiar with, and without the tools to accomplish, microphysical data processing. Detailed examples of the CMPs show their potential and some of their limitations. This approach may be a first step toward developing a generalized microphysics format and an associated community-oriented, non-proprietary software package for microphysics data processing, initiatives that would likely broaden community access to and use of microphysics datasets.

  11. NELS 2.0 - A general system for enterprise wide information management

    NASA Technical Reports Server (NTRS)

    Smith, Stephanie L.

    1993-01-01

    NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for development of NELS applications. The data manager is an interface between the session manager and the structured data system. The center of the structured data system is the Wide Area Information Server. This system architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.

  12. Alkemio: association of chemicals with biomedical topics by text and data mining

    PubMed Central

    Gijón-Correas, José A.; Andrade-Navarro, Miguel A.; Fontaine, Jean F.

    2014-01-01

    The PubMed® database of biomedical citations allows the retrieval of scientific articles studying the function of chemicals in biology and medicine. Mining millions of available citations to search reported associations between chemicals and topics of interest would require substantial human time. We have implemented the Alkemio text mining web tool and SOAP web service to help in this task. The tool uses biomedical articles discussing chemicals (including drugs), predicts their relatedness to the query topic with a naïve Bayesian classifier and ranks all chemicals by P-values computed from random simulations. Benchmarks on seven human pathways showed good retrieval performance (areas under the receiver operating characteristic curves ranged from 73.6 to 94.5%). Comparison with existing tools to retrieve chemicals associated to eight diseases showed the higher precision and recall of Alkemio when considering the top 10 candidate chemicals. Alkemio is a high performing web tool ranking chemicals for any biomedical topics and it is free to non-commercial users. Availability: http://cbdm.mdc-berlin.de/~medlineranker/cms/alkemio. PMID:24838570
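
    The simulation-based P-values described above can be illustrated with a generic permutation sketch in Python; score_fn is a hypothetical stand-in for the naïve Bayesian relatedness score, and this is not the Alkemio implementation itself:

        import numpy as np

        def empirical_p_value(observed_score, score_fn, article_set_size,
                              all_articles, n_sim=10000, seed=0):
            # Null distribution: scores of random article sets of the same size
            # as the chemical's own article set.
            rng = np.random.default_rng(seed)
            null = np.array([
                score_fn(rng.choice(all_articles, size=article_set_size,
                                    replace=False))
                for _ in range(n_sim)
            ])
            # Add-one correction avoids reporting a P-value of exactly zero.
            return (1 + np.sum(null >= observed_score)) / (n_sim + 1)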

  13. Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra

    NASA Astrophysics Data System (ADS)

    Khamatnurova, Marina; Gribanov, Konstantin

    2016-04-01

    The Levenberg-Marquardt method [1] with an iteratively adjusted parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, is developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. Retrieved methane vertical profiles are then used for calculation of the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and the temperature and humidity vertical profiles are retrieved for each selected spectrum before the methane vertical profile retrieval. The modified software package FIRE-ARMS [3] was used for the numerical experiments. To adjust parameters and validate the method, we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V. 184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its application to IMG/ADEOS data // JQSRT. 2001. V. 68, No. 4. P. 435-451. 4. http://www.ecmwf.int/
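
    For orientation, a minimal Python sketch of a Levenberg-Marquardt iteration with a damping parameter adjusted at each step is given below; the forward model f and Jacobian jac stand in for the radiative transfer calculation, and the adjustment factors are illustrative assumptions rather than the paper's values:

        import numpy as np

        def lm_retrieve(f, jac, x0, y, lam=1e-2, max_iter=30, tol=1e-6):
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = y - f(x)                       # measured minus modelled spectrum
                J = jac(x)
                A = J.T @ J
                # Marquardt-scaled damped normal equations.
                step = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
                if np.sum((y - f(x + step)) ** 2) < np.sum(r ** 2):
                    x, lam = x + step, lam / 2.0   # accept step, relax damping
                else:
                    lam *= 4.0                     # reject step, increase damping
                if np.linalg.norm(step) < tol:
                    break
            return x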

  14. Improving e-book access via a library-developed full-text search tool.

    PubMed

    Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N

    2007-01-01

    This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.

  15. Improving e-book access via a library-developed full-text search tool*

    PubMed Central

    Foust, Jill E.; Bergen, Phillip; Maxeiner, Gretchen L.; Pawlowski, Peter N.

    2007-01-01

    Purpose: This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. Setting: The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. Brief Description: The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single “Google-style” query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. Results/Evaluation: A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. Conclusion: This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products. PMID:17252065

  16. The Strength of Ethical Matrixes as a Tool for Normative Analysis Related to Technological Choices: The Case of Geological Disposal for Radioactive Waste.

    PubMed

    Kermisch, Céline; Depaus, Christophe

    2018-02-01

    The ethical matrix is a participatory tool designed to structure ethical reflection about the design, the introduction, the development or the use of technologies. Its collective implementation, in the context of participatory decision-making, has shown its potential usefulness. By contrast, its implementation by a single researcher has not been thoroughly analyzed. The aim of this paper is precisely to assess the strength of ethical matrixes implemented by a single researcher as a tool for conceptual normative analysis related to technological choices. Therefore, the ethical matrix framework is applied to the management of high-level radioactive waste, more specifically to retrievable and non-retrievable geological disposal. The results of this analysis show that the usefulness of ethical matrixes is twofold and that they provide a valuable input for further decision-making. Indeed, by using ethical matrixes, implicit ethically relevant issues were revealed, namely issues of equity associated with health impacts and differences between close and remote future generations regarding ethical impacts. Moreover, the ethical matrix framework was helpful in synthesizing and comparing systematically the ethical impacts of the technologies under scrutiny, and hence in highlighting the potential ethical conflicts.

  17. RAVEL: retrieval and visualization in ELectronic health records.

    PubMed

    Thiessard, Frantz; Mougin, Fleur; Diallo, Gayo; Jouhet, Vianney; Cossin, Sébastien; Garcelon, Nicolas; Campillo, Boris; Jouini, Wassim; Grosjean, Julien; Massari, Philippe; Griffon, Nicolas; Dupuch, Marie; Tayalati, Fayssal; Dugas, Edwige; Balvet, Antonio; Grabar, Natalia; Pereira, Suzanne; Frandji, Bruno; Darmoni, Stefan; Cuggia, Marc

    2012-01-01

    Because of the ever-increasing amount of information in patients' EHRs, healthcare professionals may face difficulties in making diagnoses and/or therapeutic decisions. Moreover, patients may misunderstand their health status. These medical practitioners need effective tools to locate relevant elements within the patient's EHR in real time and visualize them according to synthetic and intuitive presentation models. The RAVEL project aims at achieving this goal by performing a high profile industrial research and development program on the EHR considering the following areas: (i) semantic indexing, (ii) information retrieval, and (iii) data visualization. The RAVEL project is expected to implement a generic prototype, loosely coupled to its data sources, so that it can be transposed to different university hospital information systems.

  18. Computer aided systems human engineering: A hypermedia tool

    NASA Technical Reports Server (NTRS)

    Boff, Kenneth R.; Monk, Donald L.; Cody, William J.

    1992-01-01

    The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.

  19. Visualization and interaction tools for aerial photograph mosaics

    NASA Astrophysics Data System (ADS)

    Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António

    1997-05-01

    This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, that will enable users both to retrieve geospatial information existing in the Portuguese National System for Geographic Information World Wide Web server, and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophotos mosaics. Main applications of this digital spatial library are pointed out and discussed, namely for education, professional, and tourism markets. Future developments are considered. These developments are related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.

  20. UNIFORM ATMOSPHERIC RETRIEVAL ANALYSIS OF ULTRACOOL DWARFS. I. CHARACTERIZING BENCHMARKS, Gl 570D AND HD 3651B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Line, Michael R.; Fortney, Jonathan J.; Teske, Johanna

    Interpreting the spectra of brown dwarfs is key to determining the fundamental physical and chemical processes occurring in their atmospheres. Powerful Bayesian atmospheric retrieval tools have recently been applied to both exoplanet and brown dwarf spectra to tease out the thermal structures and molecular abundances to understand those processes. In this manuscript we develop a significantly upgraded retrieval method and apply it to the SpeX spectral library data of two benchmark late T dwarfs, Gl 570D and HD 3651B, to establish the validity of our upgraded forward model parameterization and Bayesian estimator. Our retrieved metallicities, gravities, and effective temperatures are consistent with the metallicity and presumed ages of the systems. We add the carbon-to-oxygen ratio as a new dimension to benchmark systems and find good agreement between carbon-to-oxygen ratios derived in the brown dwarfs and the host stars. Furthermore, we have for the first time unambiguously determined the presence of ammonia in the low-resolution spectra of these two late T dwarfs. We also show that the retrieved results are not significantly impacted by the possible presence of clouds, though some quantities are significantly impacted by uncertainties in photometry. This investigation represents a watershed study in establishing the utility of atmospheric retrieval approaches on brown dwarf spectra.

  1. The Development of a Diagnostic-Prescriptive Tool for Undergraduates Seeking Information for a Social Science/Humanities Assignment. III. Enabling Devices.

    ERIC Educational Resources Information Center

    Cole, Charles; Cantero, Pablo; Ungar, Andras

    2000-01-01

    This article focuses on a study of undergraduates writing an essay for a remedial writing course that tested two devices, an uncertainty expansion device and an uncertainty reduction device. Highlights include Kuhlthau's information search process model, and enabling technology devices for the information needs of information retrieval system…

  2. Knowledge: Creation, Organization and Use. ASIS '99: Proceedings of the American Society for Information Science (ASIS) Annual Meeting (62nd, Washington, DC, October 31-November 4, 1999). Volume 36.

    ERIC Educational Resources Information Center

    Woods, Larry, Ed.

    The 1999 American Society for Information Science (ASIS) conference explored current knowledge creation, acquisition, navigation, correlation, retrieval, management, and dissemination practicalities and potentialities, their implementation and impact, and the theories behind the developments. Speakers reviewed processes, technologies, and tools,…

  3. Plume Tracker: Interactive mapping of volcanic sulfur dioxide emissions with high-performance radiative transfer modeling

    NASA Astrophysics Data System (ADS)

    Realmuto, Vincent J.; Berk, Alexander

    2016-11-01

    We describe the development of Plume Tracker, an interactive toolkit for the analysis of multispectral thermal infrared observations of volcanic plumes and clouds. Plume Tracker is the successor to MAP_SO2, and together these flexible and comprehensive tools have enabled investigators to map sulfur dioxide (SO2) emissions from a number of volcanoes with TIR data from a variety of airborne and satellite instruments. Our objective for the development of Plume Tracker was to improve the computational performance of the retrieval procedures while retaining the accuracy of the retrievals. We have achieved a 300× improvement in the benchmark performance of the retrieval procedures through the introduction of innovative data binning and signal reconstruction strategies, and improved the accuracy of the retrievals with a new method for evaluating the misfit between model and observed radiance spectra. We evaluated the accuracy of Plume Tracker retrievals with case studies based on MODIS and AIRS data acquired over Sarychev Peak Volcano, and ASTER data acquired over Kilauea and Turrialba Volcanoes. In the Sarychev Peak study, the AIRS-based estimate of total SO2 mass was 40% lower than the MODIS-based estimate. This result was consistent with a 45% reduction in the AIRS-based estimate of plume area relative to the corresponding MODIS-based estimate. In addition, we found that our AIRS-based estimate agreed with an independent estimate, based on a competing retrieval technique, within a margin of ±20%. In the Kilauea study, the ASTER-based concentration estimates from 21 May 2012 were within ±50% of concurrent ground-level concentration measurements. In the Turrialba study, the ASTER-based concentration estimates on 21 January 2012 were in exact agreement with SO2 concentrations measured at plume altitude on 1 February 2012.
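
    The paper's specific misfit measure is not reproduced here; a common generic choice for comparing observed and modelled radiance spectra, shown only as an illustrative stand-in, is a noise-weighted mean-square residual:

        import numpy as np

        def radiance_misfit(observed, modelled, noise_sd):
            # Noise-weighted mean-square residual over the spectral channels.
            resid = (np.asarray(observed) - np.asarray(modelled)) / np.asarray(noise_sd)
            return float(np.mean(resid ** 2))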

  4. PLASMAP: an interactive computational tool for storage, retrieval and device-independent graphic display of conventional restriction maps.

    PubMed Central

    Stone, B N; Griesinger, G L; Modelevsky, J L

    1984-01-01

    We describe an interactive computational tool, PLASMAP, which allows the user to electronically store, retrieve, and display circular restriction maps. PLASMAP permits users to construct libraries of plasmid restriction maps as a set of files which may be edited in the laboratory at any time. The display feature of PLASMAP quickly generates device-independent, artist-quality, full-color or monochrome, hard copies or CRT screens of complex, conventional circular restriction maps. PMID:6320096

  5. A Multi-Discipline Approach to Digitizing Historic Seismograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Andrew

    2016-04-07

    Retriever Technology has developed and has made available free of charge a seismogram digitization software package called SKATE (Seismogram Kit for Automatic Trace Extraction). We have developed an extensive set of algorithms that process seismogram image files, provide editing tools, and output time series data. The software is available online and free of charge at seismo.redfish.com. To demonstrate the speed and cost effectiveness of the software, we have processed over 30,000 images.

  6. Advancements in Large-Scale Data/Metadata Management for Scientific Data.

    NASA Astrophysics Data System (ADS)

    Guntupally, K.; Devarakonda, R.; Palanisamy, G.; Frame, M. T.

    2017-12-01

    Scientific data often comes with complex and diverse metadata which are critical for data discovery and users. The Online Metadata Editor (OME) tool, which was developed by an Oak Ridge National Laboratory team, effectively manages diverse scientific datasets across several federal data centers, such as DOE's Atmospheric Radiation Measurement (ARM) Data Center and USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L) project. This presentation will focus mainly on recent developments and future strategies for refining OME tool within these centers. The ARM OME is a standard based tool (https://www.archive.arm.gov/armome) that allows scientists to create and maintain metadata about their data products. The tool has been improved with new workflows that help metadata coordinators and submitting investigators to submit and review their data more efficiently. The ARM Data Center's newly upgraded Data Discovery Tool (http://www.archive.arm.gov/discovery) uses rich metadata generated by the OME to enable search and discovery of thousands of datasets, while also providing a citation generator and modern order-delivery techniques like Globus (using GridFTP), Dropbox and THREDDS. The Data Discovery Tool also supports incremental indexing, which allows users to find new data as and when they are added. The USGS CSAS&L search catalog employs a custom version of the OME (https://www1.usgs.gov/csas/ome), which has been upgraded with high-level Federal Geographic Data Committee (FGDC) validations and the ability to reserve and mint Digital Object Identifiers (DOIs). The USGS's Science Data Catalog (SDC) (https://data.usgs.gov/datacatalog) allows users to discover a myriad of science data holdings through a web portal. Recent major upgrades to the SDC and ARM Data Discovery Tool include improved harvesting performance and migration using new search software, such as Apache Solr 6.0 for serving up data/metadata to scientific communities. Our presentation will highlight the future enhancements of these tools which enable users to retrieve fast search results, along with parallelizing the retrieval process from online and High Performance Storage Systems. In addition, these improvements to the tools will support additional metadata formats like the Large-Eddy Simulation (LES) ARM Symbiotic and Observation (LASSO) bundle data.
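
    As an illustration of the kind of Solr-backed search these discovery tools expose, the Python sketch below issues a standard Solr select query over indexed dataset metadata; the endpoint URL and field names are hypothetical, not the actual ARM or USGS schema:

        import requests

        SOLR_URL = "http://localhost:8983/solr/metadata/select"  # hypothetical core

        def search_datasets(keyword, start=0, rows=10):
            # Standard Solr select query; paging via start/rows supports
            # incremental retrieval of results.
            params = {
                "q": f"title:({keyword}) OR abstract:({keyword})",
                "start": start,
                "rows": rows,
                "wt": "json",
            }
            resp = requests.get(SOLR_URL, params=params, timeout=30)
            resp.raise_for_status()
            return resp.json()["response"]["docs"]

        for doc in search_datasets("soil temperature"):
            print(doc.get("title"))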

  7. Future missions for observing Earth's changing gravity field: a closed-loop simulation tool

    NASA Astrophysics Data System (ADS)

    Visser, P. N.

    2008-12-01

    The GRACE mission has successfully demonstrated the observation from space of the changing Earth's gravity field at length and time scales of typically 1000 km and 10-30 days, respectively. Many scientific communities strongly advertise the need for continuity of observing Earth's gravity field from space. Moreover, a strong interest is being expressed to have gravity missions that allow a more detailed sampling of the Earth's gravity field both in time and in space. Designing a gravity field mission for the future is a complicated process that involves making many trade-offs, such as trade-offs between spatial, temporal resolution and financial budget. Moreover, it involves the optimization of many parameters, such as orbital parameters (height, inclination), distinction between which gravity sources to observe or correct for (for example are gravity changes due to ocean currents a nuisance or a signal to be retrieved?), observation techniques (low-low satellite-to-satellite tracking, satellite gravity gradiometry, accelerometers), and satellite control systems (drag-free?). A comprehensive tool has been developed and implemented that allows the closed-loop simulation of gravity field retrievals for different satellite mission scenarios. This paper provides a description of this tool. Moreover, its capabilities are demonstrated by a few case studies. Acknowledgments. The research that is being done with the closed-loop simulation tool is partially funded by the European Space Agency (ESA). An important component of the tool is the GEODYN software, kindly provided by NASA Goddard Space Flight Center in Greenbelt, Maryland.

  8. Can hook-bending be let off the hook? Bending/unbending of pliant tools by cockatoos.

    PubMed

    Laumer, I B; Bugnyar, T; Reber, S A; Auersperg, A M I

    2017-09-13

    The spontaneous crafting of hook-tools from bendable material to lift a basket out of a vertical tube in corvids has widely been used as one of the prime examples of animal tool innovation. However, it was recently suggested that the animals' solution was hardly innovative but strongly influenced by predispositions from habitual tool use and nest building. We tested Goffin's cockatoo, which is neither a specialized tool user nor a nest builder, on a similar task set-up. Three birds individually learned to bend hook tools from straight wire to retrieve food from vertical tubes and four subjects unbent wire to retrieve food from horizontal tubes. Pre-experience with ready-made hooks had some effect but was not necessary for success. Our results indicate that the ability to represent and manufacture tools according to a current need does not require genetically hardwired behavioural routines, but can indeed arise innovatively from domain general cognitive processing. © 2017 The Authors.

  9. ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

    NASA Astrophysics Data System (ADS)

    Hosseini, Kasra; Sigloch, Karin

    2017-10-01

    We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).
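
    Since obspyDMT builds on ObsPy, the sketch below uses plain ObsPy to issue the kind of single FDSN waveform request, with instrument correction, that obspyDMT automates and manages at scale; the network and station codes are arbitrary examples, and this is not obspyDMT's own command syntax:

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("IRIS")            # one of the data centers obspyDMT can query
        t0 = UTCDateTime("2017-01-01T00:00:00")
        st = client.get_waveforms(network="IU", station="ANMO", location="00",
                                  channel="BHZ", starttime=t0, endtime=t0 + 3600,
                                  attach_response=True)
        st.remove_response(output="VEL")   # instrument correction step
        st.plot()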

  10. Techniques for Soundscape Retrieval and Synthesis

    NASA Astrophysics Data System (ADS)

    Mechtley, Brandon Michael

    The study of acoustic ecology is concerned with the manner in which life interacts with its environment as mediated through sound. As such, a central focus is that of the soundscape: the acoustic environment as perceived by a listener. This dissertation examines the application of several computational tools in the realms of digital signal processing, multimedia information retrieval, and computer music synthesis to the analysis of the soundscape. Namely, these tools include a) an open source software library, Sirens, which can be used to segment long environmental field recordings into individual sonic events and to compare these events in terms of acoustic content, b) a graph-based retrieval system that can use these measures of acoustic similarity and measures of semantic similarity, derived from the lexical database WordNet, to perform both text-based retrieval and automatic annotation of environmental sounds, and c) new techniques for the dynamic, real-time parametric morphing of multiple field recordings, informed by the geographic paths along which they were recorded.
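
    For the semantic side of the retrieval system, a WordNet-based similarity between sound tags can be computed along the lines of the Python sketch below; the first-sense selection is a simplification, and the dissertation's actual measure may differ:

        from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

        def tag_similarity(word_a, word_b):
            # Path similarity between the first noun senses of two sound tags.
            syns_a = wn.synsets(word_a, pos=wn.NOUN)
            syns_b = wn.synsets(word_b, pos=wn.NOUN)
            if not syns_a or not syns_b:
                return 0.0
            return syns_a[0].path_similarity(syns_b[0]) or 0.0

        print(tag_similarity("thunder", "rain"))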

  11. Customised search and comparison of in situ, satellite and model data for ocean modellers

    NASA Astrophysics Data System (ADS)

    Hamre, Torill; Vines, Aleksander; Lygre, Kjetil

    2014-05-01

    For the ocean modelling community, the amount of available data from historical and upcoming in situ sensor networks and satellite missions provides a rich opportunity to validate and improve their simulation models. However, the problem of making the different data interoperable and intercomparable remains, due to, among other things, differences in the terminology and formats used by different data providers and the different granularity provided by e.g. in situ data and ocean models. The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. In the project, one specific objective has been to improve the technology for accessing historical plankton and associated environmental data sets, along with earth observation data and simulation outputs. To this end, we have developed a web portal enabling ocean modellers to easily search for in situ or satellite data overlapping in space and time, and compare the retrieved data with their model results. The in situ data are retrieved from a geo-spatial repository containing both historical and new physical, biological and chemical parameters for the Southern Ocean, Atlantic, Nordic Seas and the Arctic. The satellite-derived quantities of similar parameters from the same areas are retrieved from another geo-spatial repository established in the project. Both repositories are accessed through standard interfaces, using the Open Geospatial Consortium (OGC) Web Map Service (WMS) and Web Feature Service (WFS), and OPeNDAP protocols, respectively. While the developed data repositories use standard terminology to describe the parameters, the measured in situ biological parameters in particular are too fine grained to be immediately useful for modelling purposes. Therefore, the plankton parameters were grouped according to category, size and, if available, by element. This grouping was reflected in the web portal's graphical user interface, where the groups and subgroups were organized in a tree structure, enabling the modeller to quickly get an overview of available data, going into more detail (subgroups) if needed or staying at a higher level of abstraction (merging the parameters below) if this provided a better base for comparison with the model parameters. Once the modeller had decided on a suitable level of detail, the system would retrieve the available in situ parameters. The modellers could then select among the pre-defined models, or upload their own model forecast file (in NetCDF/CF format), for comparison with the retrieved in situ data. The comparison can be shown in different kinds of plots (e.g. scatter plots) or through simple statistical measures, or near-coincident values of in situ and model points can be exported for further analysis in the modeller's own tools. During data search and presentation, the modeller can determine both the query criteria and what associated metadata to include in the display and export of the retrieved data. Satellite-derived parameters can be queried and compared with model results in the same manner. With the developed prototype system, we have demonstrated that a customised tool for searching, presenting, comparing and exporting ocean data from multiple platforms (in situ, satellite, model) makes it easy to compare model results with independent observations. With further enhancement of functionality and inclusion of more data, we believe the resulting system can greatly benefit the wider community of ocean modellers looking for data and tools to validate their models.
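
    The portal's map layers are served over the standard OGC WMS interface; a GetMap request of the kind used to fetch a rendered layer is sketched below in Python. The server URL and layer name are hypothetical, while the parameters follow the WMS 1.3.0 specification:

        import requests

        WMS_URL = "http://example.org/greenseas/wms"   # hypothetical server

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "chlorophyll_a",                 # hypothetical layer name
            "CRS": "EPSG:4326",
            "BBOX": "-80,-180,80,180",                 # lat/lon axis order in WMS 1.3.0
            "WIDTH": "800",
            "HEIGHT": "400",
            "FORMAT": "image/png",
            "TIME": "2012-06-01",
        }
        resp = requests.get(WMS_URL, params=params, timeout=60)
        resp.raise_for_status()
        with open("chl_map.png", "wb") as fh:
            fh.write(resp.content)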

  12. Optimal estimation retrievals of the atmospheric structure and composition of HD 189733b from secondary eclipse spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, J.-M.; Fletcher, L. N.; Irwin, P. G. J.

    2012-02-01

    Recent spectroscopic observations of transiting hot Jupiters have permitted the derivation of the thermal structure and molecular abundances of H2O, CO2, CO and CH4 in these extreme atmospheres. Here, for the first time, we apply the technique of optimal estimation to determine the thermal structure and composition of an exoplanet by solving the inverse problem. The development of a suite of radiative transfer and retrieval tools for exoplanet atmospheres is described, building upon a retrieval algorithm which is extensively used in the study of our own Solar system. First, we discuss the plausibility of detection of different molecules in the dayside atmosphere of HD 189733b and the best-fitting spectrum retrieved from all publicly available sets of secondary eclipse observations between 1.45 and 24 μm. Additionally, we use contribution functions to assess the vertical sensitivity of the emission spectrum to temperatures and molecular composition. Over the altitudes probed by the contribution functions, the retrieved thermal structure shows an isothermal upper atmosphere overlying a deeper adiabatic layer (temperature decreasing with altitude), which is consistent with previously reported dynamical and observational results. The formal uncertainties on retrieved parameters are estimated conservatively using an analysis of the cross-correlation functions and the degeneracy between different atmospheric properties. The formal solution of the inverse problem suggests that the uncertainties on retrieved parameters are larger than suggested in previous studies, and that the presence of CO and CH4 is only marginally supported by the available data. Nevertheless, by including as broad a wavelength range as possible in the retrieval, we demonstrate that available spectra of HD 189733b can constrain a family of potential solutions for the atmospheric structure.
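
    The optimal estimation formalism applied above minimizes a standard two-term cost, the measurement misfit plus the departure from the a priori state (in the sense of Rodgers); the Python sketch below computes that cost for a given state vector, with forward_model standing in for the radiative transfer code:

        import numpy as np

        def oe_cost(x, y, forward_model, x_a, S_a_inv, S_e_inv):
            # x: state vector; y: measured spectrum; x_a: a priori state;
            # S_e_inv, S_a_inv: inverse measurement and prior covariances.
            r_y = y - forward_model(x)   # spectral residual
            r_x = x - x_a                # departure from the a priori state
            return float(r_y @ S_e_inv @ r_y + r_x @ S_a_inv @ r_x)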

  13. Alkemio: association of chemicals with biomedical topics by text and data mining.

    PubMed

    Gijón-Correas, José A; Andrade-Navarro, Miguel A; Fontaine, Jean F

    2014-07-01

    The PubMed® database of biomedical citations allows the retrieval of scientific articles studying the function of chemicals in biology and medicine. Mining millions of available citations to search reported associations between chemicals and topics of interest would require substantial human time. We have implemented the Alkemio text mining web tool and SOAP web service to help in this task. The tool uses biomedical articles discussing chemicals (including drugs), predicts their relatedness to the query topic with a naïve Bayesian classifier and ranks all chemicals by P-values computed from random simulations. Benchmarks on seven human pathways showed good retrieval performance (areas under the receiver operating characteristic curves ranged from 73.6 to 94.5%). Comparison with existing tools to retrieve chemicals associated to eight diseases showed the higher precision and recall of Alkemio when considering the top 10 candidate chemicals. Alkemio is a high performing web tool ranking chemicals for any biomedical topics and it is free to non-commercial users. http://cbdm.mdc-berlin.de/~medlineranker/cms/alkemio. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Diminishing-cues retrieval practice: A memory-enhancing technique that works when regular testing doesn't.

    PubMed

    Fiechter, Joshua L; Benjamin, Aaron S

    2017-08-28

    Retrieval practice has been shown to be a highly effective tool for enhancing memory, a fact that has led to major changes to educational practice and technology. However, when initial learning is poor, initial retrieval practice is unlikely to be successful and long-term benefits of retrieval practice are compromised or nonexistent. Here, we investigate the benefit of a scaffolded retrieval technique called diminishing-cues retrieval practice (Finley, Benjamin, Hays, Bjork, & Kornell, Journal of Memory and Language, 64, 289-298, 2011). Under learning conditions that favored a strong testing effect, diminishing cues and standard retrieval practice both enhanced memory performance relative to restudy. Critically, under learning conditions where standard retrieval practice was not helpful, diminishing cues enhanced memory performance substantially. These experiments demonstrate that diminishing-cues retrieval practice can widen the range of conditions under which testing can benefit memory, and so can serve as a model for the broader application of testing-based techniques for enhancing learning.
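
    To make the procedure concrete, the sketch below generates a diminishing-cue schedule for a single target word; it is illustrative only and not the exact stimulus schedule of Finley et al. (2011):

        def diminishing_cues(word, n_trials=4):
            # Reveal fewer letters of the target on each successive practice
            # trial (n_trials must be at least 2).
            cues = []
            for trial in range(n_trials):
                keep = len(word) - trial * len(word) // (n_trials - 1)
                cues.append(word[:keep] + "_" * (len(word) - keep))
            return cues

        print(diminishing_cues("retrieval"))
        # ['retrieval', 'retrie___', 'ret______', '_________']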

  15. Content Classification: Leveraging New Tools and Librarians' Expertise.

    ERIC Educational Resources Information Center

    Starr, Jennie

    1999-01-01

    Presents factors for librarians to consider when decision-making about information retrieval. Discusses indexing theory; thesauri aids; controlled vocabulary or thesauri to increase access; humans versus machines; automated tools; product evaluations and evaluation criteria; automated classification tools; content server products; and document…

  16. Automated payload experiment tool feasibility study

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Clark, James; Delugach, Harry; Hammons, Charles; Logan, Julie; Provancha, Anna

    1991-01-01

    To achieve an environment less dependent on the flow of paper, automated techniques of data storage and retrieval must be utilized. The prototype under development seeks to demonstrate the capabilities of a knowledge-based, hypertext computer system. This prototype is concerned with the logical links between two primary NASA support documents, the Science Requirements Document (SRD) and the Engineering Requirements Document (ERD). Once developed, the final system should have the ability to guide a principal investigator through the documentation process in a more timely and efficient manner, while supplying more accurate information to the NASA payload developer.

  17. Tool use and mechanical problem solving in apraxia.

    PubMed

    Goldenberg, G; Hagmann, S

    1998-07-01

    Morlaas (1928) proposed that apraxic patients can identify objects and can remember the purpose they have been made for but do not know the way in which they must be used to achieve that purpose. Knowledge about the use of objects and tools can have two sources: it can be based on retrieval of instructions of use from semantic memory or on a direct inference of function from structure. The ability to infer function from structure enables subjects to use unfamiliar tools and to detect alternative uses of familiar tools. It is the basis of mechanical problem solving. The purpose of the present study was to analyze retrieval of instructions of use, mechanical problem solving, and actual tool use in patients with apraxia due to circumscribed lesions of the left hemisphere. For assessing mechanical problem solving we developed a test of selection and application of novel tools. Access to instructions of use was tested by pantomime of tool use. Actual tool use was examined for the same familiar tools. Forty-two patients with left brain damage (LBD) and aphasia, 22 patients with right brain damage (RBD) and 22 controls were examined. Only LBD patients differed from controls on all tests. RBD patients had difficulties with the use but not with the selection of novel tools. In LBD patients there was a significant correlation between pantomime of tool use and novel tool selection, but there were single cases who scored in the defective range on one of these tests and normally on the other. Analysis of LBD patients' lesions suggested that frontal lobe damage does not disturb novel tool selection. Only LBD patients who failed on pantomime of object use and on novel tool selection committed errors in actual use of familiar tools. The finding that mechanical problem solving is invariably defective in apraxic patients who commit errors with familiar tools is in good accord with clinical observations, as the gravity of their errors goes beyond what one would expect as a mere sequel of loss of access to instructions of use.

  18. IntegromeDB: an integrated system and biological search engine.

    PubMed

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach at developing a biological web search engine based on the Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  19. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.

  20. Purdue ionomics information management system. An integrated functional genomics platform.

    PubMed

    Baxter, Ivan; Ouzzani, Mourad; Orcun, Seza; Kennedy, Brad; Jandhyala, Shrinivas S; Salt, David E

    2007-02-01

    The advent of high-throughput phenotyping technologies has created a deluge of information that is difficult to deal with without the appropriate data management tools. These data management tools should integrate defined workflow controls for genomic-scale data acquisition and validation, data storage and retrieval, and data analysis, indexed around the genomic information of the organism of interest. To maximize the impact of these large datasets, it is critical that they are rapidly disseminated to the broader research community, allowing open access for data mining and discovery. We describe here a system that incorporates such functionalities developed around the Purdue University high-throughput ionomics phenotyping platform. The Purdue Ionomics Information Management System (PiiMS) provides integrated workflow control, data storage, and analysis to facilitate high-throughput data acquisition, along with integrated tools for data search, retrieval, and visualization for hypothesis development. PiiMS is deployed as a World Wide Web-enabled system, allowing for integration of distributed workflow processes and open access to raw data for analysis by numerous laboratories. PiiMS currently contains data on shoot concentrations of P, Ca, K, Mg, Cu, Fe, Zn, Mn, Co, Ni, B, Se, Mo, Na, As, and Cd in over 60,000 shoot tissue samples of Arabidopsis (Arabidopsis thaliana), including ethyl methanesulfonate, fast-neutron and defined T-DNA mutants, and natural accessions and populations of recombinant inbred lines from over 800 separate experiments, representing over 1,000,000 fully quantitative elemental concentrations. PiiMS is accessible at www.purdue.edu/dp/ionomics.

  1. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
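
    As an illustration of the index-then-search pattern that a Mercury-style catalog builds on Apache Solr, here is a hedged sketch using pysolr; the core URL, field names, and record are hypothetical:

      # Index one harvested metadata record, then search the catalog.
      # Core name and fields are illustrative only.
      import pysolr

      solr = pysolr.Solr("http://localhost:8983/solr/metadata", always_commit=True)

      solr.add([{
          "id": "ngee-arctic-0001",
          "title": "Soil temperature, Barrow, Alaska",
          "source": "NGEE Arctic",
          "start_date": "2017-01-01T00:00:00Z",
      }])

      # Keyword search; filters map onto Solr query syntax.
      for hit in solr.search("soil temperature", rows=5):
          print(hit["id"], hit["title"])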

  2. Phase Retrieval System for Assessing Diamond Turning and Optical Surface Defects

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Maldonado, Alex; Bolcar, Matthew

    2011-01-01

    An optical design is presented for a measurement system used to assess the impact of surface errors originating from diamond turning artifacts. Diamond turning artifacts are common by-products of optical surface shaping using the diamond turning process (a diamond-tipped cutting tool used in a lathe configuration). Assessing and evaluating the errors imparted by diamond turning (including other surface errors attributed to optical manufacturing techniques) can be problematic and generally requires the use of an optical interferometer. Commercial interferometers can be expensive when compared to the simple optical setup developed here, which is used in combination with an image-based sensing technique (phase retrieval). Phase retrieval is a general term used in optics to describe the estimation of optical imperfections or aberrations. This turnkey system uses only image-based data and has minimal hardware requirements. The system is straightforward to set up, easy to align, and can provide nanometer accuracy on the measurement of optical surface defects.

  3. Homemade specimen retrieval bag for laparoscopic cholecystectomy: A solution in the time of fiscal crisis.

    PubMed

    Stavrou, George; Fotiadis, Kyriakos; Panagiotou, Dimitrios; Faitatzidou, Afroditi; Kotzampassi, Katerina

    2015-05-01

    Due to the current economic crisis in Greece, major cutbacks on healthcare costs have been imposed, resulting in a shortage of surgical supplies, including laparoscopic materials. In an attempt to reduce costs, we developed a homemade specimen retrieval bag for laparoscopic cholecystectomy. We used the polyethylene bag containing the catheter of a Redon drainage set. The bag was cut in half and pleated longitudinally; then, the gallbladder was placed in the bag and removed through the umbilicus with a grasping forceps. From September 2011 to June 2012, we used our homemade bag on 85 patients undergoing laparoscopic cholecystectomy. No rupture, accidental opening, or bile leak was observed. The learning curve was found to be five cases. Our homemade specimen retrieval bag seems to be a safe, effective, and easy tool for tissue extraction. Further studies need to be conducted to evaluate its full potential. © 2015 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Wiley Publishing Asia Pty Ltd.

  4. Effects of Information Access Cost and Accountability on Medical Residents' Information Retrieval Strategy and Performance During Prehandover Preparation: Evidence From Interview and Simulation Study.

    PubMed

    Yang, X Jessie; Wickens, Christopher D; Park, Taezoon; Fong, Liesel; Siah, Kewin T H

    2015-12-01

    We aimed to examine the effects of information access cost and accountability on medical residents' information retrieval strategy and performance during prehandover preparation. Prior studies observing doctors' prehandover practices witnessed the use of memory-intensive strategies when retrieving patient information. These strategies pose potential threats to patient safety as human memory is prone to errors. Of interest in this work are the underlying determinants of information retrieval strategy and the potential impacts on medical residents' information preparation performance. A two-step research approach was adopted, consisting of semistructured interviews with 21 medical residents and a simulation-based experiment with 32 medical residents. The semistructured interviews revealed that a substantial portion of medical residents (38%) relied largely on memory for preparing handover information. The simulation-based experiment showed that higher information access cost reduced information access attempts and access duration on patient documents and harmed information preparation performance. Higher accountability led to marginally longer access to patient documents. It is important to understand the underlying determinants of medical residents' information retrieval strategy and performance during prehandover preparation. We noted the criticality of easy access to patient documents in prehandover preparation. In addition, accountability marginally influenced medical residents' information retrieval strategy. Findings from this research suggested that the cost of accessing information sources should be minimized in developing handover preparation tools. © 2015, Human Factors and Ergonomics Society.

  5. EFL Students' Perceptions of Corpus-Tools as Writing References

    ERIC Educational Resources Information Center

    Lai, Shu-Li

    2015-01-01

    A number of studies have suggested the potential of corpus tools in vocabulary learning. However, there are still some concerns. Corpus tools might be too complicated to use; example sentences retrieved from corpus tools might be too difficult to understand; processing a large number of sample sentences could be challenging and time-consuming;…

  6. An application of machine learning to the organization of institutional software repositories

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney; Henderson, Scott; Truszkowski, Walt

    1993-01-01

    Software reuse has become a major goal in the development of space systems, as a recent NASA-wide workshop on the subject made clear. The Data Systems Technology Division of Goddard Space Flight Center has been working on tools and techniques for promoting reuse, in particular in the development of satellite ground support software. One of these tools is the Experiment in Libraries via Incremental Schemata and Cobweb (ElvisC). ElvisC applies machine learning to the problem of organizing a reusable software component library for efficient and reliable retrieval. In this paper we describe the background factors that have motivated this work, present the design of the system, and evaluate the results of its application.

  7. Photogrammetry for Archaeology: Collecting Pieces Together

    NASA Astrophysics Data System (ADS)

    Chibunichev, A. G.; Knyaz, V. A.; Zhuravlev, D. V.; Kurkov, V. M.

    2018-05-01

    The complexity of retrieving and understanding archaeological data requires the application of different techniques, tools and sensors for information gathering, processing and documenting. Archaeological research now has an interdisciplinary nature, involving technologies based on different physical principles for retrieving information about archaeological findings. An important part of archaeological data is visual and spatial information, which allows reconstruction of the appearance of the findings and the relations between them. Photogrammetry has great potential for the accurate acquisition of spatial and visual data at different scales and resolutions, allowing the creation of archaeological documents of new type and quality. The aim of the presented study is to develop an approach for creating new forms of archaeological documents, along with a pipeline for producing them and collecting them into one holistic model describing an archaeological site. A set of techniques is developed for the acquisition and integration of spatial and visual data at different levels of detail. The application of the developed techniques is demonstrated by documenting the Bosporus archaeological expedition of the Russian State Historical Museum.

  8. External access to ALICE controls conditions data

    NASA Astrophysics Data System (ADS)

    Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.

    2014-06-01

    ALICE Controls data produced by the commercial SCADA system WINCCOA is stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts, which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead: users have to request the data, experts are needed to run the scripts, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica running on the CERN campus network. Access to this database is not restricted, and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow access to this data. Users can use web-based tools to generate requests consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data: an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows the user to profit from the presence of already processed data sets. This feature significantly reduces the time required for debugging, as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval bypassing the interactive interface. This method is, for example, used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can easily be adapted to any underlying database structure and is therefore not limited to WINCCOA.

  9. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a python parser processes the XML files generated by the mind maps, (3) Django (python) is used to generate the dynamic structure and content of the web based questionnaire from processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would never ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development
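
    To make steps (2)-(3) concrete, here is a rough sketch of parsing a controlled-vocabulary XML export and turning it into choices for a Django form field; the tag and attribute names and the file are assumptions, not the METAFOR CIM schema:

      # Parse a vocabulary export and generate a questionnaire question.
      # Run inside a configured Django project; names are hypothetical.
      import xml.etree.ElementTree as ET
      from django import forms

      tree = ET.parse("vocab_model_components.xml")  # hypothetical export
      choices = [
          (term.get("id"), term.get("label"))
          for term in tree.getroot().iter("term")
      ]

      class ComponentForm(forms.Form):
          # One questionnaire question generated from the vocabulary.
          component = forms.ChoiceField(choices=choices, label="Model component")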

  10. EARLINET Single Calculus Chain - technical - Part 2: Calculation of optical products

    NASA Astrophysics Data System (ADS)

    Mattis, Ina; D'Amico, Giuseppe; Baars, Holger; Amodeo, Aldo; Madonna, Fabio; Iarlori, Marco

    2016-07-01

    In this paper we present the automated software tool ELDA (EARLINET Lidar Data Analyzer) for the retrieval of profiles of optical particle properties from lidar signals. This tool is one of the calculus modules of the EARLINET Single Calculus Chain (SCC) which allows for the analysis of the data of many different lidar systems of EARLINET in an automated, unsupervised way. ELDA delivers profiles of particle extinction coefficients from Raman signals as well as profiles of particle backscatter coefficients from combinations of Raman and elastic signals or from elastic signals only. Those analyses start from pre-processed signals which have already been corrected for background, range dependency and hardware specific effects. An expert group reviewed all algorithms and solutions for critical calculus subsystems which are used within EARLINET with respect to their applicability for automated retrievals. Those methods have been implemented in ELDA. Since the software was designed in a modular way, it is possible to add new or alternative methods in the future. Most of the implemented algorithms are well known and well documented, but some methods were developed especially for ELDA, e.g., automated vertical smoothing and temporal averaging or the handling of effective vertical resolution in the case of lidar ratio retrievals, or the merging of near-range and far-range products. The accuracy of the retrieved profiles was tested following the procedure of the EARLINET-ASOS algorithm inter-comparison exercise which is based on the analysis of synthetic signals. Mean deviations, mean relative deviations, and normalized root-mean-square deviations were calculated for all possible products and three height layers. In all cases, the deviations were clearly below the maximum allowed values according to the EARLINET quality requirements.
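
    The comparison metrics named above can be computed directly; the sketch below shows one plausible definition of mean deviation, mean relative deviation, and normalized root-mean-square deviation between a retrieved profile and a synthetic reference:

      # One plausible reading of the three deviation metrics.
      import numpy as np

      def deviation_metrics(retrieved, reference):
          diff = retrieved - reference
          mean_dev = diff.mean()
          mean_rel_dev = (diff / reference).mean()
          nrmsd = np.sqrt((diff ** 2).mean()) / np.abs(reference).mean()
          return mean_dev, mean_rel_dev, nrmsd

      # Example: backscatter-like profiles (arbitrary units) in one height layer.
      ref = np.array([1.00, 0.90, 0.75, 0.60, 0.40])
      ret = np.array([0.98, 0.93, 0.74, 0.58, 0.41])
      print(deviation_metrics(ret, ref))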

  11. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and retrieve the required data, and their ability to integrate the data into environmental models using the FRAMES environment.
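
    For flavor, a hedged sketch of the kind of retrieval such harvesters automate: fetching daily stream discharge from the USGS NWIS water services REST API over plain HTTP. The site number is hypothetical and the JSON layout is an assumption to verify against the service documentation:

      # Fetch daily mean discharge for one stream gage.
      import requests

      resp = requests.get(
          "https://waterservices.usgs.gov/nwis/dv/",
          params={
              "format": "json",
              "sites": "03335500",     # hypothetical gage ID
              "parameterCd": "00060",  # discharge, cubic feet per second
              "startDT": "2009-01-01",
              "endDT": "2009-01-31",
          },
          timeout=30,
      )
      resp.raise_for_status()
      series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
      for point in series[:5]:
          print(point["dateTime"], point["value"])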

  12. Automated documentation generator for advanced protein crystal growth

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Provancha, Anna; Chattam, David

    1994-01-01

    To achieve an environment less dependent on the flow of paper, automated techniques of data storage and retrieval must be utilized. This software system, 'Automated Payload Experiment Tool,' seeks to provide a knowledge-based, hypertext environment for the development of NASA documentation. Once developed, the final system should be able to guide a Principal Investigator through the documentation process in a more timely and efficient manner, while supplying more accurate information to the NASA payload developer. The current system is designed for the development of the Science Requirements Document (SRD), the Experiment Requirements Document (ERD), the Project Plan, and the Safety Requirements Document.

  13. Post-Flight Data Analysis Tool

    NASA Technical Reports Server (NTRS)

    George, Marina

    2018-01-01

    A software tool that facilitates the retrieval and analysis of post-flight data. This allows our team and other teams to effectively and efficiently analyze and evaluate post-flight data in order to certify commercial providers.

  14. Applying the metro map to software development management

    NASA Astrophysics Data System (ADS)

    Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción

    2010-01-01

    This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking of deviations from the plan, analysis of patterns of failure detection and correction, overall assessment of change management policies, and estimation of product quality. The proposed visualization uses a metaphor with a metro map along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper shows the implementation of the tool in JavaFX with data from a real project and the results of testing the tool with the aforementioned data and users attempting several information retrieval tasks. The conclusion shows the results of analyzing user response time and efficiency using the MetroMap visualization system. The utility of the tool was positively evaluated.

  15. Enhancing biomedical text summarization using semantic relation extraction.

    PubMed

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary using an information-retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.
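
    Stage 3 can be approximated with a generic information-retrieval step; the sketch below ranks candidate sentences by TF-IDF cosine similarity to a query concept and stands in for, but does not reproduce, the SemRep-based pipeline:

      # Rank candidate sentences by similarity to a query concept.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      sentences = [
          "H1N1 is a subtype of influenza A virus.",
          "The hospital cafeteria reopened on Monday.",
          "Oseltamivir is used to treat H1N1 influenza infection.",
      ]
      query = ["H1N1 disease treatment"]

      vec = TfidfVectorizer()
      doc_matrix = vec.fit_transform(sentences)
      scores = cosine_similarity(vec.transform(query), doc_matrix).ravel()

      # Highest-scoring sentences become summary candidates.
      for score, sent in sorted(zip(scores, sentences), reverse=True):
          print(f"{score:.2f}  {sent}")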

  16. Observing the ExoEarth: Simulating the Retrieval of Exoplanet Parameters Using DSCOVR

    NASA Astrophysics Data System (ADS)

    Kane, S.; Cowan, N. B.; Domagal-Goldman, S. D.; Herman, J. R.; Robinson, T.; Stine, A.

    2017-12-01

    The field of exoplanets has rapidly expanded from detection to include exoplanet characterization. This has been enabled by developments such as the detection of terrestrial-sized planets and the use of transit spectroscopy to study exoplanet atmospheres. Studies of rocky planets are leading towards the direct imaging of exoplanets and the development of techniques to extract their intrinsic properties. The importance of properties such as rotation, albedo, and obliquity are significant since they inform planet formation theories and are key input parameters for Global Circulation Models used to determine surface conditions, including habitability. Thus, a complete characterization of exoplanets for understanding habitable climates requires the ability to measure these key planetary parameters. The retrieval of planetary rotation rates, albedos, and obliquities from highly undersampled imaging data can be honed using satellites designed to study the Earth's atmosphere. In this talk I will describe how the Deep Space Climate Observatory (DSCOVR) provides a unique opportunity to test such retrieval methods using data for the sunlit hemisphere of the Earth. Our methods use the high-resolution DSCOVR-EPIC images to simulate the Earth as an exoplanet, by deconvolving the images to match a variety of expected exoplanet mission requirements, and by comparing EPIC data with the cavity radiometer data from DSCOVR-NISTAR that views the Earth as a single pixel. Through this methodology, we are creating a grid of retrieval states as a function of image resolution, observing cadence, passband, etc. Our modeling of the DSCOVR data will provide an effective baseline from which to develop tools that can be applied to a variety of exoplanet imaging data.

  17. The ISS National Inventory of Chemical Substances (INSC).

    PubMed

    Binetti, Roberto; Costamagna, Francesca Marina; Ceccarelli, Federica; D'angiolini, Antonella; Fabri, Alessandra; Riva, Giovanni; Satalia, Susanna; Marcello, Ida

    2008-01-01

    The INSC (Inventario Nazionale delle Sostanze Chimiche), a factual data bank produced by the Istituto Superiore di Sanità (ISS), is an electronic chemical-information tool developed for routine and emergency purposes. Historical background, current status and future perspectives of INSC are discussed. The structure and features of INSC are briefly examined. Aspects of information retrieval and the criteria for inclusion of data and priority selection are also considered.

  18. Phenotypic and genotypic data integration and exploration through a web-service architecture.

    PubMed

    Nuzzo, Angelo; Riva, Alberto; Bellazzi, Riccardo

    2009-10-15

    Linking genotypic and phenotypic information is one of the greatest challenges of current genetics research. The definition of an Information Technology infrastructure to support this kind of study, and in particular studies aimed at the analysis of complex traits, which require the definition of multifaceted phenotypes and the integration of genotypic information to discover the most prevalent diseases, is a paradigmatic goal of Biomedical Informatics. This paper describes the use of Information Technology methods and tools to develop a system for the management, inspection and integration of phenotypic and genotypic data. We present the design and architecture of the Phenotype Miner, a software system able to flexibly manage phenotypic information, and its extended functionalities to retrieve genotype information from external repositories and to relate it to phenotypic data. For this purpose we developed a module to allow customized data upload by the user and a SOAP-based communications layer to retrieve data from existing biomedical knowledge management tools. We also demonstrate the system's functionality with an example application in which we analyze two related genomic datasets. In this paper we show how a comprehensive, integrated and automated workbench for genotype and phenotype integration can facilitate and improve the hypothesis generation process underlying modern genetic studies.

  19. Comparison of three web-scale discovery services for health sciences research.

    PubMed

    Hanneke, Rosie; O'Brien, Kelly K

    2016-04-01

    The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. All WSD tools returned between 50%-60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%-60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers.

  20. Comparison of three web-scale discovery services for health sciences research*

    PubMed Central

    Hanneke, Rosie; O'Brien, Kelly K.

    2016-01-01

    Objective The purpose of this study was to investigate the relative effectiveness of three web-scale discovery (WSD) tools in answering health sciences search queries. Methods Simple keyword searches, based on topics from six health sciences disciplines, were run at multiple real-world implementations of EBSCO Discovery Service (EDS), Ex Libris's Primo, and ProQuest's Summon. Each WSD tool was evaluated in its ability to retrieve relevant results and in its coverage of MEDLINE content. Results All WSD tools returned between 50%–60% relevant results. Primo returned a higher number of duplicate results than the other 2 WSD products. Summon results were more relevant when search terms were automatically mapped to controlled vocabulary. EDS indexed the largest number of MEDLINE citations, followed closely by Summon. Additionally, keyword searches in all 3 WSD tools retrieved relevant material that was not found with precision (Medical Subject Headings) searches in MEDLINE. Conclusions None of the 3 WSD products studied was overwhelmingly more effective in returning relevant results. While difficult to place the figure of 50%–60% relevance in context, it implies a strong likelihood that the average user would be able to find satisfactory sources on the first page of search results using a rudimentary keyword search. The discovery of additional relevant material beyond that retrieved from MEDLINE indicates WSD tools' value as a supplement to traditional resources for health sciences researchers. PMID:27076797

  1. Analysis of queries sent to PubMed at the point of care: Observation of search behaviour in a medical teaching hospital

    PubMed Central

    Hoogendam, Arjen; Stalenhoef, Anton FH; Robbé, Pieter F de Vries; Overbeke, A John PM

    2008-01-01

    Background The use of PubMed to answer daily medical care questions is limited because it is challenging to retrieve a small set of relevant articles and time is restricted. Knowing what aspects of queries are likely to retrieve relevant articles can increase the effectiveness of PubMed searches. The objectives of our study were to identify queries that are likely to retrieve relevant articles by relating PubMed search techniques and tools to the number of articles retrieved and the selection of articles for further reading. Methods This was a prospective observational study of queries regarding patient-related problems sent to PubMed by residents and internists in internal medicine working in an Academic Medical Centre. We analyzed queries, search results, query tools (Mesh, Limits, wildcards, operators), selection of abstract and full-text for further reading, using a portal that mimics PubMed. Results PubMed was used to solve 1121 patient-related problems, resulting in 3205 distinct queries. Abstracts were viewed in 999 (31%) of these queries, and in 126 (39%) of 321 queries using query tools. The average term count per query was 2.5. Abstracts were selected in more than 40% of queries using four or five terms, increasing to 63% if the use of four or five terms yielded 2–161 articles. Conclusion Queries sent to PubMed by physicians at our hospital during daily medical care contain fewer than three terms. Queries using four to five terms, retrieving less than 161 article titles, are most likely to result in abstract viewing. PubMed search tools are used infrequently by our population and are less effective than the use of four or five terms. Methods to facilitate the formulation of precise queries, using more relevant terms, should be the focus of education and research. PMID:18816391
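
    The "sweet spot" reported above (four to five terms retrieving 2-161 titles) is easy to check programmatically; a sketch using Biopython's Entrez interface, with a placeholder e-mail address as NCBI requires:

      # Check whether a query lands in the observed sweet spot.
      from Bio import Entrez

      Entrez.email = "you@example.org"  # placeholder; NCBI requires a real address

      def in_sweet_spot(query):
          handle = Entrez.esearch(db="pubmed", term=query)
          count = int(Entrez.read(handle)["Count"])
          handle.close()
          n_terms = len(query.split())
          return (4 <= n_terms <= 5) and (2 <= count <= 161), count

      ok, count = in_sweet_spot("statin myopathy risk elderly")
      print(f"{count} articles; likely to yield abstract viewing: {ok}")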

  2. Results and Validation of MODIS Aerosol Retrievals Over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, Lorraine; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.

  3. Results and Validation of MODIS Aerosol Retrievals over Land and Ocean

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Kaufman, Y. J.; Tanre, D.; Ichoku, C.; Chu, D. A.; Mattoo, S.; Levy, R.; Martins, J. V.; Li, R.-R.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The MODerate Resolution Imaging Spectroradiometer (MODIS) instrument aboard the Terra spacecraft has been retrieving aerosol parameters since late February 2000. Initial qualitative checking of the products showed very promising results including matching of land and ocean retrievals at coastlines. Using AERONET ground-based radiometers as our primary validation tool, we have established quantitative validation as well. Our results show that for most aerosol types, the MODIS products fall within the pre-launch estimated uncertainties. Surface reflectance and aerosol model assumptions appear to be sufficiently accurate for the optical thickness retrieval. Dust provides a possible exception, which may be due to non-spherical effects. Over ocean the MODIS products include information on particle size, and these parameters are also validated with AERONET retrievals.

  4. Comparing features sets for content-based image retrieval in a medical-case database

    NASA Astrophysics Data System (ADS)

    Muller, Henning; Rosset, Antoine; Vallee, Jean-Paul; Geissbuhler, Antoine

    2004-04-01

    Content-based image retrieval systems (CBIRSs) have frequently been proposed for use in medical image databases and PACS. Still, only a few systems have been developed and used in a real clinical environment. It rather seems that medical professionals define their needs and computer scientists develop systems based on data sets they receive, with little or no interaction between the two groups. A first study on the diagnostic use of medical image retrieval also shows an improvement in diagnostics when using CBIRSs, which underlines the potential importance of this technique. This article explains the use of an open source image retrieval system (GIFT - GNU Image Finding Tool) for the retrieval of medical images in the medical case database system CasImage, which is used in daily clinical routine in the university hospitals of Geneva. Although the base system of GIFT shows unsatisfactory performance, even small changes in the feature space significantly improve the retrieval results. The performance of variations in feature space with respect to color (gray level) quantizations and changes in texture analysis (Gabor filters) is compared. Whereas stock photography relies mainly on colors for retrieval, medical images need a large number of gray levels for successful retrieval, especially when executing feedback queries. The results also show that a too fine granularity in the gray levels lowers the retrieval quality, especially with single-image queries. For the evaluation of the retrieval performance, a subset of 3752 images was taken from the entire case database of more than 40,000 images. Ground truth was generated by a user who defined the expected query result of a perfect system by selecting images relevant to a given query image. The results show that a smaller number of gray levels (32 - 64) leads to better retrieval performance, especially when using relevance feedback. The use of more scales and directions for the Gabor filters in the texture analysis also leads to improved results, but response time goes up equally due to the larger feature space. CBIRSs can be of great use in managing large medical image databases. They allow images to be found that might otherwise be lost for research and publications. They also give students the possibility to navigate within large image repositories. In the future, CBIR might also become more important in case-based reasoning and evidence-based medicine to support diagnostics, because first studies show good results.
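
    The gray-level quantization experiment at the heart of the study reduces an 8-bit image to a coarser set of levels before feature extraction; a minimal numpy sketch:

      # Quantize an 8-bit grayscale image to a coarser number of levels,
      # as in the 32-64 level settings that retrieved best.
      import numpy as np

      def quantize(img, levels):
          step = 256 // levels
          return (img // step) * step  # map each pixel to its bin floor

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

      img32 = quantize(img, 32)
      print("distinct gray levels:", len(np.unique(img32)))  # at most 32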

  5. Using the Saccharomyces Genome Database (SGD) for analysis of genomic information

    PubMed Central

    Skrzypek, Marek S.; Hirschman, Jodi

    2011-01-01

    Analysis of genomic data requires access to software tools that place the sequence-derived information in the context of biology. The Saccharomyces Genome Database (SGD) integrates functional information about budding yeast genes and their products with a set of analysis tools that facilitate exploring their biological details. This unit describes how the various types of functional data available at SGD can be searched, retrieved, and analyzed. Starting with the guided tour of the SGD Home page and Locus Summary page, this unit highlights how to retrieve data using YeastMine, how to visualize genomic information with GBrowse, how to explore gene expression patterns with SPELL, and how to use Gene Ontology tools to characterize large-scale datasets. PMID:21901739

  6. Hermeneutics, Accreting Receptions, Hypermedia: A Tool for Reference Versus a Tool for Instruction.

    ERIC Educational Resources Information Center

    Nissan, Ephraim; Rossler, Isaac; Weiss, Hillel

    1997-01-01

    Provides a select overview of hypertext and information retrieval tools that support traditional Jewish learning and discusses a project in instructional hypermedia that is applied to teaching, teacher training, and self-instruction in given Bible passages. Highlights include accretion of receptions, hermeneutics, literary appropriations, and…

  7. A Tool for the Analysis of Motion Picture Film or Video Tape.

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1969-01-01

    A visual information display and retrieval system (VID-R) is described for application to visual records. VID-R searches and retrieves events by time address (location) or by previously stored observations or measurements. Fields are labeled by writing discriminable binary addresses on the horizontal lines outside the normal viewing area. The…

  8. TESE--Thesaurus for Education Systems in Europe. English Version

    ERIC Educational Resources Information Center

    Eurydice, 2009

    2009-01-01

    The Thesaurus for Education Systems in Europe (TESE-2009 edition) is a multilingual thesaurus and a robust information retrieval tool focusing on European education systems and policies. It is specifically designed to cover the indexation needs of the Eurydice network and to facilitate information retrieval on Eurydice's central website. It can…

  9. Semantic Storyboard of Judicial Debates: A Novel Multimedia Summarization Environment

    ERIC Educational Resources Information Center

    Fersini, E.; Sartori, F.

    2012-01-01

    Purpose: The need of tools for content analysis, information extraction and retrieval of multimedia objects in their native form is strongly emphasized into the judicial domain: digital videos represent a fundamental informative source of events occurring during judicial proceedings that should be stored, organized and retrieved in short time and…

  10. Using Retrieval Practice and Metacognitive Skills to Improve Content Learning

    ERIC Educational Resources Information Center

    Littrell-Baez, Megan K.; Friend, Angela; Caccamise, Donna; Okochi, Christine

    2015-01-01

    Classroom tests have been traditionally used to assess student growth and content mastery. However, a wealth of research in cognitive and educational psychology has demonstrated that retrieval practice (testing) as a form of low-stakes, rather than traditional high-stakes testing, can also be used as an effective pedagogical tool, improving…

  11. Retrieval operations with SPARTAN 201

    NASA Image and Video Library

    1994-09-15

    STS064-74-052 (9-20 Sept. 1994) --- Astronauts onboard the space shuttle Discovery used a 70mm camera to capture this photograph of the retrieval operations with the Shuttle Pointed Autonomous Research Tool for Astronomy 201 (SPARTAN 201). A gibbous moon can be seen in the background. Photo credit: NASA or National Aeronautics and Space Administration

  12. LibKiSAO: a Java library for Querying KiSAO.

    PubMed

    Zhukova, Anna; Adams, Richard; Laibe, Camille; Le Novère, Nicolas

    2012-09-24

    The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of Systems Biology models, their characteristics, parameters and inter-relationships. KiSAO enables the unambiguous identification of algorithms from simulation descriptions. Information about analogous methods having similar characteristics and about algorithm parameters incorporated into KiSAO is desirable for simulation tools. To retrieve this information programmatically, an application programming interface (API) for KiSAO is needed. We developed libKiSAO, a Java library to enable querying of the KiSAO ontology. It implements methods to retrieve information about simulation algorithms stored in KiSAO, their characteristics and parameters, and methods to query the algorithm hierarchy and search for similar algorithms providing comparable results for the same simulation set-up. Using libKiSAO, simulation tools can make logical inferences based on this knowledge and choose the most appropriate algorithm to perform a simulation. LibKiSAO also enables simulation tools to handle a wider range of simulation descriptions by determining which of the available methods are similar and can be used instead of the one indicated in the simulation description if that one is not implemented. LibKiSAO enables Java applications to easily access information about simulation algorithms, their characteristics and parameters stored in the OWL-encoded Kinetic Simulation Algorithm Ontology. LibKiSAO can be used by simulation description editors and simulation tools to improve reproducibility of computational simulation tasks and facilitate model re-use.
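
    libKiSAO itself is a Java API; as a loose Python stand-in for the same kind of lookup, here is a hedged sketch using owlready2 against a local copy of the KiSAO OWL file (the file path and label wildcard are assumptions):

      # Load KiSAO locally and walk an algorithm's parent classes,
      # a rough analogue of querying the algorithm hierarchy.
      from owlready2 import get_ontology

      onto = get_ontology("file:///tmp/kisao.owl").load()  # hypothetical path

      for cls in onto.search(label="*Gillespie*"):
          print(cls.iri)
          for parent in cls.is_a:
              print("   is_a:", parent)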

  13. Control/structure interaction conceptual design tool

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    1990-01-01

    The JPL Control/Structure Interaction Program is developing new analytical methods for designing micro-precision spacecraft with controlled structures. One of these, the Conceptual Design Tool, will illustrate innovative new approaches to the integration of multi-disciplinary analysis and design methods. The tool will be used to demonstrate homogeneity of presentation, uniform data representation across analytical methods, and integrated systems modeling. The tool differs from current 'integrated systems' that support design teams most notably in its support for the new CSI multi-disciplinary engineer. The design tool will utilize a three-dimensional solid model of the spacecraft under design as the central data organization metaphor. Various analytical methods, such as finite element structural analysis, control system analysis, and mechanical configuration layout, will store and retrieve data from a hierarchical, object-oriented data structure that supports assemblies of components with associated data and algorithms. In addition to managing numerical model data, the tool will assist the designer in organizing, stating, and tracking system requirements.
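
    A minimal sketch of the central data metaphor described above: a hierarchical, object-oriented structure of assemblies and components, each carrying data that different analyses can store and retrieve. The class and field names are illustrative, not the tool's actual design:

      # Assemblies of components, each with associated data; walk() lets
      # any analysis module traverse the model and read or write its data.
      class Component:
          def __init__(self, name, **data):
              self.name = name
              self.data = data  # e.g. mass, mesh id, controller gains

      class Assembly(Component):
          def __init__(self, name, children=(), **data):
              super().__init__(name, **data)
              self.children = list(children)

          def walk(self):
              yield self
              for child in self.children:
                  if isinstance(child, Assembly):
                      yield from child.walk()
                  else:
                      yield child

      bus = Assembly("spacecraft-bus", [
          Component("optical-bench", mass_kg=42.0),
          Assembly("pointing", [Component("reaction-wheel", mass_kg=6.5)]),
      ])
      print([node.name for node in bus.walk()])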

  14. Design and application of a tool for structuring, capitalizing and making more accessible information and lessons learned from accidents involving machinery.

    PubMed

    Sadeghi, Samira; Sadeghi, Leyla; Tricot, Nicolas; Mathieu, Luc

    2017-12-01

    Accident reports are published in order to communicate the information and lessons learned from accidents. An efficient accident recording and analysis system is a necessary step towards improvement of safety. However, currently there is a shortage of efficient tools to support such recording and analysis. In this study we introduce a flexible and customizable tool that allows structuring and analysis of this information. This tool has been implemented under TEEXMA®. We named our prototype TEEXMA®SAFETY. This tool provides an information management system to facilitate data collection, organization, query, analysis and reporting of accidents. A predefined information retrieval module provides ready access to data, which allows the user to quickly identify the possible hazards for specific machines and provides information on the source of hazards. The main target audience for this tool includes safety personnel, accident reporters and designers. The proposed data model has been developed by analyzing different accident reports.

  15. Operational use of the AIRS Total Column Ozone Retrievals along with the RGB Airmass Product as Part of the GOES-R Proving Ground

    NASA Technical Reports Server (NTRS)

    Folmer, M.; Zavodsky, Bradley; Molthan, Andrew

    2012-01-01

    The Red, Green, Blue (RGB) Air Mass product has been demonstrated in the GOES-R Proving Ground as a possible decision aid. Forecasters have been trained on the usefulness of identifying stratospheric intrusions and potential vorticity (PV) anomalies that can lead to explosive cyclogenesis, genesis of mesoscale convective systems (MCSs), or the transition of tropical cyclones to extratropical cyclones. It has also been demonstrated to distinguish different air mass types, from warm, low-ozone air masses to cool, high-ozone air masses, and the various interactions with the PV anomalies. To assist the forecasters in understanding the stratospheric contribution to high impact weather systems, the Atmospheric Infrared Sounder (AIRS) Total Column Ozone Retrievals have been made available as an operational tool. These AIRS retrievals provide additional information on the amount of ozone that is associated with the red coloring seen in the RGB Air Mass product. This paper discusses how the AIRS retrievals can be used to quantify the red coloring in the RGB Air Mass product. These retrievals can be used to diagnose the depth of the stratospheric intrusions associated with different types of weather systems and provide forecasters with decision-aid tools that can improve the quality of forecast products.

  16. IntegromeDB: an integrated system and biological search engine

    PubMed Central

    2012-01-01

    Background With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Description Here, we present an approach to developing a biological web search engine based on the Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. Conclusions The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback. PMID:22260095

  17. Atmospheric Retrievals from Exoplanet Observations and Simulations with BART

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph

    This project will determine the observing plans needed to retrieve exoplanet atmospheric composition and thermal profiles over a broad range of planets, stars, instruments, and observing modes. Characterizing exoplanets is hard. The dim planets orbit bright stars, giving orders of magnitude more relative noise than for solar-system planets. Advanced statistical techniques are needed to determine what the data can - and more importantly cannot - say. We therefore developed Bayesian Atmospheric Radiative Transfer (BART). BART explores the parameter space of atmospheric chemical abundances and thermal profiles using Differential-Evolution Markov-Chain Monte Carlo. It generates thousands of candidate spectra, integrates over observational bandpasses, and compares to data, generating a statistical model for an atmosphere's composition and thermal structure. At best, it gives abundances and thermal profiles with uncertainties. At worst, it shows what kinds of planets the data allow. It also gives parameter correlations. BART is open-source, designed for community use and extension (http://github.com/exosports/BART). Three arXived PhD theses (papers in publication) provide technical documentation, tests, and application to Spitzer and HST data. There are detailed user and programmer manuals and community support forums. Exoplanet analysis techniques must be tested against synthetic data, where the answer is known, and vetted by statisticians. Unfortunately, this has rarely been done, and never sufficiently. Several recent papers question the entire body of Spitzer exoplanet observations, because different analyses of the same data give different results. The latest method, pixel-level decorrelation, produces results that diverge from an emerging consensus. We do not know the retrieval problem's strengths and weaknesses relative to low SNR, red noise, low resolution, instrument systematics, or incomplete spectral line lists. In observing eclipses and transits, we assume the planet has uniform composition and the same temperature profile everywhere. We do not know this assumption's impact. While Spitzer and HST have few exoplanet observing modes, JWST will have over 20. Given the signal challenges and the complexity of retrieval, modeling the observations and data analysis is the best way to optimize an observing plan. Our project solves all of these problems. Using only open-source codes, with tools available to the community for their immediate application in JWST and HST proposals and analyses, we will produce a faithful simulator of 2D spectral and photometric frames from each JWST exoplanet mode (WFC3 spatial scan mode works already), including jitter and intrapixel effects. We will extract and calibrate data, analyzing them with BART. Given planetary input spectra for terrestrial, super-Earth, Neptune, and Jupiter-class planets, and a variety of stellar spectra, we will determine the best combination of observations to recover each atmosphere, and the limits where low SNR or spectral coverage produce deceptive results. To facilitate these analyses, we will adapt an existing cloud model to BART, add condensate code now being written to its thermochemical model, include scattering, add a 3D atmosphere module (for dayside occultation mapping and the 1D vs. 3D question), and improve performance and documentation, among other improvements. We will host a web site and community discussions online and at conferences about retrieval issues. We will develop validation tests for radiative-transfer and BART-style retrieval codes, and provide examples to validate others' codes. We will engage the retrieval community in data challenges. We will provide web-enabled tools to specify planets easily for modeling. We will make all of these tools, tests, and comparisons available online so everyone can use them to maximize NASA's investment in high-end observing capabilities to characterize exoplanets.
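
    A toy sketch of the retrieval loop BART implements: explore a parameter with Markov-Chain Monte Carlo by comparing model spectra to noisy data. BART's real sampler is Differential-Evolution MCMC over many parameters with a full radiative-transfer forward model; this one-parameter Metropolis toy only illustrates the idea:

      # Retrieve a toy "abundance" from a synthetic noisy spectrum.
      import numpy as np

      rng = np.random.default_rng(1)
      wl = np.linspace(1.0, 2.0, 50)  # wavelength grid (um)

      def forward(abundance):  # toy spectrum forward model
          return abundance * np.exp(-((wl - 1.4) ** 2) / 0.02)

      true_ab, sigma = 0.7, 0.02
      data = forward(true_ab) + rng.normal(0, sigma, wl.size)

      def log_like(ab):
          return -0.5 * np.sum((data - forward(ab)) ** 2) / sigma**2

      chain, ab = [], 0.3
      ll = log_like(ab)
      for _ in range(5000):
          prop = ab + rng.normal(0, 0.05)
          ll_prop = log_like(prop)
          if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis acceptance
              ab, ll = prop, ll_prop
          chain.append(ab)

      burned = np.array(chain[1000:])
      print(f"retrieved abundance: {burned.mean():.3f} +/- {burned.std():.3f}")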

  18. A web tool for STORET/WQX water quality data retrieval and Best Management Practice scenario suggestion.

    PubMed

    Park, Youn Shik; Engel, Bernie A; Kim, Jonggun; Theller, Larry; Chaubey, Indrajeet; Merwade, Venkatesh; Lim, Kyoung Jae

    2015-03-01

    Total Maximum Daily Load (TMDL) is a water quality standard used to regulate the water quality of streams, rivers, and lakes. A wide range of approaches are currently used to develop TMDLs for impaired streams and rivers. Flow and load duration curves (FDC and LDC), along with other models and approaches, have been used in many states to evaluate the relationship between flow and pollutant loading. A web-based LDC Tool was developed to facilitate development of FDCs and LDCs as well as to support other hydrologic analyses. In this study, the FDC and LDC tool was enhanced to allow collection of water quality data via the web and to assist in establishing cost-effective Best Management Practice (BMP) implementations. The enhanced web-based tool provides access to water quality data not only from the US Geological Survey but also from the Water Quality Portal for the U.S. Moreover, the web-based tool identifies the pollutant reductions required to meet standard loads and suggests a BMP scenario based on the ability of BMPs to reduce pollutant loads and on BMP establishment and maintenance costs. In the study, flow and water quality data were collected via web access to develop the LDC and to identify the required reduction. The suggested BMP scenario from the web-based tool was evaluated using the EPA Spreadsheet Tool for the Estimation of Pollutant Load model to attain the required pollutant reduction at least cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
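
    A sketch of the underlying FDC/LDC computation on synthetic daily flows; the 5.39 unit-conversion factor (cfs x mg/L to lb/day) and the 10 mg/L standard are illustrative values, not the tool's defaults:

      # Flow duration curve, then allowable-load duration curve.
      import numpy as np

      rng = np.random.default_rng(0)
      flows = rng.lognormal(mean=5.0, sigma=1.0, size=365)  # synthetic daily flows (cfs)

      flows_sorted = np.sort(flows)[::-1]  # descending
      n = flows_sorted.size
      exceedance = 100.0 * np.arange(1, n + 1) / (n + 1)  # % of time flow is exceeded

      standard_mg_l = 10.0  # illustrative water quality standard
      ldc_lb_day = flows_sorted * standard_mg_l * 5.39  # allowable load (lb/day)

      for p in (10, 50, 90):
          i = np.searchsorted(exceedance, p)
          print(f"{p}% exceedance: flow {flows_sorted[i]:.0f} cfs, "
                f"allowable load {ldc_lb_day[i]:.0f} lb/day")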

  19. Measurement properties of self-report physical activity assessment tools in stroke: a protocol for a systematic review

    PubMed Central

    Martins, Júlia Caetano; Aguiar, Larissa Tavares; Nadeau, Sylvie; Scianni, Aline Alvim; Teixeira-Salmela, Luci Fuscaldi; Faria, Christina Danielli Coelho de Morais

    2017-01-01

    Introduction Self-report physical activity assessment tools are commonly used for the evaluation of physical activity levels in individuals with stroke. A great variety of these tools have been developed and widely used in recent years, which justify the need to examine their measurement properties and clinical utility. Therefore, the main objectives of this systematic review are to examine the measurement properties and clinical utility of self-report measures of physical activity and discuss the strengths and limitations of the identified tools. Methods and analysis A systematic review of studies that investigated the measurement properties and/or clinical utility of self-report physical activity assessment tools in stroke will be conducted. Electronic searches will be performed in five databases: Medical Literature Analysis and Retrieval System Online (MEDLINE) (PubMed), Excerpta Medica Database (EMBASE), Physiotherapy Evidence Database (PEDro), Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS) and Scientific Electronic Library Online (SciELO), followed by hand searches of the reference lists of the included studies. Two independent reviewers will screen all retrieved titles, abstracts, and full texts, according to the inclusion criteria and will also extract the data. A third reviewer will be consulted to resolve any disagreements. A descriptive summary of the included studies will contain the design, participants, as well as the characteristics, measurement properties, and clinical utility of the self-report tools. The methodological quality of the studies will be evaluated using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist and the clinical utility of the identified tools will be assessed considering predefined criteria. This systematic review will follow the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) statement. Discussion This systematic review will provide an extensive review of the measurement properties and clinical utility of self-report physical activity assessment tools used in individuals with stroke, which would benefit clinicians and researchers. Trial registration number PROSPERO CRD42016037146. PMID:28193848

  20. Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 µM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets, with example retrievals for zebrafish, including pathway annotation and mapping to human orthologs. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology. The miRNA workflow in BRM allows for efficient processing of multiple miRNA and mRNA datasets in a single software environment with the added capability to interact with public data sources and visual analytic tools for HTP data analysis at a systems level. BRM is developed using Java™ and other open-source technologies for free distribution (http://www.sysbio.org/dataresources/brm.stm). PMID:23174015
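
    The multi-database target query at the heart of this workflow can be pictured as a simple consensus filter over per-database prediction lists. The Python sketch below is a hypothetical illustration, not BRM code; the database names, gene symbols and the two-source consensus rule are invented for the example.

    ```python
    from collections import Counter

    # Hypothetical predicted-target lists for one miRNA from three databases.
    predictions = {
        "dbA": {"nrxn1", "dlg4", "chrna7", "gap43"},
        "dbB": {"dlg4", "gap43", "sox2"},
        "dbC": {"chrna7", "gap43", "dlg4", "neurod1"},
    }

    # Keep targets supported by at least two sources (an illustrative
    # filtering heuristic, not necessarily BRM's rule).
    counts = Counter(g for targets in predictions.values() for g in targets)
    consensus = sorted(g for g, n in counts.items() if n >= 2)
    # ['chrna7', 'dlg4', 'gap43']
    ```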

  1. Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning.

    PubMed

    Petrovic, Sanja; Khussainova, Gulmira; Jagannathan, Rupa

    2016-03-01

    Radiotherapy treatment planning aims at delivering a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour-surrounding area. It is a time-consuming trial-and-error process that requires the expertise of a group of medical experts, including oncologists and medical physicists, and can take from 2–3 hours to a few days. Our objective is to improve the performance of our previously built case-based reasoning (CBR) system for brain tumour radiotherapy treatment planning. In this system, a treatment plan for a new patient is retrieved from a case base containing patient cases treated in the past and their treatment plans. However, this system does not perform any adaptation, which is needed to account for any difference between the new and retrieved cases. Generally, the adaptation phase is considered to be intrinsically knowledge-intensive and domain-dependent. Therefore, adaptation often requires a large amount of domain-specific knowledge, which can be difficult to acquire and often is not readily available. In this study, we investigate approaches to adaptation that do not require much domain knowledge, referred to as knowledge-light adaptation. We developed two adaptation approaches: adaptation based on machine-learning tools and adaptation-guided retrieval. They were used to adapt the beam number and beam angles suggested in the retrieved case. Two machine-learning tools, neural networks and the naive Bayes classifier, were used in the adaptation to learn how the difference in attribute values between the retrieved and new cases affects the output of these two cases. The adaptation-guided retrieval takes into consideration not only the similarity between the new and retrieved cases, but also how to adapt the retrieved case. The research was carried out in collaboration with medical physicists at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. All experiments were performed using real-world brain cancer patient cases treated with three-dimensional (3D)-conformal radiotherapy. Neural network-based adaptation improved the success rate of the CBR system without adaptation by 12%. However, the naive Bayes classifier did not improve on the retrieval-only results, as it did not consider the interplay among attributes. The adaptation-guided retrieval of the case for beam number improved the success rate of the CBR system by 29%. However, it did not demonstrate good performance for the beam angle adaptation: its success rate was 29%, versus 39% when no adaptation was performed. The obtained empirical results demonstrate that the proposed adaptation methods improve the performance of the existing CBR system in recommending the number of beams to use. However, we also conclude that, to be effective, the proposed adaptation of beam angles requires a large number of relevant cases in the case base. Copyright © 2016 Elsevier B.V. All rights reserved.
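
    The machine-learning variant of knowledge-light adaptation can be sketched as learning a mapping from attribute differences between the retrieved and new cases to a correction of the retrieved plan. The following Python sketch uses scikit-learn's MLPClassifier as a stand-in for the paper's neural network; the difference vectors, labels and class structure are illustrative assumptions, not the study's data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical training data: each row is the attribute-difference vector
    # between a retrieved case and its target case (e.g. tumour size, position,
    # organ-at-risk distances); each label is the beam-number correction.
    X_diff = np.array([[0.2, -1.0, 0.5],
                       [0.0,  0.3, -0.2],
                       [1.1, -0.4, 0.9],
                       [-0.5, 0.1, 0.0]])
    y_delta_beams = np.array([1, 0, 2, 0])

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X_diff, y_delta_beams)

    # Adaptation: adjust the retrieved case's beam number by the predicted
    # correction for the new case.
    retrieved_beams = 3
    new_case_diff = np.array([[0.9, -0.6, 0.7]])
    adapted_beams = retrieved_beams + int(model.predict(new_case_diff)[0])
    ```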

  2. Applying CBR to machine tool product configuration design oriented to customer requirements

    NASA Astrophysics Data System (ADS)

    Wang, Pengjia; Gong, Yadong; Xie, Hualong; Liu, Yongxian; Nee, Andrew Yehching

    2017-01-01

    Product customization is a trend in the current market-oriented manufacturing environment. However, deduction from customer requirements to design results and evaluation of design alternatives are still heavily reliant on the designer's experience and knowledge. To solve the problem of fuzziness and uncertainty of customer requirements in product configuration, an analysis method based on the grey rough model is presented, by which customer requirements can be converted into technical characteristics effectively. In addition, an optimization decision model for product planning is established to help enterprises select the key technical characteristics under the constraints of cost and time so as to maximize customer satisfaction. A new case retrieval approach that combines the self-organizing map and the fuzzy similarity priority ratio method is proposed for case-based design: the self-organizing map reduces the retrieval range and increases retrieval efficiency, while the fuzzy similarity priority ratio method evaluates the similarity of cases comprehensively. To ensure that the final case has the best overall performance, an evaluation method for similar cases based on grey correlation analysis is proposed to select the most suitable case. Furthermore, a computer-aided system is developed using the MATLAB GUI to assist product configuration design. An actual example and results on an ETC series machine tool product show that the proposed method is effective, rapid and accurate in the process of product configuration. The proposed methodology provides detailed guidance for product configuration design oriented to customer requirements.
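
    Grey correlation (relational) analysis scores each retrieved case against an ideal reference sequence of normalized indicators. Below is a minimal Python sketch of the standard calculation, assuming equal indicator weights and the conventional distinguishing coefficient of 0.5; the case scores are invented.

    ```python
    import numpy as np

    def grey_relational_grades(reference, candidates, rho=0.5):
        """Grey relational grade of each candidate (rows) against a reference
        sequence; columns are normalized performance indicators and rho is
        the distinguishing coefficient."""
        diff = np.abs(candidates - reference)               # absolute differences
        dmin, dmax = diff.min(), diff.max()
        coeff = (dmin + rho * dmax) / (diff + rho * dmax)   # relational coefficients
        return coeff.mean(axis=1)                           # equal-weight grades

    # Hypothetical: three similar cases scored on four indicators, compared
    # against the ideal case (all ones); the highest grade wins.
    cases = np.array([[0.8, 0.9, 0.7, 1.0],
                      [0.6, 1.0, 0.9, 0.8],
                      [0.9, 0.7, 0.8, 0.9]])
    grades = grey_relational_grades(np.ones(4), cases)
    best_case = int(np.argmax(grades))
    ```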

  3. Improvement of encoding and retrieval in normal and pathological aging with word-picture paradigm.

    PubMed

    Iodice, Rosario; Meilán, Juan José G; Carro, Juan

    2015-01-01

    During the aging process, there is a progressive deficit in the encoding of new information and its retrieval. Different strategies are used to maintain or optimize memory function and to diminish these deficits in people with and without dementia. One of the classic techniques is paired-associate learning (PAL), which is based on improving the encoding of memories, but it has yet to be used to its full potential in people with dementia. In this study, our aim is to corroborate the importance of PAL tasks as instrumental tools for creating contextual cues during both the encoding and retrieval phases of memory, and to identify the most effective form of presenting the related items. Pairs of stimuli were shown to healthy elderly people and to patients with moderate and mild Alzheimer's disease. The encoding conditions were as follows: word/word, picture/picture, picture/word, and word/picture. Associative cued recall of the second item in the pair shows that retrieval is higher for the word/picture condition in the two groups of patients with dementia when compared to the other conditions, while word/word is the least effective in all cases. These results confirm that PAL is an effective tool for creating contextual cues during both the encoding and retrieval phases in people with dementia when the items are presented in the word/picture condition. In this way, the encoding and retrieval deficit can be reduced in these people.

  4. Developing Students' Critical Reasoning About Online Health Information: A Capabilities Approach

    NASA Astrophysics Data System (ADS)

    Wiblom, Jonna; Rundgren, Carl-Johan; Andrée, Maria

    2017-11-01

    The internet has become a main source for health-related information retrieval. In addition to information published by medical experts, individuals share their personal experiences and narratives on blogs and social media platforms. Our increasing need to confront and make meaning of varied and conflicting health information has changed the role of critical reasoning in science education. This study addresses how opportunities for students to develop and practice their capabilities to critically approach online health information can be created in science education. Together with two upper secondary biology teachers, we carried out a design-based study. The participating students were given an online retrieval task that included a search for and evaluation of health-related online sources. After a few lessons, the students were introduced to an evaluation tool designed to support critical evaluation of health information online. Using qualitative content analysis, four themes could be discerned in the audio and video recordings of student interactions when engaging with the task. Each theme illustrates the different ways in which critical reasoning became practiced in the student groups. Without the evaluation tool, the students struggled to gain an overview of the vast amount of information and to negotiate trustworthiness. Guided by the evaluation tool, critical reasoning was practiced to handle source subjectivity and to sift out only scientific information. Rather than being a generic skill transferable across contexts, students' critical reasoning was conditioned by the multi-dimensional nature of health issues, the blend of various contexts and the shift of purpose constituted by the students.

  5. Purdue Ionomics Information Management System. An Integrated Functional Genomics Platform

    PubMed Central

    Baxter, Ivan; Ouzzani, Mourad; Orcun, Seza; Kennedy, Brad; Jandhyala, Shrinivas S.; Salt, David E.

    2007-01-01

    The advent of high-throughput phenotyping technologies has created a deluge of information that is difficult to deal with without the appropriate data management tools. These data management tools should integrate defined workflow controls for genomic-scale data acquisition and validation, data storage and retrieval, and data analysis, indexed around the genomic information of the organism of interest. To maximize the impact of these large datasets, it is critical that they are rapidly disseminated to the broader research community, allowing open access for data mining and discovery. We describe here a system that incorporates such functionalities, developed around the Purdue University high-throughput ionomics phenotyping platform. The Purdue Ionomics Information Management System (PiiMS) provides integrated workflow control, data storage, and analysis to facilitate high-throughput data acquisition, along with integrated tools for data search, retrieval, and visualization for hypothesis development. PiiMS is deployed as a World Wide Web-enabled system, allowing for integration of distributed workflow processes and open access to raw data for analysis by numerous laboratories. PiiMS currently contains data on shoot concentrations of P, Ca, K, Mg, Cu, Fe, Zn, Mn, Co, Ni, B, Se, Mo, Na, As, and Cd in over 60,000 shoot tissue samples of Arabidopsis (Arabidopsis thaliana), including ethyl methanesulfonate, fast-neutron and defined T-DNA mutants, and natural accessions and populations of recombinant inbred lines, from over 800 separate experiments, representing over 1,000,000 fully quantitative elemental concentrations. PiiMS is accessible at www.purdue.edu/dp/ionomics. PMID:17189337

  6. The influence of retrieval practice on memory and comprehension of science texts

    NASA Astrophysics Data System (ADS)

    Hinze, Scott R.

    The testing effect, where retrieval practice aids performance on later tests, may be a powerful tool for improving learning and retention. Three experiments test the potentials and limitations of retrieval practice for retention and comprehension of the content of science texts. Experiment 1 demonstrated that cued recall of paragraphs, but not fill-in-the-blank tests, improved performance on new memory items. Experiment 2 manipulated test expectancy and extended cued recall benefits to inference items. Test expectancies established prior to retrieval altered processing to either be ineffective (when expecting a memory test) or effective (when expecting an inference test). In Experiment 3, the processing task engaged in during retrieval practice was manipulated. Explanation during retrieval practice led to more effective transfer than free recall instructions, especially when participants were compliant and effective in their explanations. These experiments demonstrate that some, but not all, processing during retrieval practice can influence both memory and understanding of science texts.

  7. A symbolic shaped-based retrieval of skull images.

    PubMed

    Lin, H Jill; Ruiz-Correa, Salvador; Shapiro, Linda G; Cunningham, Michael L; Sze, Raymond W

    2005-01-01

    In this work, we describe a novel symbolic representation of shapes for quantifying skull abnormalities in children with craniosynostosis. We show the efficacy of our work by demonstrating an application of this representation in shape-based retrieval of skull morphologies. This tool will enable correlation with potential pathogenesis and prognosis in order to enhance medical care.

  8. Aerosol-type retrieval and uncertainty quantification from OMI data

    NASA Astrophysics Data System (ADS)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
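
    The averaging step can be illustrated with Gaussian approximations of the per-model AOD posteriors weighted by model evidence. This is a minimal sketch under invented numbers, using the standard Bayesian-model-averaging mixture formulas rather than code from the retrieval system; the between-model term in the variance is what carries the model-selection uncertainty.

    ```python
    import numpy as np

    # Hypothetical per-pixel results: log-evidence of each aerosol
    # microphysical model and Gaussian moments of its AOD posterior.
    log_evidence = np.array([-10.2, -10.9, -13.5])
    aod_mean     = np.array([0.31, 0.35, 0.52])
    aod_std      = np.array([0.04, 0.05, 0.10])

    # Posterior model probabilities (softmax of the log-evidence).
    w = np.exp(log_evidence - log_evidence.max())
    w /= w.sum()

    # Mixture mean, and mixture variance including between-model spread.
    aod_bma = np.sum(w * aod_mean)
    var_bma = np.sum(w * (aod_std**2 + (aod_mean - aod_bma)**2))
    aod_bma_std = np.sqrt(var_bma)
    ```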

  9. On the potential of the EChO mission to characterize gas giant atmospheres

    NASA Astrophysics Data System (ADS)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Bowles, N.; Fletcher, L. N.; Lee, J.-M.

    2013-04-01

    Space telescopes such as the Exoplanet Characterisation Observatory (EChO) and the James Webb Space Telescope (JWST) will be important for the future study of extrasolar planet atmospheres. Both of these missions are capable of performing high sensitivity spectroscopic measurements at moderate resolutions in the visible and infrared, which will allow the characterization of atmospheric properties using primary and secondary transit spectroscopy. We use the Non-linear optimal Estimator for MultivariatE spectral analySIS (NEMESIS) radiative transfer and retrieval tool, as developed by Irwin et al. and Lee et al., to explore the potential of the proposed EChO mission to solve the retrieval problem for a range of H2-He planets orbiting different stars. We find that EChO should be capable of retrieving temperature structure to ~200 K precision and detecting H2O, CO2 and CH4 from a single eclipse measurement for a hot Jupiter orbiting a Sun-like star and a hot Neptune orbiting an M star, also providing upper limits on CO and NH3. We provide a table of retrieval precisions for these quantities in each test case. We expect around 30 Jupiter-sized planets to be observable by EChO; hot Neptunes orbiting M dwarfs are rarer, but we anticipate observations of at least one such planet.

  10. Comparative study of quantitative phase imaging techniques for refractometry of optical fibers

    NASA Astrophysics Data System (ADS)

    de Dorlodot, Bertrand; Bélanger, Erik; Bérubé, Jean-Philippe; Vallée, Réal; Marquet, Pierre

    2018-02-01

    The refractive index difference profile of optical fibers is the key design parameter because it determines, among other properties, the insertion losses and propagating modes. Therefore, an accurate refractive index profiling method is of paramount importance to their development and optimization. Quantitative phase imaging (QPI) is one of the available tools to retrieve structural characteristics of optical fibers, including the refractive index difference profile. Having the advantage of being non-destructive, several different QPI methods have been developed over the last decades. Here, we present a comparative study of three available QPI techniques, namely the transport-of-intensity equation, quadriwave lateral shearing interferometry and digital holographic microscopy. To assess the accuracy and precision of these QPI techniques, quantitative phase images of the core of a well-characterized optical fiber were retrieved for each of them, and a robust image processing procedure was applied to retrieve the refractive index difference profiles. Although the raw images from all three QPI methods suffered from different shortcomings, our automated image-processing pipeline successfully corrected them. After this treatment, all three QPI techniques yielded accurate, reliable and mutually consistent refractive index difference profiles, in agreement with the accuracy and precision of the refracted near-field benchmark measurement.

  11. Skin Parameter Map Retrieval from a Dedicated Multispectral Imaging System Applied to Dermatology/Cosmetology

    PubMed Central

    2013-01-01

    In vivo quantitative assessment of skin lesions is an important step in the evaluation of skin condition. An objective measurement device can serve as a valuable tool for skin analysis. We propose an explorative new multispectral camera specifically developed for dermatology/cosmetology applications. The multispectral imaging system provides images of skin reflectance at different wavebands covering the visible and near-infrared domain. It is coupled with a neural network-based algorithm for the reconstruction of a reflectance cube of cutaneous data. This cube contains only the skin optical reflectance spectrum in each pixel of the bidimensional spatial information. The reflectance cube is analyzed by an algorithm based on a Kubelka-Munk model combined with an evolutionary algorithm. The technique allows quantitative measurement of cutaneous tissue and retrieves five skin parameter maps: melanin concentration, epidermis thickness, dermis thickness, haemoglobin concentration, and oxygenated haemoglobin. The results retrieved on healthy participants by the algorithm are in good agreement with data from the literature. The usefulness of the developed technique was demonstrated in two experiments: a clinical study of vitiligo and melasma skin lesions, and a skin oxygenation experiment (induced ischemia) with a healthy participant in which normal tissue was recorded both at the normal state and under temporarily induced ischemia. PMID:24159326
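
    The per-pixel inversion couples a forward reflectance model with an evolutionary optimizer. The sketch below uses SciPy's differential_evolution on a toy stand-in for the Kubelka-Munk model; the forward model, parameter bounds and wavelength grid are assumptions for illustration only, not the paper's physics.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    wavelengths = np.linspace(450, 950, 26)   # nm, visible to near-infrared

    def forward_model(params, wl):
        """Toy stand-in for a Kubelka-Munk skin reflectance model with
        params = (melanin, epidermis thickness, haemoglobin, oxygenation)."""
        mel, d_epi, hb, oxy = params
        absorption = mel * np.exp(-wl / 600.0) + hb * (1.2 - oxy * wl / 1000.0)
        return np.exp(-absorption * d_epi)

    def fit_pixel(measured, wl):
        """Invert the forward model for one pixel's reflectance spectrum."""
        bounds = [(0, 5), (0.01, 0.2), (0, 5), (0, 1)]
        cost = lambda p: np.sum((forward_model(p, wl) - measured) ** 2)
        return differential_evolution(cost, bounds, seed=0, tol=1e-8).x

    measured = forward_model((1.5, 0.08, 0.9, 0.7), wavelengths)
    estimate = fit_pixel(measured, wavelengths)
    ```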

  12. A New Flow-Diverter (the FloWise): In-Vivo Evaluation in an Elastase-Induced Rabbit Aneurysm Model.

    PubMed

    Kim, Byung Moon; Kim, Dong Joon; Kim, Dong Ik

    2016-01-01

    We aimed to evaluate the efficacy and safety of a newly developed, partially retrievable flow-diverter (the FloWise) in an elastase-induced rabbit aneurysm model. We developed a partially retrievable flow diverter composed of 48 strands of Nitinol and platinum wire. The FloWise is compatible with any microcatheter of 0.027-inch inner diameter and is retrievable up to 70% deployment. Its efficacy and safety were evaluated in the elastase-induced rabbit aneurysm model: the rate of technical success (full coverage of the aneurysm neck) was recorded, and aneurysm occlusion and stent patency were assessed by angiograms and histologic examinations at the 1-month, 3-month, and 6-month follow-ups. The patency of small arterial branches (intercostal or lumbar arteries) covered by the FloWise was also assessed in five subjects. We attempted FloWise insertion in a total of 32 aneurysm models. FloWise placement was successful in 31 subjects (96.9%). Two stents (6.2%) were occluded at the 3-month follow-up, but there was no evidence of in-stent stenosis in the other subjects. All stented aneurysms showed progressive occlusion: grade I (complete aneurysm occlusion) in 44.4% and grade II (aneurysm occlusion > 90%) in 55.6% at 1 month; grade I in 90% and II in 10% at 3 months; and grade I in 90% and II in 10% at 6 months. All small arterial branches covered by the FloWise remained patent. The newly developed, partially retrievable flow-diverter appears to be a safe and effective tool for aneurysm occlusion, as evaluated in the rabbit aneurysm model.

  13. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, kernel CCA, was proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it is also prone to over-fitting. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is established by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results confirm the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
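
    The core iteration is easiest to see on a generic consistent linear system Ax = b, the form to which the kernelized subproblems reduce: each step projects the iterate onto the hyperplane of one row, sampled with probability proportional to its squared norm. A minimal sketch, not the paper's implementation.

    ```python
    import numpy as np

    def randomized_kaczmarz(A, b, iters=5000, seed=0):
        """Randomized Kaczmarz for a consistent system A x = b."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_norms2 = np.einsum('ij,ij->i', A, A)
        probs = row_norms2 / row_norms2.sum()
        x = np.zeros(n)
        for _ in range(iters):
            i = rng.choice(m, p=probs)                     # norm-weighted row pick
            x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]  # project onto row i
        return x

    # Example on a random consistent system.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 20))
    x_true = rng.standard_normal(20)
    x_hat = randomized_kaczmarz(A, A @ x_true)
    ```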

  14. SWMPr: An R Package for Retrieving, Organizing, and ...

    EPA Pesticide Factsheets

    The System-Wide Monitoring Program (SWMP) was implemented in 1995 by the US National Estuarine Research Reserve System. This program has provided two decades of continuous monitoring data at over 140 fixed stations in 28 estuaries. However, the increasing quantity of data provided by the monitoring network has complicated broad-scale comparisons between systems and, in some cases, prevented simple trend analysis of water quality parameters at individual sites. This article describes the SWMPr package, which provides several functions that facilitate data retrieval, organization, and analysis of time series data in the reserve estuaries. Previously unavailable functions for estuaries are also provided to estimate rates of ecosystem metabolism using the open-water method. The SWMPr package has facilitated a cross-reserve comparison of water quality trends and links quantitative information with analysis tools that have use for more generic applications to environmental time series. The manuscript describes a software package that was recently developed to retrieve, organize, and analyze monitoring data from the National Estuarine Research Reserve System. Functions are explained in detail, including recent applications for trend analysis of ecosystem metabolism.
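
    SWMPr itself is an R package; as a language-neutral illustration of the open-water method it implements, the Python sketch below estimates daily gross primary production and respiration from a diel dissolved-oxygen series. The Odum-style bookkeeping, the constant night-time respiration assumption and all names and units are assumptions for illustration, not SWMPr's API.

    ```python
    import numpy as np

    def openwater_metabolism(do, do_sat, k, depth_m, dt_hr, daylight):
        """Open-water metabolism from diel dissolved oxygen (g O2 m^-3):
        dDO/dt = GPP - ER + gas exchange, with k a transfer velocity (m/hr)
        and daylight a boolean mask per time step."""
        flux = k * (do_sat - do) / depth_m   # air-water exchange, g m^-3 hr^-1
        met = np.gradient(do, dt_hr) - flux  # biological signal
        er_hr = -met[~daylight].mean()       # respiration, assumed constant
        gpp = (met[daylight] + er_hr).sum() * dt_hr
        return gpp, er_hr * 24.0             # daily GPP and ER

    hours = np.arange(0, 24, 0.5)
    do = 8.0 + np.sin((hours - 6.0) / 24.0 * 2.0 * np.pi)   # toy diel curve
    daylight = (hours > 6) & (hours < 18)
    gpp, er = openwater_metabolism(do, np.full_like(do, 8.5),
                                   k=0.3, depth_m=2.0, dt_hr=0.5,
                                   daylight=daylight)
    ```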

  15. Galen-In-Use: using artificial intelligence terminology tools to improve the linguistic coherence of a national coding system for surgical procedures.

    PubMed

    Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F

    1998-01-01

    GALEN has developed a language-independent common reference model based on a medically oriented ontology, together with practical tools and techniques for managing healthcare terminology, including natural language processing. GALEN-IN-USE is the current phase, which applies this modelling and these tools to the development or updating of coding systems for surgical procedures in national coding centres co-operating within the European Federation of Coding Centres (EFCC), in order to create a language-independent knowledge repository for multicultural Europe. We used an integrated set of artificial intelligence terminology tools, named the CLAssification Manager workbench, to process French professional medical language rubrics into intermediate dissections and then into the GRAIL reference ontology model representation. From this language-independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians with the French generated controlled vocabulary, and to finalize the linguistic labels of the coding system in relation to the meanings of the conceptual system structure.

  16. Collection and Retention Procedures for Electronically Stored Information (ESI) Collected Using E-Discovery Tools

    EPA Pesticide Factsheets

    This procedure is designed to support the collection of potentially responsive information using automated E-Discovery tools that rely on keywords, key phrases, index queries, or other technological assistance to retrieve Electronically Stored Information (ESI).

  17. Engaging Patients as Partners in Developing Patient-Reported Outcome Measures in Cancer-A Review of the Literature.

    PubMed

    Camuso, Natasha; Bajaj, Prerna; Dudgeon, Deborah; Mitera, Gunita

    2016-08-01

    Tools to collect patient-reported outcome measures (PROMs) are frequently used in the healthcare setting to collect the information that is most meaningful to patients. Because of discordance between how patients and healthcare providers rank symptoms considered most meaningful to the patient, engaging patients in the development of PROMs is extremely important. This review aimed to identify studies that described how patients are involved in the item generation stage of cancer-specific PROM tools developed for cancer patients. A literature search was conducted using keywords relevant to PROMs, cancer, and patient engagement. A manual search of relevant reference lists was also conducted. Inclusion criteria stipulated that publications must describe patient engagement in the item generation stage of development of cancer-specific PROM tools. Results were excluded if they were duplicate findings or non-English. The initial search yielded 230 publications. After removal of duplicates and review of publications, 6 were deemed relevant. Fourteen additional publications were retrieved through a manual search of references from relevant publications. A total of 13 unique PROM tools that included patient input in item generation were identified. The most common method of patient engagement was through qualitative interviews or focus groups. Despite recommendations from international groups and the emphasized importance of incorporating patient feedback in all stages of development of PROMs, few unique tools have incorporated patient input in the item generation of cancer-specific tools. Moving forward, a framework of best practices on how best to engage patients in developing PROMs is warranted to support high-quality patient-centered care.

  18. STARNET 2: a web-based tool for accelerating discovery of gene regulatory networks using microarray co-expression data

    PubMed Central

    Jupiter, Daniel; Chen, Hailin; VanBuren, Vincent

    2009-01-01

    Background Although expression microarrays have become a standard tool used by biologists, analysis of data produced by microarray experiments may still present challenges. Comparison of data from different platforms, organisms, and labs may involve complicated data processing, and inferring relationships between genes remains difficult. Results STARNET 2 is a new web-based tool that allows post hoc visual analysis of correlations that are derived from expression microarray data. STARNET 2 facilitates user discovery of putative gene regulatory networks in a variety of species (human, rat, mouse, chicken, zebrafish, Drosophila, C. elegans, S. cerevisiae, Arabidopsis and rice) by graphing networks of genes that are closely co-expressed across a large heterogeneous set of preselected microarray experiments. For each of the represented organisms, raw microarray data were retrieved from NCBI's Gene Expression Omnibus for a selected Affymetrix platform. All pairwise Pearson correlation coefficients were computed for expression profiles measured on each platform, respectively. These precompiled results were stored in a MySQL database, and supplemented by additional data retrieved from NCBI. A web-based tool allows user-specified queries of the database, centered at a gene of interest. The result of a query includes graphs of correlation networks, graphs of known interactions involving genes and gene products that are present in the correlation networks, and initial statistical analyses. Two analyses may be performed in parallel to compare networks, which is facilitated by the new HEATSEEKER module. Conclusion STARNET 2 is a useful tool for developing new hypotheses about regulatory relationships between genes and gene products, and has coverage for 10 species. Interpretation of the correlation networks is supported with a database of previously documented interactions, a test for enrichment of Gene Ontology terms, and heat maps of correlation distances that may be used to compare two networks. The list of genes in a STARNET network may be useful in developing a list of candidate genes to use for the inference of causal networks. The tool is freely available at , and does not require user registration. PMID:19828039
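
    Per query, the precompiled networks reduce to ranking genes by the magnitude of their Pearson correlation with the gene of interest. A minimal NumPy sketch, assuming an expression matrix with genes as rows and arrays as columns; the gene names and data are invented.

    ```python
    import numpy as np

    def correlation_neighbors(expr, gene_names, query, top_k=10):
        """Genes most correlated with the query gene across experiments."""
        corr = np.corrcoef(expr)                    # all pairwise Pearson r
        i = gene_names.index(query)
        order = np.argsort(-np.abs(corr[i]))        # strongest |r| first
        return [(gene_names[j], float(corr[i, j]))
                for j in order if j != i][:top_k]

    # Hypothetical expression matrix: 5 genes x 8 microarray experiments.
    expr = np.random.default_rng(2).standard_normal((5, 8))
    names = ["geneA", "geneB", "geneC", "geneD", "geneE"]
    neighbors = correlation_neighbors(expr, names, "geneA", top_k=3)
    ```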

  19. Generic trending and analysis system

    NASA Technical Reports Server (NTRS)

    Keehan, Lori; Reese, Jay

    1994-01-01

    The Generic Trending and Analysis System (GTAS) is a generic spacecraft performance monitoring tool developed by NASA Code 511 and Loral Aerosys. It is designed to facilitate quick anomaly resolution and trend analysis. Traditionally, the job of off-line analysis has been performed using hardware and software systems developed for real-time spacecraft contacts, supplemented with a collection of tools developed by Flight Operations Team (FOT) members. Since the number of upcoming missions is increasing, NASA can no longer afford to operate in this manner. GTAS improves control center productivity and effectiveness because it provides a generic solution across multiple missions. Thus, GTAS eliminates the need for each individual mission to develop duplicate capabilities. It also allows for more sophisticated tools to be developed because it draws resources from several projects. In addition, the GTAS software system incorporates commercial off-the-shelf (COTS) software packages and reuses components of other NASA-developed systems wherever possible. GTAS has incorporated lessons learned from previous missions by involving the users early in the development process. GTAS users took a proactive role in requirements analysis, design, development, and testing. Because of user involvement, several special tools were designed and are now being developed. GTAS users expressed considerable interest in facilitating data collection for long-term trending and analysis. As a result, GTAS provides easy access to large volumes of processed telemetry data directly in the control center. The GTAS archival and retrieval capabilities are supported by the integration of optical disk technology and a COTS relational database management system.

  20. VisualUrText: A Text Analytics Tool for Unstructured Textual Data

    NASA Astrophysics Data System (ADS)

    Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.

    2018-05-01

    The amount of unstructured text on the Internet is growing tremendously. Text repositories come from Web 2.0, business intelligence and social networking applications, and it is estimated that 80-90% of future data growth will arrive in the form of unstructured text databases that may contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, that is, non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and in visualizing the cleaned text data in multiple forms such as a Document-Term Matrix (DTM), frequency graph, network analysis graph, word cloud and dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends from document analyses.
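
    The first analysis artifact named above, a Document-Term Matrix with corpus-wide term frequencies, might look like the following scikit-learn sketch. The example documents are invented, and the cleaning choices (lower-casing, English stop words) are assumptions rather than VisualUrText's actual pipeline.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["text mining discovers patterns in unstructured text",
            "unstructured text grows rapidly on the web",
            "frequency graphs and word clouds visualize text data"]

    # Document-Term Matrix: sparse docs x terms count matrix.
    vectorizer = CountVectorizer(stop_words="english")
    dtm = vectorizer.fit_transform(docs)

    # Corpus-wide term frequencies, the input for a frequency graph
    # or a word cloud.
    freqs = dict(zip(vectorizer.get_feature_names_out(),
                     dtm.sum(axis=0).A1))
    top_terms = sorted(freqs.items(), key=lambda kv: -kv[1])[:5]
    ```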

  1. Building a Massive Volcano Archive and the Development of a Tool for the Science Community

    NASA Technical Reports Server (NTRS)

    Linick, Justin

    2012-01-01

    The Jet Propulsion Laboratory has traditionally housed one of the world's largest databases of volcanic satellite imagery, the ASTER Volcano Archive (10 TB), making these data accessible online for public and scientific use. However, a series of changes in how satellite imagery is housed by the Earth Observing System (EOS) Data Information System has meant that JPL has been unable to systematically maintain its database for the last several years. We have built a fast, transparent, machine-to-machine client that has updated JPL's database and will keep it current in near real-time. The development of this client has also given us the capability to retrieve any data provided by NASA's Earth Observing System Clearinghouse (ECHO) that covers a volcanic event reported by the U.S. Air Force Weather Agency (AFWA). We will also provide a publicly available tool that interfaces with ECHO and offers functionality not available in any of ECHO's Earth science discovery tools.

  2. Methods, Software and Tools for Three Numerical Applications. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. R. Jessup

    2000-03-01

    This report summarizes the results of the authors' work supported by DOE contract DE-FG03-97ER25325. They proposed to study three numerical problems: (1) the extension of the PMESC parallel programming library; (2) the development of algorithms and software for certain generalized eigenvalue and singular value decomposition (SVD) problems; and (3) the application of linear algebra techniques to an information retrieval method known as latent semantic indexing (LSI).
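
    Problem (3), latent semantic indexing, is a direct application of the truncated SVD: factor a term-document matrix, fold queries into the rank-k concept space, and rank documents by cosine similarity there. A minimal sketch with an invented matrix and k = 2.

    ```python
    import numpy as np

    # Hypothetical term-document matrix (rows = terms, columns = documents).
    A = np.array([[2., 0., 1., 0.],
                  [1., 1., 0., 0.],
                  [0., 2., 0., 1.],
                  [0., 0., 1., 2.]])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]

    docs_k = Vtk.T * sk               # documents in the k-dim concept space
    q = np.array([1., 1., 0., 0.])    # query as a term vector
    q_k = q @ Uk / sk                 # fold the query into the same space

    sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k))
    ranking = np.argsort(-sims)       # documents ordered by relevance
    ```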

  3. Data preparation and evaluation techniques for x-ray diffraction microscopy.

    PubMed

    Steinbrener, Jan; Nelson, Johanna; Huang, Xiaojing; Marchesini, Stefano; Shapiro, David; Turner, Joshua J; Jacobsen, Chris

    2010-08-30

    The post-experiment processing of X-ray Diffraction Microscopy data is often time-consuming and difficult. This is mostly due to the fact that even if a preliminary result has been reconstructed, there is no definitive answer as to whether or not a better result with more consistently retrieved phases can still be obtained. We show here that the first step in data analysis, the assembly of two-dimensional diffraction patterns from a large set of raw diffraction data, is crucial to obtaining reconstructions of highest possible consistency. We have developed software that automates this process and results in consistently accurate diffraction patterns. We have furthermore derived some criteria of validity for a tool commonly used to assess the consistency of reconstructions, the phase retrieval transfer function, and suggest a modified version that has improved utility for judging reconstruction quality.
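
    The conventional definition the paper starts from compares the coherently averaged Fourier amplitudes of independent reconstructions with the measured magnitudes, azimuthally binned. A minimal NumPy sketch of that standard PRTF follows; the binning choices are assumptions, and the paper's modified criteria are not reproduced here.

    ```python
    import numpy as np

    def prtf(reconstructions, measured_magnitude, nbins=50):
        """Phase retrieval transfer function: |mean retrieved amplitude| /
        measured magnitude, averaged in rings of spatial frequency.
        `reconstructions` is a stack of independently phased images."""
        F = np.fft.fftshift(np.fft.fft2(reconstructions, axes=(-2, -1)),
                            axes=(-2, -1))
        ratio = np.abs(F.mean(axis=0)) / np.maximum(measured_magnitude, 1e-12)

        ny, nx = measured_magnitude.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(y - ny / 2, x - nx / 2).ravel()
        bins = np.linspace(0, r.max(), nbins + 1)
        idx = np.digitize(r, bins)
        vals = ratio.ravel()
        return np.array([vals[idx == i].mean() if np.any(idx == i) else np.nan
                         for i in range(1, nbins + 1)])

    recs = np.random.default_rng(3).standard_normal((8, 64, 64))
    curve = prtf(recs, np.abs(np.fft.fftshift(np.fft.fft2(recs[0]))))
    ```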

  4. Navy Medical Information Storage and Retrieval System: Navy MEDISTARS. TR-1-71-Part 2, Manual of Indexing Terms; First Edition.

    ERIC Educational Resources Information Center

    Ramsey-Klee, Diane M.

    A computer-based information storage and retrieval system was designed and implemented for processing Navy neuropsychiatric case history reports. The system design objectives were to produce a dynamic and flexible medical information processing tool. The system that was designed has been given the name NAVY MEDical Information STorage and…

  5. An Examination of Natural Language as a Query Formation Tool for Retrieving Information on E-Health from Pub Med.

    ERIC Educational Resources Information Center

    Peterson, Gabriel M.; Su, Kuichun; Ries, James E.; Sievert, Mary Ellen C.

    2002-01-01

    Discussion of Internet use for information searches on health-related topics focuses on a study that examined complexity and variability of natural language in using search terms that express the concept of electronic health (e-health). Highlights include precision of retrieved information; shift in terminology; and queries using the Pub Med…

  6. Krikalev in Service module with tools

    NASA Image and Video Library

    2001-03-30

    ISS01-E-5150 (December 2000) --- Cosmonaut Sergei K. Krikalev, Expedition One flight engineer, retrieves a tool during an installation and set-up session in the Zvezda service module aboard the International Space Station (ISS). The picture was recorded with a digital still camera.

  7. Automation and hypermedia technology applications

    NASA Technical Reports Server (NTRS)

    Jupin, Joseph H.; Ng, Edward W.; James, Mark L.

    1993-01-01

    This paper represents a progress report on HyLite (Hypermedia Library technology): a research and development activity to produce a versatile system as part of NASA's technology thrusts in automation, information sciences, and communications. HyLite can be used as a system or tool to facilitate the creation and maintenance of large distributed electronic libraries. The contents of such a library may be software components, hardware parts or designs, scientific data sets or databases, configuration management information, etc. Proliferation of computer use has made the diversity and quantity of information too large for any single user to sort, process, and utilize effectively. In response to this information deluge, we have created HyLite to enable the user to process relevant information into a more efficient organization for presentation, retrieval, and readability. To accomplish this end, we have incorporated various AI techniques into the HyLite hypermedia engine to facilitate parameters and properties of the system. The proposed techniques include intelligent searching tools for the libraries, intelligent retrievals, and navigational assistance based on user histories. HyLite itself is based on an earlier project, the Encyclopedia of Software Components (ESC) which used hypermedia to facilitate and encourage software reuse.

  8. Enhancing Biomedical Text Summarization Using Semantic Relation Extraction

    PubMed Central

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to grasp the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; and 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
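
    The third stage is an information-retrieval-style selection of sentences for the relevant relations. The sketch below substitutes a generic TF-IDF cosine ranking for the paper's SemRep-based method, so the query construction and the example sentences are assumptions for illustration.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = [
        "H1N1 influenza virus causes respiratory infection in humans",
        "Oseltamivir treats infection caused by the H1N1 virus",
        "The study enrolled patients in three hospitals",
    ]
    # A query built from the arguments of a relevant relation,
    # e.g. "oseltamivir TREATS H1N1".
    query = ["oseltamivir treats H1N1"]

    vec = TfidfVectorizer().fit(sentences + query)
    scores = cosine_similarity(vec.transform(query), vec.transform(sentences))[0]
    summary = [s for _, s in sorted(zip(-scores, sentences))][:2]
    ```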

  9. Promzea: a pipeline for discovery of co-regulatory motifs in maize and other plant species and its application to the anthocyanin and phlobaphene biosynthetic pathways and the Maize Development Atlas.

    PubMed

    Liseron-Monfils, Christophe; Lewis, Tim; Ashlock, Daniel; McNicholas, Paul D; Fauteux, François; Strömvik, Martina; Raizada, Manish N

    2013-03-15

    The discovery of genetic networks and cis-acting DNA motifs underlying their regulation is a major objective of transcriptome studies. The recent release of the maize genome (Zea mays L.) has facilitated in silico searches for regulatory motifs. Several algorithms exist to predict cis-acting elements, but none have been adapted for maize. A benchmark data set was used to evaluate the accuracy of three motif discovery programs: BioProspector, Weeder and MEME. Analysis showed that each motif discovery tool had limited accuracy and appeared to retrieve a distinct set of motifs. Therefore, using the benchmark, statistical filters were optimized to reduce the false discovery ratio, and the remaining motifs from all programs were combined to improve motif prediction. These principles were integrated into a user-friendly pipeline for motif discovery in maize called Promzea, available at http://www.promzea.org and on the Discovery Environment of the iPlant Collaborative website. Promzea was subsequently expanded to include rice and Arabidopsis. Within Promzea, a user enters cDNA sequences or gene IDs; corresponding upstream sequences are retrieved from the maize genome. Predicted motifs are filtered, combined and ranked. Promzea then searches the chosen plant genome for genes containing each candidate motif, providing the user with the gene list and corresponding gene annotations. Promzea was validated in silico using a benchmark data set: the Promzea pipeline showed a 22% increase in nucleotide sensitivity compared to the best standalone tool, Weeder, with equivalent nucleotide specificity. Promzea was also validated by its ability to retrieve the experimentally defined binding sites of transcription factors that regulate the maize anthocyanin and phlobaphene biosynthetic pathways. Promzea predicted additional promoter motifs, and genome-wide motif searches by Promzea identified 127 non-anthocyanin/phlobaphene genes that each contained all five predicted promoter motifs in their promoters, perhaps uncovering a broader co-regulated gene network. Promzea was also tested against tissue-specific microarray data from maize. An online tool customized for promoter motif discovery in plants, called Promzea, has thus been generated. Promzea was validated in silico by its ability to retrieve benchmark motifs and experimentally defined motifs and was tested using tissue-specific microarray data. Promzea predicted broader networks of gene regulation associated with the historic anthocyanin and phlobaphene biosynthetic pathways. Promzea is a new bioinformatics tool for understanding transcriptional gene regulation in maize and has been expanded to include rice and Arabidopsis.

  10. Research-IQ: Development and Evaluation of an Ontology-anchored Integrative Query Tool

    PubMed Central

    Borlawsky, Tara B.; Lele, Omkar; Payne, Philip R. O.

    2011-01-01

    Investigators in the translational research and systems medicine domains require highly usable, efficient and integrative tools and methods that allow for the navigation of and reasoning over emerging large-scale data sets. Such resources must cover a spectrum of granularity from bio-molecules to population phenotypes. Given such information needs, we report upon the initial design and evaluation of an ontology-anchored integrative query tool, Research-IQ, which employs a combination of conceptual knowledge engineering and information retrieval techniques to enable the intuitive and rapid construction of queries, in terms of semi-structured textual propositions, that can subsequently be applied to integrative data sets. Our initial results, based upon both quantitative and qualitative evaluations of the efficacy and usability of Research-IQ, demonstrate its potential to increase clinical and translational research throughput. PMID:21821150

  11. Radiometry simulation within the end-to-end simulation tool SENSOR

    NASA Astrophysics Data System (ADS)

    Wiest, Lorenz; Boerner, Anko

    2001-02-01

    An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
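
    A cache-backed multilinear interpolation over a radiance look-up table can be sketched with SciPy's RegularGridInterpolator and functools.lru_cache. The LUT axes (visibility, water vapour, surface reflectance) and the toy radiance surface below are assumptions for illustration, not SENSOR's actual tables.

    ```python
    import numpy as np
    from functools import lru_cache
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical LUT of at-sensor radiance over three inputs.
    vis = np.linspace(5, 120, 12)     # visibility [km]
    h2o = np.linspace(0.5, 4.0, 8)    # water vapour column [cm]
    refl = np.linspace(0.0, 1.0, 6)   # surface reflectance [-]
    V, W, R = np.meshgrid(vis, h2o, refl, indexing="ij")
    lut = 0.02 + 0.08 * R / (1 + 50.0 / V) + 0.01 * W   # toy radiance surface

    interp = RegularGridInterpolator((vis, h2o, refl), lut)   # multilinear

    @lru_cache(maxsize=65536)
    def radiance(v, w, r):
        """Cached LUT interpolation; repeated pixel states hit the cache."""
        return float(interp((v, w, r)))

    L = radiance(23.0, 2.1, 0.3)
    ```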

  12. Evaluation of tools used to measure critical thinking development in nursing and midwifery undergraduate students: a systematic review.

    PubMed

    Carter, Amanda G; Creedy, Debra K; Sidebotham, Mary

    2015-07-01

    Well-developed critical thinking skills are essential for nursing and midwifery practice. The development of students' higher-order cognitive abilities, such as critical thinking, is also well recognised in nursing and midwifery education. Measurement of critical thinking development is important to demonstrate change over time and the effectiveness of teaching strategies. To evaluate tools designed to measure critical thinking in nursing and midwifery undergraduate students, the following six databases were searched, resulting in the retrieval of 1191 papers: CINAHL, Ovid Medline, ERIC, Informit, PsycINFO and Scopus. After screening for inclusion, each paper was evaluated using the Critical Appraisal Skills Programme Tool. Thirty-four studies met the inclusion criteria and quality appraisal. Sixteen different tools that measure critical thinking were reviewed for reliability and validity and for the extent to which the domains of critical thinking were evident. Sixty percent of studies utilised one of four standardised, commercially available measures of critical thinking. Reliability and validity were not consistently reported, and reliability varied across studies that used the same measure. Of the remaining studies using different tools, there was also limited reporting of reliability, making it difficult to assess internal consistency and the potential applicability of measures across settings. Discipline-specific instruments to measure critical thinking in nursing and midwifery are required, specifically tools that measure the application of critical thinking to practice. Given that critical thinking development occurs over an extended period, measurement needs to be repeated, and multiple methods of measurement should be used over time. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  13. Calculation of the Actual Cost of Engine Maintenance

    DTIC Science & Technology

    2003-03-01

    Automated Cost Estimating Integrated Tools (ACEIT) helps analysts store, retrieve, and analyze data; build cost models; analyze risk; and time-phase budgets (http://www.aceit.com/).

  14. STS-57 Pilot Duffy uses TDS soldering tool in SPACEHAB-01 aboard OV-105

    NASA Image and Video Library

    1993-07-01

    STS057-30-021 (21 June-1 July 1993) --- Astronaut Brian Duffy, pilot, handles a soldering tool onboard the Earth-orbiting Space Shuttle Endeavour. The Soldering Experiment (SE) called for a crew member to solder on a printed circuit board containing 45 connection points, then de-solder 35 points on a similar board. The SE was part of a larger project called the Tools and Diagnostic Systems (TDS), sponsored by the Space and Life Sciences Directorate at Johnson Space Center (JSC). TDS represents a group of equipment selected from the tools and diagnostic hardware to be supported by the International Space Station program. TDS was designed to demonstrate the maintenance of experiment hardware on-orbit and to evaluate the adequacy of its design and the crew interface. Duffy and five other NASA astronauts spent almost ten days aboard the Space Shuttle Endeavour in Earth-orbit supporting the SpaceHab mission, retrieving the European Retrievable Carrier (EURECA) and conducting various experiments.

  15. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough.

    PubMed

    Boeker, Martin; Vach, Werner; Motschall, Edith

    2013-10-26

    Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy to use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions derived from the structured search procedures conventional in scientific literature retrieval, and to provide an overview of the current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. The Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible under consideration of the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches, and relative recall and precision were calculated. We investigated Cochrane reviews with between 11 and 70 included references each, 396 references in total. The Google Scholar searches resulted in sets of between 4,320 and 67,800 hits, with a total of 291,190 hits. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. The reported relative recall must be interpreted with care: it is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide elements necessary for systematic scientific literature retrieval, such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary.
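
    The two headline metrics are simple set ratios over the gold-standard references of each review. A minimal sketch with invented counts chosen only to echo the order of magnitude of the reported figures.

    ```python
    def relative_recall_precision(gold_refs, retrieved):
        """Relative recall: share of gold-standard references (the included
        studies of a review) present in the result set; precision: share of
        retrieved hits that are gold references."""
        gold, hits = set(gold_refs), set(retrieved)
        found = gold & hits
        recall = len(found) / len(gold) if gold else 0.0
        precision = len(found) / len(hits) if hits else 0.0
        return recall, precision

    # Toy numbers: 28 gold references, 26 of them inside 21,000 hits.
    gold = {f"ref{i}" for i in range(28)}
    retrieved = {f"ref{i}" for i in range(26)} | {f"hit{i}" for i in range(20974)}
    recall, precision = relative_recall_precision(gold, retrieved)
    # recall ~ 0.93, precision ~ 0.0012
    ```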

  16. KA-SB: from data integration to large scale reasoning

    PubMed Central

    Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F

    2009-01-01

    Background The analysis of information in the biological domain is usually focused on data from single online data sources. Unfortunately, studying a biological process requires access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods KA-SB is a querying and analysis system for end users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high performance) reasoner (DBOWL). This information can then be further analyzed (by means of querying and reasoning). Results In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. The tool uses a graphical query interface to build user queries easily; it shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main memory-based reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPax Level 3 ontology as the integration schema, and integrates the UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases. PMID:19796402

  17. Motor-Iconicity of Sign Language Does Not Alter the Neural Systems Underlying Tool and Action Naming

    ERIC Educational Resources Information Center

    Emmorey, Karen; Grabowski, Thomas; McCullough, Stephen; Damasio, Hannah; Ponto, Laurie; Hichwa, Richard; Bellugi, Ursula

    2004-01-01

    Positron emission tomography was used to investigate whether the motor-iconic basis of certain forms in American Sign Language (ASL) partially alters the neural systems engaged during lexical retrieval. Most ASL nouns denoting tools and ASL verbs referring to tool-based actions are produced with a handshape representing the human hand holding a…

  18. The Study on Collaborative Manufacturing Platform Based on Agent

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-yan; Qu, Zheng-geng

    To address the increasingly knowledge-intensive nature of collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE (knowledge-based engineering) systems and provides comprehensive design environments for engineers.

  19. A suite of R packages for web-enabled modeling and analysis of surface waters

    NASA Astrophysics Data System (ADS)

    Read, J. S.; Winslow, L. A.; Nüst, D.; De Cicco, L.; Walker, J. I.

    2014-12-01

    Researchers often create redundant methods for downloading, manipulating, and analyzing data from online resources. Moreover, the reproducibility of science can be hampered by complicated and voluminous data, lack of time for documentation and long-term maintenance of software, and fear of exposing programming skills. The combination of these factors can encourage unshared one-off programmatic solutions instead of openly provided reusable methods. Federal and academic researchers in the water resources and informatics domains have collaborated to address these issues. The result of this collaboration is a suite of modular R packages that can be used independently or as elements in reproducible analytical workflows. These documented and freely available R packages were designed to fill basic needs for the effective use of water data: the retrieval of time-series and spatial data from web resources (dataRetrieval, geoknife), performing quality assurance and quality control checks of these data with robust statistical methods (sensorQC), the creation of useful data derivatives (including physically- and biologically-relevant indices; GDopp, LakeMetabolizer), and the execution and evaluation of models (glmtools, rLakeAnalyzer). Here, we share details and recommendations for the collaborative coding process, and highlight the benefits of an open-source tool development pattern with a popular programming language in the water resources discipline (such as R). We provide examples of reproducible science driven by large volumes of web-available data using these tools, explore benefits of accessing packages as standardized web processing services (WPS) and present a working platform that allows domain experts to publish scientific algorithms in a service-oriented architecture (WPS4R). We assert that in the era of open data, tools that leverage these data should also be freely shared, transparent, and developed in an open innovation environment.
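
    As an illustration of the kind of web retrieval these packages wrap, the following Python sketch pulls a time series from the public USGS NWIS instantaneous-values service, the service behind dataRetrieval; the gauge number, parameter code and JSON layout are assumptions based on that public service, not details from the abstract:

      import requests

      # Last 24 hours of discharge (parameter 00060) for a hypothetical
      # example gauge, from the USGS instantaneous-values web service.
      resp = requests.get(
          "https://waterservices.usgs.gov/nwis/iv/",
          params={"format": "json", "sites": "01646500",
                  "parameterCd": "00060", "period": "P1D"},
          timeout=30,
      )
      resp.raise_for_status()
      points = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
      for p in points[:5]:
          print(p["dateTime"], p["value"])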

  20. Predictive Risk Modelling to Prevent Child Maltreatment and Other Adverse Outcomes for Service Users: Inside the ‘Black Box’ of Machine Learning

    PubMed Central

    Gillingham, Philip

    2016-01-01

    Recent developments in digital technology have facilitated the recording and retrieval of administrative data from multiple sources about children and their families. Combined with new ways to mine such data using algorithms which can ‘learn’, it has been claimed that it is possible to develop tools that can predict which individual children within a population are most likely to be maltreated. The proposed benefit is that interventions can then be targeted to the most vulnerable children and their families to prevent maltreatment from occurring. As expertise in predictive modelling increases, the approach may also be applied in other areas of social work to predict and prevent adverse outcomes for vulnerable service users. In this article, a glimpse inside the ‘black box’ of predictive tools is provided to demonstrate how their development for use in social work may not be straightforward, given the nature of the data recorded about service users and service activity. The development of predictive risk modelling (PRM) in New Zealand is focused on as an example as it may be the first such tool to be applied as part of ongoing reforms to child protection services. PMID:27559213

  2. PIRIA: a general tool for indexing, search, and retrieval of multimedia content

    NASA Astrophysics Data System (ADS)

    Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.

    2004-05-01

    The Internet is a continuously expanding source of multimedia content and information. Many products are in development to search, retrieve, and understand multimedia content, but most current image search/retrieval engines rely on an image database manually pre-indexed with keywords. Computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. These indexed images are compared according to several different classifiers, not only Keywords but also Form, Color and Texture, taking into account geometric transformations and invariances such as rotation, symmetry and mirroring. Form - edges extracted by an efficient segmentation algorithm. Color - histogram, semantic color segmentation and spatial color relationships. Texture - texture wavelets and local edge patterns. If required, Piria is also able to fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC) or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications are explored and discussed, as well as current and future development.
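
    Query by example as described reduces to ranking indexed signature vectors by a metric distance to the query's signature. A minimal Python sketch with random stand-in signatures and an L2 metric (the actual Piria signatures and distance are not specified here):

      import numpy as np

      rng = np.random.default_rng(0)
      signatures = {f"img{i}": rng.random(64) for i in range(1000)}  # indexed base
      query = rng.random(64)                                         # query signature

      # Rank all indexed images by distance to the query signature.
      ranked = sorted(signatures.items(),
                      key=lambda kv: np.linalg.norm(kv[1] - query))
      for name, sig in ranked[:5]:
          print(name, np.linalg.norm(sig - query))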

  3. Fluency heuristic: a model of how the mind exploits a by-product of information retrieval.

    PubMed

    Hertwig, Ralph; Herzog, Stefan M; Schooler, Lael J; Reimer, Torsten

    2008-09-01

    Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the most of an automatic by-product of retrieval from memory, namely, retrieval fluency. In 4 experiments, the authors show that retrieval fluency can be a proxy for real-world quantities, that people can discriminate between two objects' retrieval fluencies, and that people's inferences are in line with the fluency heuristic (in particular fast inferences) and with experimentally manipulated fluency. The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world. (c) 2008 APA, all rights reserved.

  4. Information content and sensitivity of the 3β + 2α lidar measurement system for aerosol microphysical retrievals

    NASA Astrophysics Data System (ADS)

    Burton, Sharon P.; Chemyakin, Eduard; Liu, Xu; Knobelspiesse, Kirk; Stamnes, Snorre; Sawamura, Patricia; Moore, Richard H.; Hostetler, Chris A.; Ferrare, Richard A.

    2016-11-01

    There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward model look-up table, with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the understanding of the uncertainties in such retrievals, since it allows for separately assessing the sensitivities and uncertainties of the measurements alone that cannot be corrected by any potential or theoretical improvements to retrieval methodology but must instead be addressed by adding information content. The sensitivity metrics allow for identifying (1) information content of the measurements vs. a priori information; (2) error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. The results suggest that the 3β + 2α measurement system is underdetermined with respect to the full suite of microphysical parameters considered in this study and that additional information is required, in the form of additional coincident measurements (e.g., sun-photometer or polarimeter) or a priori retrieval constraints. A specific recommendation is given for addressing cross-talk between effective radius and total number concentration.
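
    The information-content and uncertainty metrics referred to above are the standard optimal-estimation quantities; for reference, in the usual notation of that framework (assumed here, not taken from this paper), with K the forward-model Jacobian and S_a, S_e the a priori and measurement error covariances:

      \hat{S} = \left( K^{T} S_{e}^{-1} K + S_{a}^{-1} \right)^{-1}  % retrieval error covariance
      A       = \hat{S}\, K^{T} S_{e}^{-1} K                         % averaging kernel
      d_{s}   = \operatorname{tr}(A)                                 % degrees of freedom for signal
      H       = -\tfrac{1}{2} \ln \lvert I - A \rvert                % Shannon information content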

  5. Text mining and its potential applications in systems biology.

    PubMed

    Ananiadou, Sophia; Kell, Douglas B; Tsujii, Jun-ichi

    2006-12-01

    With biomedical literature increasing at a rate of several thousand papers per week, it is impossible to keep abreast of all developments; therefore, automated means to manage the information overload are required. Text mining techniques, which involve the processes of information retrieval, information extraction and data mining, provide a means of solving this. By adding meaning to text, these techniques produce a more structured analysis of textual knowledge than simple word searches, and can provide powerful tools for the production and analysis of systems biology models.

  6. DRR is a teenager

    NASA Astrophysics Data System (ADS)

    Nagy, George

    2008-01-01

    The fifteenth anniversary of the first SPIE symposium (titled Character Recognition Technologies) on Document Recognition and Retrieval provides an opportunity to examine DRR's contributions to the development of document technologies. Many of the tools taken for granted today, including workable general purpose OCR, large-scale, semi-automatic forms processing, inter-format table conversion, and text mining, followed research presented at this venue. This occasion also affords an opportunity to offer tribute to the conference organizers and proceedings editors and to the coterie of professionals who regularly participate in DRR.

  7. 3DRT-MPASS

    NASA Technical Reports Server (NTRS)

    Lickly, Ben

    2005-01-01

    Data from all current JPL missions are stored in files called SPICE kernels. At present, animators who want to use data from these kernels have to either read through the kernels looking for the desired data, or write programs themselves to retrieve information about all the needed objects for their animations. In this project, methods of automating the process of importing the data from the SPICE kernels were researched. In particular, tools were developed for creating basic scenes in Maya, a 3D computer graphics software package, from SPICE kernels.
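
    A hedged Python sketch of the retrieval step being automated here, using the spiceypy library (not mentioned in the abstract; the kernel file name and query values are placeholders), shows how position data comes out of SPICE kernels before any scene construction in Maya:

      import spiceypy

      spiceypy.furnsh("meta_kernel.tm")   # placeholder meta-kernel listing the kernels
      et = spiceypy.str2et("2005 JUN 01 12:00:00")
      # Position of Mars as seen from Earth, J2000 frame, light-time corrected;
      # these are the coordinates an animator would key-frame.
      pos, light_time = spiceypy.spkpos("MARS", et, "J2000", "LT+S", "EARTH")
      print(pos)   # x, y, z in kilometers
      spiceypy.kclear()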

  8. A digital library for medical imaging activities

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sérgio S.

    2007-03-01

    This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture for medical image database, authoring tools, and a repository for multimedia documents. Also it includes a peer-reviewed model that assures quality of dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has been used as an electronic teaching system in Radiology as well.

  9. Raking it in: the impact of enculturation on chimpanzee tool use.

    PubMed

    Furlong, E E; Boose, K J; Boysen, S T

    2008-01-01

    Recent evidence for different tool kits, proposed to be based upon culture-like transmission, has been reported across different chimpanzee communities in Western Africa. In light of these findings, the reported failures by seven captive juvenile chimpanzees tested with 27 tool use tasks (Povinelli 2000) seem enigmatic. Here we report successful performance by a group of nine captive, enculturated chimpanzees, and limited success by a group of six semi-enculturated chimpanzees, on two of the Povinelli tasks: the Flimsy Tool task and the Hybrid Tool task. All chimpanzees were presented with a rake with a flimsy head and a second rake with a rigid head, either of which could be used to attempt to retrieve a food reward that was out of reach. The rigid rake was constructed such that it had the necessary functional features to permit successful retrieval, while the flimsy rake did not. Both chimpanzee groups in the present experiment correctly selected the functional rigid tool to use during the Flimsy Tool task. All animals were then presented with two "hybrid rakes" A and B, with one half of each rake head constructed from flimsy, non-functional fabric and the other half made of wood. Food rewards were placed in front of the rigid side of Rake A and the flimsy side of Rake B. To be successful, the chimpanzees needed to choose the rake that had the reward in front of the rigid side of the rake head. The fully enculturated animals were successful in selecting the functional rake, while the semi-enculturated subjects chose randomly between the two hybrid tools. Compared with findings from Povinelli, whose non-enculturated animals failed both tasks, our results demonstrate that chimpanzees reared under conditions of semi-enculturation could learn to discriminate the necessary tool correctly through trial and error during the Flimsy Tool task, but were unable to recognize the functional relationship necessary for retrieving the reward with the "hybrid" rake. In contrast, the enculturated chimpanzees were correct in their choices during both the Flimsy Tool and the Hybrid Tool tasks. These results provide the first empirical evidence for the differential effects of enculturation on subsequent tool use capacities in captive chimpanzees.

  10. Contingency Contractor Optimization Phase 3 Sustainment Database Design Document - Contingency Contractor Optimization Tool - Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frazier, Christopher Rawls; Durfee, Justin David; Bandlow, Alisa

    The Contingency Contractor Optimization Tool – Prototype (CCOT-P) database is used to store input and output data for the linear program model described in [1]. The database supports queries to retrieve these data, as well as updating and inserting new input data.

  11. Business Intelligence: Turning Knowledge into Power

    ERIC Educational Resources Information Center

    Endsley, Krista

    2009-01-01

    Today, many school districts are turning to business intelligence tools to retrieve, organize, and share knowledge for faster analysis and more effective, guided decision making. Business intelligence (BI) tools are the technologies and applications that gather and report information to help an organization's leaders make better decisions. BI…

  12. Toolbox of assessment tools of technical skills in otolaryngology-head and neck surgery: A systematic review.

    PubMed

    Labbé, Mathilde; Young, Meredith; Nguyen, Lily H P

    2017-10-08

    To support the development of programs of assessment of technical skills in the operating room (OR), we systematically reviewed the literature to identify assessment tools specific to otolaryngology-head and neck surgery (OTL-HNS) core procedures and summarized their characteristics. We systematically searched Embase, MEDLINE, PubMed, and Cochrane to identify and report on assessment tools that can be used to assess residents' technical surgical skills in the operating room for OTL-HNS core procedures. Of the 736 unique titles retrieved, 16 articles met inclusion criteria, covering 11 different procedures (in otology, rhinology, laryngology, head and neck, and general otolaryngology). The tools were composed of a task-specific checklist and/or global rating scale and were developed in the OR, on human cadavers, or in a simulation setting. Our study reports on published tools for assessing technical skills for OTL-HNS residents during core procedures conducted in the OR. These assessment tools could facilitate the provision of timely feedback to trainees including specific goals for improvement. However, the paucity of publications suggests little agreement on how to best perform work-based direct-observation assessment for core surgical procedures in OTL-HNS. The sparsity of tools specific to OTL-HNS may become a barrier to a fluid transition to competency-based medical education. Laryngoscope, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
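
    As a hedged illustration of an exhaustive search over model parameters of this general kind (not the actual '2cee' method), the following Python sketch calibrates a COCOMO-style cost model, effort = a * size**b, against made-up historical project data:

      import itertools

      history = [(10, 25.0), (50, 160.0), (120, 430.0)]  # (KSLOC, person-months), illustrative

      def sse(a: float, b: float) -> float:
          """Sum of squared errors of the model against the historical data."""
          return sum((a * size**b - effort) ** 2 for size, effort in history)

      grid_a = [x / 10 for x in range(10, 41)]    # a in 1.00 .. 4.00
      grid_b = [x / 100 for x in range(90, 131)]  # b in 0.90 .. 1.30
      best = min(itertools.product(grid_a, grid_b), key=lambda p: sse(*p))
      print("calibrated (a, b):", best, "error:", sse(*best))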

  14. Episodic memory retrieval in adolescents with and without developmental language disorder (DLD).

    PubMed

    Lee, Joanna C

    2018-03-01

    Two reasons may explain the discrepant findings regarding declarative memory in developmental language disorder (DLD) in the literature. First, standardized tests are one of the primary tools used to assess declarative memory in previous studies. It is possible that they are not sensitive enough to detect subtle memory impairment. Second, the system underlying declarative memory is complex, and thus results may vary depending on the types of encoding and retrieval processes measured (e.g., item-specific or relational) and/or task demands (e.g., recall or recognition during memory retrieval). The aim was to adopt an experimental paradigm to examine episodic memory functioning in adolescents with and without DLD, with a focus on memory recognition of item-specific and relational information. Two groups of adolescents, one with DLD (n = 23; mean age = 16.73 years) and the other without (n = 23; mean age = 16.75 years), participated in the study. The Relational and Item-Specific Encoding (RISE) paradigm was used to assess the effect of different encoding processes on episodic memory retrieval in DLD. The advantage of using the RISE task is that both item-specific and relational encoding/retrieval can be examined within the same learning paradigm. Adolescents with DLD and those with typical language development showed comparable engagement during the encoding phase. The DLD group showed significantly poorer item recognition than the comparison group. Associative recognition was not significantly different between the two groups; however, there was a non-significant trend for performance to be poorer in the DLD group than in the comparison group, suggesting a possible impairment in associative recognition in individuals with DLD, but to a lesser magnitude. These results indicate that adolescents with DLD have difficulty with episodic memory retrieval when stimuli are encoded and retrieved without support from contextual information. Associative recognition is relatively less affected than item recognition in adolescents with DLD. © 2017 Royal College of Speech and Language Therapists.

  15. The Development of Online Information Retrieval Services in the People's Republic of China.

    ERIC Educational Resources Information Center

    Xiaocun, Lu

    1986-01-01

    Assesses the promotion and development of online information retrieval in China. Highlights include opening of the first online retrieval center at China Overseas Building Development Company Limited; establishment and activities of a cooperative network; online retrieval seminars; telecommunication lines and terminal installations; and problems…

  16. Constraints on the exploitation of the functional properties of objects in expert tool-using chimpanzees (Pan troglodytes).

    PubMed

    Povinelli, Daniel J; Frey, Scott H

    2016-09-01

    Many species exploit immediately apparent dimensions of objects during tool use and manufacture and operate over internal perceptual representations of objects (they move and reorient objects in space, have rules of operation to deform or modify objects, etc.). Humans, however, actively test for functionally relevant object properties before such operations begin, even when no previous percepts of a particular object's qualities in the domain have been established. We hypothesize that such prospective diagnostic interventions are a human specialization of cognitive function that has been entirely overlooked in the neuropsychological literature. We presented chimpanzees with visually identical rakes: one was functional for retrieving a food reward; the other was non-functional (its base was spring-loaded). Initially, they learned that only the functional tool could retrieve a distant reward. In test 1, we explored if they would manually test for the rakes' rigidity during tool selection, but before using it. We found no evidence of such behavior. In test 2, we obliged the apes to deform the non-functional tool's base before using it, in order to evaluate whether this would cause them to switch rakes. It did not. Tests 3-6 attempted to focus the apes' attention on the functionally relevant property (rigidity). Although one ape eventually learned to abandon the non-functional rake before using it, she still did not attempt to test the rakes for rigidity prior to use. While these results underscore the ability of chimpanzees to use novel tools, at the same time they point toward a fundamental (and heretofore unexplored) difference in causal reasoning between humans and apes. We propose that this behavioral difference reflects a human specialization in how object properties are represented, which could have contributed significantly to the evolution of our technological culture. We discuss developing a new line of evolutionarily motivated neuropsychological research on action disorders. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Retrieval of profile information from airborne multiaxis UV-visible skylight absorption measurements.

    PubMed

    Bruns, Marco; Buehler, Stefan A; Burrows, John P; Heue, Klaus-Peter; Platt, Ulrich; Pundt, Irene; Richter, Andreas; Rozanov, Alexej; Wagner, Thomas; Wang, Ping

    2004-08-01

    A recent development in ground-based remote sensing of atmospheric constituents by UV-visible absorption measurements of scattered light is the simultaneous use of several horizon viewing directions in addition to the traditional zenith-sky pointing. The different light paths through the atmosphere enable the vertical distribution of some atmospheric absorbers, such as NO2, BrO, or O3, to be retrieved. This approach has recently been implemented on an airborne platform. This novel instrument, the airborne multiaxis differential optical absorption spectrometer (AMAXDOAS), has been flown for the first time. In this study, the amount of profile information that can be retrieved from such measurements is investigated for the trace gas NO2. Sensitivity studies on synthetic data are performed for a variety of representative measurement conditions including two wavelengths, one in the UV and one in the visible, two different surface spectral reflectances, various lines of sight (LOSs), and for two different flight altitudes. The results demonstrate that the AMAXDOAS measurements contain useful profile information, mainly at flight altitude and below the aircraft. Depending on wavelength and LOS used, the vertical resolution of the retrieved profiles is as good as 2 km near flight altitude. Above 14 km the profile information content of AMAXDOAS measurements is sparse. Airborne multiaxis measurements are thus a promising tool for atmospheric studies in the troposphere and the upper troposphere and lower stratosphere region.
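
    Retrievals of this kind rest on the standard DOAS relation, in which the measured differential optical depth is modeled as absorber contributions plus a low-order polynomial for broadband scattering (standard formalism, not notation from this paper):

      \ln \frac{I_{0}(\lambda)}{I(\lambda)} = \sum_{i} \sigma_{i}(\lambda)\, S_{i} + \sum_{p} a_{p} \lambda^{p}

    Here sigma_i is the absorption cross section of gas i and S_i the slant column density along one line of sight; the vertical profile is obtained by inverting the set of slant columns measured along the different LOSs.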

  18. Plume Tracker: A New Toolkit for the Mapping of Volcanic Plumes with Multispectral Thermal Infrared Remote Sensing

    NASA Astrophysics Data System (ADS)

    Realmuto, V. J.; Baxter, S.; Webley, P. W.

    2011-12-01

    Plume Tracker is the next generation of interactive plume mapping tools pioneered by MAP_SO2. First developed in 1995, MAP_SO2 has been used to study plumes at a number of volcanoes worldwide with data acquired by both airborne and space-borne instruments. The foundation of these tools is a radiative transfer (RT) model, based on MODTRAN, which we use as the forward model for our estimation of ground temperature and sulfur dioxide concentration. Plume Tracker retains the main functions of MAP_SO2, providing interactive tools to input radiance measurements and ancillary data, such as profiles of atmospheric temperature and humidity, to the retrieval procedure, generating the retrievals, and visualizing the resulting retrievals. Plume Tracker improves upon MAP_SO2 in the following areas: (1) an RT model based on an updated version of MODTRAN, (2) a retrieval procedure based on maximizing the vector projection of model spectra onto observed spectra, rather than minimizing the least-squares misfit between the model and observed spectra, (3) an ability to input ozone profiles to the RT model, (4) increased control over the vertical distribution of the atmospheric gas species used in the model, (5) a standard programmatic interface to the RT model code, based on the Component Object Model (COM) interface, which will provide access to any programming language that conforms to the COM standard, and (6) a new binning algorithm that decreases running time by exploiting spatial redundancy in the radiance data. Based on our initial testing, the binning algorithm can reduce running time by an order of magnitude. The Plume Tracker project is a collaborative effort between the Jet Propulsion Laboratory and Geophysical Institute (GI) of the University of Alaska-Fairbanks. Plume Tracker is integrated into the GI's operational plume dispersion modeling system and will ingest temperature and humidity profiles generated by the Weather Research and Forecasting model, together with plume height estimates from the Puff model. The access to timely forecasts of atmospheric conditions, together with the reductions in running time, will increase the utility of Plume Tracker in the Alaska Volcano Observatory's mission to mitigate volcanic hazards in Alaska and the Northern Pacific region.
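
    The difference between the two fit criteria mentioned, least-squares misfit versus vector projection of model spectra onto observed spectra, fits in a few lines of Python (the spectra are arbitrary illustrative arrays):

      import numpy as np

      observed = np.array([0.82, 0.75, 0.61, 0.58, 0.64])
      model    = np.array([0.80, 0.74, 0.63, 0.57, 0.66])

      misfit = np.sum((observed - model) ** 2)               # least squares: minimize
      projection = np.dot(observed, model) / (
          np.linalg.norm(observed) * np.linalg.norm(model))  # projection: maximize
      print(misfit, projection)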

  19. Measuring social exclusion in healthcare settings: a scoping review.

    PubMed

    O'Donnell, Patrick; O'Donovan, Diarmuid; Elmusharaf, Khalifa

    2018-02-02

    Social exclusion is a concept that has been widely debated in recent years; a particular focus of the discussion has been its significance in relation to health. The meanings of the phrase "social exclusion", and the closely associated term "social inclusion", are contested in the literature. Both of these concepts are important in relation to health and the area of primary healthcare in particular. Thus, several tools for the measurement of social exclusion or social inclusion status in health care settings have been developed. A scoping review of the peer-reviewed and grey literature was conducted to examine tools developed since 2000 that measure social exclusion or social inclusion. We focused on those measurement tools developed for use with individual patients in healthcare settings. Efforts were made to obtain a copy of each of the original tools, and all relevant background literature. All tools retrieved were compared in tables, and the specific domains that were included in each measure were tabulated. Twenty-two measurement tools were included in the final scoping review. The majority of these had been specifically developed for the measurement of social inclusion or social exclusion, but a small number were created for the measurement of other closely aligned concepts. The majority of the tools included were constructed for engaging with patients in mental health settings. The tools varied greatly in their design, the scoring systems and the ways they were administered. The domains covered by these tools varied widely and some of the tools were quite narrow in the areas of focus. A review of the definitions of both social inclusion and social exclusion also revealed the variations among the explanations of these complex concepts. There are several definitions of both social inclusion and social exclusion in use and they differ greatly in scope. While there are many tools that have been developed for measuring these concepts in healthcare settings, these do not have a primary healthcare focus. There is a need for the development of a tool for measuring social inclusion or social exclusion in primary healthcare settings.

  20. Programmatic access to data and information at the IRIS DMC via web services

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.

    2011-12-01

    The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC prior to shipping the data to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), with command-line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command-line access, through use in shell scripts and higher-level programming languages, to integration within complex data-processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned data. By using this library a developer will not need to learn the details of the service interfaces or understand the data formats returned. This library will be used to build the software bridge needed to request data and information from within MATLAB®. We also provide several client scripts written in Perl for the retrieval of waveform data, metadata and earthquake catalogs using command-line programs. For more information on the DMC's web services please visit http://www.iris.edu/ws/
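
    A hedged example of this style of programmatic access in Python: the endpoint below follows the FDSN web-service interface that now fronts the IRIS DMC holdings at service.iris.edu and superseded the ws-* services named above, so the exact URL and parameters should be treated as assumptions:

      import requests

      # Magnitude-7+ events for 2011 from an FDSN-style event service.
      resp = requests.get(
          "https://service.iris.edu/fdsnws/event/1/query",
          params={"format": "text", "minmagnitude": 7.0,
                  "starttime": "2011-01-01", "endtime": "2011-12-31"},
          timeout=60,
      )
      resp.raise_for_status()
      for line in resp.text.splitlines()[:5]:   # header plus first events
          print(line)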

  1. Are all metal-on-metal hip revision operations contributing to the National Joint Registry implant survival curves? : a study comparing the London Implant Retrieval Centre and National Joint Registry datasets.

    PubMed

    Sabah, S A; Henckel, J; Koutsouris, S; Rajani, R; Hothi, H; Skinner, J A; Hart, A J

    2016-01-01

    The National Joint Registry for England, Wales and Northern Ireland (NJR) has extended its scope to report on hospital, surgeon and implant performance. Data linkage of the NJR to the London Implant Retrieval Centre (LIRC) has previously evaluated data quality for hip primary procedures, but did not assess revision records. We analysed metal-on-metal hip revision procedures performed between 2003 and 2013. A total of 69,929 revision procedures from the NJR and 929 revised pairs of components from the LIRC were included. We were able to link 716 (77.1%) revision procedures on the NJR to the LIRC. This meant that 213 (22.9%) revision procedures at the LIRC could not be identified on the NJR. We found that 349 (37.6%) explants at the LIRC completed the full linkage process to both NJR primary and revision databases. Data completion was excellent (> 99.9%) for revision procedures reported to the NJR. This study has shown that only approximately one third of retrieved components at the LIRC contributed to survival curves on the NJR. We recommend prospective registry-retrieval linkage as a tool to feed back missing and erroneous data to the NJR and improve data quality. Prospective registry-retrieval linkage is a simple tool to evaluate and improve data quality on the NJR. ©2016 Sabah et al.

  2. Knowledge-base browsing: an application of hybrid distributed/local connectionist networks

    NASA Astrophysics Data System (ADS)

    Samad, Tariq; Israel, Peggy

    1990-08-01

    We describe a knowledge base browser based on a connectionist (or neural network) architecture that employs both distributed and local representations. The distributed representations are used for input and output, thereby enabling associative, noise-tolerant interaction with the environment. Internally, all representations are fully local. This simplifies weight assignment and facilitates network configuration for specific applications. In our browser, concepts and relations in a knowledge base are represented using "microfeatures." The microfeatures can encode semantic attributes, structural features, contextual information, etc. Desired portions of the knowledge base can then be associatively retrieved based on a structured cue. An ordered list of partial matches is presented to the user for selection. Microfeatures can also be used as "bookmarks": they can be placed dynamically at appropriate points in the knowledge base and subsequently used as retrieval cues. A proof-of-concept system has been implemented for an internally developed Honeywell-proprietary knowledge acquisition tool.

  3. Data preparation and evaluation techniques for x-ray diffraction microscopy

    DOE PAGES

    Steinbrener, Jan; Nelson, Johanna; Huang, Xiaojing; ...

    2010-01-01

    The post-experiment processing of X-ray Diffraction Microscopy data is often time-consuming and difficult. This is mostly due to the fact that even if a preliminary result has been reconstructed, there is no definitive answer as to whether or not a better result with more consistently retrieved phases can still be obtained. In addition, we show here that the first step in data analysis, the assembly of two-dimensional diffraction patterns from a large set of raw diffraction data, is crucial to obtaining reconstructions of highest possible consistency. We have developed software that automates this process and results in consistently accurate diffraction patterns. We have furthermore derived some criteria of validity for a tool commonly used to assess the consistency of reconstructions, the phase retrieval transfer function, and suggest a modified version that has improved utility for judging reconstruction quality.

  4. HONselect: multilingual assistant search engine operated by a concept-based interface system to decentralized heterogeneous sources.

    PubMed

    Boyer, C; Baujard, V; Scherrer, J R

    2001-01-01

    Any new Internet user might think that retrieving a relevant document is an easy task, especially given the wealth of sources available on this medium, but this is not the case. Even experienced users have difficulty formulating the right query to make the most of a search tool and efficiently obtain an accurate result. The goal of this work is to reduce the time and energy necessary to search for and locate medical and health information. To reach this goal we have developed HONselect [1]. The aim of HONselect is not only to improve efficiency in retrieving documents but to respond to an increased need for obtaining a selection of relevant and accurate documents from a breadth of various knowledge databases, including scientific bibliographical references, clinical trials, daily news, multimedia illustrations, conferences, forums, Web sites, clinical cases, and others. The authors based their approach on knowledge representation using the National Library of Medicine's Medical Subject Headings (NLM, MeSH) vocabulary and classification [2,3]. The innovation is to propose multilingual "one-stop searching" (one Web interface to databases currently in English, French and German) with full navigational and connectivity capabilities. The user may choose, from a given selection of related terms, the one that best suits the search, navigate in the term's hierarchical tree, and directly access a selection of documents from high-quality knowledge suppliers such as the MEDLINE database, the NLM's ClinicalTrials.gov server, the NewsPage's daily news, the HON's media gallery, conference listings and MedHunt's Web sites [4, 5, 6, 7, 8, 9]. HONselect, developed by HON, a non-profit organisation [10], is a freely available online multilingual tool based on the MeSH thesaurus to index, select, retrieve and display accurate, up-to-date, high-quality documents.

  5. Assisting Consumer Health Information Retrieval with Query Recommendations

    PubMed Central

    Zeng, Qing T.; Crowell, Jonathan; Plovnick, Robert M.; Kim, Eunjung; Ngo, Long; Dibble, Emily

    2006-01-01

    Objective: Health information retrieval (HIR) on the Internet has become an important practice for millions of people, many of whom have problems forming effective queries. We have developed and evaluated a tool to assist people in health-related query formation. Design: We developed the Health Information Query Assistant (HIQuA) system. The system suggests alternative/additional query terms related to the user's initial query that can be used as building blocks to construct a better, more specific query. The recommended terms are selected according to their semantic distance from the original query, which is calculated on the basis of concept co-occurrences in medical literature and log data as well as semantic relations in medical vocabularies. Measurements: An evaluation of the HIQuA system was conducted and a total of 213 subjects participated in the study. The subjects were randomized into 2 groups. One group was given query recommendations and the other was not. Each subject performed HIR for both a predefined and a self-defined task. Results: The study showed that providing HIQuA recommendations resulted in statistically significantly higher rates of successful queries (odds ratio = 1.66, 95% confidence interval = 1.16–2.38), although no statistically significant impact on user satisfaction or the users' ability to accomplish the predefined retrieval task was found. Conclusion: Providing semantic-distance-based query recommendations can help consumers with query formation during HIR. PMID:16221944
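
    A minimal sketch of a co-occurrence-based semantic distance of the general kind described, here in the normalized-Google-distance form with made-up corpus counts (the abstract does not give HIQuA's own formula):

      import math

      def cooccurrence_distance(n_a: int, n_b: int, n_ab: int, n_docs: int) -> float:
          """Distance from document frequencies n_a, n_b and co-occurrence n_ab."""
          if n_ab == 0:
              return float("inf")
          la, lb, lab = math.log(n_a), math.log(n_b), math.log(n_ab)
          return (max(la, lb) - lab) / (math.log(n_docs) - min(la, lb))

      # e.g. term A in 120k documents, term B in 40k, together in 25k of 20M
      print(cooccurrence_distance(120_000, 40_000, 25_000, 20_000_000))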

  6. User-oriented evaluation of a medical image retrieval system for radiologists.

    PubMed

    Markonis, Dimitrios; Holzer, Markus; Baroz, Frederic; De Castaneda, Rafael Luis Ruiz; Boyer, Célia; Langs, Georg; Müller, Henning

    2015-10-01

    This article reports the user-oriented evaluation of a text- and content-based medical image retrieval system. User tests with radiologists using a search system for images in the medical literature are presented. The goals of the tests are to assess the usability of the system and to identify system and interface aspects that need improvement as well as useful additions. Another objective is to investigate the system's added value to radiology information retrieval. The study provides an insight into required specifications and potential shortcomings of medical image retrieval systems through a concrete methodology for conducting user tests. User tests with a working image retrieval system of images from the biomedical literature were performed in an iterative manner: in each iteration the participants performed radiology information-seeking tasks, after which the system, as well as the user study design itself, was refined. During these tasks the interaction of the users with the system was monitored, usability aspects were measured, retrieval success rates were recorded and feedback was collected through survey forms. In total, 16 radiologists participated in the user tests. The success rates in finding relevant information were on average 87% and 78% for image and case retrieval tasks, respectively. The average time for a successful search was below 3 min in both cases. Users felt quickly comfortable with the novel techniques and tools (after 5 to 15 min), such as content-based image retrieval and relevance feedback. User satisfaction measures show a very positive attitude toward the system's functionalities, while the user feedback helped identify the system's weak points. The participants proposed several potentially useful new functionalities, such as filtering by imaging modality and search for articles using image examples. The iterative character of the evaluation helped to obtain diverse and detailed feedback on all system aspects. Radiologists become quickly familiar with the functionalities but have several comments on desired functionalities. The analysis of the results can potentially assist system refinement for future medical information retrieval systems. Moreover, the methodology presented, as well as the discussion of the limitations and challenges of such studies, can be useful for user-oriented medical image retrieval evaluation, as user-oriented evaluation of interactive systems is still only rarely performed. Such interactive evaluations can be limited in effort if done iteratively and can give many insights for developing better systems. Copyright © 2015. Published by Elsevier Ireland Ltd.

  7. PubChemSR: A search and retrieval tool for PubChem

    PubMed Central

    Hur, Junguk; Wild, David J

    2008-01-01

    Background Recent years have seen an explosion in the amount of publicly available chemical and related biological information. A significant step has been the emergence of PubChem, which contains property information for millions of chemical structures, and acts as a repository of compounds and bioassay screening data for the NIH Roadmap. There is a strong need for tools designed for scientists that permit easy download and use of these data. We present one such tool, PubChemSR. Implementation PubChemSR (Search and Retrieve) is a freely available desktop application written for Windows using Microsoft .NET that is designed to assist scientists in search, retrieval and organization of chemical and biological data from the PubChem database. It employs SOAP web services made available by NCBI for extraction of information from PubChem. Results and Discussion The program supports a wide range of searching techniques, including queries based on assay or compound keywords and chemical substructures. Results can be examined individually or downloaded and exported in batch for use in other programs such as Microsoft Excel. We believe that PubChemSR makes it straightforward for researchers to utilize the chemical, biological and screening data available in PubChem. We present several examples of how it can be used. PMID:18482452
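
    PubChemSR itself used NCBI's SOAP services; as a hedged modern equivalent, the same kind of compound retrieval can be done through PubChem's public PUG REST interface, for example in Python:

      import requests

      # Look up basic properties of a compound by name via PUG REST.
      name = "aspirin"
      url = (f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/{name}"
             "/property/MolecularFormula,MolecularWeight,CanonicalSMILES/JSON")
      resp = requests.get(url, timeout=30)
      resp.raise_for_status()
      for prop in resp.json()["PropertyTable"]["Properties"]:
          print(prop)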

  8. Measurement properties of self-report physical activity assessment tools in stroke: a protocol for a systematic review.

    PubMed

    Martins, Júlia Caetano; Aguiar, Larissa Tavares; Nadeau, Sylvie; Scianni, Aline Alvim; Teixeira-Salmela, Luci Fuscaldi; Faria, Christina Danielli Coelho de Morais

    2017-02-13

    Self-report physical activity assessment tools are commonly used for the evaluation of physical activity levels in individuals with stroke. A great variety of these tools have been developed and widely used in recent years, which justifies the need to examine their measurement properties and clinical utility. Therefore, the main objectives of this systematic review are to examine the measurement properties and clinical utility of self-report measures of physical activity and to discuss the strengths and limitations of the identified tools. A systematic review of studies that investigated the measurement properties and/or clinical utility of self-report physical activity assessment tools in stroke will be conducted. Electronic searches will be performed in five databases: Medical Literature Analysis and Retrieval System Online (MEDLINE) (PubMed), Excerpta Medica Database (EMBASE), Physiotherapy Evidence Database (PEDro), Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS) and Scientific Electronic Library Online (SciELO), followed by hand searches of the reference lists of the included studies. Two independent reviewers will screen all retrieved titles, abstracts, and full texts according to the inclusion criteria and will also extract the data. A third reviewer will be consulted to resolve any disagreements. A descriptive summary of the included studies will contain the design and participants, as well as the characteristics, measurement properties, and clinical utility of the self-report tools. The methodological quality of the studies will be evaluated using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist, and the clinical utility of the identified tools will be assessed considering predefined criteria. This systematic review will follow the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) statement. This systematic review will provide an extensive review of the measurement properties and clinical utility of self-report physical activity assessment tools used in individuals with stroke, which would benefit clinicians and researchers. PROSPERO CRD42016037146. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  9. CardioClassifier: disease- and gene-specific computational decision support for clinical genome interpretation.

    PubMed

    Whiffin, Nicola; Walsh, Roddy; Govind, Risha; Edwards, Matthew; Ahmad, Mian; Zhang, Xiaolei; Tayal, Upasana; Buchan, Rachel; Midwinter, William; Wilk, Alicja E; Najgebauer, Hanna; Francis, Catherine; Wilkinson, Sam; Monk, Thomas; Brett, Laura; O'Regan, Declan P; Prasad, Sanjay K; Morris-Rosendahl, Deborah J; Barton, Paul J R; Edwards, Elizabeth; Ware, James S; Cook, Stuart A

    2018-01-25

    Purpose: Internationally adopted variant interpretation guidelines from the American College of Medical Genetics and Genomics (ACMG) are generic and require disease-specific refinement. Here we developed CardioClassifier (http://www.cardioclassifier.org), a semiautomated decision-support tool for inherited cardiac conditions (ICCs). Methods: CardioClassifier integrates data retrieved from multiple sources with user-input case-specific information, through an interactive interface, to support variant interpretation. Combining disease- and gene-specific knowledge with variant observations in large cohorts of cases and controls, we refined 14 computational ACMG criteria and created three ICC-specific rules. Results: We benchmarked CardioClassifier on 57 expertly curated variants and show full retrieval of all computational data, concordantly activating 87.3% of rules. A generic annotation tool identified fewer than half as many clinically actionable variants (64/219 vs. 156/219, Fisher's P = 1.1 × 10⁻¹⁸), with important false positives, illustrating the critical importance of disease- and gene-specific annotations. CardioClassifier identified putatively disease-causing variants in 33.7% of 327 cardiomyopathy cases, comparable with leading ICC laboratories. Through addition of manually curated data, variants found in over 40% of cardiomyopathy cases are fully annotated, without requiring additional user-input data. Conclusion: CardioClassifier is an ICC-specific decision-support tool that integrates expertly curated computational annotations with case-specific data to generate fast, reproducible, and interactive variant pathogenicity reports, according to best practice guidelines. GENETICS in MEDICINE advance online publication, 25 January 2018; doi:10.1038/gim.2017.258.

  10. 3D visualization of molecular structures in the MOGADOC database

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen

    2010-08-01

    The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool for retrieving information about compounds that have been studied in the gas phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds, and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7,800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (a special editor and a Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.
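
    The internal-to-Cartesian conversion mentioned at the end is straightforward for small molecules; a minimal Python sketch for a bent triatomic, using literature-typical values for water as illustration:

      import math

      r_oh = 0.958                  # O-H bond length in angstroms (typical value)
      theta = math.radians(104.5)   # H-O-H bond angle

      # Place O at the origin, the first H along x, the second H in the xy-plane.
      o  = (0.0, 0.0, 0.0)
      h1 = (r_oh, 0.0, 0.0)
      h2 = (r_oh * math.cos(theta), r_oh * math.sin(theta), 0.0)
      for label, xyz in (("O", o), ("H", h1), ("H", h2)):
          print(label, xyz)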

  11. Earlinet single calculus chain: new products overview

    NASA Astrophysics Data System (ADS)

    D'Amico, Giuseppe; Mattis, Ina; Binietoglou, Ioannis; Baars, Holger; Mona, Lucia; Amato, Francesco; Kokkalis, Panos; Rodríguez-Gómez, Alejandro; Soupiona, Ourania; Kalliopi-Artemis, Voudouri

    2018-04-01

    The Single Calculus Chain (SCC) is an automatic and flexible tool for analyzing raw lidar data using EARLINET quality-assured retrieval algorithms. It has already been demonstrated that the SCC can retrieve reliable aerosol backscatter and extinction coefficient profiles for different lidar systems. In this paper we provide an overview of new SCC products, such as particle linear depolarization ratio, cloud masking and aerosol layering, enabling relevant improvements in atmospheric aerosol characterization.

  12. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    DTIC Science & Technology

    2014-03-27

    Results of image retrieval experiments over millions of images running on 20 virtual machines are shown. Subject terms: Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop.
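
    A hedged sketch of the technique named in the title, hierarchical k-means over image descriptors: descriptors are clustered recursively so that quantizing a query descends a shallow tree rather than scanning one flat codebook (the branching factor, depth and data are illustrative choices, not values from the report):

      import numpy as np
      from sklearn.cluster import KMeans

      def build_tree(descs, branch=4, depth=2):
          """Recursively cluster descriptors into a small vocabulary tree."""
          if depth == 0 or len(descs) < branch:
              return {"children": [], "centers": None}
          km = KMeans(n_clusters=branch, n_init=10).fit(descs)
          children = [build_tree(descs[km.labels_ == i], branch, depth - 1)
                      for i in range(branch)]
          return {"children": children, "centers": km.cluster_centers_}

      def quantize(tree, d):
          """Walk to the nearest leaf; the path acts as a visual-word id."""
          path = []
          while tree["children"]:
              i = int(np.argmin(np.linalg.norm(tree["centers"] - d, axis=1)))
              path.append(i)
              tree = tree["children"][i]
          return tuple(path)

      rng = np.random.default_rng(0)
      tree = build_tree(rng.normal(size=(500, 32)))
      print(quantize(tree, rng.normal(size=32)))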

  13. Case-based fracture image retrieval.

    PubMed

    Zhou, Xin; Stern, Richard; Müller, Henning

    2012-05-01

    Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated. One used a dense 40 × 40 pixel grid sampling, and the second one used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors on the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between a SIFT-based system and GIFT is performed. A first comparison of sampling strategies using SIFT features shows that dense sampling using a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, three unsupervised feature selection strategies were evaluated. A grid parameter search is applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) obtains the best performance for all three strategies. Increasing the number of clusters can also improve retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
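
    A minimal sketch of the per-case variance strategy: divide an image into a 40 × 40 grid of blocks, describe each block, and keep the half of the blocks whose descriptors vary most across the case's images. Here a simple intensity histogram stands in for the SIFT descriptor, and the function names are illustrative.

      import numpy as np

      def block_descriptors(image, grid=40, bins=16):
          """One histogram descriptor per block of a grid x grid partition
          of a 2D grayscale image (a stand-in for dense-grid SIFT)."""
          h, w = image.shape
          bh, bw = h // grid, w // grid
          feats = []
          for i in range(grid):
              for j in range(grid):
                  block = image[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                  hist, _ = np.histogram(block, bins=bins, range=(0, 255), density=True)
                  feats.append(hist)
          return np.array(feats)                              # (1600, bins)

      def salient_blocks(images, keep=800):
          """Rank blocks by descriptor variance across a case's (same-size) images;
          keep=800 mirrors the 'half of the regions' finding above."""
          stacks = np.stack([block_descriptors(im) for im in images])  # (n, 1600, bins)
          variance = stacks.var(axis=0).sum(axis=1)                    # per-block score
          return np.argsort(variance)[-keep:]                          # indices of kept blocks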

  14. Mobile medical image retrieval

    NASA Astrophysics Data System (ADS)

    Duc, Samuel; Depeursinge, Adrien; Eggel, Ivan; Müller, Henning

    2011-03-01

    Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance, mainly as a research domain, over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality approaching that of desktop clients, mobile computing has gained ground and many applications have been explored. This creates a new field of mobile information search and access, and in this context images can play an important role, as they often allow complex scenarios to be understood much more quickly and easily than free text. Mobile information retrieval in general has skyrocketed over the past year, with many new applications and tools being developed and all sorts of interfaces being adapted to mobile clients. This article describes the constraints of an information retrieval system covering visual and textual retrieval from the medical literature of BioMedCentral and of the RSNA journals Radiology and Radiographics. Solutions for mobile data access, with an example on an iPhone in a web-based environment, are presented, as iPhones are frequently used and the operating system is bound to become the most frequent smartphone operating system in 2011. A web-based scenario was chosen to allow for use by other smartphone platforms such as Android as well. Constraints of small screens and navigation with touch screens are taken into account in the development of the application. A hybrid approach had to be taken to allow pictures to be taken with the phone camera and uploaded for visual similarity search, as most smartphone producers block this functionality for web applications. Mobile information access, and in particular access to images, can be surprisingly efficient and effective on smaller screens. Images can be read on screen much faster, and the relevance of documents can be identified quickly through the use of images contained in the text. Problems with the many, often incompatible mobile platforms were discovered and are listed in the text. Mobile information access is a quickly growing domain, and its constraints also need to be taken into account for image retrieval. The demonstrated access to the medical literature is most relevant, as the medical literature and its images are clearly the largest knowledge source in the medical field.

  15. Data standards for clinical research data collection forms: current status and challenges.

    PubMed

    Richesson, Rachel L; Nadkarni, Prakash

    2011-05-01

    Case report forms (CRFs) are used for structured-data collection in clinical research studies. Existing CRF-related standards encompass structural features of forms and data items, content standards, and specifications for using terminologies. This paper reviews existing standards and discusses their current limitations. Because clinical research is highly protocol-specific, forms-development processes are more easily standardized than is CRF content. Tools that support retrieval and reuse of existing items will enable standards adoption in clinical research applications. Such tools will depend upon formal relationships between items and terminological standards. Future standards adoption will depend upon standardized approaches for bridging generic structural standards and domain-specific content standards. Clinical research informatics can help define tools requirements in terms of workflow support for research activities, reconcile the perspectives of varied clinical research stakeholders, and coordinate standards efforts toward interoperability across healthcare and research data collection.

  16. Bioinformatics and molecular modeling in glycobiology

    PubMed Central

    Schloissnig, Siegfried

    2010-01-01

    The field of glycobiology is concerned with the study of the structure, properties, and biological functions of the family of biomolecules called carbohydrates. Bioinformatics for glycobiology is a particularly challenging field, because carbohydrates exhibit a high structural diversity and their chains are often branched. Significant improvements in experimental analytical methods over recent years have led to a tremendous increase in the amount of carbohydrate structure data generated. Consequently, the availability of databases and tools to store, retrieve and analyze these data in an efficient way is of fundamental importance to progress in glycobiology. In this review, the various graphical representations and sequence formats of carbohydrates are introduced, and an overview of newly developed databases, the latest developments in sequence alignment and data mining, and tools to support experimental glycan analysis are presented. Finally, the field of structural glycoinformatics and molecular modeling of carbohydrates, glycoproteins, and protein–carbohydrate interaction are reviewed. PMID:20364395

  17. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information in a structured format to users ranging from the bench chemist to the biologist, and from the medical practitioner to the pharmaceutical scientist. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  18. On search guide phrase compilation for recommending home medical products.

    PubMed

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
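
    The hint-generation step lends itself to a simple similarity ranking. A sketch under the assumption that each candidate class can be summarized by a short text description; the class names and descriptions below are invented for illustration, and the authors' actual method draws on nursing knowledge and information retrieval techniques rather than this exact weighting.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Hypothetical classes with short keyword descriptions.
      classes = {
          "mobility aids": "walker cane wheelchair transfer gait walking balance",
          "bathing safety": "bath shower grab bar bench slip tub skin hygiene",
          "respiratory support": "oxygen breathing nebulizer inhaler airway copd",
      }

      def suggest_classes(phrase, top_k=2):
          """Rank candidate classes for a search-guide phrase; a nurse makes
          the final multi-label assignment from these hints."""
          names = list(classes)
          vec = TfidfVectorizer().fit(list(classes.values()) + [phrase])
          sims = cosine_similarity(vec.transform([phrase]),
                                   vec.transform(list(classes.values())))[0]
          return sorted(zip(names, sims), key=lambda p: -p[1])[:top_k]

      print(suggest_classes("trouble getting in and out of the tub"))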

  19. AMUC: Associated Motion capture User Categories.

    PubMed

    Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G

    2009-07-13

    The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.

  20. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy-to-perform changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous database and communication protocols.

  1. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based (WWB) filtering technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), Greece (Hydrological Observatory of Athens) and Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique.
    References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-Based Wavelet De-noising Method for Time Series Analysis, Entropy, 11, 1123-1148, doi:10.3390/e11041123.
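
    For orientation, a generic wavelet soft-threshold de-noiser is sketched below. Note that the WWB filter proper combines a Wiener filter with the entropy-based wavelet method of Sang et al. (2009), which is more involved than this stand-in.

      import numpy as np
      import pywt

      def wavelet_denoise(series, wavelet="db4", level=3):
          """Generic soft-threshold wavelet de-noising of a 1D time series."""
          coeffs = pywt.wavedec(series, wavelet, level=level)
          # Universal threshold estimated from the finest-scale coefficients.
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(len(series)))
          coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                  for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[:len(series)]

      t = np.linspace(0, 8 * np.pi, 512)
      noisy = np.sin(t) + 0.3 * np.random.randn(t.size)   # synthetic "retrieval"
      smooth = wavelet_denoise(noisy)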

  2. An Observing System Simulation Experiment (OSSE) Investigating the OMI Aerosol Products Using Simulated Aerosol and Atmospheric Fields from the NASA GEOS-5 Model

    NASA Astrophysics Data System (ADS)

    Colarco, P. R.; Gasso, S.; Jethva, H. T.; Buchard, V.; Ahn, C.; Torres, O.; daSilva, A.

    2016-12-01

    Output from the NASA Goddard Earth Observing System, version 5 (GEOS-5) Earth system model is used to simulate the top-of-atmosphere 354 and 388 nm radiances observed by the Ozone Monitoring Instrument (OMI) onboard the Aura spacecraft. The principal purpose of developing this simulator tool is to compute from the modeled fields the so-called OMI Aerosol Index (AI), which is a more fundamental retrieval product than higher-level products such as the aerosol optical depth (AOD) or absorbing aerosol optical depth (AAOD). This lays the groundwork for eventually developing a capability to assimilate either the OMI AI or its radiances, which would provide further constraint on aerosol loading and absorption properties for global models. We extend the use of the simulator capability to understand the nature of the OMI aerosol retrieval algorithms themselves in an Observing System Simulation Experiment (OSSE). The simulated radiances are used to calculate the AI from the modeled fields. These radiances are also provided to the OMI aerosol algorithms, which return their own retrievals of the AI, AOD, and AAOD. Our assessment reveals that the OMI-retrieved AI can be mostly harmonized with the model-derived AI given the same radiances, provided a common surface pressure field is assumed. This is important because the operational OMI algorithms presently assume a fixed pressure field, while the contribution of molecular scattering to the actual OMI signal in fact responds to the actual atmospheric pressure profile, which is accounted for in our OSSE by using GEOS-5 produced atmospheric reanalyses. Other differences between the model and OMI AI are discussed, and we present a preliminary assessment of the OMI AOD and AAOD products with respect to the known inputs from the GEOS-5 simulation.
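
    The aerosol index itself is commonly written as a residue between the measured and a Rayleigh-only calculated spectral contrast of the 354/388 nm radiance pair. A sketch of that textbook form; the operational OMI algorithm involves additional machinery (e.g. surface-reflectivity matching) that is not reproduced here.

      import numpy as np

      def aerosol_index(i354_meas, i388_meas, i354_rayleigh, i388_rayleigh):
          """UV aerosol index as a -100 * log10 residue of spectral contrast."""
          meas = np.log10(i354_meas / i388_meas)
          calc = np.log10(i354_rayleigh / i388_rayleigh)
          return -100.0 * (meas - calc)

      # Positive AI: UV-absorbing aerosol (smoke, dust);
      # near zero or negative: clouds and non-absorbing aerosol.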

  3. MeSHy: Mining unanticipated PubMed information using frequencies of occurrences and concurrences of MeSH terms.

    PubMed

    Theodosiou, T; Vizirianakis, I S; Angelis, L; Tsaftaris, A; Darzentas, N

    2011-12-01

    PubMed is the most widely used database of biomedical literature. To the detriment of the user though, the ranking of the documents retrieved for a query is not content-based, and important semantic information in the form of assigned Medical Subject Headings (MeSH) terms is not readily presented or productively utilized. The motivation behind this work was the discovery of unanticipated information through the appropriate ranking of MeSH term pairs and, indirectly, documents. Such information can be useful in guiding novel research and following promising trends. A web-based tool, called MeSHy, was developed implementing a mainly statistical algorithm. The algorithm takes into account the frequencies of occurrences, concurrences, and the semantic similarities of MeSH terms in retrieved PubMed documents to create MeSH term pairs. These are then scored and ranked, focusing on their unexpectedly frequent or infrequent occurrences. MeSHy presents results through an online interactive interface facilitating further manipulation through filtering and sorting. The results themselves include the MeSH term pairs, along with MeSH categories, the score, and document IDs, all of which are hyperlinked for convenience. To highlight the applicability of the tool, we report the findings of an expert in the pharmacology field on querying the molecularly-targeted drug imatinib and nutrition-related flavonoids. To the best of our knowledge, MeSHy is the first publicly available tool able to directly provide such a different perspective on the complex nature of published work. Implemented in Perl and served by Apache2 at http://bat.ina.certh.gr/tools/meshy/ with all major browsers supported.
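
    A minimal sketch of the frequency/concurrence idea: score MeSH term pairs by how much more (or less) often they co-occur in the retrieved documents than their individual frequencies predict. MeSHy's actual score also folds in the semantic similarity of the terms, which is omitted in this stand-in.

      import math
      from itertools import combinations
      from collections import Counter

      def score_pairs(doc_terms):
          """doc_terms: list of sets of MeSH terms, one set per PubMed document.
          Returns pairs ranked by how surprising their co-occurrence count is."""
          n = len(doc_terms)
          term_df = Counter(t for doc in doc_terms for t in doc)
          pair_df = Counter(p for doc in doc_terms
                              for p in combinations(sorted(doc), 2))
          scores = {}
          for (a, b), observed in pair_df.items():
              expected = term_df[a] * term_df[b] / n   # independence assumption
              scores[(a, b)] = math.log2(observed / expected)
          return sorted(scores.items(), key=lambda kv: -abs(kv[1]))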

  4. Application of a new vertical profiling tool (ESASS) for sampling groundwater quality during hollow-stem auger drilling

    USGS Publications Warehouse

    Harte, Philip T.; Flanagan, Sarah M.

    2011-01-01

    A new tool called ESASS (Enhanced Screen Auger Sampling System) was developed by the U.S. Geological Survey. The use of ESASS, because of its unique U.S. patent design (U.S. patent no. 7,631,705 B1), allows for the collection of representative, depth-specific groundwater samples (vertical profiling) in a quick and efficient manner using a 0.305-m long screen auger during hollow-stem auger drilling. With ESASS, the water column in the flights above the screen auger is separated from the water in the screen auger by a specially designed removable plug and collar. The tool fits inside an auger of standard inner diameter (82.55 mm). The novel design of the system constituted by the plug, collar, and A-rod allows the plug to be retrieved using conventional drilling A-rods. After retrieval, standard-diameter (50.8 mm) observation wells can be installed within the hollow-stem augers. Testing of ESASS was conducted at one waste-disposal site with tetrachloroethylene (PCE) contamination and at two reference sites with no known waste-disposal history. All three sites have similar geology and are underlain by glacial, stratified-drift deposits. For the applications tested, ESASS proved to be a useful tool in vertical profiling of groundwater quality. At the waste site, PCE concentrations measured with ESASS profiling at several depths were comparable (relative percent difference <25%) to PCE concentrations sampled from wells. Vertical profiling with ESASS at the reference sites illustrated the vertical resolution achievable in the profile system; shallow groundwater quality varied by a factor of five in concentration of some constituents (nitrate and nitrite) over short (0.61 m) distances.

  6. The Comprehensive Microbial Resource.

    PubMed

    Peterson, J D; Umayam, L A; Dickinson, T; Hickey, E K; White, O

    2001-01-01

    One challenge presented by large-scale genome sequencing efforts is effective display of uniform information to the scientific community. The Comprehensive Microbial Resource (CMR) contains robust annotation of all complete microbial genomes and allows for a wide variety of data retrievals. The bacterial information has been placed on the Web at http://www.tigr.org/CMR for retrieval using standard web browsing technology. Retrievals can be based on protein properties such as molecular weight or hydrophobicity, GC-content, functional role assignments and taxonomy. The CMR also has special web-based tools to allow data mining using pre-run homology searches, whole genome dot-plots, batch downloading and traversal across genomes using a variety of datatypes.

  7. Issues associated with manipulator-based waste retrieval from Hanford underground storage tanks with a preliminary review of commercial concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berglin, E.J.

    1996-09-17

    Westinghouse Hanford Company (WHC) is exploring commercial methods for retrieving waste from the underground storage tanks at the Hanford site in south central Washington state. WHC needs data on commercial retrieval systems equipment in order to make programmatic decisions for waste retrieval. Full system testing of retrieval processes is to be demonstrated in phases through September 1997 in support of the Acquire Commercial Technology for Retrieval (ACTR) program and the Hanford Tanks Initiative (HTI). One of the important parts of the integrated testing will be the deployment of retrieval tools using manipulator-based systems. WHC requires an assessment of a number of commercial deployment systems that have been identified by the ACTR program as good candidates to be included in an integrated testing effort. Included in this assessment should be an independent evaluation of manipulator tests performed to date, so that WHC can construct an integrated test based on these systems. The objectives of this document are to provide a description of the need, requirements, and constraints for a manipulator-based retrieval system; to evaluate manipulator-based concepts and testing performed to date by a number of commercial organizations; and to identify issues to be resolved through testing and/or analysis for each concept.

  8. Ground-Based Correction of Remote-Sensing Spectral Imagery

    NASA Technical Reports Server (NTRS)

    Alder-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander

    2007-01-01

    Software has been developed for an improved method of correcting for the atmospheric optical effects (primarily, effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, there has been developed a version of FLAASH that utilizes the retrieved atmospheric parameters to process spectral image data.

  9. Java-based browsing, visualization and processing of heterogeneous medical data from remote repositories.

    PubMed

    Masseroli, M; Bonacina, S; Pinciroli, F

    2004-01-01

    Recent developments in distributed information technologies and Java programming make it possible to employ them in the medical arena as well, to support the retrieval, integration and evaluation of heterogeneous data and multimodal images in a web browser environment. With this aim, we used them to implement a client-server architecture based on software agents. The client side is a Java applet running in a web browser and providing a friendly medical user interface to browse and visualize different patient and medical test data, integrating them properly. The server side manages secure connections and queries to heterogeneous remote databases and file systems containing patient personal and clinical data. Based on the Java Advanced Imaging API, processing and analysis tools were developed to support the evaluation of remotely retrieved bioimages through the quantification of their features in different regions of interest. The Java platform independence allows the centralized management of the implemented prototype and its deployment to each site where an intranet or internet connection is available. By giving healthcare providers effective support for comprehensively browsing, visualizing and evaluating medical images and records located in different remote repositories, the developed prototype can represent an important aid in providing more efficient diagnoses and medical treatments.

  10. The Qatar National Historic Environment Record: a Platform for the Development of a Fully-Integrated Cultural Heritage Management Application

    NASA Astrophysics Data System (ADS)

    Cuttler, R. T. H.; Tonner, T. W. W.; Al-Naimi, F. A.; Dingwall, L. M.; Al-Hemaidi, N.

    2013-07-01

    The development of the Qatar National Historic Environment Record (QNHER) by the Qatar Museums Authority and the University of Birmingham in 2008 was based on a customised, bilingual Access database and ArcGIS. While both platforms are stable and well supported, neither was designed for the documentation and retrieval of cultural heritage data. As a result, it was decided to develop a custom application using open-source code. The core module of this application is now complete and is orientated towards the storage and retrieval of geospatial heritage data for the curation of heritage assets. Based on MIDAS Heritage data standards and regionally relevant thesauri, it is a truly bilingual system. Significant attention has been paid to the user interface, which is user-friendly and intuitive. Based on a suite of web services and accessed through a web browser, the system makes full use of internet resources such as Google Maps and Bing Maps. The application avoids long-term vendor 'tie-ins' and, as a fully integrated data management system, is now an important tool for both cultural resource managers and heritage researchers in Qatar.

  11. What can graph theory tell us about word learning and lexical retrieval?

    PubMed

    Vitevitch, Michael S

    2008-04-01

    Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing.
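
    The reported measures are straightforward to compute with standard graph libraries. A toy sketch in the spirit of the study, using a crude letter-level stand-in for true phonological neighbors (one-phoneme substitution, addition, or deletion):

      import networkx as nx

      words = ["cat", "bat", "hat", "rat", "cut", "cot", "at", "bait", "dog"]

      def one_phoneme_apart(w1, w2):
          # Letter-level stand-in for a real phonemic comparison.
          if len(w1) == len(w2):
              return sum(a != b for a, b in zip(w1, w2)) == 1
          if abs(len(w1) - len(w2)) == 1:
              longer, shorter = (w1, w2) if len(w1) > len(w2) else (w2, w1)
              return any(longer[:i] + longer[i+1:] == shorter
                         for i in range(len(longer)))
          return False

      G = nx.Graph()
      G.add_nodes_from(words)
      G.add_edges_from((a, b) for i, a in enumerate(words)
                              for b in words[i+1:] if one_phoneme_apart(a, b))

      # Small-world measures are defined on the connected component.
      giant = G.subgraph(max(nx.connected_components(G), key=len))
      print("clustering coefficient:", nx.average_clustering(giant))
      print("average path length:", nx.average_shortest_path_length(giant))
      print("degree sequence:", sorted(dict(G.degree()).values(), reverse=True))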

  12. Advanced Techniques for Removal of Retrievable Inferior Vena Cava Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Bogdan; Haskal, Ziv J., E-mail: ziv2@mac.com

    Inferior vena cava (IVC) filters have proven valuable for the prevention of primary or recurrent pulmonary embolism in selected patients with or at high risk for venous thromboembolic disease. Their use has become commonplace, and the numbers implanted increase annually. During the last 3 years, in the United States, the percentage of annually placed optional filters, i.e., filters than can remain as permanent filters or potentially be retrieved, has consistently exceeded that of permanent filters. In parallel, the complications of long- or short-term filtration have become increasingly evident to physicians, regulatory agencies, and the public. Most filter removals are uneventful, with a high degree of success. When routine filter-retrieval techniques prove unsuccessful, progressively more advanced tools and skill sets must be used to enhance filter-retrieval success. These techniques should be used with caution to avoid damage to the filter or cava during IVC retrieval. This review describes the complex techniques for filter retrieval, including use of additional snares, guidewires, angioplasty balloons, and mechanical and thermal approaches, as well as illustrates their specific application.

  13. Tomato Expression Database (TED): a suite of data presentation and analysis tools

    PubMed Central

    Fei, Zhangjun; Tang, Xuemei; Alba, Rob; Giovannoni, James

    2006-01-01

    The Tomato Expression Database (TED) includes three integrated components. The Tomato Microarray Data Warehouse serves as a central repository for raw gene expression data derived from the public tomato cDNA microarray. In addition to expression data, TED stores experimental design and array information in compliance with the MIAME guidelines and provides web interfaces for researchers to retrieve data for their own analysis and use. The Tomato Microarray Expression Database contains normalized and processed microarray data for ten time points with nine pair-wise comparisons during fruit development and ripening in a normal tomato variety and nearly isogenic single gene mutants impacting fruit development and ripening. Finally, the Tomato Digital Expression Database contains raw and normalized digital expression (EST abundance) data derived from analysis of the complete public tomato EST collection containing >150 000 ESTs derived from 27 different non-normalized EST libraries. This last component also includes tools for the comparison of tomato and Arabidopsis digital expression data. A set of query interfaces and analysis and visualization tools have been developed and incorporated into TED, which aid users in identifying and deciphering biologically important information from our datasets. TED can be accessed at http://ted.bti.cornell.edu. PMID:16381976

  15. Search for 5'-leader regulatory RNA structures based on gene annotation aided by the RiboGap database.

    PubMed

    Naghdi, Mohammad Reza; Smail, Katia; Wang, Joy X; Wade, Fallou; Breaker, Ronald R; Perreault, Jonathan

    2017-03-15

    The discovery of noncoding RNAs (ncRNAs) and their importance for gene regulation led us to develop bioinformatics tools to pursue the discovery of novel ncRNAs. Finding ncRNAs de novo is challenging, first due to the difficulty of retrieving large numbers of sequences for given gene activities, and second due to the exponential computational demands of large-scale comparative genomics. Recently, several tools for the prediction of conserved RNA secondary structure were developed, but many of them are not designed to uncover new ncRNAs, or are too slow for conducting analyses on a large scale. Here we present various approaches using the database RiboGap as a primary tool for finding known ncRNAs and for uncovering simple sequence motifs with regulatory roles. This database can also be used to easily extract intergenic sequences of eubacteria and archaea to find conserved RNA structures upstream of given genes. We also show how to extend analysis further to choose the best candidate ncRNAs for experimental validation.
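
    The extraction step is conceptually simple. A sketch of pulling the 5'-leader region upstream of an annotated gene, assuming 0-based half-open coordinates and taking the reverse complement for minus-strand genes; this is illustrative only, not RiboGap's interface.

      COMP = str.maketrans("ACGT", "TGCA")

      def upstream_region(genome, gene_start, gene_end, strand, length=200):
          """Return up to `length` nt immediately upstream of a gene."""
          if strand == "+":
              return genome[max(0, gene_start - length):gene_start]
          # Minus strand: upstream lies after gene_end; reverse complement it.
          return genome[gene_end:gene_end + length].translate(COMP)[::-1]

      genome = "ATGCGTACGTTAGCATTGACCATGAAACCCGGGTTTAGC"
      print(upstream_region(genome, 20, 29, "+", length=10))  # leader of a toy gene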

  16. Comparing Noun Phrasing Techniques for Use with Medical Digital Library Tools.

    ERIC Educational Resources Information Center

    Tolle, Kristin M.; Chen, Hsinchun

    2000-01-01

    Describes a study that investigated the use of a natural language processing technique called noun phrasing to determine whether it is a viable technique for medical information retrieval. Evaluates four noun phrase generation tools for their ability to isolate noun phrases from medical journal abstracts, focusing on precision and recall.…

  17. Open, Cross Platform Chemistry Application Unifying Structure Manipulation, External Tools, Databases and Visualization

    DTIC Science & Technology

    2012-11-27

    with powerful analysis tools and an informatics approach leveraging best-of-breed NoSQL databases, in order to store, search and retrieve relevant... dictionaries, and JavaScript also has good support. The MongoDB project [15] was chosen as a scalable NoSQL data store for the cheminformatics components

  18. Cognitive Interviewing: A Qualitative Tool for Improving Questionnaires in Sport Science

    ERIC Educational Resources Information Center

    Dietrich, Hanno; Ehrlenspiel, Felix

    2010-01-01

    Cognitive models postulate that respondents to a questionnaire follow a four-stage process when answering a question: comprehension, memory retrieval, decision, and response. Cognitive interviewing is a qualitative tool to gain insight into this process by means of letting respondents think aloud or asking them specific questions (Willis, 2005).…

  19. Sentence-Based Metadata: An Approach and Tool for Viewing Database Designs.

    ERIC Educational Resources Information Center

    Boyle, John M.; Gunge, Jakob; Bryden, John; Librowski, Kaz; Hanna, Hsin-Yi

    2002-01-01

    Describes MARS (Museum Archive Retrieval System), a research tool which enables organizations to exchange digital images and documents by means of a common thesaurus structure, and merge the descriptive data and metadata of their collections. Highlights include theoretical basis; searching the MARS database; and examples in European museums.…

  20. Enriching the Web of Data with Educational Information Using We-Share

    ERIC Educational Resources Information Center

    Ruiz-Calleja, Adolfo; Asensio-Pérez, Juan I.; Vega-Gorgojo, Guillermo; Gómez-Sánchez, Eduardo; Bote-Lorenzo, Miguel L.; Alario-Hoyos, Carlos

    2017-01-01

    This paper presents We-Share, a social annotation application that enables educators to publish and retrieve information about educational ICT tools. As a distinctive characteristic, We-Share provides educators data about educational tools already available on the Web of Data while allowing them to enrich such data with their experience using…

  1. Comprehensive Analysis of Semantic Web Reasoners and Tools: A Survey

    ERIC Educational Resources Information Center

    Khamparia, Aditya; Pandey, Babita

    2017-01-01

    Ontologies are emerging as best representation techniques for knowledge based context domains. The continuing need for interoperation, collaboration and effective information retrieval has lead to the creation of semantic web with the help of tools and reasoners which manages personalized information. The future of semantic web lies in an ontology…

  2. Community socioeconomic information system. [CD–ROM].

    Treesearch

    E.M. Donoghue; N.L. Sutton

    2006-01-01

    The Community Socioeconomic Information System (CSIS) is a tool that allows users to retrieve 1990 and 2000 U.S. census data to examine conditions and trends for communities in western Washington, western Oregon, and northern California. The tool includes socioeconomic data for 1,314 communities in the entire region, including incorporated and unincorporated places....

  3. PubFinder: a tool for improving retrieval rate of relevant PubMed abstracts.

    PubMed

    Goetz, Thomas; von der Lieth, Claus-Wilhelm

    2005-07-01

    Since it is becoming increasingly laborious to manually extract useful information embedded in the ever-growing volumes of literature, automated intelligent text analysis tools are becoming more and more essential to assist in this task. PubFinder (www.glycosciences.de/tools/PubFinder) is a publicly available web tool designed to improve the retrieval rate of scientific abstracts relevant for a specific scientific topic. All that is required is the selection of a representative set of abstracts that are central to the scientific topic; no special knowledge of the query syntax is necessary. Based on the selected abstracts, a list of discriminating words is automatically calculated, which is subsequently used for scoring all defined PubMed abstracts for their probability of belonging to the defined scientific topic. This results in a hit-list of references in descending order of their likelihood score. The algorithms and procedures implemented in PubFinder facilitate the perpetual task for every scientist of staying up-to-date with current publications dealing with a specific subject in biomedicine.
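
    A minimal sketch of the two steps described: derive discriminating words from seed abstracts against a background corpus, then score unseen abstracts with them. The smoothed log-likelihood weighting below is an assumption, not PubFinder's exact formula.

      import math
      from collections import Counter

      def word_freqs(abstracts):
          counts = Counter(w for a in abstracts for w in a.lower().split())
          return counts, sum(counts.values())

      def discriminating_words(topic_abs, background_abs, top_n=50):
          """Words whose smoothed frequency ratio topic/background is largest."""
          tc, tn = word_freqs(topic_abs)
          bc, bn = word_freqs(background_abs)
          llr = {w: math.log(((tc[w] + 1) / (tn + 1)) / ((bc[w] + 1) / (bn + 1)))
                 for w in tc}
          return dict(sorted(llr.items(), key=lambda kv: -kv[1])[:top_n])

      def score_abstract(abstract, weights):
          """Likelihood-style score: sum of discriminating-word weights."""
          return sum(weights.get(w, 0.0) for w in abstract.lower().split())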

  4. Oceans 2.0: a Data Management Infrastructure as a Platform

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Guillemot, E.

    2012-04-01

    The Data Management and Archiving System (DMAS) serving the needs of a number of undersea observing networks such as VENUS and NEPTUNE Canada was conceived from the beginning as a service-oriented infrastructure. Its core functional elements (data acquisition, transport, archiving, retrieval and processing) can interact with the outside world using web services, which can be exploited by a variety of higher-level applications. Over the years, DMAS has developed Oceans 2.0, an environment where these techniques are implemented. The environment thereby becomes a platform in that it allows for easy addition of new and advanced features that build upon the tools at the core of the system. The applications that have been developed include: data search and retrieval, with options such as data product generation and data decimation or averaging; dynamic infrastructure description (search across all observatory metadata) and visualization; and data visualization, including dynamic scalar data plots and integrated fast video segment search and viewing. Building upon these basic applications are new concepts from the Web 2.0 world that allow people equipped only with a web browser to collaborate and contribute their findings or work results to the wider community. Examples include: the addition of metadata tags to any part of the infrastructure or to any data item (annotations); the ability to edit, execute, share, and distribute Matlab code online from a simple web browser, with specific calls within the code to access data; the ability to interactively and graphically build pipeline processing jobs that can be executed on the cloud; web-based, interactive instrument control tools that allow users to truly share the use of the instruments and communicate with each other; and, last but not least, a public tool in the form of a game that crowd-sources the inventory of the underwater video archive content, thereby adding tremendous amounts of metadata. Beyond the functionality presently available to users, a number of the web services dedicated to data access are being exposed for anyone to use. This allows not only for ad hoc data access by individuals who need non-interactive access, but will foster the development of new applications in a variety of areas.

  5. Physicians' perceptions of the impact of the EHR on the collection and retrieval of psychosocial information in outpatient diabetes care.

    PubMed

    Senteio, Charles; Veinot, Tiffany; Adler-Milstein, Julia; Richardson, Caroline

    2018-05-01

    Psychosocial information informs clinical decisions by providing crucial context for patients' barriers to recommended self-care; this is especially important in outpatient diabetes care because outcomes are largely dependent upon self-care behavior. Little is known about provider perceptions of the use of psychosocial information. Further, while EHRs have dramatically changed how providers interact with patient health information, the EHR's role in the collection and retrieval of psychosocial information is not understood. We designed a qualitative study, using semi-structured interviews to investigate physicians' (N = 17) perspectives on the impact of the EHR on the use of psychosocial information in outpatient Type II diabetes care decisions. We selected the constant comparative method to analyze the data. Psychosocial information is perceived as dissimilar from other clinical information such as HbA1c and prescribed medications. Its narrative form conveys the patient's story, which elucidates barriers to following self-care recommendations. The narrative is abstract and requires interpretation of patterns. Psychosocial information is also circumstantial; hence, the patient's context determines its influence on self-care. Furthermore, EHRs can impair the collection of psychosocial information because the designs of EHR tools make it difficult to document, search for, and retrieve it. Templates do not support capturing the patient's 'story', and using free-text fields is time consuming. Providers therefore had low use of, and low confidence in the accuracy of, psychosocial information in the EHR. Workflows and EHR tools should be re-designed to better support psychosocial information collection and retrieval. Tools should enable recording and summarization of the patient's story, and the rationale for treatment decisions.

  6. Using Radar, Lidar, and Radiometer measurements to Classify Cloud Type and Study Middle-Level Cloud Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    2010-06-29

    The project is mainly focused on the characterization of cloud macrophysical and microphysical properties, especially for mixed-phase clouds and middle-level ice clouds, by combining radar, lidar, and radiometer measurements available from the ACRF sites. First, an advanced mixed-phase cloud retrieval algorithm will be developed to cover all mixed-phase clouds observed at the ACRF NSA site. The algorithm will be applied to the ACRF NSA observations to generate a long-term arctic mixed-phase cloud product for model validations and arctic mixed-phase cloud process studies. To improve the representation of arctic mixed-phase clouds in GCMs, an advanced understanding of mixed-phase cloud processes is needed. By combining retrieved mixed-phase cloud microphysical properties with in situ data and large-scale meteorological data, the project aims to better understand the generation of ice crystals in supercooled water clouds, the maintenance mechanisms of the arctic mixed-phase clouds, and their connections with large-scale dynamics. The project will try to develop a new retrieval algorithm to study more complex mixed-phase clouds observed at the ACRF SGP site. Compared with optically thin ice clouds, optically thick middle-level ice clouds are less studied because of limited available tools. The project will develop a new two-wavelength radar technique for optically thick ice cloud study at the SGP site by combining the MMCR with the W-band radar measurements. With this new algorithm, the SGP site will have a better capability to study all ice clouds. Another area of the proposal is to generate a long-term cloud type classification product for the multiple ACRF sites. The cloud type classification product will not only facilitate the generation of the integrated cloud product by applying different retrieval algorithms to different types of clouds operationally, but will also support other research to better understand cloud properties and to validate model simulations. The ultimate goal is to develop our cloud classification algorithm into a VAP.

  7. Content based information retrieval in forensic image databases.

    PubMed

    Geradts, Zeno; Bijhold, Jurrien

    2002-03-01

    This paper gives an overview of the various available image databases and ways of searching these databases on image contents. Developments in research groups working on image-database search are evaluated and compared with existing forensic databases. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drugs tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, which standardizes searching in image databases. In the future, the combination of these databases (including DNA databases) and the possibilities to combine them can result in stronger forensic evidence.

  8. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough

    PubMed Central

    2013-01-01

    Background: Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions which are derived from structured search procedures conventional in scientific literature retrieval; and to provide an overview of current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. Methods: General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible while preserving the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. Results: We investigated Cochrane reviews with between 11 and 70 included references, for a total of 396 references. The Google Scholar searches resulted in sets of between 4,320 and 67,800 hits and a total of 291,190 hits. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. Conclusion: The reported relative recall must be interpreted with care. It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary. PMID:24160679
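
    Relative recall and precision as used here are simple set ratios over the gold-standard (included) references and the retrieved hits; the sketch below also sanity-checks the reported overall numbers.

      def relative_recall_precision(gold_refs, retrieved_refs):
          """Gold standard: references included in a Cochrane review.
          Retrieved: everything the Google Scholar search returned."""
          gold, hits = set(gold_refs), set(retrieved_refs)
          found = gold & hits
          recall = len(found) / len(gold) if gold else 0.0
          precision = len(found) / len(hits) if hits else 0.0
          return recall, precision

      # Order-of-magnitude check against the reported totals: 396 gold
      # references, 291,190 hits, 92.9% relative recall -> precision ~0.13%.
      found = round(0.929 * 396)
      print(found / 396, found / 291190)   # ~0.929, ~0.0013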

  9. Tool use in left brain damage and Alzheimer's disease: What about function and manipulation knowledge?

    PubMed

    Jarry, Christophe; Osiurak, François; Besnard, Jérémy; Baumard, Josselin; Lesourd, Mathieu; Croisile, Bernard; Etcharry-Bouyx, Frédérique; Chauviré, Valérie; Le Gall, Didier

    2016-03-01

    Tool use disorders are usually associated with difficulties in retrieving function and manipulation knowledge. Here, we investigate tool use (Real Tool Use, RTU), function (Functional Association, FA) and manipulation knowledge (Gesture Recognition, GR) in 17 left-brain-damaged (LBD) patients and 14 patients with Alzheimer disease (AD). The LBD group exhibited the predicted deficit on RTU but not on FA or GR, while AD patients showed deficits on GR and FA with preserved tool use skills. These findings question the role played by function and manipulation knowledge in actual tool use.

  10. New frontiers for intelligent content-based retrieval

    NASA Astrophysics Data System (ADS)

    Benitez, Ana B.; Smith, John R.

    2001-01-01

    In this paper, we examine emerging frontiers in the evolution of content-based retrieval systems that rely on an intelligent infrastructure. Here, we refer to intelligence as the capabilities of the systems to build and maintain situational or world models, utilize dynamic knowledge representation, exploit context, and leverage advanced reasoning and learning capabilities. We argue that these elements are essential to producing effective systems for retrieving audio-visual content at semantic levels matching those of human perception and cognition. In this paper, we review relevant research on the understanding of human intelligence and the construction of intelligent systems in the fields of cognitive psychology, artificial intelligence, semiotics, and computer vision. We also discuss how some of the principal ideas from these fields lead to new opportunities and capabilities for content-based retrieval systems. Finally, we describe some of our efforts in these directions. In particular, we present MediaNet, a multimedia knowledge representation framework, and some MPEG-7 description tools that facilitate and enable intelligent content-based retrieval.

  11. New frontiers for intelligent content-based retrieval

    NASA Astrophysics Data System (ADS)

    Benitez, Ana B.; Smith, John R.

    2000-12-01

    In this paper, we examine emerging frontiers in the evolution of content-based retrieval systems that rely on an intelligent infrastructure. Here, we refer to intelligence as the capabilities of the systems to build and maintain situational or world models, utilize dynamic knowledge representation, exploit context, and leverage advanced reasoning and learning capabilities. We argue that these elements are essential to producing effective systems for retrieving audio-visual content at semantic levels matching those of human perception and cognition. In this paper, we review relevant research on the understanding of human intelligence and the construction of intelligent systems in the fields of cognitive psychology, artificial intelligence, semiotics, and computer vision. We also discuss how some of the principal ideas from these fields lead to new opportunities and capabilities for content-based retrieval systems. Finally, we describe some of our efforts in these directions. In particular, we present MediaNet, a multimedia knowledge representation framework, and some MPEG-7 description tools that facilitate and enable intelligent content-based retrieval.

  12. User's Guide to the Water-Analysis Screening Tool (WAST): A Tool for Assessing Available Water Resources in Relation to Aquatic-Resource Uses

    USGS Publications Warehouse

    Stuckey, Marla H.; Kiesler, James L.

    2008-01-01

    A water-analysis screening tool (WAST) was developed by the U.S. Geological Survey, in partnership with the Pennsylvania Department of Environmental Protection, to provide an initial screening of areas in the state where potential problems may exist related to the availability of water resources to meet current and future water-use demands. The tool compares water-use information to an initial screening criterion, the 7-day, 10-year low-flow statistic (7Q10), resulting in a screening indicator for the influence of net withdrawals (withdrawals minus discharges) on aquatic-resource uses. This report is intended to serve as a guide for using the screening tool. The WAST can display general basin characteristics, water-use information, and screening-indicator information for over 10,000 watersheds in the state. The tool includes 12 primary functions that allow the user to display watershed information, edit water-use and water-supply information, observe effects downstream from edited water-use information, reset edited values to baseline, load new water-use information, save and retrieve scenarios, and save output as a Microsoft Excel spreadsheet.
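
    The screening comparison described above, net withdrawals (withdrawals minus discharges) against the 7Q10 low-flow statistic, can be sketched as below; the indicator thresholds are hypothetical placeholders, not values taken from the report.

      # Hedged sketch of a WAST-style screening comparison: net
      # withdrawals versus the 7-day, 10-year low flow (7Q10).
      # Threshold values are hypothetical, not from the USGS report.
      def screening_indicator(withdrawals_mgd: float,
                              discharges_mgd: float,
                              q7q10_mgd: float) -> str:
          """Classify a watershed by the ratio of net withdrawal to 7Q10."""
          if q7q10_mgd <= 0:
              return "zero 7Q10 baseline: review manually"
          ratio = (withdrawals_mgd - discharges_mgd) / q7q10_mgd
          if ratio < 0.1:
              return "low potential influence on aquatic-resource uses"
          if ratio < 0.5:
              return "moderate potential influence: screen further"
          return "high potential influence on aquatic-resource uses"

      print(screening_indicator(withdrawals_mgd=4.0, discharges_mgd=1.5,
                                q7q10_mgd=6.0))   # moderate: ratio ~0.42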

  13. Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Rohrbach, Scott; Zhang, William W.

    2011-01-01

    Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools in understanding the image characteristics of an x-ray optical system. In the development of the soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometric measurement of the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes mount- and gravity-induced errors. In the assembly and mounting process the shape of the mirror segments can change dramatically. We have developed wavefront sensing techniques suitable for x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low-order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low-order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques. We show how these techniques can be used in the evaluation and performance prediction process of x-ray telescopes.

  14. Chlorophyll induced fluorescence retrieved from GOME2 for improving gross primary productivity estimates of vegetation

    NASA Astrophysics Data System (ADS)

    van Leth, Thomas C.; Verstraeten, Willem W.; Sanders, Abram F. J.

    2014-05-01

    Mapping terrestrial chlorophyll fluorescence is a crucial activity to obtain information on the functional status of vegetation and to improve estimates of light-use efficiency (LUE) and gross primary productivity (GPP). GPP quantifies carbon fixation by plant ecosystems and is therefore an important parameter for budgeting terrestrial carbon cycles. Satellite remote sensing offers an excellent tool for investigating GPP in a spatially explicit fashion across different scales of observation. The GPP estimates, however, still remain largely uncertain due to biotic and abiotic factors that influence plant production. Sun-induced fluorescence has the ability to enhance our knowledge of how environmentally induced changes affect the LUE. This can be linked to optically derived remote sensing parameters, thereby reducing the uncertainty in GPP estimates. Satellite measurements provide a relatively new perspective on global sun-induced fluorescence, enabling us to quantify spatial distributions and changes over time. Techniques have recently been developed to retrieve fluorescence emissions from hyperspectral satellite measurements. We use data from the Global Ozone Monitoring Experiment 2 (GOME2) to infer terrestrial fluorescence. The spectral signatures of three basic components, namely atmospheric absorption, surface reflectance, and fluorescence radiance, are separated using reference measurements of non-fluorescent surfaces (deserts, deep oceans and ice) to solve for the atmospheric absorption. An empirically based principal component analysis (PCA) approach is applied, similar to that of Joiner et al. (2013, ACP). Here we show our first global maps of the GOME2 retrievals of chlorophyll fluorescence. First results indicate fluorescence distributions similar to those obtained from GOSAT and GOME2 as reported by Joiner et al. (2013, ACP), although we find slightly higher values. With a view to optimizing the fluorescence retrieval, we will show the effect of the reference selection procedure on the retrieval product.
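
    The separation described above can be illustrated schematically: principal components derived from spectra over non-fluorescent surfaces capture atmospheric absorption and reflectance structure, and a fluorescence spectral shape is fitted on top (after Joiner et al., 2013). All arrays below are synthetic placeholders, not GOME2 data.

      import numpy as np

      # Schematic sketch of a PCA-based fluorescence retrieval: PCs from
      # non-fluorescent reference spectra (deserts, deep ocean, ice)
      # model atmospheric/reflectance structure; a fluorescence shape is
      # fitted on top. All data here are synthetic placeholders.
      rng = np.random.default_rng(0)
      n_ref, n_wl = 500, 200
      reference = rng.normal(size=(n_ref, n_wl))        # non-fluorescent spectra
      target = rng.normal(size=n_wl)                    # observed spectrum
      h_f = np.exp(-np.linspace(-3.0, 3.0, n_wl) ** 2)  # assumed fluorescence shape

      # Leading principal components of the reference set.
      mean_ref = reference.mean(axis=0)
      _, _, vt = np.linalg.svd(reference - mean_ref, full_matrices=False)
      basis = np.vstack([vt[:10], h_f])                 # 10 PCs plus fluorescence term

      # Least-squares fit: the last coefficient is the fluorescence amplitude.
      coef, *_ = np.linalg.lstsq(basis.T, target - mean_ref, rcond=None)
      print("fitted fluorescence amplitude:", coef[-1])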

  15. Technology for an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Reuter, G. J.; Healey, Kathleen J.; Phinney, D. E.

    1990-01-01

    Crew rescue and equipment retrieval is a Space Station Freedom requirement. During Freedom's lifetime, there is a high probability that a number of objects will accidentally become separated. Members of the crew, replacement units, and key tools are examples. Retrieval of these objects within a short time is essential. Systems engineering studies were conducted to identify system requirements and candidate approaches. One such approach, based on a voice-supervised, intelligent, free-flying robot, was selected for further analysis. A ground-based technology demonstration, now in its second phase, was designed to provide an integrated robotic hardware and software testbed supporting design of a space-borne system. The ground system, known as the EVA Retriever, is examining the problem of autonomously planning and executing a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles. The current prototype is an anthropomorphic manipulator unit with dexterous arms and hands attached to a robot body and latched in a manned maneuvering unit. A precision air-bearing floor is used to simulate space. Sensor data include two vision systems and force/proximity/tactile sensors on the hands and arms. Planning for a shuttle flight experiment is underway. A set of scenarios and strawman requirements were defined to support conceptual development. Initial design activities are expected to begin in late 1989 with the flight occurring in 1994. The flight hardware and software will be based on lessons learned from both the ground prototype and computer simulations.

  16. PubMed and beyond: a survey of web tools for searching biomedical literature

    PubMed Central

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076

  17. Use of information-retrieval languages in automated retrieval of experimental data from long-term storage

    NASA Technical Reports Server (NTRS)

    Khovanskiy, Y. D.; Kremneva, N. I.

    1975-01-01

    Problems and methods of automating information retrieval operations in a data bank used for long-term storage and retrieval of data from scientific experiments are discussed. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

  18. PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.

    PubMed

    Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor

    2007-09-01

    PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and higher number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.

  19. Information retrieval for a document writing assistance program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corral, M.L.; Simon, A.; Julien, C.

    This paper presents an Information Retrieval mechanism to facilitate the writing of technical documents in the space domain. To address the need for document exchange between partners in a given project, documents are standardized. The writing of a new document requires the re-use of existing documents or parts thereof. These parts can be identified by "tagging" the logical structure of documents and restored by means of a purpose-built Information Retrieval System (I.R.S.). The I.R.S. implemented in our writing assistance tool uses natural language queries and is based on a statistical linguistic approach which is enhanced by the use of a document structure module.

  20. The Comprehensive Microbial Resource

    PubMed Central

    Peterson, Jeremy D.; Umayam, Lowell A.; Dickinson, Tanja; Hickey, Erin K.; White, Owen

    2001-01-01

    One challenge presented by large-scale genome sequencing efforts is effective display of uniform information to the scientific community. The Comprehensive Microbial Resource (CMR) contains robust annotation of all complete microbial genomes and allows for a wide variety of data retrievals. The bacterial information has been placed on the Web at http://www.tigr.org/CMR for retrieval using standard web browsing technology. Retrievals can be based on protein properties such as molecular weight or hydrophobicity, GC-content, functional role assignments and taxonomy. The CMR also has special web-based tools to allow data mining using pre-run homology searches, whole genome dot-plots, batch downloading and traversal across genomes using a variety of datatypes. PMID:11125067
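
    The property-based retrievals described above reduce to simple predicates over annotation records; the sketch below uses hypothetical record fields and values, not the CMR schema.

      # Sketch of property-based retrieval in the spirit of the CMR:
      # filter annotation records on molecular weight and GC-content.
      # Record fields and values are hypothetical placeholders.
      records = [
          {"locus": "geneA", "mol_weight_kda": 52.3, "gc_percent": 61.0},
          {"locus": "geneB", "mol_weight_kda": 17.8, "gc_percent": 38.5},
          {"locus": "geneC", "mol_weight_kda": 44.1, "gc_percent": 55.2},
      ]

      def retrieve(records, min_kda=None, max_kda=None, min_gc=None):
          """Return loci whose annotations satisfy every given bound."""
          hits = []
          for r in records:
              if min_kda is not None and r["mol_weight_kda"] < min_kda:
                  continue
              if max_kda is not None and r["mol_weight_kda"] > max_kda:
                  continue
              if min_gc is not None and r["gc_percent"] < min_gc:
                  continue
              hits.append(r["locus"])
          return hits

      print(retrieve(records, min_kda=40, min_gc=50))  # ['geneA', 'geneC']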

  1. Dental Informatics tool "SOFPRO" for the study of oral submucous fibrosis.

    PubMed

    Erlewad, Dinesh Masajirao; Mundhe, Kalpana Anandrao; Hazarey, Vinay K

    2016-01-01

    Dental informatics is an evolving branch widely used in dental education and practice. Numerous applications that support clinical care, education and research have been developed. However, very few such applications are developed and utilized in the epidemiological studies of oral submucous fibrosis (OSF), which affects a significant population of Asian countries. To design and develop a user-friendly software tool for the descriptive epidemiological study of OSF. With the help of a software engineer, a computer program, SOFPRO, was designed and developed using MS Visual Basic 6.0 (VB), MS Access 2000, Crystal Reports 7.0 and MS Paint on the Windows XP operating system. For analysis, the available OSF data from the departmental precancer registry were fed into SOFPRO. Known, unknown and null data are successfully accepted in data entry and represented in the data analysis of OSF. Smooth working of SOFPRO and its correct data flow were tested against real-time data of OSF. SOFPRO was found to be a user-friendly automated tool for easy data collection, retrieval, management and analysis of OSF patients.

  2. Lifelong Learning Organisers: Requirements for Tools for Supporting Episodic and Semantic Learning

    ERIC Educational Resources Information Center

    Vavoula, Giasemi; Sharples, Mike

    2009-01-01

    We propose Lifelong Learning Organisers (LLOs) as tools to support the capturing, organisation and retrieval of personal learning experiences, resources and notes, over a range of learning topics, at different times and places. The paper discusses general requirements for the design of LLOs based on findings from a diary-based study of everyday…

  3. Improving long-term global precipitation dataset using multi-sensor surface soil moisture retrievals and the soil moisture analysis rainfall tool (SMART)

    USDA-ARS?s Scientific Manuscript database

    Using multiple historical satellite surface soil moisture products, the Kalman Filtering-based Soil Moisture Analysis Rainfall Tool (SMART) is applied to improve the accuracy of a multi-decadal global daily rainfall product that has been bias-corrected to match the monthly totals of available rain g...

  4. Challenges for automatically extracting molecular interactions from full-text articles.

    PubMed

    McIntosh, Tara; Curran, James R

    2009-09-24

    The increasing availability of full-text biomedical articles will allow more biomedical knowledge to be extracted automatically with greater reliability. However, most Information Retrieval (IR) and Extraction (IE) tools currently process only abstracts. The lack of corpora has limited the development of tools that are capable of exploiting the knowledge in full-text articles. As a result, there has been little investigation into the advantages of full-text document structure, and the challenges developers will face in processing full-text articles. We manually annotated passages from full-text articles that describe interactions summarised in a Molecular Interaction Map (MIM). Our corpus tracks the process of identifying facts to form the MIM summaries and captures any factual dependencies that must be resolved to extract the fact completely. For example, a fact in the results section may require a synonym defined in the introduction. The passages are also annotated with negated and coreference expressions that must be resolved. We describe the guidelines for identifying relevant passages and possible dependencies. The corpus includes 2162 sentences from 78 full-text articles. Our corpus analysis demonstrates the necessity of full-text processing; identifies the article sections where interactions are most commonly stated; and quantifies the proportion of interaction statements requiring coherent dependencies. Further, it allows us to report on the relative importance of identifying synonyms and resolving negated expressions. We also experiment with an oracle sentence retrieval system using the corpus as a gold-standard evaluation set. We introduce the MIM corpus, a unique resource that maps interaction facts in a MIM to annotated passages within full-text articles. It is an invaluable case study providing guidance to developers of biomedical IR and IE systems, and can be used as a gold-standard evaluation set for full-text IR tasks.

  5. VitisExpDB: a database resource for grape functional genomics.

    PubMed

    Doddapaneni, Harshavardhan; Lin, Hong; Walker, M Andrew; Yao, Jiqiang; Civerolo, Edwin L

    2008-02-28

    The family Vitaceae consists of many different grape species that grow in a range of climatic conditions. In the past few years, several studies have generated functional genomic information on different Vitis species and cultivars, including the European grape vine, Vitis vinifera. Our goal is to develop a comprehensive web data source for Vitaceae. VitisExpDB is an online MySQL-PHP driven relational database that houses annotated EST and gene expression data for V. vinifera and non-vinifera grape species and varieties. Currently, the database stores approximately 320,000 EST sequences derived from 8 species/hybrids, their annotation (BLAST top match) details and Gene Ontology based structured vocabulary. Putative homologs for each EST in other species and varieties, along with information on their percent nucleotide identities, phylogenetic relationship and common primers, can be retrieved. The database also includes information on probe sequence and annotation features of the high density 60-mer gene expression chip consisting of an approximately 20,000 non-redundant set of ESTs. Finally, the database includes 14 processed global microarray expression profile sets. Data from 12 of these expression profile sets have been mapped onto metabolic pathways. A user-friendly web interface with multiple search indices and extensively hyperlinked result features that permit efficient data retrieval has been developed. Several online bioinformatics tools that interact with the database, along with other sequence analysis tools, have been added. In addition, users can submit their ESTs to the database. The developed database provides a genomic resource to the grape community for functional analysis of genes in the collection and for grape genome annotation and gene function identification. The VitisExpDB database is available through our website http://cropdisease.ars.usda.gov/vitis_at/main-page.htm.

  6. VitisExpDB: A database resource for grape functional genomics

    PubMed Central

    Doddapaneni, Harshavardhan; Lin, Hong; Walker, M Andrew; Yao, Jiqiang; Civerolo, Edwin L

    2008-01-01

    Background The family Vitaceae consists of many different grape species that grow in a range of climatic conditions. In the past few years, several studies have generated functional genomic information on different Vitis species and cultivars, including the European grape vine, Vitis vinifera. Our goal is to develop a comprehensive web data source for Vitaceae. Description VitisExpDB is an online MySQL-PHP driven relational database that houses annotated EST and gene expression data for V. vinifera and non-vinifera grape species and varieties. Currently, the database stores ~320,000 EST sequences derived from 8 species/hybrids, their annotation (BLAST top match) details and Gene Ontology based structured vocabulary. Putative homologs for each EST in other species and varieties, along with information on their percent nucleotide identities, phylogenetic relationship and common primers, can be retrieved. The database also includes information on probe sequence and annotation features of the high density 60-mer gene expression chip consisting of a ~20,000 non-redundant set of ESTs. Finally, the database includes 14 processed global microarray expression profile sets. Data from 12 of these expression profile sets have been mapped onto metabolic pathways. A user-friendly web interface with multiple search indices and extensively hyperlinked result features that permit efficient data retrieval has been developed. Several online bioinformatics tools that interact with the database, along with other sequence analysis tools, have been added. In addition, users can submit their ESTs to the database. Conclusion The developed database provides a genomic resource to the grape community for functional analysis of genes in the collection and for grape genome annotation and gene function identification. The VitisExpDB database is available through our website http://cropdisease.ars.usda.gov/vitis_at/main-page.htm. PMID:18307813

  7. Fracture Systems - Digital Field Data Capture

    NASA Astrophysics Data System (ADS)

    Haslam, Richard

    2017-04-01

    Fracture systems play a key role in subsurface resources and developments, including groundwater and nuclear waste repositories. There is increasing recognition of the need to record and quantify fracture systems to better understand the potential risks and opportunities. With the advent of smart phones and digital field geology, numerous systems have been designed for field data collection. Digital field data collection allows for rapid data collection and interpretation. However, many of the current systems have principally been designed to cover the full range of field mapping and data needs, making them large and complex, and many do not offer the tools necessary for the collection of fracture-specific data. A new multiplatform data recording app has been developed for the collection of field data on faults and joint/fracture systems, and a relational database has been designed for storage and retrieval. The app has been developed to collect fault data and joint/fracture data based on an open source platform. Data are captured in a form-based approach, including validity checks to ensure data are collected systematically. In addition to typical structural data collection, the International Society for Rock Mechanics' (ISRM) "Suggested Methods for the Quantitative Description of Discontinuities in Rock Masses" is included, allowing industry standards to be followed and opening up the tools to industry as well as research. All data are uploaded automatically to a secure server, and users can view their data and open access data as required. Users can decide if the data they produce should remain private or be open access. A series of automatic reports can be produced and/or the data downloaded. The database will hold a national archive, and data retrieval will be made through a web interface.

  8. Computerized Clinical Decision Support: Contributions from 2015

    PubMed Central

    Bouaud, J.

    2016-01-01

    Summary Objective To summarize recent research and select the best papers published in 2015 in the field of computerized clinical decision support for the Decision Support section of the IMIA yearbook. Method A literature review was performed by searching two bibliographic databases for papers related to clinical decision support systems (CDSSs) and computerized provider order entry (CPOE) systems. The aim was to identify a list of candidate best papers from the retrieved papers, which were then peer-reviewed by external reviewers. A consensus meeting between the two section editors and the IMIA editorial team was conducted to finalize the best paper selection. Results Among the 974 retrieved papers, the entire review process resulted in the selection of four best papers. One paper reports on a CDSS routinely applied in pediatrics for more than 10 years, relying on adaptations of the Arden Syntax. Another paper assessed the acceptability and feasibility of an important CPOE evaluation tool in hospitals outside the US, where it was developed. The third paper is a systematic, qualitative review concerning usability flaws of medication-related alerting functions, providing an important evidence-based, methodological contribution in the domain of CDSS design and development in general. Lastly, the fourth paper describes a study quantifying the effect of a complex, continuous-care, guideline-based CDSS on the correctness and completeness of clinicians' decisions. Conclusions While there are notable examples of routinely used decision support systems, this 2015 review on CDSSs and CPOE systems still shows that, despite methodological contributions, theoretical frameworks, and prototype developments, these technologies are not yet widely spread (at least with their full functionalities) in routine clinical practice. Further research, testing, evaluation, and training are still needed for these tools to be adopted in clinical practice and, ultimately, illustrate the benefits that they promise. PMID:27830247

  9. Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension

    PubMed Central

    Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.

    2016-01-01

    The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), who utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue in inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken language bilinguals. PMID:27378974

  10. Knowledge retrieval from PubMed abstracts and electronic medical records with the Multiple Sclerosis Ontology.

    PubMed

    Malhotra, Ashutosh; Gündel, Michaela; Rajput, Abdul Mateen; Mevissen, Heinz-Theodor; Saiz, Albert; Pastor, Xavier; Lozano-Rubi, Raimundo; Martinez-Lapiscina, Elena H; Zubizarreta, Irati; Mueller, Bernd; Kotelnikova, Ekaterina; Toldo, Luca; Hofmann-Apitius, Martin; Villoslada, Pablo

    2015-01-01

    In order to retrieve useful information from scientific literature and electronic medical records (EMR), we developed an ontology specific to Multiple Sclerosis (MS). The MS Ontology was created using scientific literature and expert review in the Protégé OWL environment. We developed a dictionary with semantic synonyms and translations to different languages for mining EMR. The MS Ontology was integrated with other ontologies and dictionaries (diseases/comorbidities, gene/protein, pathways, drug) into the text-mining tool SCAIView. We analyzed the EMRs from 624 patients with MS using the MS Ontology dictionary in order to identify drug usage and comorbidities in MS. Testing competency questions and functional evaluation using F statistics further validated the usefulness of the MS Ontology. Validation of the lexicalized ontology by means of named entity recognition-based methods showed adequate performance (F score = 0.73). The MS Ontology retrieved 80% of the genes associated with MS from scientific abstracts and identified additional pathways targeted by approved disease-modifying drugs (e.g. apoptosis pathways associated with mitoxantrone, rituximab and fingolimod). The analysis of the EMR from patients with MS identified current usage of disease-modifying drugs and symptomatic therapy as well as comorbidities, which are in agreement with recent reports. The MS Ontology provides a semantic framework that is able to automatically extract information from both scientific literature and EMR from patients with MS, revealing new pathogenesis insights as well as new clinical information.
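
    The dictionary-based recognition and the F score quoted above can be sketched as follows; the synonym lexicon and the gold annotations are invented for illustration and are not MS Ontology content.

      # Sketch of dictionary-based named entity recognition with a
      # synonym lexicon, scored with the F measure. The lexicon entries
      # and gold annotations are invented, not MS Ontology content.
      lexicon = {
          "multiple sclerosis": "DISEASE:MS",
          "fingolimod": "DRUG:fingolimod",
          "gilenya": "DRUG:fingolimod",      # trade-name synonym
      }

      def recognize(text: str) -> set:
          """Naive substring matching against the lexicon."""
          lowered = text.lower()
          return {concept for surface, concept in lexicon.items()
                  if surface in lowered}

      gold = {"DISEASE:MS", "DRUG:fingolimod"}
      pred = recognize("Patient with multiple sclerosis started on Gilenya.")

      tp = len(pred & gold)
      p = tp / len(pred) if pred else 0.0
      r = tp / len(gold) if gold else 0.0
      f = 2 * p * r / (p + r) if (p + r) else 0.0
      print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")  # 1.00 each here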

  11. PRIMED: PRIMEr Database for Deleting and Tagging All Fission and Budding Yeast Genes Developed Using the Open-Source Genome Retrieval Script (GRS)

    PubMed Central

    Cummings, Michael T.; Joh, Richard I.; Motamedi, Mo

    2015-01-01

    The fission (Schizosaccharomyces pombe) and budding (Saccharomyces cerevisiae) yeasts have served as excellent models for many seminal discoveries in eukaryotic biology. In these organisms, genes are deleted or tagged easily by transforming cells with PCR-generated DNA inserts, flanked by short (50-100 bp) regions of gene homology. These PCR reactions use especially designed long primers, which, in addition to the priming sites, carry homology for gene targeting. Primer design follows a fixed method but is tedious and time-consuming, especially when done for a large number of genes. To automate this process, we developed the Python-based Genome Retrieval Script (GRS), an easily customizable open-source script for genome analysis. Using GRS, we created PRIMED, the complete PRIMEr Database for deleting and C-terminal tagging genes in the main S. pombe strain and five of the most commonly used S. cerevisiae strains. Because of the importance of noncoding RNAs (ncRNAs) in many biological processes, we also included the deletion primer set for these features in each genome. PRIMED is accurate and comprehensive and is provided as downloadable Excel files, removing the need for future primer design, especially for large-scale functional analyses. Furthermore, the open-source GRS can be used broadly to retrieve genome information from custom or other annotated genomes, thus providing a suitable platform for building other genomic tools by the yeast or other research communities. PMID:25643023
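
    The long-primer rule described, a short homology arm flanking the ORF fused to a universal cassette priming site, can be sketched as below; the genome, coordinates, and priming-site sequence are placeholders, not PRIMED output.

      # Sketch of long-primer design for PCR-based gene deletion: an
      # 80-nt homology arm flanking the ORF fused to a universal
      # priming site. Sequence, coordinates and priming site are
      # placeholders, not actual PRIMED content.
      import random

      random.seed(1)
      genome = "".join(random.choice("ACGT") for _ in range(1000))
      orf_start, orf_end = 400, 700   # hypothetical ORF coordinates
      HOMOLOGY = 80                   # homology arm length (bp)
      PRIMING_SITE = "N" * 20         # placeholder cassette priming site

      def revcomp(seq: str) -> str:
          return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

      # Forward primer: upstream homology arm + forward priming site.
      fwd = genome[orf_start - HOMOLOGY:orf_start] + PRIMING_SITE
      # Reverse primer: reverse complement of the downstream flank +
      # reverse priming site (same placeholder used here).
      rev = revcomp(genome[orf_end:orf_end + HOMOLOGY]) + PRIMING_SITE

      print(len(fwd), len(rev))       # 100 100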

  12. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images to be collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; at present many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is set by an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of dimensions (width and height). Local and global content descriptors are compared, during the retrieval phase, with the query image, and the results seem to be very encouraging.

  13. Using an Ishikawa diagram as a tool to assist memory and retrieval of relevant medical cases from the medical literature.

    PubMed

    Wong, Kam Cheong

    2011-03-29

    Studying medical cases is an effective way to enhance clinical reasoning skills and reinforce clinical knowledge. An Ishikawa diagram, also known as a cause-and-effect diagram or fishbone diagram, is often used in quality management in manufacturing industries. In this report, an Ishikawa diagram is used to demonstrate how to relate potential causes of a major presenting problem in a clinical setting. This tool can be used by teams in problem-based learning or in self-directed learning settings. An Ishikawa diagram annotated with references to relevant medical cases and literature can be continually updated and can assist memory and retrieval of relevant medical cases and literature. It could also be used to cultivate a lifelong learning habit in medical professionals.
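
    One way to realize the annotated diagram described above is as a simple mapping from cause categories to causes, each carrying references to cases or literature; all entries below are illustrative placeholders.

      # Sketch of an Ishikawa (fishbone) diagram as a data structure:
      # a presenting problem with cause categories, each cause annotated
      # with case/literature references. Entries are placeholders.
      ishikawa = {
          "problem": "chest pain",
          "categories": {
              "cardiac": [
                  {"cause": "acute coronary syndrome", "refs": ["case-012", "review-003"]},
              ],
              "pulmonary": [
                  {"cause": "pulmonary embolism", "refs": ["case-007"]},
              ],
              "gastrointestinal": [
                  {"cause": "oesophageal spasm", "refs": []},
              ],
          },
      }

      def retrieve_refs(diagram: dict, category: str) -> list:
          """Return all case/literature references filed under one branch."""
          return [ref for cause in diagram["categories"].get(category, [])
                  for ref in cause["refs"]]

      print(retrieve_refs(ishikawa, "cardiac"))  # ['case-012', 'review-003']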

  14. An evaluation of psychometric properties of caregiver burden outcome measures used in caregivers of children with cerebral palsy: a systematic review protocol.

    PubMed

    Dambi, Jermaine M; Jelsma, Jennifer; Mlambo, Tecla; Chiwaridzo, Matthew; Dangarembizi-Munambah, Nyaradzai; Corten, Lieselotte

    2016-03-09

    Cerebral palsy (CP) is the most common life-long paediatric disability. Taking care of a child with CP often results in caregiver burden/strain in the long run. As caregivers play an essential role in the rehabilitation of these children, it is important to routinely screen for health outcomes in informal caregivers. Consequently, a plethora of caregiver burden outcome measures have been developed; however, there is a dearth of evidence on the most psychometrically sound tools. Therefore, the broad objective of this systematic review is to evaluate the psychometric properties and clinical utility of tools used to measure caregiver burden in caregivers of children with CP. This is a systematic review for the evaluation of the psychometric properties of caregiver burden outcome tools. Two independent and blinded reviewers will search for articles on PubMed, Scopus, Web of Science, CINAHL, PsychINFO, Africa-Wide, and Google Scholar. Information will be analysed using predefined criteria. Thereafter, three independent reviewers will screen the retrieved articles. The methodological quality of studies on the development and validation of the identified tools will be evaluated using the four-point COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Finally, the psychometric properties of the tools that were developed and validated in methodologically sound studies will be analysed using predefined criteria. The proposed systematic review will give an extensive review of the psychometric properties of tools used to measure caregiver burden in caregivers of children with CP. We hope to identify tools that can be used to accurately screen for caregiver burden both in clinical settings and for research purposes. PROSPERO CRD42015028026.

  15. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac.

    PubMed

    Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen

    2016-04-01

    To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from the treatment planning system (TPS) to the record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from an MR-Linac, and validating the delivery record's consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in the authors' clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency of plan checks, including plan quality, data transfer, and delivery checks, can be improved by at least 60%. The newly developed independent MU calculation tool for the MR-Linac reduces the difference between the planned and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with a conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
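
    ArtQA's secondary check relies on a modified Clarkson integration, which is beyond a short sketch; the simplified point-dose check below only illustrates the general idea of an independent MU calculation compared against the plan within a tolerance, with entirely hypothetical dosimetric factors.

      # Simplified illustration of an independent ("secondary") MU
      # check: recompute MU from a basic point-dose model and compare
      # with the planned MU within a tolerance. This is NOT ArtQA's
      # modified Clarkson algorithm; all factors are hypothetical.
      def secondary_mu(dose_cgy: float, output_cgy_per_mu: float,
                       s_c: float, s_p: float, tpr: float) -> float:
          """MU = dose / (reference output x collimator, phantom, depth factors)."""
          return dose_cgy / (output_cgy_per_mu * s_c * s_p * tpr)

      planned_mu = 212.0   # hypothetical value from the plan
      calc_mu = secondary_mu(dose_cgy=200.0, output_cgy_per_mu=1.0,
                             s_c=0.99, s_p=0.985, tpr=0.975)

      deviation = abs(calc_mu - planned_mu) / planned_mu
      print(f"calculated MU {calc_mu:.1f}, deviation {deviation:.1%}")
      assert deviation < 0.05, "deviation above tolerance: flag for review"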

  16. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Pei, E-mail: gpchen@mcw.edu; Ahunbay, Ergun; Li, X. Allen

    Purpose: To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from the treatment planning system (TPS) to the record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from an MR-Linac, and validating the delivery record's consistency with the plan. Methods: The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA has been used in the authors' clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency of plan checks, including plan quality, data transfer, and delivery checks, can be improved by at least 60%. The newly developed independent MU calculation tool for the MR-Linac reduces the difference between the planned and calculated MUs by 10%. Conclusions: The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with a conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  17. The Co-Development of Skill at and Preference for Use of Retrieval-Based Processes for Solving Addition Problems: Individual and Sex Differences from First to Sixth Grade

    PubMed Central

    Bailey, Drew H.; Littlefield, Andrew; Geary, David C.

    2012-01-01

    The ability to retrieve basic arithmetic facts from long-term memory contributes to individual and perhaps sex differences in mathematics achievement. The current study tracked the co-development of the preference for using retrieval over other strategies to solve single-digit addition problems, independent of accuracy, and skilled use of retrieval (i.e., accuracy and RT) from first to sixth grade, inclusive (n = 311). Accurate retrieval in first grade was related to working memory capacity and intelligence and predicted a preference for retrieval in second grade. In later grades, the relation between skill and preference changed such that preference in one grade predicted accuracy and RT in the next, as RT and accuracy continued to predict future gains in preference. In comparison to girls, boys had a consistent preference for retrieval over other strategies and had faster retrieval speeds, but the sex difference in retrieval accuracy varied across grades. Results indicate that ability influences early skilled retrieval, but that practice and skill influence each other in a feedback loop later in development; they also provide insights into the source of the sex difference in problem-solving approaches. PMID:22704036

  18. More emotional facial expressions during episodic than during semantic autobiographical retrieval.

    PubMed

    El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis

    2016-04-01

    There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed with facial analysis software that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., a happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of the cues. Our study provides insight into the facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.

  19. The Development of Automatic and Controlled Inhibitory Retrieval Processes in True and False Recall

    ERIC Educational Resources Information Center

    Knott, Lauren M.; Howe, Mark L.; Wimmer, Marina C.; Dewhurst, Stephen A.

    2011-01-01

    In three experiments, we investigated the role of automatic and controlled inhibitory retrieval processes in true and false memory development in children and adults. Experiment 1 incorporated a directed forgetting task to examine controlled retrieval inhibition. Experiments 2 and 3 used a part-set cue and retrieval practice task to examine…

  20. Comparison of MAX-DOAS profiling algorithms during CINDI-2 - Part 1: aerosols

    NASA Astrophysics Data System (ADS)

    Friess, Udo; Hendrick, Francois; Tirpitz, Jan-Lukas; Apituley, Arnoud; van Roozendael, Michel; Kreher, Karin; Richter, Andreas; Wagner, Thomas

    2017-04-01

    The second Cabauw Intercomparison campaign for Nitrogen Dioxide measuring Instruments (CINDI-2) took place at the Cabauw Experimental Site for Atmospheric Research (CESAR; Utrecht area, The Netherlands) from 25 August until 7 October 2016. CINDI-2 aimed to assess the consistency of MAX-DOAS slant column density measurements of tropospheric species (NO2, HCHO, O3, and O4) relevant for the validation of future ESA atmospheric Sentinel missions, through coordinated operation of a large number of DOAS and MAX-DOAS instruments from all over the world. An important objective of the campaign was to study the relationship between remote-sensing column and profile measurements of the above species and collocated reference ancillary observations. For this purpose, the CINDI-2 Profiling Task Team (CPTT) was created, involving 22 groups performing aerosol and trace gas vertical profile inversion using dedicated MAX-DOAS profiling algorithms, as well as the teams responsible for ancillary profile and surface concentration measurements (NO2 analysers, NO2 sondes, NO2 and Raman LIDARs, CAPS, Long-Path DOAS, sun photometer, nephelometer, etc.). The main purpose of the CPTT is to assess the consistency of the different profiling tools for retrieving aerosol extinction and trace gas vertical profiles through comparison exercises using commonly defined settings, and to validate the retrievals against correlative observations. In this presentation, we give an overview of the MAX-DOAS vertical profile comparison results, focusing on the retrieval of aerosol extinction profiles, with the trace gas retrievals being presented in a companion abstract led by F. Hendrick. The performance of the different algorithms is investigated with respect to the variable visibility and cloud conditions encountered during the campaign. The consistency between optimal-estimation-based and parameterized profiling tools is also evaluated for these different conditions, together with the level of agreement with available ancillary aerosol observations, including sun photometer, nephelometer and LIDAR. This comparison study will be put in the perspective of the development of a centralized MAX-DOAS processing system within the framework of the ESA Fiducial Reference Measurements (FRM) project.

  1. Physical Mechanism, Spectral Detection, and Potential Mitigation of 3D Cloud Effects on OCO-2 Radiances and Retrievals

    NASA Astrophysics Data System (ADS)

    Cochrane, S.; Schmidt, S.; Massie, S. T.; Iwabuchi, H.; Chen, H.

    2017-12-01

    Analysis of multiple partially cloudy scenes as observed by OCO-2 in nadir and target mode (published previously and reviewed here) revealed that XCO2 retrievals are systematically biased in the presence of scattered clouds. The bias can only partially be removed by applying more stringent filtering, and it depends on the degree of scene inhomogeneity as quantified with collocated MODIS/Aqua imagery. The physical reason behind this effect has so far not been well understood because, in contrast to cloud-mediated biases in imagery-derived aerosol retrievals, passive gas absorption spectroscopy products do not depend on the absolute radiance level and should therefore be less sensitive to 3D cloud effects and surface albedo variability. However, preliminary evidence from 3D radiative transfer calculations suggested that clouds in the vicinity of an OCO-2 footprint not only offset the reflected radiance spectrum, but introduce a spectrally dependent perturbation that affects absorbing channels disproportionately, and therefore bias the spectroscopy products. To understand the nature of this effect for a variety of scenes, we developed the OCO-2 radiance simulator, which uses the available information on a scene (e.g., MODIS-derived surface albedo, cloud distribution, and other parameters) as the basis for 3D radiative transfer calculations that can predict the radiances observed by OCO-2. We present this new tool and show examples of its utility for a few specific scenes. More importantly, we draw conclusions about the physical mechanism behind this 3D cloud effect on radiances and ultimately OCO-2 retrievals, which involves not only the clouds themselves but also the surface. Armed with this understanding, we can now detect cloud vicinity effects in the OCO-2 spectra directly, without actually running the 3D radiance simulator. Potentially, it is even possible to mitigate these effects and thus increase data harvest in regions with ubiquitous cloud cover such as the Amazon. We will discuss some of the hurdles one faces when using only OCO-2 spectra to accomplish this goal, but also how scene context from the other A-Train instruments and the new radiance simulator tool can help overcome some of them.

  2. Analysis Tool Web Services from the EMBL-EBI.

    PubMed

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
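
    As a concrete usage example, the dbfetch service mentioned above exposes a plain REST interface; a minimal client is sketched below. The URL pattern follows the dbfetch documentation, but treat the parameters as indicative rather than guaranteed current.

      # Minimal REST client for the EMBL-EBI dbfetch service described
      # above. URL pattern per the dbfetch documentation; parameters
      # are indicative and may change over time.
      import urllib.parse
      import urllib.request

      def dbfetch(db: str, entry_id: str, fmt: str = "fasta") -> str:
          """Retrieve one database entry in the requested format."""
          params = urllib.parse.urlencode(
              {"db": db, "id": entry_id, "format": fmt, "style": "raw"})
          url = f"https://www.ebi.ac.uk/Tools/dbfetch/dbfetch?{params}"
          with urllib.request.urlopen(url, timeout=30) as resp:
              return resp.read().decode("utf-8")

      # Example: fetch a UniProtKB record as FASTA.
      print(dbfetch("uniprotkb", "P12345")[:120])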

  3. Analysis Tool Web Services from the EMBL-EBI

    PubMed Central

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-01-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods. PMID:23671338

  4. Harnessing user generated multimedia content in the creation of collaborative classification structures and retrieval learning games

    NASA Astrophysics Data System (ADS)

    Borchert, Otto Jerome

    This paper describes the Classification, Identification, and Retrieval-based Collaborative Learning Environment (CIRCLE), a software tool to assist groups of people in the classification and identification of real-world objects. A thorough literature review identified current pedagogical theories that were synthesized into a series of five tasks: gathering, elaboration, classification, identification, and reinforcement through game play. This approach is detailed as part of an included peer-reviewed paper. Motivation is increased through the use of formative and summative gamification: earning points for completing important portions of the tasks and playing retrieval-learning-based games, respectively, which is also included as a peer-reviewed conference proceedings paper. Collaboration is integrated into the experience through specific tasks and communication mediums. Implementation focused on a REST-based client-server architecture. The client comprises a series of web-based interfaces for completing each of the tasks, support for formal classroom interaction through faculty accounts and student tracking, and a module for peers to help each other. The server, developed using an in-house JavaMOO platform, stores relevant project data and serves data through a series of messages implemented as a JavaScript Object Notation Application Programming Interface (JSON API). Through a series of two beta tests and two experiments, it was discovered that the second task, elaboration, requires considerable support. While students were able to properly suggest experiments and make observations, the subtask involving cleaning the data for use in CIRCLE required extra support. When supplied with more structured data, students were enthusiastic about the classification and identification tasks, showing marked improvement in usability scores and in open-ended survey responses. CIRCLE tracks a variety of educationally relevant variables, facilitating support for instructors and researchers. Future work will revolve around material development, software refinement, and theory building. Curricula, lesson plans, and instructional materials need to be created to seamlessly integrate CIRCLE into a variety of courses. Further refinement of the software will focus on improving the elaboration interface and developing further game templates to add to the motivation and retrieval-learning aspects of the software.
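
    The REST/JSON architecture described above suggests message exchanges like the hypothetical one below; the endpoint, field names, and payload are invented for illustration, since the actual JavaMOO API is not described in the abstract.

      # Hypothetical sketch of a CIRCLE-style JSON API exchange: a client
      # submits a classification observation to the server. The endpoint
      # and message fields are invented; the real JavaMOO API is not given.
      import json
      import urllib.request

      message = {
          "task": "classification",
          "student_id": "s-042",
          "object_id": "leaf-17",
          "assigned_class": "broadleaf/serrated",
          "evidence": ["margin is toothed", "veins are pinnate"],
      }

      req = urllib.request.Request(
          "http://circle.example.org/api/observations",   # placeholder URL
          data=json.dumps(message).encode("utf-8"),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      # urllib.request.urlopen(req)  # disabled: placeholder server
      print(json.dumps(message, indent=2))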

  5. Characterizing a New Surface-Based Shortwave Cloud Retrieval Technique, Based on Transmitted Radiance for Soil and Vegetated Surface Types

    NASA Technical Reports Server (NTRS)

    Coddington, Odele; Pilewskie, Peter; Schmidt, K. Sebastian; McBride, Patrick J.; Vukicevic, Tomislava

    2013-01-01

    This paper presents an approach using the GEneralized Nonlinear Retrieval Analysis (GENRA) tool and general inverse theory diagnostics, including the maximum likelihood solution and the Shannon information content, to investigate the performance of a new spectral technique for the retrieval of cloud optical properties from surface-based transmittance measurements. The cumulative retrieval information over broad ranges in cloud optical thickness (tau), droplet effective radius (r_e), and overhead sun angles is quantified under two conditions known to impact transmitted radiation: the variability in land surface albedo and atmospheric water vapor content. Our conclusions are: (1) the retrieved cloud properties are more sensitive to the natural variability in land surface albedo than to water vapor content; (2) the new spectral technique is more accurate (but still imprecise) than a standard approach, in particular for tau between 5 and 60 and r_e less than approximately 20 microns; and (3) the retrieved cloud properties are dependent on sun angle for clouds of tau from 5 to 10 and r_e less than 10 microns, with maximum sensitivity obtained for an overhead sun.
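
    The Shannon information content used above as a diagnostic is, in the usual optimal-estimation formulation (after Rodgers, 2000), half the log ratio of the prior to posterior covariance determinants; a small numerical sketch with made-up covariances for a two-parameter (tau, r_e) retrieval:

      import numpy as np

      # Shannon information content of a retrieval in the usual
      # optimal-estimation form (after Rodgers, 2000):
      #   H = 0.5 * log2( det(S_a) / det(S_hat) )
      # Prior and posterior covariances below are made-up values for a
      # two-parameter (tau, r_e) retrieval, for illustration only.
      S_a = np.diag([25.0, 16.0])    # prior variances of tau and r_e
      S_hat = np.diag([4.0, 9.0])    # posterior variances after the measurement

      H = 0.5 * np.log2(np.linalg.det(S_a) / np.linalg.det(S_hat))
      print(f"information content: {H:.2f} bits")   # ~1.74 bits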

  6. Regulatory sequence analysis tools.

    PubMed

    van Helden, Jacques

    2003-07-01

    The web resource Regulatory Sequence Analysis Tools (RSAT) (http://rsat.ulb.ac.be/rsat) offers a collection of software tools dedicated to the prediction of regulatory sites in non-coding DNA sequences. These tools include sequence retrieval, pattern discovery, pattern matching, genome-scale pattern matching, feature-map drawing, random sequence generation and other utilities. Alternative formats are supported for the representation of regulatory motifs (strings or position-specific scoring matrices) and several algorithms are proposed for pattern discovery. RSAT currently holds >100 fully sequenced genomes and these data are regularly updated from GenBank.
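
    As an illustration of the string-based pattern matching such tools perform, the sketch below scans a sequence for a degenerate IUPAC consensus. It is a toy stand-in, not RSAT's implementation, which also supports position-specific scoring matrices and genome-scale search.

    ```python
    # Map each IUPAC code to the set of bases it matches.
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT",
             "W": "AT", "S": "CG", "K": "GT", "M": "AC", "N": "ACGT"}

    def matches(pattern, window):
        return all(base in IUPAC[p] for p, base in zip(pattern, window))

    def find_sites(pattern, sequence):
        """Yield 0-based positions where the degenerate pattern occurs."""
        k = len(pattern)
        for i in range(len(sequence) - k + 1):
            if matches(pattern, sequence[i:i + k]):
                yield i

    seq = "AATGACTCAGGTGAGTCAT"
    print(list(find_sites("TGASTCA", seq)))  # AP-1 box TGA(C/G)TCA -> [2, 11]
    ```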

  7. Application of Rough Sets to Information Retrieval.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki

    1998-01-01

    Develops a method of rough retrieval, an application of the rough set theory to information retrieval. The aim is to: (1) show that rough sets are naturally applied to information retrieval in which categorized information structure is used; and (2) show that a fuzzy retrieval scheme is induced from the rough retrieval. (AEF)
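
    A minimal sketch of the core idea, assuming documents pre-grouped into categories (the equivalence classes of the rough set): the lower approximation collects categories wholly inside the relevant set (certain retrieval), while the upper approximation collects categories merely intersecting it (possible retrieval); the gap between the two is what induces the fuzzy retrieval scheme.

    ```python
    def rough_approximations(categories, relevant):
        """categories: dict name -> set of doc ids; relevant: set of doc ids."""
        lower = set().union(*(docs for docs in categories.values()
                              if docs <= relevant))
        upper = set().union(*(docs for docs in categories.values()
                              if docs & relevant))
        return lower, upper

    cats = {"birds": {1, 2}, "fish": {3, 4}, "mammals": {5}}
    relevant = {1, 2, 3}            # docs judged relevant to the query
    low, up = rough_approximations(cats, relevant)
    print(low)  # {1, 2}: certainly retrieved (whole category is relevant)
    print(up)   # {1, 2, 3, 4}: possibly retrieved (category overlaps query)
    ```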

  8. Task-specific modulation of adult humans' tool preferences: number of choices and size of the problem.

    PubMed

    Silva, Kathleen M; Gross, Thomas J; Silva, Francisco J

    2015-03-01

    In two experiments, we examined the effect of modifications to the features of a stick-and-tube problem on the stick lengths that adult humans used to solve the problem. In Experiment 1, we examined whether people's tool preferences for retrieving an out-of-reach object in a tube might more closely resemble those reported with laboratory crows if people could modify a single stick to an ideal length to solve the problem. Contrary to when adult humans have selected a tool from a set of ten sticks, asking people to modify a single stick to retrieve an object did not generally result in a stick whose length was related to the object's distance. Consistent with the prior research, though, the working length of the stick was related to the object's distance. In Experiment 2, we examined the effect of increasing the scale of the stick-and-tube problem on people's tool preferences. Increasing the scale of the task influenced people to select relatively shorter tools than they had selected in previous studies. Although the causal structures of the tasks used in the two experiments were identical, their results were not. This underscores the necessity of studying physical cognition in relation to a particular causal structure by using a variety of tasks and methods.

  9. Proteasix: a tool for automated and large-scale prediction of proteases involved in naturally occurring peptide generation.

    PubMed

    Klein, Julie; Eales, James; Zürbig, Petra; Vlahou, Antonia; Mischak, Harald; Stevens, Robert

    2013-04-01

    In this study, we have developed Proteasix, an open-source peptide-centric tool that can be used to predict in silico the proteases involved in naturally occurring peptide generation. We developed a curated cleavage site (CS) database, containing 3500 entries on human protease/CS combinations. On top of this database, we built a tool, Proteasix, which allows CS retrieval and protease associations from a list of peptides. To establish the proof of concept of the approach, we used a list of 1388 peptides identified from human urine samples, and compared the prediction to the analysis of 1003 randomly generated amino acid sequences. Metalloprotease activity was predominantly involved in urinary peptide generation, particularly for peptides associated with extracellular matrix remodelling, compared to proteins from other origins. In comparison, random sequences returned almost no results, highlighting the specificity of the prediction. This study provides a tool that can facilitate linking of identified protein fragments to predicted protease activity, and thereby to presumed mechanisms of disease. Experiments are needed to confirm the in silico hypotheses; nevertheless, this approach may be of great help to better understand molecular mechanisms of disease and to define new biomarkers and therapeutic targets. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
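
    The prediction step can be pictured as a lookup of the residues flanking each observed peptide terminus in a cleavage-site table. The sketch below is a toy version under that assumption; the protease/site pairs are invented, whereas the real Proteasix database holds ~3500 curated human combinations.

    ```python
    # Hypothetical (P1, P1') residue pairs mapped to candidate proteases.
    CLEAVAGE_SITES = {
        ("G", "L"): ["MMP9"],
        ("A", "I"): ["MMP2"],
        ("R", "S"): ["Thrombin"],
    }

    def predict_proteases(parent, start, end):
        """parent: parent protein sequence; [start, end) locates the peptide."""
        hits = []
        if start > 0:            # N-terminal cut: residue before | first residue
            hits += CLEAVAGE_SITES.get((parent[start - 1], parent[start]), [])
        if end < len(parent):    # C-terminal cut: last residue | residue after
            hits += CLEAVAGE_SITES.get((parent[end - 1], parent[end]), [])
        return hits

    protein = "MKGLAISRSD"
    print(predict_proteases(protein, 3, 8))  # peptide 'LAISR' -> ['MMP9', 'Thrombin']
    ```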

  10. Galen: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    1999-01-01

    GALEN has developed a new generation of terminology tools based on a language-independent concept reference model that uses a compositional formalism allowing computer processing and multiple reuse. During the 4th Framework Programme project Galen-In-Use we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On the one hand we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand we supported the traditionally labour-intensive process of creating a new coding system in medicine with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software suite named CLAW to process French professional medical language rubrics produced by the national colleges of surgeons into intermediate dissections and then into the Grail reference ontology model representation. From this language-independent concept model representation we generate, on the one hand, controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves very powerful for retrieving the initial professional rubrics with different categories of concepts within a semantic network.

  11. Development of guided horizontal boring tools. Final report, June 1984-March 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, W.J.; Herben, W.C.; Pittard, G.T.

    Maurer Engineering Inc. (MEI), under a contract with the Gas Research Institute (GRI), has led a team of research and manufacturing companies with the goal of developing a guided boring tool for installing gas distribution piping. Studies indicated guided horizontal boring systems can provide gas utilities with a more effective and economical method for installing pipe than conventional techniques, with a potential cost savings of 15% to 50%. A comprehensive state-of-technology review of horizontal boring tools identified concepts suitable for directional control. Development of a universal system was found impractical because the requirements for short- and extended-range systems are significantly different. Concepts for steering and tracking were evaluated through lab and field experiments which progressed from proof-of-concept tests to cooperative field tests with gas utilities at the various stages of system development. The systems were brought to commercial status with needed modifications and the technology transferred to licensees who would market the systems. The program resulted in the development and commercialization of five distinct guided boring systems. Pipe can now be installed more rapidly over longer distances with a minimal amount of excavation required for launching and retrieval. This means increased work crew productivity, reduced disturbance to landscaping and environmentally sensitive areas, and reduced traffic disruption and public inconvenience.

  12. Updates to the QBIC system

    NASA Astrophysics Data System (ADS)

    Niblack, Carlton W.; Zhu, Xiaoming; Hafner, James L.; Breuel, Tom; Ponceleon, Dulce B.; Petkovic, Dragutin; Flickner, Myron D.; Upfal, Eli; Nin, Sigfredo I.; Sull, Sanghoon; Dom, Byron E.; Yeo, Boon-Lock; Srinivasan, Savitha; Zivkovic, Dan; Penner, Mike

    1997-12-01

    QBIC™ (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project such as indexing for high-dimensional multimedia data, retrieval of gray level images, and storyboard generation suitable for video. It describes aspects of QBIC software including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.
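
    A minimal sketch of query-by-image-content in this spirit: reduce images to coarse color histograms and rank them by histogram distance. Real QBIC combines many more features (texture, shape, sketch queries) and high-dimensional indexing, all omitted here.

    ```python
    import numpy as np

    def histogram(image, bins=8):
        """image: HxWx3 uint8 array -> normalized joint RGB histogram."""
        h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return (h / h.sum()).ravel()

    def rank(query, collection):
        """Return collection keys sorted by L1 distance to the query image."""
        q = histogram(query)
        return sorted(collection,
                      key=lambda k: np.abs(histogram(collection[k]) - q).sum())

    rng = np.random.default_rng(0)
    imgs = {f"img{i}": rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
            for i in range(5)}
    print(rank(imgs["img0"], imgs))  # img0 ranks first against itself
    ```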

  13. SIRW: A web server for the Simple Indexing and Retrieval System that combines sequence motif searches with keyword searches.

    PubMed

    Ramu, Chenna

    2003-07-01

    SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.

  14. Evaluation of applicability of high-resolution multiangle imaging photo-polarimetric observations for aerosol atmospheric correction

    NASA Astrophysics Data System (ADS)

    Kalashnikova, Olga; Garay, Michael; Xu, Feng; Diner, David; Seidel, Felix

    2016-07-01

    Multiangle spectro-polarimetric measurements have been advocated as an additional tool for better understanding and quantifying the aerosol properties needed for atmospheric correction for ocean color retrievals. The central concern of this work is the assessment of the effects of absorbing aerosol properties on remote sensing reflectance measurement uncertainty caused by neglecting UV-enhanced absorption of carbonaceous particles and by not accounting for dust nonsphericity. In addition, we evaluate the polarimetric sensitivity of absorbing aerosol properties in light of measurement uncertainties achievable for the next generation of multi-angle polarimetric imaging instruments, and demonstrate advantages and disadvantages of wavelength selection in the UV/VNIR range. In this work a vector Markov Chain radiative transfer code including bio-optical models was used to quantitatively evaluate differences in water-leaving radiances between atmospheres containing realistic UV-enhanced and non-spherical aerosols and atmospheres described by the SEADAS carbonaceous and dust-like aerosol models. The phase matrices for the spherical smoke particles were calculated using a standard Mie code, while those for non-spherical dust particles were calculated using the numerical approach developed for modeling dust for the AERONET network of ground-based sunphotometers. As a next step, we have developed a retrieval code that employs a coupled Markov Chain (MC) and adding/doubling radiative transfer method for joint retrieval of aerosol properties and water-leaving radiance from Airborne Multiangle SpectroPolarimetric Imager-1 (AirMSPI-1) polarimetric observations. The AirMSPI-1 instrument has been flying aboard the NASA ER-2 high altitude aircraft since October 2010. AirMSPI typically acquires observations of a target area at 9 view angles between ±67° at 10 m resolution. AirMSPI spectral channels are centered at 355, 380, 445, 470, 555, 660, and 865 nm, with the 470, 660, and 865 nm channels reporting linear polarization. We tested prototype retrievals by comparing the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentrations from AirMSPI-1 observations to values reported by the USC SeaPRISM AERONET-OC site off the coast of California. The retrieval was then applied to a variety of coastal regions in California to evaluate variability in the water-leaving radiance under different atmospheric conditions. We will present results and discuss algorithm sensitivity and potential applications for future space-borne coastal monitoring.

  15. 48 CFR 26.205 - Disaster Response Registry.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....acquisition.gov to determine the availability of contractors for debris removal, distribution of supplies... retrieved using the CCR Search tool, which can be accessed via https://www.acquisition.gov. These vendors...

  16. CHRONIOUS: a wearable platform for monitoring and management of patients with chronic disease.

    PubMed

    Bellos, Christos; Papadopoulos, Athanassios; Rosso, Roberto; Fotiadis, Dimitrios I

    2011-01-01

    The CHRONIOUS system has been developed based on an open architecture design that consists of a set of subsystems which interact in order to provide all the needed services to chronic disease patients. An advanced multi-parametric expert system is being implemented that effectively fuses information from various sources using intelligent techniques. Data are collected by sensors of a body network monitoring vital signs, while additional tools record dietary habits and plans, drug intake, environmental and biochemical parameters, and activity data. The CHRONIOUS platform provides guidelines and standards for future generations of "chronic disease management systems" and facilitates sophisticated monitoring tools. In addition, an ontological information retrieval system is being delivered, satisfying the need for up-to-date clinical information on Chronic Obstructive Pulmonary Disease (COPD) and Chronic Kidney Disease (CKD). Moreover, support tools are embedded in the system, such as the Mental Tools for monitoring patient mental health status. The integrated platform provides real-time patient monitoring and supervision, both indoors and outdoors, and represents a generic platform for the management of various chronic diseases.

  17. Characterizing Aerosols over Southeast Asia using the AERONET Data Synergy Tool

    NASA Technical Reports Server (NTRS)

    Giles, David M.; Holben, Brent N.; Eck, Thomas F.; Slutsker, Ilya; Welton, Ellsworth J.; Chin, Mian; Kucsera, Thomas; Schmaltz, Jeffery E.; Diehl, Thomas

    2007-01-01

    Biomass burning, urban pollution and dust aerosols have significant impacts on the radiative forcing of the atmosphere over Asia. In order to better quantify these aerosol characteristics, the Aerosol Robotic Network (AERONET) has established over 200 sites worldwide, with an emphasis in recent years on the Asian continent, specifically Southeast Asia. Approximately 15 AERONET sun photometer instruments have been deployed to China, India, Pakistan, Thailand, and Vietnam. Sun photometer spectral aerosol optical depth measurements as well as microphysical and optical aerosol retrievals over Southeast Asia will be analyzed and discussed with supporting ground-based instrument, satellite, and model data sets, which are freely available via the AERONET Data Synergy tool at the AERONET web site (http://aeronet.gsfc.nasa.gov). This web-based data tool provides access to ground-based (AERONET and MPLNET), satellite (MODIS, SeaWiFS, TOMS, and OMI) and model (GOCART and back trajectory analyses) databases via one web portal. Future development of the AERONET Data Synergy Tool will include the expansion of current data sets as well as the implementation of other Earth Science data sets pertinent to advancing aerosol research.

  18. A Tool Supporting Collaborative Data Analytics Workflow Design and Management

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Bao, Q.; Lee, T. J.

    2016-12-01

    Collaborative experiment design could significantly enhance the sharing and adoption of the data analytics algorithms and models that have emerged in Earth science. Existing data-oriented workflow tools, however, are not suitable for collaborative design of such workflows: for example, they do not support real-time co-design, track how a workflow evolves over time based on changing designs contributed by multiple Earth scientists, or capture and retrieve collaboration knowledge on workflow design (the discussions that lead to a design). To address the aforementioned challenges, we have designed and developed a technique supporting collaborative data-oriented workflow composition and management, as a key component toward supporting big data collaboration through the Internet. Reproducibility and scalability are two major targets demanding fundamental infrastructural support. One outcome of the project is a software tool supporting an elastic number of groups of Earth scientists in collaboratively designing and composing data analytics workflows through the Internet. Instead of recreating the wheel, we have extended an existing workflow tool, VisTrails, into an online collaborative environment as a proof of concept.

  19. A prototype supervised intelligent robot for helping astronauts

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Grimm, K. A.; Pendleton, T. W.

    1994-01-01

    The development status of a prototype supervised intelligent robot for space applications is described, for purposes of (1) helping the crew of a spacecraft such as the Space Station with various tasks, such as holding objects and retrieving/replacing tools and other objects from/into storage, and (2) retrieving detached objects, such as equipment or crew members, that have become separated from their spacecraft. In addition to this set of tasks in the low Earth orbiting spacecraft environment, it is argued that certain aspects of the technology can be viewed as generic in approach, thereby offering insight into intelligent robots for other tasks and environments. Also described are characterization results on the usable reduced gravity environment in an aircraft flying parabolas (to simulate weightlessness) and results on hardware performance there. These results show it is feasible to use that environment for evaluative testing of dexterous grasping based on real-time visual sensing of freely rotating and translating objects.

  20. Fringe pattern information retrieval using wavelets

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Patimo, Caterina; Manicone, Pasquale D.; Lamberti, Luciano

    2005-08-01

    Two-dimensional phase modulation is currently the basic model used in the interpretation of fringe patterns that contain displacement information: moiré, holographic interferometry, and speckle techniques. Another way to look at these two-dimensional signals is to consider them as frequency-modulated signals. This alternative interpretation has practical implications similar to those that exist in radio engineering for handling frequency-modulated signals. Utilizing this model it is possible to obtain frequency information by using the energy approach introduced by Ville in 1944. A natural complementary tool in this process is the wavelet methodology. The use of wavelets makes it possible to obtain the local values of the frequency in a one- or two-dimensional domain without the need for prior phase retrieval and differentiation. Furthermore, from the properties of wavelets it is also possible to obtain at the same time the phase of the signal, with the advantages of better noise-removal capabilities and the possibility of developing simpler algorithms for phase unwrapping, owing to the availability of the derivative of the phase.
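
    A minimal sketch of this frequency-modulation reading of a fringe pattern: convolve the signal with Morlet wavelets over a range of scales, take at each sample the scale of maximum modulus (the ridge) for the local frequency, and the argument of the coefficient for the wrapped phase. All parameter choices are illustrative, and only the 1D case is shown.

    ```python
    import numpy as np

    def morlet(t, scale, w0=6.0):
        return np.exp(1j * w0 * t / scale) * np.exp(-(t / scale) ** 2 / 2)

    def ridge_frequency_and_phase(signal, scales, w0=6.0):
        n = len(signal)
        coeffs = np.empty((len(scales), n), dtype=complex)
        for i, s in enumerate(scales):
            t = np.arange(-4 * s, 4 * s + 1)
            coeffs[i] = np.convolve(signal, morlet(t, s, w0), mode="same") / np.sqrt(s)
        ridge = np.abs(coeffs).argmax(axis=0)          # best scale per sample
        freq = w0 / (2 * np.pi * scales[ridge])        # local frequency (cycles/sample)
        phase = np.angle(coeffs[ridge, np.arange(n)])  # local wrapped phase
        return freq, phase

    x = np.arange(2048)
    fringe = np.cos(2 * np.pi * (0.01 + 5e-6 * x) * x)   # chirped fringe signal
    freq, phase = ridge_frequency_and_phase(fringe, scales=np.arange(8, 120))
    print(freq[1024])  # ~0.02 cycles/sample, the local frequency mid-pattern
    ```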

  1. Recruit--An Ontology Based Information Retrieval System for Clinical Trials Recruitment.

    PubMed

    Patrão, Diogo F C; Oleynik, Michel; Massicano, Felipe; Morassi Sasso, Ariane

    2015-01-01

    Clinical trials are studies designed to assess whether a new intervention is better than the current alternatives. However, most of them fail to recruit participants on schedule. It is hard to use Electronic Health Record (EHR) data to find eligible patients; therefore studies rely on manual assessment, which is time-consuming, inefficient and requires specialized training. In this work we describe the design and development of an information retrieval system with the objective of finding eligible patients for cancer trials. The Recruit system has been in use at A. C. Camargo Cancer Center since August 2014 and contains data from more than 500,000 patients and 9 databases. It uses ontologies to integrate data from several sources and represent medical knowledge, which helps enhance results. One can search both in structured data and inside free text reports. The preliminary quality assessment shows excellent recall rates. Recruit proved to be a useful tool for researchers, and its modular design could be applied to other clinical conditions and hospitals.

  2. The data operation centre tool. Architecture and population strategies

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano; Crescente, Alberto

    2012-12-01

    Keeping track of the layout of the computing resources in a big datacenter is a complex task. DOCET is a database-based web tool designed and implemented at INFN. It aims at providing a uniform interface to manage and retrieve needed information about one or more datacenters, such as available hardware and software and their status. Having a suitable application is, however, of little use until most of the information about the centre has been entered into DOCET's database. Manually inserting all the information from scratch is an unfeasible task. After describing DOCET's high-level architecture, its main features and its current development track, we present and discuss the work done to populate the DOCET database for the INFN-T1 site by retrieving information from a heterogeneous variety of authoritative sources, such as DNS, DHCP, and Quattor host profiles. We then describe the work being done to integrate DOCET with some common management operations, such as adding a newly installed host to DHCP and DNS, or creating a suitable Quattor profile template for it.

  3. A Systematic Review of Reporting Tools Applicable to Sexual and Reproductive Health Programmes: Step 1 in Developing Programme Reporting Standards

    PubMed Central

    Ali, Moazzam; Chandra-Mouli, Venkatraman; Tran, Nhan; Gülmezoglu, A. Metin

    2015-01-01

    Background Complete and accurate reporting of programme preparation, implementation and evaluation processes in the field of sexual and reproductive health (SRH) is essential to understand the impact of SRH programmes, as well as to guide their replication and scale-up. Objectives To provide an overview of existing reporting tools and identify core items used in programme reporting with a focus on programme preparation, implementation and evaluation processes. Methods A systematic review was completed for the period 2000–2014. Reporting guidelines, checklists and tools, irrespective of study design, applicable for reporting on programmes targeting SRH outcomes, were included. Two independent reviewers screened the title and abstract of all records. Full texts were assessed in duplicate, followed by data extraction on the focus, content area, year of publication, validation and description of reporting items. Data was synthesized using an iterative thematic approach, where items related to programme preparation, implementation and evaluation in each tool were extracted and aggregated into a consolidated list. Results Out of the 3,656 records screened for title and abstracts, full texts were retrieved for 182 articles, out of which 108 were excluded. Seventy-four full text articles corresponding to 45 reporting tools were retained for synthesis. The majority of tools were developed for reporting on intervention research (n = 15), randomized controlled trials (n = 8) and systematic reviews (n = 7). We identified a total of 50 reporting items, across three main domains and corresponding sub-domains: programme preparation (objective/focus, design, piloting); programme implementation (content, timing/duration/location, providers/staff, participants, delivery, implementation outcomes), and programme evaluation (process evaluation, implementation barriers/facilitators, outcome/impact evaluation). Conclusions Over the past decade a wide range of tools have been developed to improve the reporting of health research. Development of Programme Reporting Standards (PRS) for SRH can fill a significant gap in existing reporting tools. This systematic review is the first step in the development of such standards. In the next steps, we will draft a preliminary version of the PRS based on the aggregate list of identified items, and finalize the tool using a consensus process among experts and user-testing. PMID:26418859

  4. A Systematic Review of Reporting Tools Applicable to Sexual and Reproductive Health Programmes: Step 1 in Developing Programme Reporting Standards.

    PubMed

    Kågesten, Anna; Tunçalp, Ӧzge; Ali, Moazzam; Chandra-Mouli, Venkatraman; Tran, Nhan; Gülmezoglu, A Metin

    2015-01-01

    Complete and accurate reporting of programme preparation, implementation and evaluation processes in the field of sexual and reproductive health (SRH) is essential to understand the impact of SRH programmes, as well as to guide their replication and scale-up. To provide an overview of existing reporting tools and identify core items used in programme reporting with a focus on programme preparation, implementation and evaluation processes. A systematic review was completed for the period 2000-2014. Reporting guidelines, checklists and tools, irrespective of study design, applicable for reporting on programmes targeting SRH outcomes, were included. Two independent reviewers screened the title and abstract of all records. Full texts were assessed in duplicate, followed by data extraction on the focus, content area, year of publication, validation and description of reporting items. Data was synthesized using an iterative thematic approach, where items related to programme preparation, implementation and evaluation in each tool were extracted and aggregated into a consolidated list. Out of the 3,656 records screened for title and abstracts, full texts were retrieved for 182 articles, out of which 108 were excluded. Seventy-four full text articles corresponding to 45 reporting tools were retained for synthesis. The majority of tools were developed for reporting on intervention research (n = 15), randomized controlled trials (n = 8) and systematic reviews (n = 7). We identified a total of 50 reporting items, across three main domains and corresponding sub-domains: programme preparation (objective/focus, design, piloting); programme implementation (content, timing/duration/location, providers/staff, participants, delivery, implementation outcomes), and programme evaluation (process evaluation, implementation barriers/facilitators, outcome/impact evaluation). Over the past decade a wide range of tools have been developed to improve the reporting of health research. Development of Programme Reporting Standards (PRS) for SRH can fill a significant gap in existing reporting tools. This systematic review is the first step in the development of such standards. In the next steps, we will draft a preliminary version of the PRS based on the aggregate list of identified items, and finalize the tool using a consensus process among experts and user-testing.

  5. Developing A Web-based User Interface for Semantic Information Retrieval

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2003-01-01

    While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.

  6. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    In this system we develop a fuzzy case-based reasoner that can build a case representation for several past detected anomalies, along with fuzzy-set-based case retrieval methods that can be used to index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can be solved using knowledge of the model along with the old cases. This system can then be used to generalize the knowledge from previous cases and use this generalization to refine the existing model definition. This in turn can help to detect failures using the model-based algorithms.
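
    A minimal sketch of fuzzy case retrieval under the abstract's premise: uncertain sensor readings are fuzzified into memberships, and past anomaly cases are ranked by a min-based fuzzy similarity. The membership functions, features, and cases are invented for illustration and are not from the actual system.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzify(temp, pressure):
        return {"temp_high": tri(temp, 600, 900, 1200),
                "pressure_low": tri(pressure, 0, 20, 40)}

    CASES = {  # past anomalies with their fuzzy descriptions (hypothetical)
        "injector blockage": {"temp_high": 0.9, "pressure_low": 0.8},
        "sensor drift":      {"temp_high": 0.2, "pressure_low": 0.1},
    }

    def retrieve(observation):
        """Return the stored case with the highest min-based fuzzy similarity."""
        def sim(case):
            return min(1 - abs(case[k] - observation[k]) for k in observation)
        return max(CASES, key=lambda name: sim(CASES[name]))

    print(retrieve(fuzzify(temp=950, pressure=15)))  # -> 'injector blockage'
    ```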

  7. Performance of Case-Based Reasoning Retrieval Using Classification Based on Associations versus Jcolibri and FreeCBR: A Further Validation Study

    NASA Astrophysics Data System (ADS)

    Aljuboori, Ahmed S.; Coenen, Frans; Nsaif, Mohammed; Parsons, David J.

    2018-05-01

    Case-Based Reasoning (CBR) plays a major role in expert system research. However, a critical problem arises when a CBR system retrieves incorrect cases. Class Association Rules (CARs) were utilized to offer a potential solution in a previous work. The aim of this paper was to perform further validation of Case-Based Reasoning using Classification Based on Association Rules (CBRAR) to enhance the performance of Similarity Based Retrieval (SBR). The CBRAR strategy uses a classed frequent pattern tree algorithm (FP-CAR) in order to disambiguate wrongly retrieved cases in CBR. The research reported in this paper contributes to both CBR and Association Rule Mining (ARM) in that full target cases can be extracted by the FP-CAR algorithm without invoking P-trees and union operations. The dataset used in this paper yielded more efficient results in situations where SBR retrieves unrelated answers. The accuracy of the proposed CBRAR system outperforms the results obtained by existing CBR tools such as Jcolibri and FreeCBR.

  8. Retrieval of atmospheric backscatter and extinction profiles with the aladin airborne demonstrator (A2D)

    NASA Astrophysics Data System (ADS)

    Geiss, Alexander; Marksteiner, Uwe; Lux, Oliver; Lemmerz, Christian; Reitebuch, Oliver; Kanitz, Thomas; Straume-Lindner, Anne Grete

    2018-04-01

    By the end of 2017, the European Space Agency (ESA) will launch the Atmospheric Laser Doppler Instrument (ALADIN), a direct-detection Doppler wind lidar operating at 355 nm. An important tool for the validation and optimization of ALADIN's hardware and data processors for wind retrievals with real atmospheric signals is the ALADIN airborne demonstrator A2D. To validate and test ALADIN aerosol retrieval algorithms, an algorithm for the retrieval of atmospheric backscatter and extinction profiles from the A2D is necessary. The A2D uses a direct-detection scheme with a dual Fabry-Pérot interferometer to measure molecular Rayleigh signals and a Fizeau interferometer to measure aerosol Mie returns. Signals are captured by accumulation charge coupled devices (ACCDs). These characteristics make several signal preprocessing steps necessary. In this paper, the steps required to retrieve aerosol optical products, i.e. the particle backscatter coefficient βp, particle extinction coefficient αp and lidar ratio Sp, from A2D raw signals are described.

  9. On the psychology of the recognition heuristic: retrieval primacy as a key determinant of its use.

    PubMed

    Pachur, Thorsten; Hertwig, Ralph

    2006-09-01

    The recognition heuristic is a prime example of a boundedly rational mind tool that rests on an evolved capacity, recognition, and exploits environmental structures. When originally proposed, it was conjectured that no other probabilistic cue reverses the recognition-based inference (D. G. Goldstein & G. Gigerenzer, 2002). More recent studies challenged this view and gave rise to the argument that recognition enters inferences just like any other probabilistic cue. By linking research on the heuristic with research on recognition memory, the authors argue that the retrieval of recognition information is not tantamount to the retrieval of other probabilistic cues. Specifically, the retrieval of subjective recognition precedes that of an objective probabilistic cue and occurs at little to no cognitive cost. This retrieval primacy gives rise to 2 predictions, both of which have been empirically supported: Inferences in line with the recognition heuristic (a) are made faster than inferences inconsistent with it and (b) are more prevalent under time pressure. Suspension of the heuristic, in contrast, requires additional time, and direct knowledge of the criterion variable, if available, can trigger such suspension. Copyright 2006 APA
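
    The heuristic itself is a one-line decision rule, which the sketch below states explicitly following the Goldstein and Gigerenzer (2002) formulation cited in the abstract; the recognized set and the fallback are placeholders.

    ```python
    recognized = {"Berlin", "Munich"}  # objects this agent happens to recognize

    def recognition_heuristic(a, b, fallback=None):
        """Infer which of a, b scores higher on the criterion."""
        ra, rb = a in recognized, b in recognized
        if ra != rb:                 # exactly one recognized: use recognition
            return a if ra else b
        if fallback is not None:     # both or neither recognized: other cues
            return fallback(a, b)
        return a                     # placeholder for a guess

    print(recognition_heuristic("Berlin", "Heidenheim"))  # -> 'Berlin'
    ```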

  10. Evaluation of Micronutrient Sensors for Food Matrices in Resource-Limited Settings: A Systematic Narrative Review.

    PubMed

    Waller, Anna W; Lotton, Jennifer L; Gaur, Shashank; Andrade, Jeanette M; Andrade, Juan E

    2018-06-21

    In resource-limited settings, mass food fortification is a common strategy to ensure the population consumes appropriate quantities of essential micronutrients. Food and government organizations in these settings, however, lack tools to monitor the quality and compliance of fortified products and their efficacy in enhancing nutrient status. The World Health Organization has developed general guidelines known as ASSURED (Affordable, Sensitive, Specific, User-friendly, Rapid and Robust, Equipment-free, and Deliverable to end-users) to aid the development of useful diagnostic tools for these settings. These guidelines assume performance aspects such as sufficient accuracy, reliability, and validity. The purpose of this systematic narrative review is to examine the micronutrient sensor literature on its adherence to the ASSURED criteria, along with accuracy, reliability, and validation, when developing micronutrient sensors for resource-limited settings. Keyword searches were conducted in three databases (Web of Science, PubMed, and Scopus) and were based on six inclusion criteria. A 16-question quality assessment tool was developed to determine adherence to quality and performance criteria. Of the 2,365 retrieved studies, 42 sensors were included based on the inclusion/exclusion criteria. Results showed that improvements to current sensor designs are necessary, especially in affordability, user-friendliness, robustness, equipment-free operation, and deliverability among the ASSURED criteria, as well as in accuracy and validity among the additional criteria, for sensors to be useful in resource-limited settings. Although it requires further validation, the 16-question quality assessment tool can be used as a guide in the development of sensors for resource-limited settings. © 2018 Institute of Food Technologists®.

  11. Review of Forensic Tools for Smartphones

    NASA Astrophysics Data System (ADS)

    Jahankhani, Hamid; Azam, Amir

    The technological capability of mobile devices, in particular Smartphones, makes their use of value to the criminal community as a data terminal in the facilitation of organised crime or terrorism. The effective targeting of these devices from criminal and security intelligence perspectives, and subsequent detailed forensic examination of the targeted device, will significantly enhance the evidence available to the law enforcement community. When phone devices are involved in crimes, forensic examiners require tools that allow the proper retrieval and prompt examination of information present on these devices. Smartphones compliant with Global System for Mobile Communications (GSM) standards maintain their identity and the user's personal information on the Subscriber Identity Module (SIM). Besides SIM cards, a substantial amount of information is stored in the device's internal memory and on external memory modules. The aim of this paper is to give an overview of the currently available forensic software tools that are developed to carry out forensic investigation of mobile devices and to point to current weaknesses within this process.

  12. What Can Graph Theory Tell Us About Word Learning and Lexical Retrieval?

    PubMed Central

    Vitevitch, Michael S.

    2008-01-01

    Purpose Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Method Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. Results The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes suggesting that similar processes might influence the development of the lexicon. Conclusions The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing. PMID:18367686
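
    A minimal sketch of the pipeline described, with networkx standing in for Pajek and ordinary spellings standing in for phonological transcriptions: connect words at edit distance one and compute the clustering coefficient and average path length that diagnose small-world structure.

    ```python
    import itertools
    import networkx as nx

    def edit_distance_one(w1, w2):
        """True if w2 is w1 with one substitution, insertion, or deletion."""
        if abs(len(w1) - len(w2)) > 1:
            return False
        if len(w1) == len(w2):  # one substitution
            return sum(a != b for a, b in zip(w1, w2)) == 1
        short, long_ = sorted((w1, w2), key=len)  # one insertion/deletion
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

    words = ["cat", "bat", "hat", "cot", "coat", "at", "rat", "cab"]
    G = nx.Graph()
    G.add_nodes_from(words)
    G.add_edges_from((a, b) for a, b in itertools.combinations(words, 2)
                     if edit_distance_one(a, b))

    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print(nx.average_clustering(G))
    print(nx.average_shortest_path_length(giant))  # on the largest component
    ```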

  13. BanTeC: a software tool for management of corneal transplantation.

    PubMed

    López-Alvarez, P; Caballero, F; Trias, J; Cortés, U; López-Navidad, A

    2005-11-01

    Until recently, all cornea information at our tissue bank was managed manually; no specific database or computer tool had been implemented to provide electronic versions of documents and medical reports. The main objective of the BanTeC project was therefore to create a computerized system to integrate and classify all the information and documents used in the center in order to facilitate management of retrieved and transplanted corneal tissues. We used the Windows platform to develop the project. Microsoft Access and Microsoft Jet Engine were used at the database level and Data Access Objects was the chosen data access technology. In short, the BanTeC software seeks to computerize the tissue bank. All the initial stages of the development have now been completed, from specification of needs, through program design and implementation of the software components, to the total integration of the final result in the real production environment. BanTeC will allow the generation of statistical reports for analysis to improve our performance.

  14. Rapid Accurate Identification of Tuberculous Meningitis Among South African Children Using a Novel Clinical Decision Tool.

    PubMed

    Goenka, Anu; Jeena, Prakash M; Mlisana, Koleka; Solomon, Tom; Spicer, Kevin; Stephenson, Rebecca; Verma, Arpana; Dhada, Barnesh; Griffiths, Michael J

    2018-03-01

    Early diagnosis of tuberculous meningitis (TBM) is crucial to achieve optimum outcomes. There is no effective rapid diagnostic test for use in children. We aimed to develop a clinical decision tool to facilitate the early diagnosis of childhood TBM. A retrospective case-control study was performed across 7 hospitals in KwaZulu-Natal, South Africa (2010-2014). We identified the variables most predictive of microbiologically confirmed TBM in children (3 months to 15 years) by univariate analysis. These variables were modelled into a clinical decision tool and performance tested on an independent sample group. Of 865 children with suspected TBM, 3% (25) were identified with microbiologically confirmed TBM. Clinical information was retrieved for 22 microbiologically confirmed cases of TBM and compared with 66 controls matched for age, ethnicity, sex and geographical origin. The 9 most predictive variables among the confirmed cases were used to develop a clinical decision tool (CHILD TB LP): altered Consciousness; caregiver HIV infected; Illness length >7 days; Lethargy; focal neurologic Deficit; failure to Thrive; Blood/serum sodium <132 mmol/L; CSF >10 Lymphocytes ×10^6/L; CSF Protein >0.65 g/L. This tool successfully classified an independent sample of 7 cases and 21 controls with a sensitivity of 100% and specificity of 90%. The CHILD TB LP decision tool accurately classified microbiologically confirmed TBM. We propose that CHILD TB LP be prospectively evaluated as a novel rapid diagnostic tool for use in the initial evaluation of children with suspected neurologic infection presenting to hospitals in similar settings.
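
    The abstract names the nine CHILD TB LP predictors but not the published weights or cut-off, so the equal-weight score and the threshold in this sketch are placeholders only, meant to show the general shape of such a decision tool rather than the validated rule.

    ```python
    # The nine predictors listed in the abstract, as boolean findings.
    PREDICTORS = [
        "altered_consciousness", "caregiver_hiv", "illness_over_7_days",
        "lethargy", "focal_neuro_deficit", "failure_to_thrive",
        "sodium_below_132", "csf_lymphocytes_over_10", "csf_protein_over_0_65",
    ]

    def child_tb_lp_score(findings):
        """findings: dict predictor -> bool; returns (score, flag)."""
        score = sum(bool(findings.get(p)) for p in PREDICTORS)
        return score, score >= 4   # hypothetical decision threshold

    patient = {"altered_consciousness": True, "lethargy": True,
               "sodium_below_132": True, "csf_protein_over_0_65": True}
    print(child_tb_lp_score(patient))  # -> (4, True)
    ```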

  15. Discounting the value of safety: effects of perceived risk and effort.

    PubMed

    Sigurdsson, Sigurdur O; Taylor, Matthew A; Wirth, Oliver

    2013-09-01

    Although falls from heights remain the most prevalent cause of fatalities in the construction industry, factors impacting safety-related choices associated with work at heights are not completely understood. Better tools are needed to identify and study the factors influencing safety-related choices and decision making. Using a computer-based task within a behavioral economics paradigm, college students were presented a choice between two hypothetical scenarios that differed in working height and the effort associated with retrieving and donning a safety harness. Participants were instructed to choose the scenario in which they were more likely to wear the safety harness. Based on choice patterns, switch points were identified, indicating when the perceived risk in both scenarios was equivalent. Switch points were a systematic function of working height and effort, and the quantified relation between perceived risk and effort was described well by a hyperbolic equation. Choice patterns revealed that the perceived risk of working at heights decreased as the effort to retrieve and don a safety harness increased. Results contribute to the development of a computer-based procedure for assessing risk discounting within a behavioral economics framework. Such a procedure can be used as a research tool to study factors that influence safety-related decision making with a goal of informing more effective prevention and intervention strategies. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.
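
    The hyperbolic relation reported can be written V(E) = V0 / (1 + kE), with V the height-equivalence (switch) point and E the effort; larger k means steeper discounting of safety with effort. A minimal sketch fitting that form to made-up switch points (the data values are not from the study):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(effort, v0, k):
        return v0 / (1 + k * effort)   # discounted value of safety

    effort = np.array([0.0, 1.0, 2.0, 4.0, 8.0])            # e.g. minutes to don
    switch_height = np.array([20.0, 14.0, 11.0, 7.5, 4.5])  # equivalence points

    (v0, k), _ = curve_fit(hyperbolic, effort, switch_height, p0=(20.0, 0.5))
    print(f"v0 = {v0:.1f}, k = {k:.2f}")  # larger k = steeper discounting
    ```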

  16. Unlocking the potential of publicly available microarray data using inSilicoDb and inSilicoMerging R/Bioconductor packages.

    PubMed

    Taminau, Jonatan; Meganck, Stijn; Lazar, Cosmin; Steenhoff, David; Coletta, Alain; Molter, Colin; Duque, Robin; de Schaetzen, Virginie; Weiss Solís, David Y; Bersini, Hugues; Nowé, Ann

    2012-12-24

    With abundant microarray gene expression data sets available through public repositories, new possibilities lie in combining multiple existing data sets. In this new context, analysis itself is no longer the problem, but retrieving and consistently integrating all this data before delivering it to the wide variety of existing analysis tools becomes the new bottleneck. We present the newly released inSilicoMerging R/Bioconductor package which, together with the earlier released inSilicoDb R/Bioconductor package, allows consistent retrieval, integration and analysis of publicly available microarray gene expression data sets. Inside the inSilicoMerging package a set of five visual and six quantitative validation measures are available as well. By providing (i) access to uniformly curated and preprocessed data, (ii) a collection of techniques to remove the batch effects between data sets from different sources, and (iii) several validation tools enabling the inspection of the integration process, these packages enable researchers to fully explore the potential of combining gene expression data for downstream analysis. The power of using both packages is demonstrated by programmatically retrieving and integrating gene expression studies from the InSilico DB repository [https://insilicodb.org/app/].

  17. 1975 Automotive Characteristics Data Base

    DOT National Transportation Integrated Search

    1976-10-01

    A study of automobile characteristics as a supportive tool for auto energy consumption, fuel economy monitoring, and fleet analysis studies is presented. This report emphasizes the utility of efficient data retrieval methods in fuel economy analysis,...

  18. A neotropical Miocene pollen database employing image-based search and semantic modeling.

    PubMed

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-08-01

    Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.

  19. Encoding and decoding of digital spiral imaging based on bidirectional transformation of light's spatial eigenmodes.

    PubMed

    Zhang, Wuhong; Chen, Lixiang

    2016-06-15

    Digital spiral imaging has been demonstrated as an effective optical tool to encode optical information and retrieve topographic information of an object. Here we develop a conceptually new and concise scheme for optical image encoding and decoding toward free-space digital spiral imaging. We experimentally demonstrate that the optical lattices with ℓ=±50 orbital angular momentum superpositions and a clover image with nearly 200 Laguerre-Gaussian (LG) modes can be well encoded and successfully decoded. It is found that an image encoded/decoded with a two-index LG spectrum (considering both azimuthal and radial indices, ℓ and p) possesses much higher fidelity than that with a one-index LG spectrum (only considering the ℓ index). Our work provides an alternative tool for the image encoding/decoding scheme toward free-space optical communications.

  20. How can knowledge discovery methods uncover spatio-temporal patterns in environmental data?

    NASA Astrophysics Data System (ADS)

    Wachowicz, Monica

    2000-04-01

    This paper proposes the integration of KDD, GVis and STDB as a long-term strategy, which will allow users to apply knowledge discovery methods for uncovering spatio-temporal patterns in environmental data. The main goal is to combine innovative techniques and associated tools for exploring very large environmental data sets in order to arrive at valid, novel, potentially useful, and ultimately understandable spatio-temporal patterns. The GeoInsight approach is described using the principles and key developments in the research domains of KDD, GVis, and STDB. The GeoInsight approach aims at the integration of these research domains in order to provide tools for performing information retrieval, exploration, analysis, and visualization. The result is a knowledge-based design, which involves visual thinking (perceptual-cognitive process) and automated information processing (computer-analytical process).

  1. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the 'Solar Web Project', now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java, and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window-building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
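
    The key design point, archive access systems described externally in XML, can be illustrated with a few lines of parsing. The schema below is invented for illustration and is not the project's actual configuration language.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical external description of one archive access system.
    CONFIG = """
    <archive name="soho-eit" type="sql">
      <field name="date_obs" label="Observation date" queryable="true"/>
      <field name="wavelength" label="Wavelength (A)" queryable="true"/>
      <field name="filename" label="File" queryable="false"/>
    </archive>
    """

    archive = ET.fromstring(CONFIG)
    queryable = [f.get("name") for f in archive.iter("field")
                 if f.get("queryable") == "true"]
    print(archive.get("name"), "->", queryable)
    # A client UI could be generated from the 'label' attributes the same way.
    ```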

  2. The Transgenic RNAi Project at Harvard Medical School: Resources and Validation

    PubMed Central

    Perkins, Lizabeth A.; Holderbaum, Laura; Tao, Rong; Hu, Yanhui; Sopko, Richelle; McCall, Kim; Yang-Zhou, Donghui; Flockhart, Ian; Binari, Richard; Shim, Hye-Seok; Miller, Audrey; Housden, Amy; Foos, Marianna; Randkelv, Sakara; Kelley, Colleen; Namgyal, Pema; Villalta, Christians; Liu, Lu-Ping; Jiang, Xia; Huan-Huan, Qiao; Wang, Xia; Fujiyama, Asao; Toyoda, Atsushi; Ayers, Kathleen; Blum, Allison; Czech, Benjamin; Neumuller, Ralph; Yan, Dong; Cavallaro, Amanda; Hibbard, Karen; Hall, Don; Cooley, Lynn; Hannon, Gregory J.; Lehmann, Ruth; Parks, Annette; Mohr, Stephanie E.; Ueda, Ryu; Kondo, Shu; Ni, Jian-Quan; Perrimon, Norbert

    2015-01-01

    To facilitate large-scale functional studies in Drosophila, the Drosophila Transgenic RNAi Project (TRiP) at Harvard Medical School (HMS) was established along with several goals: developing efficient vectors for RNAi that work in all tissues, generating a genome-scale collection of RNAi stocks with input from the community, distributing the lines as they are generated through existing stock centers, validating as many lines as possible using RT–qPCR and phenotypic analyses, and developing tools and web resources for identifying RNAi lines and retrieving existing information on their quality. With these goals in mind, here we describe in detail the various tools we developed and the status of the collection, which is currently composed of 11,491 lines and covering 71% of Drosophila genes. Data on the characterization of the lines either by RT–qPCR or phenotype is available on a dedicated website, the RNAi Stock Validation and Phenotypes Project (RSVP, http://www.flyrnai.org/RSVP.html), and stocks are available from three stock centers, the Bloomington Drosophila Stock Center (United States), National Institute of Genetics (Japan), and TsingHua Fly Center (China). PMID:26320097

  3. EM-21 Retrieval Knowledge Center: Waste Retrieval Challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellinger, Andrew P.; Rinker, Michael W.; Berglin, Eric J.

    EM-21 is the Waste Processing Division of the Office of Engineering and Technology, within the U.S. Department of Energy’s (DOE) Office of Environmental Management (EM). In August of 2008, EM-21 began an initiative to develop a Retrieval Knowledge Center (RKC) to provide the DOE, high level waste retrieval operators, and technology developers with a centralized and focused location to share knowledge and expertise that will be used to address retrieval challenges across the DOE complex. The RKC is also designed to facilitate information sharing across the DOE Waste Site Complex through workshops and a searchable database of waste retrieval technology information. The database may be used to research effective technology approaches for specific retrieval tasks and to take advantage of the lessons learned from previous operations. It is also expected to be effective for remaining current with the state of the art of retrieval technologies and ongoing development within the DOE Complex. To encourage collaboration of DOE sites with waste retrieval issues, the RKC team is co-led by the Savannah River National Laboratory (SRNL) and the Pacific Northwest National Laboratory (PNNL). Two RKC workshops were held in the Fall of 2008. The purpose of these workshops was to define top-level waste retrieval functional areas, exchange lessons learned, and develop a path forward to support a strategic business plan focused on technology needs for retrieval. The primary participants in these workshops included retrieval personnel and laboratory staff associated with the Hanford and Savannah River Sites, since the majority of remaining DOE waste tanks are located at these sites. This report summarizes and documents the results of the initial RKC workshops. Technology challenges identified from these workshops and presented here are expected to be a key component in defining future RKC-directed tasks designed to facilitate tank waste retrieval solutions.

  4. Care episode retrieval: distributional semantic models for information retrieval in the clinical domain.

    PubMed

    Moen, Hans; Ginter, Filip; Marsi, Erwin; Peltonen, Laura-Maria; Salakoski, Tapio; Salanterä, Sanna

    2015-01-01

    Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a (possibly unfinished) care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task.
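
    A minimal sketch of the retrieval step, assuming word vectors already trained by random indexing or word2vec: represent an episode by the mean of its word vectors and rank stored episodes by cosine similarity to the query episode. The tiny 3-dimensional vectors below are toy stand-ins.

    ```python
    import numpy as np

    VEC = {"fever": np.array([0.9, 0.1, 0.0]), "cough": np.array([0.8, 0.2, 0.1]),
           "fracture": np.array([0.0, 0.9, 0.3]), "cast": np.array([0.1, 0.8, 0.4])}

    def episode_vector(tokens):
        """Mean word vector of the episode's known tokens."""
        vs = [VEC[t] for t in tokens if t in VEC]
        return np.mean(vs, axis=0) if vs else np.zeros(3)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    episodes = {"ep1": ["fever", "cough"], "ep2": ["fracture", "cast"]}
    query = episode_vector(["cough", "fever"])  # possibly unfinished episode
    ranked = sorted(episodes,
                    key=lambda e: -cosine(query, episode_vector(episodes[e])))
    print(ranked)  # -> ['ep1', 'ep2']
    ```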

  5. Stereoscopic Height and Wind Retrievals for Aerosol Plumes with the MISR INteractive eXplorer (MINX)

    NASA Technical Reports Server (NTRS)

    Nelson, D.L.; Garay, M.J.; Kahn, Ralph A.; Dunst, Ben A.

    2013-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the Terra satellite acquires imagery at 275-m resolution at nine angles ranging from 0° (nadir) to 70° off-nadir. This multi-angle capability facilitates the stereoscopic retrieval of heights and motion vectors for clouds and aerosol plumes. MISR's operational stereo product uses this capability to retrieve cloud heights and winds for every satellite orbit, yielding global coverage every nine days. The MISR INteractive eXplorer (MINX) visualization and analysis tool complements the operational stereo product by providing users the ability to retrieve heights and winds locally for detailed studies of smoke, dust and volcanic ash plumes, as well as clouds, at higher spatial resolution and with greater precision than is possible with the operational product or with other space-based, passive, remote sensing instruments. This ability to investigate plume geometry and dynamics is becoming increasingly important as climate and air quality studies require greater knowledge about the injection of aerosols and the location of clouds within the atmosphere. MINX incorporates features that allow users to customize their stereo retrievals for optimum results under varying aerosol and underlying surface conditions. This paper discusses the stereo retrieval algorithms and retrieval options in MINX, and provides appropriate examples to explain how the program can be used to achieve the best results.
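
    The geometric core of stereoscopic height retrieval can be stated in a few lines: a feature at height h appears displaced along-track by h·tan(θ) in a camera at view zenith angle θ, so the parallax between two cameras gives the height. This sketch ignores along-track wind advection between view times, which MINX retrieves jointly with height; the numbers are illustrative.

    ```python
    import math

    def height_from_parallax(parallax_m, theta1_deg, theta2_deg):
        """Feature height from along-track parallax between two view angles,
        assuming a stationary feature (no wind correction)."""
        return parallax_m / (math.tan(math.radians(theta1_deg))
                             - math.tan(math.radians(theta2_deg)))

    # e.g. a plume feature shifted 2200 m between two oblique cameras
    print(f"{height_from_parallax(2200.0, 45.6, 26.1):.0f} m")  # ~4100 m
    ```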

  6. Care episode retrieval: distributional semantic models for information retrieval in the clinical domain

    PubMed Central

    2015-01-01

    Patients' health-related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a - possibly unfinished - care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task. PMID:26099735

  7. RADER: a RApid DEcoy Retriever to facilitate decoy based assessment of virtual screening.

    PubMed

    Wang, Ling; Pang, Xiaoqian; Li, Yecheng; Zhang, Ziying; Tan, Wen

    2017-04-15

    Evaluation of the capacity for separating actives from challenging decoys is a crucial metric of performance related to molecular docking or a virtual screening workflow. The Directory of Useful Decoys (DUD) and its enhanced version (DUD-E) provide a benchmark for molecular docking, although they only contain a limited set of decoys for limited targets. DecoyFinder was released to compensate for the limitations of DUD or DUD-E in building target-specific decoy sets. However, desirable query template design, generation of multiple decoy sets of similar quality, and computational speed remain bottlenecks, particularly when the numbers of queried actives and retrieved decoys increase to hundreds or more. Here, we developed a program suite called RApid DEcoy Retriever (RADER) to facilitate the decoy-based assessment of virtual screening. This program adopts a novel database-management regime that supports rapid and large-scale retrieval of decoys, enables high portability of databases, and provides multifaceted options for designing initial query templates from a large number of active ligands and generating subtle decoy sets. RADER provides two operational modes: as a command-line tool and on a web server. Validation of the performance and efficiency of RADER was also conducted and is described. The RADER web server and a local version are freely available at http://rcidm.org/rader/ . Contact: lingwang@scut.edu.cn or went@scut.edu.cn . Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
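
    A hedged sketch of the general decoy-matching idea behind DUD/DUD-E-style benchmarks: accept candidates whose simple physicochemical properties fall within tolerances of an active's. The property names, tolerances, and the omitted topological-dissimilarity filter are illustrative, not RADER's actual regime.

        PROPS = ("mol_weight", "logp", "h_donors", "h_acceptors", "rot_bonds")
        TOLERANCE = {"mol_weight": 25.0, "logp": 1.0,
                     "h_donors": 1, "h_acceptors": 1, "rot_bonds": 1}

        def matches(active, candidate):
            """Candidate is property-matched if every property is in tolerance."""
            return all(abs(active[p] - candidate[p]) <= TOLERANCE[p] for p in PROPS)

        def retrieve_decoys(active, candidates, n):
            return [c["id"] for c in candidates if matches(active, c)][:n]

        active = {"mol_weight": 320.0, "logp": 2.1,
                  "h_donors": 2, "h_acceptors": 4, "rot_bonds": 5}
        candidates = [{"id": "d1", "mol_weight": 330.0, "logp": 2.5,
                       "h_donors": 2, "h_acceptors": 4, "rot_bonds": 5},
                      {"id": "d2", "mol_weight": 500.0, "logp": 5.0,
                       "h_donors": 0, "h_acceptors": 9, "rot_bonds": 12}]
        print(retrieve_decoys(active, candidates, 10))  # -> ['d1']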

  8. Atmospheric Parameter Climatologies from AIRS: Monitoring Short-, and Longer-Term Climate Variabilities and 'Trends'

    NASA Technical Reports Server (NTRS)

    Molnar, Gyula; Susskind, Joel

    2008-01-01

    The AIRS instrument is currently the best space-based tool to simultaneously monitor the vertical distribution of key climatically important atmospheric parameters as well as surface properties, and has provided high quality data for more than 5 years. AIRS analysis results produced at the GODDARD/DAAC, based on Versions 4 & 5 of the AIRS retrieval algorithm, are currently available for public use. Here, first we present an assessment of interrelationships of anomalies (proxies of climate variability based on 5 full years, since Sept. 2002) of various climate parameters at different spatial scales. We also present AIRS-retrievals-based global, regional and 1x1 degree grid-scale "trend"-analyses of important atmospheric parameters for this 5-year period. Note that here "trend" simply means the linear fit to the anomaly (relative to the mean seasonal cycle) time series of various parameters at the above-mentioned spatial scales, and we present these to illustrate the usefulness of continuing AIRS-based climate observations. Preliminary validation efforts, in terms of intercomparisons of interannual variabilities with other available satellite data analysis results, will also be addressed. For example, we show that the outgoing longwave radiation (OLR) interannual spatial variabilities from the available state-of-the-art CERES measurements and from the AIRS computations are in remarkably good agreement. Version 6 of the AIRS retrieval scheme (currently under development) promises to further improve bias agreements for the absolute values by implementing a more accurate radiative transfer model for the OLR computations and by improving surface emissivity retrievals.
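
    The "trend" definition above can be made concrete with a short worked example: subtract each calendar month's multi-year mean (the seasonal cycle) from a monthly series, then fit a line to the anomalies. Synthetic data stand in for AIRS retrievals.

        import numpy as np

        rng = np.random.default_rng(0)
        months = np.arange(60)                      # five years, monthly
        seasonal = 2.0 * np.sin(2 * np.pi * months / 12)
        series = 288.0 + seasonal + 0.01 * months + 0.1 * rng.standard_normal(60)

        # Anomaly = value minus that calendar month's five-year mean;
        # "trend" = slope of a linear fit to the anomaly time series.
        climatology = series.reshape(5, 12).mean(axis=0)
        anomaly = series - np.tile(climatology, 5)
        slope, _ = np.polyfit(months, anomaly, 1)
        print(f"trend: {slope * 12:+.3f} per year")  # near the true +0.12/year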

  9. D-MATRIX: A web tool for constructing weight matrix of conserved DNA motifs

    PubMed Central

    Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok

    2009-01-01

    Despite considerable efforts to date, DNA motif prediction in whole genomes remains a challenge for researchers. Currently, genome-wide motif prediction tools require either a direct pattern sequence (for a single motif) or a weight matrix (for multiple motifs). Although there are known motif pattern databases and tools for genome-level prediction, there is no tool for weight matrix construction. Considering this, we developed the D-MATRIX tool, which predicts different types of weight matrix based on a user-defined aligned motif sequence set and motif width. For retrieval of known motif sequences the user can access commonly used databases such as TFD, RegulonDB, DBTBS and Transfac. The D-MATRIX program uses a simple statistical approach for weight matrix construction, which can be converted into different file formats according to user requirements. It provides the possibility to identify conserved motifs in co-regulated genes or a whole genome. As an example, we successfully constructed the weight matrix of the LexA transcription factor binding site with the help of known sos-box cis-regulatory elements in the Deinococcus radiodurans genome. The algorithm is implemented in C-Sharp and wrapped in ASP.Net to maintain a user-friendly web interface. The D-MATRIX tool is accessible through the CIMAP domain network. Availability http://203.190.147.116/dmatrix/ PMID:19759861

  10. D-MATRIX: a web tool for constructing weight matrix of conserved DNA motifs.

    PubMed

    Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok

    2009-07-27

    Despite considerable efforts to date, DNA motif prediction in whole genomes remains a challenge for researchers. Currently, genome-wide motif prediction tools require either a direct pattern sequence (for a single motif) or a weight matrix (for multiple motifs). Although there are known motif pattern databases and tools for genome-level prediction, there is no tool for weight matrix construction. Considering this, we developed the D-MATRIX tool, which predicts different types of weight matrix based on a user-defined aligned motif sequence set and motif width. For retrieval of known motif sequences the user can access commonly used databases such as TFD, RegulonDB, DBTBS and Transfac. The D-MATRIX program uses a simple statistical approach for weight matrix construction, which can be converted into different file formats according to user requirements. It provides the possibility to identify conserved motifs in co-regulated genes or a whole genome. As an example, we successfully constructed the weight matrix of the LexA transcription factor binding site with the help of known sos-box cis-regulatory elements in the Deinococcus radiodurans genome. The algorithm is implemented in C-Sharp and wrapped in ASP.Net to maintain a user-friendly web interface. The D-MATRIX tool is accessible through the CIMAP domain network. http://203.190.147.116/dmatrix/
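
    A minimal sketch of the kind of simple statistical weight-matrix construction described above: position-wise counts smoothed with pseudocounts and converted to log-odds scores against a uniform background. The exact statistics D-MATRIX uses are not specified in the abstract, so treat this as the generic method.

        import math

        def weight_matrix(sites, background=0.25, pseudo=1.0):
            """Log-odds weight matrix from equal-length aligned motif sites."""
            width, bases = len(sites[0]), "ACGT"
            matrix = []
            for pos in range(width):
                column = [s[pos] for s in sites]
                weights = {}
                for b in bases:
                    freq = (column.count(b) + pseudo) / (len(sites) + 4 * pseudo)
                    weights[b] = math.log2(freq / background)
                matrix.append(weights)
            return matrix

        # Toy aligned binding-site sequences stand in for real input.
        sites = ["ACGT", "ACGA", "ACGT", "TCGT"]
        for pos, col in enumerate(weight_matrix(sites)):
            print(pos, {b: round(w, 2) for b, w in col.items()})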

  11. Efficacy of teaching methods used to develop critical thinking in nursing and midwifery undergraduate students: A systematic review of the literature.

    PubMed

    Carter, Amanda G; Creedy, Debra K; Sidebotham, Mary

    2016-05-01

    The value and importance of incorporating strategies that promote critical thinking (CT) in nursing and midwifery undergraduate programmes are well documented. However, relatively little is known about the effectiveness of teaching strategies in promoting CT. Evaluating effectiveness is important to promote 'best practice' in teaching. To evaluate the efficacy of teaching methods used to develop critical thinking skills in nursing and midwifery undergraduate students. The following six databases were searched: CINAHL, Ovid Medline, ERIC, Informit, PsycINFO and Scopus, resulting in the retrieval of 1315 papers. After screening for inclusion, each paper was evaluated using the Critical Appraisal Skills Programme tool. Twenty-eight studies met the inclusion criteria and quality appraisal. Twelve different teaching interventions were tested in 8 countries. Results varied, with little consistency across studies using the same type of intervention or outcome tool. Sixteen tools were used to measure the efficacy of teaching in developing critical thinking. Seventeen studies identified a significant increase in critical thinking, while nine studies found no increases, and two found unexplained decreases in CT when using a similar educational intervention. Whilst this review aimed to identify effective teaching strategies that promote and develop critical thinking, flaws in methodology and outcome measures contributed to inconsistent findings. The continued use of generalised CT tools is unlikely to help identify appropriate teaching methods that will improve the CT abilities of midwifery and nursing students and prepare them for practice. The review was limited to empirical studies published in English that used measures of critical thinking with midwifery and nursing students. Discipline-specific strategies and tools that measure students' abilities to apply CT in practice are needed. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  12. Semantic e-Science in Space Physics - A Case Study

    NASA Astrophysics Data System (ADS)

    Narock, T.; Yoon, V.; Merka, J.; Szabo, A.

    2009-05-01

    Several search and retrieval systems for space physics data are currently under development in NASA's heliophysics data environment. We present a case study of two such systems, and describe our efforts in implementing an ontology to aid in data discovery. In doing so we highlight the various aspects of knowledge representation and show how they led to our ontology design, creation, and implementation. We discuss advantages that scientific reasoning allows, as well as difficulties encountered in current tools and standards. Finally, we present a space physics research project conducted with and without e-Science and contrast the two approaches.

  13. Development of a personalized training system using the Lung Image Database Consortium and Image Database resource Initiative Database.

    PubMed

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database, because collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability of dynamically selecting suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database, which provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee when selecting suitable cases to meet individual needs, and a diagnostic simulation tool to enable trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
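
    A hedged sketch of the content-boosted collaborative filtering (CBCF) idea: densify the sparse trainee-by-case difficulty matrix with a content-based estimate, then predict from the ratings of similar trainees. The stand-in content model and similarity weighting are illustrative, not the paper's exact formulation.

        import numpy as np

        ratings = np.array([[3.0, np.nan, 4.0],
                            [2.0, 5.0, np.nan],
                            [np.nan, 4.0, 4.0]])    # rows: trainees, cols: cases

        def content_estimate(matrix):
            """Stand-in content model: a case's mean difficulty across trainees."""
            return np.nanmean(matrix, axis=0)

        def cbcf_predict(matrix, trainee, case):
            # Content-boosting step: fill the gaps before collaborative filtering.
            dense = np.where(np.isnan(matrix), content_estimate(matrix), matrix)
            target = dense[trainee]
            sims = np.array([np.corrcoef(target, row)[0, 1] for row in dense])
            sims[trainee] = 0.0                     # exclude the trainee itself
            return float(np.average(dense[:, case], weights=np.abs(sims) + 1e-9))

        print(round(cbcf_predict(ratings, 0, 1), 2))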

  14. Mercury- Distributed Metadata Management, Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.
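
    A minimal sketch of the federated pattern described above: harvest metadata records from distributed providers into one centralized inverted index that answers fielded searches quickly. Provider endpoints are stubbed with toy functions.

        from collections import defaultdict

        def harvest(providers):
            """Each provider yields (record_id, metadata dict); stubbed here."""
            for fetch in providers:
                yield from fetch()

        def build_index(records):
            """Centralized inverted index keyed by (field, token)."""
            index = defaultdict(set)
            for rec_id, meta in records:
                for field, text in meta.items():
                    for token in text.lower().split():
                        index[(field, token)].add(rec_id)
            return index

        def search(index, field, token):
            return index.get((field, token.lower()), set())

        provider_a = lambda: [("a:1", {"title": "Soil moisture archive"})]
        provider_b = lambda: [("b:7", {"title": "Global soil carbon data"})]
        index = build_index(harvest([provider_a, provider_b]))
        print(search(index, "title", "soil"))   # -> {'a:1', 'b:7'}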

  15. Conversion and Retrievability of Hard Copy and Digital Documents on Optical Disks

    DTIC Science & Technology

    1992-03-01

    [Fragmentary DTIC record; only table-of-contents lines and snippet text were captured.] The surviving fragments describe current thesis preparation tools, namely G-Thesis and Framemaker: Computer Science department students can use Framemaker, a software package available on Sun workstations, and the document notes that its discussion of thesis preparation tools is limited to G-Thesis, Framemaker and a third tool lost in the truncation.

  16. PMD2HD--a web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection.

    PubMed

    Bohne-Lang, Andreas; Lang, Elke; Taube, Anke

    2005-06-27

    Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.
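
    A hedged sketch of the PMD2HD workflow, with a hypothetical z3950_search standing in for a real Z39.50 client library: for each citation taken from a PubMed result page, query the local catalogue by ISSN (Bib-1 use attribute 8) and report whether a print holding exists or an interlibrary loan request is needed.

        def z3950_search(host, port, database, query):
            """Hypothetical stand-in for a real Z39.50 client call."""
            raise NotImplementedError("replace with a real Z39.50 client")

        def check_holdings(citations):
            """citations: list of dicts with 'pmid' and 'issn' keys."""
            results = {}
            for cit in citations:
                query = f"@attr 1=8 {cit['issn']}"   # Bib-1 attribute 8: ISSN
                try:
                    hits = z3950_search("opac.example.org", 210, "cat", query)
                except NotImplementedError:
                    hits = []
                results[cit["pmid"]] = "print holding" if hits else "ILL request"
            return results

        # Toy PMID and ISSN; real values would come from the PubMed page.
        print(check_holdings([{"pmid": "12345678", "issn": "1234-5678"}]))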

  17. A browser-based 3D Visualization Tool designed for comparing CERES/CALIOP/CloudSAT level-2 data sets.

    NASA Astrophysics Data System (ADS)

    Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Doelling, D. R.

    2017-12-01

    At NASA Langley, Clouds and the Earth's Radiant Energy System (CERES) and Moderate Resolution Imaging Spectroradiometer (MODIS) data are merged with data from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite and from the CloudSat Cloud Profiling Radar (CPR). The CERES merged product (C3M) matches up to three CALIPSO footprints with each MODIS pixel along its ground track. It then assigns the nearest CloudSat footprint to each of those MODIS pixels. The cloud properties from MODIS, retrieved using the CERES algorithms, are included in C3M with the matched CALIPSO and CloudSat products along with radiances from 18 MODIS channels. The dataset is used to validate the CERES retrieved MODIS cloud properties and the computed TOA and surface flux difference using MODIS or CALIOP/CloudSat retrieved clouds. This information is then used to tune the computed fluxes to match the CERES observed TOA flux. A visualization tool will be invaluable to determine the cause of these large cloud and flux differences in order to improve the methodology. This effort is part of a larger effort to allow users to order the CERES C3M product sub-setted by time and parameter, as well as to provide the previously mentioned visualization capabilities. This presentation will show a new graphical 3D-interface, 3D-CERESVis, that allows users to view both passive remote sensing satellites (MODIS and CERES) and active satellites (CALIPSO and CloudSat), such that the detailed vertical structures of cloud properties from CALIPSO and CloudSat are displayed side by side with horizontally retrieved cloud properties from MODIS and CERES. Similarly, the CERES computed profile fluxes, whether using MODIS or CALIPSO and CloudSat clouds, can also be compared. 3D-CERESVis is a browser-based visualization tool that makes use of techniques such as multiple synchronized cursors, the COLLADA data format and Cesium.

  18. Development of a Global Evaporative Stress Index Based on Thermal and Microwave LST towards Improved Monitoring of Agricultural Drought

    NASA Astrophysics Data System (ADS)

    Hain, C.; Anderson, M. C.; Otkin, J.; Holmes, T. R.; Gao, F.

    2017-12-01

    This presentation will describe the development of a global agricultural monitoring tool, with a focus on providing early warning of developing vegetation stress for agricultural decision-makers and stakeholders at relatively high spatial resolution (5-km). The tool is based on remotely sensed estimates of evapotranspiration (ET), retrieved via energy balance principles using observations of land surface temperature. The Evaporative Stress Index (ESI) represents anomalies in the ratio of actual-to-potential ET generated with the ALEXI surface energy balance model. The LST inputs to ESI have been shown to provide early warning information about the development of vegetation stress, with stress-elevated canopy temperatures observed well before a decrease in greenness is detected in remotely sensed vegetation indices. As a diagnostic indicator of actual ET, the ESI requires no information regarding antecedent precipitation or soil moisture storage capacity - the moisture currently available to vegetation is deduced directly from the remotely sensed LST signal. This signal also inherently accounts for both precipitation and non-precipitation related inputs/sinks to the plant-available soil moisture pool (e.g., irrigation) which can modify crop response to rainfall anomalies. Independence from precipitation data is a benefit for global agricultural monitoring applications due to sparseness in existing ground-based precipitation networks, and time delays in public reporting. Several enhancements to the current ESI framework will be addressed as requested by project stakeholders: (a) integration of "all-sky" MW Ka-band LST retrievals to augment "clear-sky" thermal-only ESI in persistently cloudy regions; (b) operational production of ESI Rapid Change Indices which provide important early warning information related to onset of actual vegetation stress; and (c) assessment of ESI as a predictor of global yield anomalies; initial studies have shown the ability of intra-seasonal ESI to provide an early indication of at-harvest agricultural yield anomalies.
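
    As a worked illustration of the ESI construction, the index for one grid cell can be expressed as the standardized anomaly of the ET/PET ratio for a given composite period relative to its own multi-year record. The toy data below stand in for ALEXI output.

        import numpy as np

        def esi(f_et, week):
            """f_et: array[year, week] of ET/PET composites for one cell."""
            column = f_et[:, week]
            anomaly = column - column.mean()
            return anomaly / column.std(ddof=1)   # z-score across years

        rng = np.random.default_rng(1)
        f_et = np.clip(0.6 + 0.1 * rng.standard_normal((10, 52)), 0.0, 1.0)
        print(np.round(esi(f_et, 25), 2))   # negative values = drier than normal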

  19. Managing Written Directives: A Software Solution to Streamline Workflow.

    PubMed

    Wagner, Robert H; Savir-Baruch, Bital; Gabriel, Medhat S; Halama, James R; Bova, Davide

    2017-06-01

    A written directive is required by the U.S. Nuclear Regulatory Commission for any use of 131I above 1.11 MBq (30 μCi) and for patients receiving radiopharmaceutical therapy. This requirement has also been adopted and must be enforced by the agreement states. As the introduction of new radiopharmaceuticals increases therapeutic options in nuclear medicine, time spent on regulatory paperwork also increases. The pressure of managing these time-consuming regulatory requirements may heighten the potential for inaccurate or incomplete directive data and subsequent regulatory violations. To improve on the paper-trail method of directive management, we created a software tool using a Health Insurance Portability and Accountability Act (HIPAA)-compliant database. This software allows for secure data-sharing among physicians, technologists, and managers while saving time, reducing errors, and eliminating the possibility of loss and duplication. Methods: The software tool was developed using Visual Basic, which is part of the Visual Studio development environment for the Windows platform. Patient data are deposited in an Access database on a local HIPAA-compliant secure server or hard disk. Once a working version had been developed, it was installed at our institution and used to manage directives. Updates and modifications of the software were released regularly until no more significant problems were found with its operation. Results: The software has been used at our institution for over 2 y and has reliably kept track of all directives. All physicians and technologists use the software daily and find it superior to paper directives. They can retrieve active directives at any stage of completion, as well as completed directives. Conclusion: We have developed a software solution for the management of written directives that streamlines and structures the departmental workflow. This solution saves time, centralizes the information for all staff to share, and decreases confusion about the creation, completion, filing, and retrieval of directives. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
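
    A sketch of what such a directive-tracking schema might look like, using SQLite in place of the Access database described above; the table and field names are illustrative only.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE written_directive (
                directive_id   INTEGER PRIMARY KEY,
                patient_mrn    TEXT NOT NULL,
                radionuclide   TEXT NOT NULL,      -- e.g. 'I-131'
                activity_mbq   REAL NOT NULL,
                authorized_by  TEXT NOT NULL,
                status         TEXT CHECK (status IN ('draft','active','completed'))
            )""")
        conn.execute("INSERT INTO written_directive VALUES (?,?,?,?,?,?)",
                     (1, "MRN001", "I-131", 5550.0, "Dr. A", "active"))
        # Staff can pull every directive still awaiting completion:
        for row in conn.execute(
                "SELECT directive_id, status FROM written_directive "
                "WHERE status != 'completed'"):
            print(row)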

  20. Utilization of AERONET polarimetric measurements for improving retrieval of aerosol microphysics: GSFC, Beijing and Dakar data analysis

    NASA Astrophysics Data System (ADS)

    Fedarenka, Anton; Dubovik, Oleg; Goloub, Philippe; Li, Zhengqiang; Lapyonok, Tatyana; Litvinov, Pavel; Barel, Luc; Gonzalez, Louis; Podvin, Thierry; Crozel, Didier

    2016-08-01

    The study presents efforts to include polarimetric data in the routine inversion of radiometric ground-based measurements for characterization of atmospheric aerosols, and an analysis of the advantages obtained in the retrieval results. First, a data preparation tool was developed to operationally process the large amount of polarimetric data. The AERONET inversion code, adapted for inversion of both intensity and polarization measurements, was used for processing. Second, in order to estimate the effect of utilizing polarimetric information on aerosol retrieval results, both synthetic data and real measurements were processed using the developed routine and analyzed. A sensitivity study was carried out using simulated data based on three main aerosol models: desert dust, urban industrial and urban clean aerosols. The test investigated the effects of utilizing polarization data in the presence of random noise, bias in measurements of optical thickness, and angular pointing shift. The results demonstrate the advantage of utilizing polarization data in the case of aerosols with a pronounced concentration of fine particles. Further, an extended set of AERONET observations was processed. Data for three sites were used: GSFC, USA (clean urban aerosol dominated by fine particles), Beijing, China (polluted industrial aerosol characterized by a pronounced mixture of both fine and coarse modes) and Dakar, Senegal (desert dust dominated by coarse particles). The results revealed a considerable advantage of applying polarimetric data for characterizing fine-mode-dominated aerosols, including industrial pollution (Beijing). The use of polarization corrects the particle size distribution by decreasing the overestimated fine mode and increasing the coarse mode. It also increases the underestimated real part of the refractive index and improves the retrieval of the fraction of spherical particles, owing to the high sensitivity of polarization to particle shape. Overall, the study demonstrates the substantial value of polarimetric data for improving aerosol characterization.

  1. 48 CFR 26.205 - Disaster Response Registry.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... retrieved using the System for Award Management (SAM) search tool, which can be accessed via https://www...”. Contractors are required to register with SAM in order to gain access to the Disaster Response Registry. [74...

  2. 48 CFR 26.205 - Disaster Response Registry.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... retrieved using the System for Award Management (SAM) search tool, which can be accessed via https://www...”. Contractors are required to register with SAM in order to gain access to the Disaster Response Registry. [74...

  3. 32 CFR 290.5 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...

  4. 32 CFR 290.5 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...

  5. 32 CFR 290.5 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...

  6. 32 CFR 290.5 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Definitions. The terms used in this rule with the exception of the following are defined in DCAAP 5410.14. (a... means. This does not include computer software, which is the tool by which to create, store, or retrieve...

  7. An overview of the CellML API and its implementation

    PubMed Central

    2010-01-01

    Background CellML is an XML based language for representing mathematical models, in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. Results We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Conclusions Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions. PMID:20377909

  8. An overview of the CellML API and its implementation.

    PubMed

    Miller, Andrew K; Marsh, Justin; Reeve, Adam; Garny, Alan; Britten, Randall; Halstead, Matt; Cooper, Jonathan; Nickerson, David P; Nielsen, Poul F

    2010-04-08

    CellML is an XML based language for representing mathematical models, in a machine-independent form which is suitable for their exchange between different authors, and for archival in a model repository. Allowing for the exchange and archival of models in a computer readable form is a key strategic goal in bioinformatics, because of the associated improvements in scientific record accuracy, the faster iterative process of scientific development, and the ability to combine models into large integrative models. However, for CellML models to be useful, tools which can process them correctly are needed. Due to some of the more complex features present in CellML models, such as imports, developing code ab initio to correctly process models can be an onerous task. For this reason, there is a clear and pressing need for an application programming interface (API), and a good implementation of that API, upon which tools can base their support for CellML. We developed an API which allows the information in CellML models to be retrieved and/or modified. We also developed a series of optional extension APIs, for tasks such as simplifying the handling of connections between variables, dealing with physical units, validating models, and translating models into different procedural languages. We have also provided a Free/Open Source implementation of this application programming interface, optimised to achieve good performance. Tools have been developed using the API which are mature enough for widespread use. The API has the potential to accelerate the development of additional tools capable of processing CellML, and ultimately lead to an increased level of sharing of mathematical model descriptions.

  9. Dissociating distractor inhibition and episodic retrieval processes in children: No evidence for developmental deficits.

    PubMed

    Giesen, Carina; Weissmann, Francesca; Rothermund, Klaus

    2018-02-01

    It is often assumed that children show reduced or absent inhibition of distracting material due to pending cognitive maturation, although empirical findings do not provide strong support for the idea of an "inhibitory deficit" in children. Most of this evidence, however, is based on findings from the negative priming paradigm, which confounds distractor inhibition and episodic retrieval processes. To resolve this confound, we adopted a sequential distractor repetition paradigm of Giesen, Frings, and Rothermund (2012), which provides independent estimates of distractor inhibition and episodic retrieval processes. Children (aged 7-9 years) and young adults (aged 18-29 years) identified centrally presented target fruit stimuli among two flanking distractor fruits that were always response incompatible. Children showed both reliable distractor inhibition effects as well as robust episodic retrieval effects of distractor-response bindings. Age group comparisons suggest that processes of distractor inhibition and episodic retrieval are already present and functionally intact in children and are comparable to those of young adults. The current findings highlight that the sequential distractor repetition paradigm of Giesen et al. (2012) is a versatile tool to investigate distractor inhibition and episodic retrieval separately and in an unbiased way and is also of merit for the examination of age differences with regard to these processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Enhancing the MeSH thesaurus to retrieve French online health resources in a quality-controlled gateway.

    PubMed

    Douyère, Magaly; Soualmia, Lina F; Névéol, Aurélie; Rogozan, Alexandrina; Dahamna, Badisse; Leroy, Jean-Philippe; Thirion, Benoît; Darmoni, Stefan J

    2004-12-01

    The amount of health information available on the Internet is considerable. In this context, several health gateways have been developed. Among them, CISMeF (Catalogue and Index of Health Resources in French) was designed to catalogue and index health resources in French. The goal of this article is to describe the various enhancements to the MeSH thesaurus developed by the CISMeF team to adapt this terminology to the broader field of health Internet resources instead of scientific articles for the medline bibliographic database. CISMeF uses two standard tools for organizing information: the MeSH thesaurus and several metadata element sets, in particular the Dublin Core metadata format. The heterogeneity of Internet health resources led the CISMeF team to enhance the MeSH thesaurus with the introduction of two new concepts, respectively, resource types and metaterms. CISMeF resource types are a generalization of the publication types of medline. A resource type describes the nature of the resource and MeSH keyword/qualifier pairs describe the subject of the resource. A metaterm is generally a medical specialty or a biological science, which has semantic links with one or more MeSH keywords, qualifiers and resource types. The CISMeF terminology is exploited for several tasks: resource indexing performed manually, resource categorization performed automatically, visualization and navigation through the concept hierarchies and information retrieval using the Doc'CISMeF search engine. The CISMeF health gateway uses several MeSH thesaurus enhancements to optimize information retrieval, hierarchy navigation and automatic indexing.

  11. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Kenton, David L.; Khovayko, Oleg; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Sherry, Stephen T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Suzek, Tugba O.; Tatusov, Roman; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene

    2006-01-01

    In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Retroviral Genotyping Tools, HIV-1, Human Protein Interaction Database, SAGEmap, Gene Expression Omnibus, Entrez Probe, GENSAT, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:16381840

  12. Database resources of the National Center for Biotechnology Information.

    PubMed

    Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; Dicuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian

    2012-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
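
    The Entrez Programming Utilities mentioned above are scriptable over HTTP. A small Python example using the real E-utilities endpoints (the query term is arbitrary):

        import json
        import urllib.parse
        import urllib.request

        BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

        def esearch(db, term, retmax=5):
            """ESearch: return a list of record IDs matching a query."""
            url = (f"{BASE}/esearch.fcgi?db={db}"
                   f"&term={urllib.parse.quote(term)}&retmax={retmax}&retmode=json")
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)["esearchresult"]["idlist"]

        def efetch_abstracts(db, ids):
            """EFetch: download plain-text abstracts for the given IDs."""
            url = (f"{BASE}/efetch.fcgi?db={db}&id={','.join(ids)}"
                   "&rettype=abstract&retmode=text")
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()

        ids = esearch("pubmed", "conserved domain architecture retrieval tool")
        print(efetch_abstracts("pubmed", ids)[:500])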

  13. Database resources of the National Center for Biotechnology

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Federhen, Scott; Lash, Alex E.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Tatusova, Tatiana A.; Wagner, Lukas

    2003-01-01

    In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, PubMed, PubMed Central (PMC), LocusLink, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR (e-PCR), Open Reading Frame (ORF) Finder, Reference Sequence (RefSeq), UniGene, HomoloGene, ProtEST, Database of Single Nucleotide Polymorphisms (dbSNP), Human/Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes and related tools, the Map Viewer, Model Maker (MM), Evidence Viewer (EV), Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), and the Conserved Domain Architecture Retrieval Tool (CDART). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:12519941

  14. Multi-Instrument Manager Tool for Data Acquisition and Merging of Optical and Electrical Mobility Size Distributions

    NASA Astrophysics Data System (ADS)

    Tritscher, Torsten; Koched, Amine; Han, Hee-Siew; Filimundi, Eric; Johnson, Tim; Elzey, Sherrie; Avenido, Aaron; Kykal, Carsten; Bischof, Oliver F.

    2015-05-01

    Electrical mobility classification (EC) followed by Condensation Particle Counter (CPC) detection is the technique combined in Scanning Mobility Particle Sizers (SMPS) to retrieve nanoparticle size distributions in the range from 2.5 nm to 1 μm. The detectable size range of SMPS systems can be extended by the addition of an Optical Particle Sizer (OPS) that covers larger sizes from 300 nm to 10 μm. This optical sizing method reports an optical equivalent diameter, which is often different from the electrical mobility diameter measured by the standard SMPS technique. Multi-Instrument Manager (MIM™) software developed by TSI incorporates algorithms that facilitate merging SMPS data sets with data based on optical equivalent diameter to compile single, wide-range size distributions. Here we present MIM 2.0, the next generation of the data-merging tool that offers many advanced features for data merging and post-processing. MIM 2.0 allows direct data acquisition with OPS and NanoScan SMPS instruments to retrieve real-time particle size distributions from 10 nm to 10 μm, which we show in a case study at a fireplace. The merged data can be adjusted using one of the merging options, which automatically determines an overall aerosol effective refractive index. As a result an indirect and average characterization of aerosol optical and shape properties is possible. The merging tool allows several pre-settings, data averaging and adjustments, as well as the export of data sets and fitted graphs. MIM 2.0 also features several post-processing options for SMPS data and differences can be visualized in a multi-peak sample over a narrow size range.
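
    A hedged sketch of the merging idea: slide the optical-diameter axis by a single factor, a stand-in for choosing an effective refractive index, so the OPS distribution best matches the SMPS one over their overlap, then stitch the two. MIM's actual merging algorithm is not reproduced here.

        import numpy as np

        def merge(smps_d, smps_n, ops_d, ops_n,
                  factors=np.linspace(0.7, 1.3, 61)):
            best, best_err = 1.0, np.inf
            for f in factors:                      # candidate diameter shifts
                shifted = ops_d * f
                overlap = (shifted >= smps_d.min()) & (shifted <= smps_d.max())
                if overlap.sum() < 2:
                    continue
                smps_interp = np.interp(shifted[overlap], smps_d, smps_n)
                err = np.mean((np.log10(smps_interp + 1.0)
                               - np.log10(ops_n[overlap] + 1.0)) ** 2)
                if err < best_err:
                    best, best_err = f, err
            keep = ops_d * best > smps_d.max()     # stitch beyond the SMPS range
            d = np.concatenate([smps_d, ops_d[keep] * best])
            n = np.concatenate([smps_n, ops_n[keep]])
            return best, d, n

        smps_d = np.linspace(10, 800, 50)          # nm, mobility diameter
        smps_n = 1e4 * np.exp(-np.log(smps_d / 150.0) ** 2)
        ops_d = np.linspace(300, 10000, 40)        # nm, optical diameter
        ops_n = 1e4 * np.exp(-np.log(ops_d / 180.0) ** 2)
        print(round(merge(smps_d, smps_n, ops_d, ops_n)[0], 2))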

  15. Development of real-time voltage stability monitoring tool for power system transmission network using Synchrophasor data

    NASA Astrophysics Data System (ADS)

    Pulok, Md Kamrul Hasan

    Intelligent and effective monitoring of power system stability in control centers is one of the key issues in smart grid technology to prevent unwanted power system blackouts. Voltage stability analysis is one of the most important requirements for control center operation in the smart grid era. With the advent of Phasor Measurement Unit (PMU), or Synchrophasor, technology, real-time monitoring of power system voltage stability is now a reality. This work utilizes real-time PMU data to derive a voltage stability index to monitor voltage-stability-related contingency situations in power systems. The developed tool uses PMU data to calculate a voltage stability index whose numerical value indicates the relative closeness of instability. The IEEE 39-bus New England power system was modeled and run on a Real-Time Digital Simulator that streams PMU data over the Internet using the IEEE C37.118 protocol. A phasor data concentrator (PDC) is set up that receives the streaming PMU data and stores it in a Microsoft SQL database server. The developed voltage stability monitoring (VSM) tool then retrieves phasor measurement data from the SQL server, performs real-time state estimation of the whole network, calculates the voltage stability index, performs real-time ranking of the most vulnerable transmission lines, and finally shows all the results in a graphical user interface. All of these actions are done in near real time. Control centers can easily monitor the system's condition by using this tool and can take precautionary actions if needed.
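
    One common PMU-based formulation, offered here only as an illustration and not necessarily the thesis's exact index, estimates the Thevenin equivalent seen from a load bus using two successive phasor snapshots; the ratio |Z_thevenin|/|Z_load| then approaches 1 as the bus nears voltage collapse.

        def thevenin_index(v1, i1, v2, i2):
            """v*, i*: complex bus voltage and current phasors at two instants."""
            # E = V + Z*I at both instants (E, Z assumed constant) gives:
            z_th = (v1 - v2) / (i2 - i1)
            z_load = v2 / i2
            return abs(z_th) / abs(z_load)   # -> 1 near voltage collapse

        # Toy per-unit phasors around a load increase between snapshots.
        v1, i1 = 1.00 + 0.00j, 0.80 - 0.20j
        v2, i2 = 0.97 - 0.02j, 0.90 - 0.25j
        print(round(thevenin_index(v1, i1, v2, i2), 3))  # well below 1: stable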

  16. Utilization of a CRT display light pen in the design of feedback control systems

    NASA Technical Reports Server (NTRS)

    Thompson, J. G.; Young, K. R.

    1972-01-01

    A hierarchical structure of the interlinked programs was developed to provide a flexible computer-aided design tool. A graphical input technique and a data structure are considered which provide the capability of entering the control system model description into the computer in block diagram form. An information storage and retrieval system was developed to keep track of the system description, and analysis and simulation results, and to provide them to the correct routines for further manipulation or display. Error analysis and diagnostic capabilities are discussed, and a technique was developed to reduce a transfer function to a set of nested integrals suitable for digital simulation. A general, automated block diagram reduction procedure was set up to prepare the system description for the analysis routines.

  17. Open-source Software for Exoplanet Atmospheric Modeling

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.
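
    As a flavor of component (1), a minimal Metropolis random-walk sampler of the kind used to explore retrieval posteriors; this is a generic sketch, not the suite's API.

        import math
        import random

        def log_posterior(theta):
            """Toy target: standard normal log-density in one parameter."""
            return -0.5 * theta * theta

        def metropolis(log_post, start, steps=10000, width=1.0):
            chain, theta, lp = [], start, log_post(start)
            for _ in range(steps):
                prop = theta + random.gauss(0.0, width)
                lp_prop = log_post(prop)
                if math.log(random.random()) < lp_prop - lp:  # accept/reject
                    theta, lp = prop, lp_prop
                chain.append(theta)
            return chain

        random.seed(0)
        chain = metropolis(log_posterior, start=3.0)
        print(round(sum(chain) / len(chain), 2))   # posterior mean near 0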

  18. Development and testing of a text-mining approach to analyse patients' comments on their experiences of colorectal cancer care.

    PubMed

    Wagland, Richard; Recio-Saucedo, Alejandra; Simon, Michael; Bracher, Michael; Hunt, Katherine; Foster, Claire; Downing, Amy; Glaser, Adam; Corner, Jessica

    2016-08-01

    Quality of cancer care may greatly impact on patients' health-related quality of life (HRQoL). Free-text responses to patient-reported outcome measures (PROMs) provide rich data but analysis is time- and resource-intensive. This study developed and tested a learning-based text-mining approach to facilitate analysis of patients' experiences of care and develop an explanatory model illustrating impact on HRQoL. Respondents to a population-based survey of colorectal cancer survivors provided free-text comments regarding their experience of living with and beyond cancer. An existing coding framework was tested and adapted, which informed learning-based text mining of the data. Machine-learning algorithms were trained to identify comments relating to patients' specific experiences of service quality, which were verified by manual qualitative analysis. Comparisons between coded retrieved comments and an HRQoL measure (EQ5D) were explored. The survey response rate was 63.3% (21 802/34 467), of which 25.8% (n=5634) participants provided free-text comments. Of retrieved comments on experiences of care (n=1688), over half (n=1045, 62%) described positive care experiences. Most negative experiences concerned a lack of post-treatment care (n=191, 11% of retrieved comments) and insufficient information concerning self-management strategies (n=135, 8%) or treatment side effects (n=160, 9%). Associations existed between HRQoL scores and coded algorithm-retrieved comments. Analysis indicated that the mechanism by which service quality impacted on HRQoL was the extent to which services prevented or alleviated challenges associated with disease and treatment burdens. Learning-based text mining techniques were found to be useful and practical tools to identify specific free-text comments within a large dataset, facilitating resource-efficient qualitative analysis. This method should be considered for future PROM analysis to inform policy and practice. Study findings indicated that perceived care quality directly impacts on HRQoL. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
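
    A minimal sketch of the learning-based step: train a text classifier on labelled comments, then use it to retrieve experience-of-care comments from the unlabelled pool. Toy comments and labels stand in for the survey data; scikit-learn is assumed to be available.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        train_comments = ["the nurses explained everything clearly",
                          "no follow-up after my treatment ended",
                          "I enjoy gardening again",
                          "nobody told me about side effects"]
        train_labels = [1, 1, 0, 1]      # 1 = comment about care experience

        # TF-IDF features feeding a linear classifier, a common baseline.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(train_comments, train_labels)

        new_comments = ["aftercare support was missing", "we moved house recently"]
        for text, label in zip(new_comments, model.predict(new_comments)):
            print(label, text)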

  19. Mathematics and Information Retrieval.

    ERIC Educational Resources Information Center

    Salton, Gerald

    1979-01-01

    Examines the main mathematical approaches to information retrieval, including both algebraic and probabilistic models, and describes difficulties which impede formalization of information retrieval processes. A number of developments are covered where new theoretical understandings have directly led to improved retrieval techniques and operations.…

  20. DORS: DDC Online Retrieval System.

    ERIC Educational Resources Information Center

    Liu, Songqiao; Svenonius, Elaine

    1991-01-01

    Describes the Dewey Online Retrieval System (DORS), which was developed at the University of California, Los Angeles (UCLA), to experiment with classification-based search strategies in online catalogs. Classification structures in automated information retrieval are discussed; and specifications for a classification retrieval interface are…

  1. Establishment of a bleeding score as a diagnostic tool for patients with rare bleeding disorders.

    PubMed

    Palla, Roberta; Siboni, Simona M; Menegatti, Marzia; Musallam, Khaled M; Peyvandi, Flora

    2016-12-01

    Bleeding manifestations among patients with rare bleeding disorders (RBDs) vary significantly between disorders and patients, even when affected with the same disorder. In response to the challenge represented by the clinical assessment of the presence and severity of bleeding symptoms, a number of bleeding score systems (BSSs) or bleeding assessment tools (BATs) were developed. The majority of these were specifically developed for patients with bleeding disorders more common than RBDs. Few RBD patients were evaluated with these tools, and without conclusive results. A new BSS was developed using data retrieved from a large group of patients with RBDs enrolled in the EN-RBD database and from healthy subjects. These data included previous bleeding symptoms, frequency, spontaneity, extent, localization, and relationship to prophylaxis and acute treatment. The predictive power of this BSS was also compared with the ISTH-BAT and examined for the severity of RBDs based on coagulant factor activity. This BSS was able to differentiate patients with RBDs from healthy individuals, with a bleeding score value of 1.5 having the highest sum of sensitivity (67.1%) and specificity (73.8%) in discriminating patients with RBD from those without. An easy-to-use calculation was also developed to assess the probability of having a RBD. Its comparison with the ISTH-BAT confirmed its utility. Finally, in RBD patients, there was a significant negative correlation between BS and coagulant factor activity level, which was strongest for fibrinogen and FXIII deficiencies. The use of this quantitative method may represent a valuable support tool to clinicians. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Using an interdisciplinary partnership to develop nursing students' information literacy skills: an evaluation.

    PubMed

    Turnbull, Bev; Royal, Bernadette; Purnell, Margaret

    2011-01-01

    As learning paradigms shift to student-centred active learning, development of effective skills in locating and retrieving information using electronic sources is integral to promoting lifelong learning. Recency of information that is evidence based is a critical factor in a dynamic field such as health. A changing demographic is evident among nursing students, with greater numbers of mature-age students who may not possess the computer skills often assumed with school leavers, and whose study preference is mostly by external mode. Development of interdisciplinary partnerships between faculties and librarians can provide the attributes and innovation of new and improved ways to better support student learning, whether or not students attend on campus. The Health Online Tutorial, an online database searching tool developed through a collaborative, interdisciplinary partnership at Charles Darwin University, is one such example.

  3. VIIRS validation and algorithm development efforts in coastal and inland Waters

    NASA Astrophysics Data System (ADS)

    Stengel, E.; Ondrusek, M.

    2016-02-01

    Accurate satellite ocean color measurements in coastal and inland waters are more challenging than open-ocean measurements. Complex water and atmospheric conditions can limit the utilization of remote sensing data in coastal waters where it is most needed. The Coastal Optical Characterization Experiment (COCE) is an ongoing project at NOAA/NESDIS/STAR Satellite Oceanography and Climatology Division. The primary goals of COCE are satellite ocean color validation and application development. Currently, this effort concentrates on the initialization and validation of the Joint Polar Satellite System (JPSS) VIIRS sensor using a Satlantic HyperPro II radiometer as a validation tool. A report on VIIRS performance in coastal waters will be given by presenting comparisons between in situ ground truth measurements and VIIRS retrievals made in the Chesapeake Bay, and inland waters of the Gulf of Mexico and Puerto Rico. The COCE application development effort focuses on developing new ocean color satellite remote sensing tools for monitoring relevant coastal ocean parameters. A new VIIRS total suspended matter algorithm will be presented for the Chesapeake Bay. These activities improve the utility of ocean color satellite data in monitoring and analyzing coastal and oceanic processes. Progress on these activities will be reported.

  4. Dental Informatics tool “SOFPRO” for the study of oral submucous fibrosis

    PubMed Central

    Erlewad, Dinesh Masajirao; Mundhe, Kalpana Anandrao; Hazarey, Vinay K

    2016-01-01

    Background: Dental informatics is an evolving branch widely used in dental education and practice. Numerous applications that support clinical care, education and research have been developed. However, very few such applications have been developed and utilized in epidemiological studies of oral submucous fibrosis (OSF), which affects a significant population of Asian countries. Aims and Objectives: To design and develop a user-friendly software tool for the descriptive epidemiological study of OSF. Materials and Methods: With the help of a software engineer, a computer program, SOFPRO, was designed and developed using Ms-Visual Basic 6.0 (VB), Ms-Access 2000, Crystal Report 7.0 and Ms-Paint in the Windows XP operating system. For analysis purposes, the available OSF data from the departmental precancer registry were fed into SOFPRO. Results: Known, unknown, and null data are successfully accepted in data entry and represented in the data analysis of OSF. The smooth working of SOFPRO and its correct data flow were tested against real-time OSF data. Conclusion: SOFPRO was found to be a user-friendly automated tool for easy data collection, retrieval, management and analysis for OSF patients. PMID:27601808

  5. 48 CFR 42.1501 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... System (CPARS) and Past Performance Information Retrieval System (PPIRS) metric tools to measure the... CONTRACT ADMINISTRATION AND AUDIT SERVICES Contractor Performance Information 42.1501 General. (a) Past performance information (including the ratings and supporting narratives) is relevant information, for future...

  6. 48 CFR 42.1501 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Reporting System (CPARS) and Past Performance Information Retrieval System (PPIRS) metric tools to measure... CONTRACT ADMINISTRATION AND AUDIT SERVICES Contractor Performance Information 42.1501 General. (a) Past performance information (including the ratings and supporting narratives) is relevant information, for future...

  7. Wilson in Node 1 Unity

    NASA Image and Video Library

    2010-04-10

    S131-E-008502 (10 April 2010) --- NASA astronaut Stephanie Wilson, STS-131 mission specialist, retrieves a tool from a drawer in the Unity node of the International Space Station while space shuttle Discovery remains docked with the station.

  8. Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software

    PubMed Central

    Williams, Linda; Grayson, Diana; Gosbee, John

    2001-01-01

    Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.

  9. Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software

    PubMed Central

    Williams, Linda; Grayson, Diana; Gosbee, John

    2002-01-01

    Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.

  10. Update on Genomic Databases and Resources at the National Center for Biotechnology Information.

    PubMed

    Tatusova, Tatiana

    2016-01-01

    The National Center for Biotechnology Information (NCBI), as a primary public repository of genomic sequence data, collects and maintains enormous amounts of heterogeneous data. Data for genomes, genes, gene expression, gene variation, gene families, proteins, and protein domains are integrated with the analytical, search, and retrieval resources through the NCBI website. Entrez, NCBI's text-based search and retrieval system, provides a fast and easy way to navigate across diverse biological databases. Comparative genome analysis tools lead to further understanding of evolutionary processes, quickening the pace of discovery. Recent technological innovations have ignited an explosion in genome sequencing that has fundamentally changed our understanding of the biology of living organisms. This huge increase in DNA sequence data presents new challenges for the information management system and the visualization tools. New strategies have been designed to bring order to this genome sequence shockwave and improve the usability of associated data.

  11. Retrieval System for Calcined Waste for the Idaho Cleanup Project - 12104

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastman, Randy L.; Johnston, Beau A.; Lower, Danielle E.

    This paper describes the conceptual approach to retrieve radioactive calcine waste, hereafter called calcine, from stainless steel storage bins contained within concrete vaults. The retrieval system will allow evacuation of the granular solids (calcine) from the storage bins through the use of stationary vacuum nozzles. The nozzles will use air jets for calcine fluidization and will be able to rotate and direct the fluidization or displacement of the calcine within the bin. Each bin will have a single retrieval system installed prior to operation to prevent worker exposure to the high radiation fields. The addition of an articulated camera arm will allow for operations monitoring and will be equipped with contingency tools to aid in calcine removal. Possible challenges (calcine bridging and rat-holing) associated with calcine retrieval and transport, including potential solutions for bin pressurization, calcine fluidization and waste confinement, are also addressed. The Calcine Disposition Project has the responsibility to retrieve, treat, and package HLW calcine. The calcine retrieval system has been designed to incorporate the functions and technical characteristics as established by the retrieval system functional analysis. By adequately implementing the highest ranking technical characteristics into the design of the retrieval system, the system will be able to satisfy the functional requirements. The retrieval system conceptual design provides the means for removing bulk calcine from the bins of the CSSF vaults. Top-down vacuum retrieval coupled with an articulating camera arm will allow for a robust, contained process capable of evacuating bulk calcine from bins and transporting it to the processing facility. The system is designed to fluidize, vacuum, transport and direct the calcine from its current location to the CSSF roof-top transport lines. An articulating camera arm, deployed through an adjacent access riser, will work in conjunction with the retrieval nozzle to aid in calcine fluidization, remote viewing, clumped calcine breaking and recovery from off-normal conditions. As the design of the retrieval system progresses from conceptual to preliminary, increasing attention will be directed toward detailed design and proof-of-concept testing. (authors)

  12. EXPLORING BIASES OF ATMOSPHERIC RETRIEVALS IN SIMULATED JWST TRANSMISSION SPECTRA OF HOT JUPITERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocchetto, M.; Waldmann, I. P.; Tinetti, G.

    2016-12-10

    With a scheduled launch in 2018 October, the James Webb Space Telescope (JWST) is expected to revolutionize the field of atmospheric characterization of exoplanets. The broad wavelength coverage and high sensitivity of its instruments will allow us to extract far more information from exoplanet spectra than what has been possible with current observations. In this paper, we investigate whether current retrieval methods will still be valid in the era of JWST, exploring common approximations used when retrieving transmission spectra of hot Jupiters. To assess biases, we use 1D photochemical models to simulate typical hot Jupiter cloud-free atmospheres and generate synthetic observations for a range of carbon-to-oxygen ratios. Then, we retrieve these spectra using TauREx, a Bayesian retrieval tool, using two methodologies: one assuming an isothermal atmosphere, and one assuming a parameterized temperature profile. Both methods assume constant-with-altitude abundances. We found that the isothermal approximation biases the retrieved parameters considerably, overestimating the abundances by about one order of magnitude. The retrieved abundances using the parameterized profile are usually within 1σ of the true state, and we found the retrieved uncertainties to be generally larger compared to the isothermal approximation. Interestingly, we found that by using the parameterized temperature profile we could place tight constraints on the temperature structure. This opens the possibility of characterizing the temperature profile of the terminator region of hot Jupiters. Lastly, we found that assuming a constant-with-altitude mixing ratio profile is a good approximation for most of the atmospheres under study.
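
    To make the contrast between the two retrieval assumptions concrete, the sketch below builds an isothermal profile and a node-based profile interpolated in log-pressure. This is a generic illustration, not TauREx's actual parameterization; the node temperatures and pressure grid are invented.

        import numpy as np

        # Pressure grid in Pa, increasing from the top of the atmosphere down.
        pressure = np.logspace(-1, 6, 100)

        def isothermal_profile(t_iso, p):
            # One free parameter: a single temperature at every level.
            return np.full_like(p, t_iso)

        def parameterized_profile(t_nodes, p_nodes, p):
            # A few temperature nodes, interpolated linearly in log10(pressure).
            return np.interp(np.log10(p), np.log10(p_nodes), t_nodes)

        t_iso = isothermal_profile(1500.0, pressure)
        t_par = parameterized_profile([900.0, 1400.0, 1800.0],
                                      [1e0, 1e3, 1e6], pressure)
        print(t_par[0], t_par[-1])  # cooler aloft, hotter at depth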

  13. Rapid Diagnostics of Onboard Sequences

    NASA Technical Reports Server (NTRS)

    Starbird, Thomas W.; Morris, John R.; Shams, Khawaja S.; Maimone, Mark W.

    2012-01-01

    Keeping track of sequences onboard a spacecraft is challenging. When reviewing Event Verification Records (EVRs) of sequence executions on the Mars Exploration Rover (MER), operators often found themselves wondering which version of a named sequence an EVR corresponded to. The lack of this information drastically impacts the operators' diagnostic capabilities as well as their situational awareness with respect to the commands the spacecraft has executed, since the EVRs do not provide argument values or explanatory comments. Having this information immediately available can be instrumental in diagnosing critical events and can significantly enhance the overall safety of the spacecraft. This software provides an auditing capability that can eliminate that uncertainty while diagnosing critical conditions. Furthermore, the RESTful interface provides a simple way for sequencing tools to automatically retrieve binary compiled sequence SCMFs (Space Command Message Files) on demand. It also enables developers to change the underlying database while maintaining the same interface to existing applications. The logging capabilities also benefit operators trying to recall how they solved a similar problem many days earlier: the software enables automatic recovery of SCMF and RML (Robot Markup Language) sequence files directly from the command EVRs, eliminating the need to find and validate the corresponding sequences by hand. To address the lack of auditing capability for sequences onboard a spacecraft during earlier missions, extensive logging support was added to the Mars Science Laboratory (MSL) sequencing server, which is responsible for generating all MSL binary SCMFs from RML input sequences. The sequencing server logs every SCMF it generates into a MySQL database, along with the high-level RML file and dictionary name inputs used to create the SCMF. The SCMF is indexed by a hash value that is automatically included in all command EVRs by the onboard flight software. Both the binary SCMF result and the RML input file can then be retrieved simply by supplying the hash to a RESTful web interface. This interface enables command line tools as well as large sophisticated programs to download SCMFs and RMLs on demand from the database, enabling a vast array of tools to be built on top of it. One such command line tool can retrieve and display RML files, or annotate a list of EVRs by interleaving them with the original sequence commands. This software has been integrated with the MSL sequencing pipeline, where it will serve sequences useful for diagnostics, debugging, and situational awareness throughout the mission.
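
    A minimal sketch of the auditing pattern described, hash-indexed storage of each compiled sequence with lookup by the hash carried in EVRs, is given below. The hash length, in-memory storage and function names are illustrative assumptions, not the MSL server's actual implementation.

        import hashlib

        store = {}  # hash -> (binary SCMF, RML source); stands in for the MySQL table

        def log_sequence(scmf_bytes: bytes, rml_source: str) -> str:
            # Index the compiled sequence by a content hash; this is the value
            # the flight software would embed in command EVRs.
            key = hashlib.sha256(scmf_bytes).hexdigest()[:16]
            store[key] = (scmf_bytes, rml_source)
            return key

        def lookup(evr_hash: str):
            # Stands in for the RESTful get-by-hash interface.
            return store[evr_hash]

        h = log_sequence(b"\x01\x02\x03", "<rml>drive 1 m</rml>")
        scmf, rml = lookup(h)
        print(h, rml)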

  14. Development of concepts for satellite retrieval devices

    NASA Technical Reports Server (NTRS)

    Pruett, E. C.; Robertson, K. B., III; Loughead, T. E.

    1979-01-01

    The teleoperator being developed to augment the Space Transportation System (STS) for satellite placement, retrieval, or servicing at altitudes or orbital planes where it would be impractical to use the shuttle is primarily a general purpose propulsion stage that can be fitted with manipulator arms, automated servicers and satellite retrieval devices for particular missions. Design concepts for a general purpose retrieval device for docking with a satellite to which a grappling fixture has been attached, and for a retrieval device for docking with the Solar Maximum Mission (SMM) spacecraft were defined. The mechanical aspects of these two devices are discussed as well as the crew operations involved and problems created by the requirement for remote control. Drawings for the two retrieval device concepts are included.

  15. Developmental origins of recoding and decoding in memory.

    PubMed

    Kibbe, Melissa M; Feigenson, Lisa

    2014-12-01

    Working memory is severely limited in both adults and children, but one way that adults can overcome this limit is through the process of recoding. Recoding happens when representations of individual items are chunked together into a higher order representation, and the chunk is assigned a label. That label can then be decoded to retrieve the individual items from long-term memory. Whereas this ability has been extensively studied in adults (as, for example, in classic studies of memory in chess), little is known about recoding's developmental origins. Here we asked whether 2- to 3-year-old children also can recode-that is, can they restructure representations of individual objects into a higher order chunk, assign this new representation a verbal label, and then later decode the label to retrieve the represented individuals from memory. In Experiments 1 and 2, we showed children identical blocks that could be connected to make tools. Children learned a novel name for a tool that could be built from two blocks, and for a tool that could be built from three blocks. Later we told children that one of the tools was hidden in a box, with no visual information provided. Children were allowed to search the box and retrieve varying numbers of blocks. Critically, the retrieved blocks were identical and unconnected, so the only way children could know whether any blocks remained was by using the verbal label to recall how many objects comprised each tool (or chunk). We found that even children who could not yet count adjusted their searching of the box depending on the label they had heard. This suggests that they had recoded representations of individual blocks into higher-order chunks, attached labels to the chunks, and then later decoded the labels to infer how many blocks were hidden. In Experiments 3 and 4 we asked whether recoding also can expand the number of individual objects children could remember, as in the classic studies with adults. We found that when no information was provided to support recoding, children showed the standard failure to remember more than three hidden objects at once. But when provided recoding information, children successfully represented up to five individual objects in the box, thereby overcoming typical working memory limits. These results are the first demonstration of recoding by young children; we close by discussing their implications for understanding the structure of memory throughout the lifespan. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. SEOM's Sentinel-3/OLCI' project CAWA: advanced GRASP aerosol retrieval

    NASA Astrophysics Data System (ADS)

    Dubovik, Oleg; litvinov, Pavel; Huang, Xin; Aspetsberger, Michael; Fuertes, David; Brockmann, Carsten; Fischer, Jürgen; Bojkov, Bojan

    2016-04-01

    The CAWA "Advanced Clouds, Aerosols and WAter vapour products for Sentinel-3/OLCI" ESA-SEOM project aims at the development of advanced atmospheric retrieval algorithms for the Sentinel-3/OLCI mission, and is prepared using Envisat/MERIS and Aqua/MODIS datasets. This presentation discusses mainly the CAWA aerosol product developments and results. The CAWA aerosol retrieval uses the recently developed GRASP (Generalized Retrieval of Aerosol and Surface Properties) algorithm described by Dubovik et al. (2014). GRASP derives an extended set of atmospheric parameters using a multi-pixel concept: a simultaneous fitting of a large group of pixels under additional a priori constraints limiting the time variability of surface properties and the spatial variability of aerosol properties. Over land, GRASP simultaneously retrieves the properties of both aerosol and the underlying surface, even over bright surfaces. GRASP does not use traditional look-up tables and performs the retrieval as a search in a continuous space of solutions. All radiative transfer calculations are performed as part of the retrieval. The results of comprehensive sensitivity tests, as well as results obtained from real Envisat/MERIS data, will be presented. The tests analyze various aspects of aerosol and surface reflectance retrieval accuracy. In addition, the possibilities of improving the retrieval by means of a synergetic inversion of OLCI data combined with observations by SLSTR are explored. Both the results of numerical tests and the results of processing several years of Envisat/MERIS data demonstrate reliable retrieval of AOD (Aerosol Optical Depth) and surface BRDF. Observed retrieval issues and advancements will be discussed. For example, for some situations we illustrate the possibility of retrieving aerosol absorption, a property that is hardly accessible from satellite observations without multi-angular and polarimetric capabilities.
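
    In schematic form, the multi-pixel concept can be written as the minimization of a single cost functional over the joint state vector a of all pixels. This is a hedged sketch of the structure described above, not GRASP's exact formulation:

        \Psi(\mathbf{a}) \;=\; \sum_{i=1}^{N_{\mathrm{pix}}}
        \left[\mathbf{f}_i - \mathbf{f}(\mathbf{a}_i)\right]^{\mathsf{T}}
        \mathbf{W}_i^{-1}
        \left[\mathbf{f}_i - \mathbf{f}(\mathbf{a}_i)\right]
        \;+\; \gamma_t \left\lVert \mathbf{S}_t \mathbf{a} \right\rVert^2
        \;+\; \gamma_x \left\lVert \mathbf{S}_x \mathbf{a} \right\rVert^2

    Here f_i are the observations for pixel i, f(a_i) is the forward model, W_i is the measurement error covariance, and the difference operators S_t and S_x penalize, with Lagrange-type weights gamma_t and gamma_x, the temporal variability of surface properties and the spatial variability of aerosol properties, respectively.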

  17. Clinical Bioinformatics: challenges and opportunities

    PubMed Central

    2012-01-01

    Background Network Tools and Applications in Biology (NETTAB) Workshops are a series of meetings focused on the most promising and innovative ICT tools and on their usefulness in Bioinformatics. The NETTAB 2011 workshop, held in Pavia, Italy, in October 2011, was aimed at presenting some of the most relevant methods, tools and infrastructures that are nowadays available for Clinical Bioinformatics (CBI), the research field that deals with clinical applications of bioinformatics. Methods In this editorial, the viewpoints and opinions of three world leaders in CBI, who were invited to participate in a panel discussion of the NETTAB workshop on the next challenges and future opportunities of this field, are reported. These include the development of data warehouses and ICT infrastructures for data sharing, the definition of standards for sharing phenotypic data and the implementation of novel tools for efficient search computing solutions. Results Some of the most important design features of a CBI-ICT infrastructure are presented, including data warehousing, modularity and flexibility, open-source development, semantic interoperability, and integrated search and retrieval of -omics information. Conclusions Clinical Bioinformatics goals are ambitious. Many factors, including the availability of high-throughput "-omics" technologies and equipment, the widespread availability of clinical data warehouses and the noteworthy increase in data storage and computational power of the most recent ICT systems, justify research and efforts in this domain, which promises to be a crucial leveraging factor for biomedical research. PMID:23095472

  18. Aircraft Measurements of Aerosol Phase Matrix Elements by the Polarized Imaging Nephelometer (Invited)

    NASA Astrophysics Data System (ADS)

    Dolgos, G.; Martins, J.; Espinosa, R.; Dubovik, O.; Beyersdorf, A. J.; Ziemba, L. D.; Hair, J. W.

    2013-12-01

    Aerosols have a significant impact on the radiative balance and water cycle of our planet through their influence on atmospheric radiation. Remote sensing of aerosols relies on scattering phase matrix information to retrieve aerosol properties with frequent global coverage; the assumed phase matrices must be validated by measurements. At the Laboratory for Aerosols, Clouds and Optics (LACO) at the University of Maryland, Baltimore County (UMBC) we developed a new technique to directly measure the aerosol phase function (P11), the degree of linear polarization of the scattered light (-P12/P11), and the volume scattering coefficient (SCAT). We designed and built a portable instrument called the Polarized Imaging Nephelometer (PI-Neph), shown in Figure 1 (a). The PI-Neph successfully participated in dozens of flights of the NASA Development and Evaluation of satellite ValidatiOn Tools by Experimenters (DEVOTE) project and the Deep Convective Clouds and Chemistry (DC3) project, and in the January and February deployments of the Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality (Discover-AQ) mission. The ambient aerosol enters the PI-Neph through an inlet and the sample is illuminated by laser light (wavelength of 532 nm); the scattered light is imaged by a stationary wide-field-of-view camera over the scattering angle range of 2° to 178° (in some cases stray light limited the range to 3° to 176°). Data for P11, P12, and SCAT were taken every 12 seconds; example DEVOTE datasets of P11 times SCAT are shown in Figure 1 (b). The talk will highlight results from the three field deployments and will show microphysical retrievals from the scattering data. The size distribution and the average complex refractive index of the ambient aerosol ensemble can be retrieved from the data by an algorithm similar to that of AERONET, as illustrated in Figure 1 (c). Particle sphericity can potentially be retrieved as well; this will be investigated in the near future. The instrument will be applied to the validation of aerosol retrievals from AERONET and airborne polarimeters. The PI-Neph instrument has recently been upgraded to three wavelengths, and a second instrument was built as well. The LACO group is actively developing an advanced open-path version of the Imaging Nephelometer that does not require an inlet but measures undisturbed particles under the aircraft wing. Figure 1: (a) The Polarized Imaging Nephelometer instrument inside the B200 aircraft of NASA Langley. (b) Phase function times volume scattering coefficient data from DEVOTE. (c) Retrievals of particle size distribution based on the data in panel (b).

  19. A Comparison of Foliage Profiles in the Sierra National Forest Obtained with a Full-Waveform Under-Canopy EVI Lidar System with the Foliage Profiles Obtained with an Airborne Full-Waveform LVIS Lidar System

    NASA Technical Reports Server (NTRS)

    Zhao, Feng; Yang, Xiaoyuan; Strahler, Alan H.; Schaaf, Crystal L.; Yao, Tian; Wang, Zhuosen; Roman, Miguel O.; Woodcock, Curtis E.; Ni-Meister, Wenge; Jupp, David L. B.; hide

    2013-01-01

    Foliage profiles retrieved from a scanning, terrestrial, near-infrared (1064 nm), full-waveform lidar, the Echidna Validation Instrument (EVI), agree well with those obtained from an airborne, near-infrared, full-waveform, large-footprint lidar, the Lidar Vegetation Imaging Sensor (LVIS). We conducted trials at 5 plots within a conifer stand at Sierra National Forest in August 2008. Foliage profiles retrieved from these two lidar systems are closely correlated (e.g., r = 0.987 at 100 m horizontal distances) at large spatial coverage, while they differ significantly at small spatial coverage, indicating an apparent scanning-perspective effect on foliage profile retrievals. We also noted obvious effects of local topography on foliage profile retrievals, particularly on the topmost height retrievals. With a fine spatial resolution and a small beam size, terrestrial lidar systems complement the strengths of airborne lidars by making a detailed characterization of the crowns from a small field site, thereby serving as a validation tool and providing localized tuning information for future airborne and spaceborne lidar missions.

  20. Influence of aerosol estimation on coastal water products retrieved from HICO images

    NASA Astrophysics Data System (ADS)

    Patterson, Karen W.; Lamela, Gia

    2011-06-01

    The Hyperspectral Imager for the Coastal Ocean (HICO) is a hyperspectral sensor which was launched to the International Space Station in September 2009. The Naval Research Laboratory (NRL) has been developing the Coastal Water Signatures Toolkit (CWST) to estimate water depth, bottom type and water column constituents such as chlorophyll, suspended sediments and chromophoric dissolved organic matter from hyperspectral imagery. The CWST uses a look-up table approach, comparing remote sensing reflectance spectra observed in an image to a database of modeled spectra for pre-determined water column constituents, depth and bottom type. To use this approach successfully, the remote sensing reflectances must be accurate, which implies accurately correcting for the atmospheric contribution to the HICO top-of-the-atmosphere radiances. One tool the NRL is using to atmospherically correct HICO imagery is Correction of Coastal Ocean Atmospheres (COCOA), which is based on Tafkaa 6S. One of the user input parameters to COCOA is aerosol optical depth or aerosol visibility, which can vary rapidly over short distances in coastal waters. Changes to the aerosol thickness result in changes to the magnitude of the remote sensing reflectances. As such, the CWST retrievals for water constituents, depth and bottom type can be expected to vary in like fashion. This work illustrates the variability in CWST retrievals due to inaccurate aerosol thickness estimation during atmospheric correction of HICO images.

  1. Infrared atmospheric sounding interferometer correlation interferometry for the retrieval of atmospheric gases: the case of H2O and CO2.

    PubMed

    Grieco, Giuseppe; Masiello, Guido; Serio, Carmine; Jones, Roderic L; Mead, Mohammed I

    2011-08-01

    Correlation interferometry is a particular application of Fourier transform spectroscopy with partially scanned interferograms. Basically, it is a technique to obtain the difference between the spectra of atmospheric radiance at two diverse spectral resolutions. Although the technique could be exploited to design an appropriate correlation interferometer, in this paper we are concerned with the analytical aspects of the method and its application to high-spectral-resolution infrared observations in order to separate the emission of a given atmospheric gas from a spectral signal dominated by surface emission, such as in the case of satellite spectrometers operated in the nadir looking mode. The tool will be used to address some basic questions concerning the vertical spatial resolution of H2O and to develop an algorithm to retrieve the columnar amount of CO2. An application to complete interferograms from the Infrared Atmospheric Sounding Interferometer will be presented and discussed. For H2O, we have concluded that the vertical spatial resolution in the lower troposphere mostly depends on broad features associated with the spectrum, whereas for CO2, we have derived a technique capable of retrieving a CO2 columnar amount with accuracy of ≈±7 parts per million by volume at the level of each single field of view.
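
    In schematic Fourier-transform-spectroscopy terms (a sketch of the principle, not the paper's full formalism), the spectrum at the resolution set by a maximum optical path difference L is the transform of the truncated interferogram, and correlation interferometry works with the difference of two such spectra:

        R_L(\sigma) \;=\; \int_{-L}^{+L} I(x)\, e^{-i 2\pi \sigma x}\, \mathrm{d}x,
        \qquad
        \Delta R(\sigma) \;=\; R_{L_1}(\sigma) - R_{L_2}(\sigma), \quad L_1 > L_2

    where I(x) is the interferogram at optical path difference x and sigma is the wavenumber. Since spectral resolution scales roughly as 1/(2L), the difference Delta R suppresses the smooth, surface-emission-dominated background and isolates the finer spectral structure contributed by the target gas.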

  2. A Logic Basis for Information Retrieval.

    ERIC Educational Resources Information Center

    Watters, C. R.; Shepherd, M. A.

    1987-01-01

    Discusses the potential of recent work in artificial intelligence, especially expert systems, for the development of more effective information retrieval systems. Highlights include the role of an expert bibliographic retrieval system and a prototype expert retrieval system, PROBIB-2, that uses MicroProlog to provide deductive reasoning…

  3. Imaging informatics-based multimedia ePR system for data management and decision support in rehabilitation research

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Verma, Sneha; Qin, Yi; Sterling, Josh; Zhou, Alyssa; Zhang, Jeffrey; Martinez, Clarisa; Casebeer, Narissa; Koh, Hyunwook; Winstein, Carolee; Liu, Brent

    2013-03-01

    With the rapid development of science and technology, large-scale rehabilitation centers and clinical rehabilitation trials usually involve significant volumes of multimedia data. Due to the global aging crisis, millions of new patients with age-related chronic diseases will produce huge amounts of data and contribute to soaring costs of medical care. Hence, a solution for effective data management and decision support would significantly reduce expenditure and ultimately improve patients' quality of life. Inspired by the concept of the electronic patient record (ePR), we developed a prototype system for the field of rehabilitation engineering. The system is subject- or patient-oriented and customized for specific projects. The system components include data entry modules, multimedia data presentation and data retrieval. To process the multimedia data, the system includes a DICOM viewer with annotation tools and a video/audio player. The system also serves as a platform for integrating decision-support tools and data mining tools. Based on the prototype system design, we developed two specific applications: 1) DOSE (a phase 1 randomized clinical trial to determine the optimal dose of therapy for rehabilitation of the arm and hand after stroke); and 2) the NEXUS project from the Rehabilitation Engineering Research Center (RERC), a NIDRR-funded center. Currently, the system is being evaluated in the context of the DOSE trial with a projected enrollment of 60 participants over 5 years, and will be evaluated by the NEXUS project with 30 subjects. By applying the ePR concept, we developed a system to improve the current research workflow, reduce the cost of managing data, and provide a platform for the rapid development of future decision-support tools.

  4. Neural correlates of active controlled retrieval development: An exploratory ERP study.

    PubMed

    Simard, France; Cadoret, Geneviève

    2018-07-01

    Working memory is composed of different processes and encompasses not only the temporary storage of information but also its manipulation in order to perform complex cognitive activities. During childhood, one of these manipulation processes, namely active controlled retrieval, improves significantly between the ages of 6 and 10, suggesting that the neuronal network supporting this function undergoes substantial maturational changes. The present study examined the neural activity of 14 healthy children and 14 adults while performing an active controlled retrieval task. Results showed differences in brain activity according to active controlled retrieval in a 300-500 ms window corresponding to the retrieval period. Active controlled retrieval was associated with a P3b-like potential at parietal sites for both children and adults. At fronto-central sites, children demonstrated an "N400-like" potential associated with active retrieval processing. These results are discussed in terms of maturational development. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Simulation and Assessment of a Ku-Band Full-Polarized Radar Scatterometer for Ocean Surface Vector Wind Measurement

    NASA Astrophysics Data System (ADS)

    Dong, X.; Lin, W.; Zhu, D.; Song, Z.

    2011-12-01

    Spaceborne radar scatterometry is the most important tool for global ocean surface wind vector (OSVW) measurement. Performance under high-wind-speed conditions and the accuracy of wind direction retrievals are two very important concerns in the development of OSVW measurement techniques based on radar scatterometry. Co-polarized sigma-0 measurements are employed in all spaceborne radar scatterometers developed in past missions and planned for future ones. The main disadvantages of co-polarized-only radar scatterometers for OSVW measurement are, firstly, that wind vector retrieval performance varies with the position of the wind vector cells (WVC) within the swath: WVCs at small incidence angles, where the azimuthal modulation of sigma-0 is weaker, and WVCs located in the outer part of the swath, with lower signal-to-noise ratios and lower radiometric accuracies, show worse retrieval performance; and secondly, that for co-polarization measurements sigma-0 is an even function of the azimuth angle with respect to the true wind direction, which results in directional ambiguity, so additional information is needed for ambiguity removal. Theoretical and experimental results show that cross-polarization measurements can provide directional information complementary to the co-polarization measurements, usefully improving wind vector retrieval performance. In this paper, the simulation and performance assessment of a full-polarized Ku-band radar scatterometer are provided. Some important conclusions are obtained: (1) Compared with available dual co-polarized radar scatterometers, the introduction of cross-polarization information can significantly improve OSVW retrieval accuracy, with relatively uniform performance across the whole swath. Simulations show that, without a significant power increase, a system design based on a rotating pencil beam performs much better than a rotating fan-beam system due to its higher antenna gain and signal-to-noise ratio. (2) The performance of the full-polarized measurement, in which all 9 elements of the covariance matrix are measured, offers only a small improvement over the "dual-co-polarization + HVVV" design, because the HVVV and VHHH measurements are almost identical by reciprocity. (3) The propagation error of the rotating pencil-beam system is much smaller than that of the rotating fan-beam system, owing to the significant difference in antenna gains and signal-to-noise ratios. (4) Introducing the cross-polarized HVVV measurement leads to almost identical wind direction retrieval performance for both the rotating pencil-beam and rotating fan-beam systems, which shows that, compared with the available fixed fan-beam systems, cross-polarization information can significantly improve wind direction retrieval performance by increasing the number of look angles.
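
    The directional-ambiguity argument can be made concrete with the commonly used truncated-Fourier form of the co-polarized geophysical model function (a standard schematic form, not the specific model function used in the paper):

        \sigma^{0}_{pp}(\chi) \;\approx\; A_0 + A_1 \cos\chi + A_2 \cos 2\chi

    where chi is the radar azimuth relative to the wind direction and the coefficients A_k depend on wind speed, incidence angle and polarization pp. Because the cosine terms are even, sigma-0(chi) = sigma-0(-chi), so co-polarized measurements alone cannot distinguish wind directions mirrored about the radar look direction; cross-polarized terms with different azimuthal symmetry supply the information needed to break this ambiguity.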

  6. ONAV - An Expert System for the Space Shuttle Mission Control Center

    NASA Technical Reports Server (NTRS)

    Mills, Malise; Wang, Lui

    1992-01-01

    The ONAV (Onboard Navigation) Expert System is being developed as a real-time console assistant to the ONAV flight controller for use in the Mission Control Center at the Johnson Space Center. As of October 1991, the entry and ascent systems have been certified for use on console as support tools, and were used for STS-48. The rendezvous system is in verification, with the goal of having the system certified for STS-49, the Intelsat retrieval. To arrive at this stage, from prototype to real-world application, the ONAV project has had to deal not only with AI issues but with operating environment issues. The AI issues included the maturity of AI languages and their debugging tools, verification, and the availability, stability and size of the expert pool. The environmental issues included real-time data acquisition, hardware suitability, and how to achieve acceptance by users and management.

  7. A Consistent Retrieval Analysis of 10 Hot Jupiters Observed in Transmission

    NASA Astrophysics Data System (ADS)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Sing, D. K.

    2017-01-01

    We present a consistent optimal estimation retrieval analysis of 10 hot Jupiter exoplanets, each with transmission spectral data spanning the visible to near-infrared wavelength range. Using the NEMESIS radiative transfer and retrieval tool, we calculate a range of possible atmospheric states for WASP-6b, WASP-12b, WASP-17b, WASP-19b, WASP-31b, WASP-39b, HD 189733b, HD 209458b, HAT-P-1b, and HAT-P-12b. We find that the spectra of all 10 planets are consistent with the presence of some atmospheric aerosol; WASP-6b, WASP-12b, WASP-17b, WASP-19b, HD 189733b, and HAT-P-12b are all fit best by Rayleigh scattering aerosols, whereas WASP-31b, WASP-39b and HD 209458b are better represented by a gray cloud model. HAT-P-1b has solutions that fall into both categories. WASP-6b, HAT-P-12b, HD 189733b, and WASP-12b must have aerosol extending to low atmospheric pressures (below 0.1 mbar). In general, planets with equilibrium temperatures between 1300 and 1700 K are best represented by deeper, gray cloud layers, whereas cooler or hotter planets are better fit using high Rayleigh scattering aerosol. We find little evidence for the presence of molecular absorbers other than H2O. Retrieval methods can provide a consistent picture across a range of hot Jupiter atmospheres with existing data, and will be a powerful tool for the interpretation of James Webb Space Telescope observations.

  8. Testing search strategies for systematic reviews in the Medline literature database through PubMed.

    PubMed

    Volpato, Enilze S N; Betini, Marluci; El Dib, Regina

    2014-04-01

    A high-quality electronic search is essential in ensuring accuracy and completeness of the retrieved records when conducting a systematic review. We analysed the available sample of search strategies to identify the best method for searching Medline through PubMed, considering the use or not of parentheses, double quotation marks, truncation, and the use of a simple search or search history. In our cross-sectional study of search strategies, we selected and analysed the available searches performed during evidence-based medicine classes and in systematic reviews conducted in the Botucatu Medical School, UNESP, Brazil. We analysed 120 search strategies. With regard to the use of phrase searches with parentheses, there was no difference between the results with and without parentheses, or between simple searches and search history tools, in 100% of the sample analysed (P = 1.0). The number of results retrieved by the searches analysed was smaller using double quotation marks and using truncation compared with the standard strategy (P = 0.04 and P = 0.08, respectively). There is no need to use phrase-searching parentheses to retrieve studies; however, we recommend the use of double quotation marks when an investigator attempts to retrieve articles in which a term appears exactly as proposed in the search form. Furthermore, we do not recommend the use of truncation in search strategies in Medline via PubMed. Although the results of simple searches and search history tools were the same, we recommend using the latter.
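
    The kind of comparison the study performs can be reproduced programmatically. Below is a minimal sketch using NCBI's public E-utilities esearch endpoint to count PubMed records for a term with and without quotation marks or truncation; the example search terms are invented for illustration.

        import json
        import urllib.parse
        import urllib.request

        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pubmed_count(term: str) -> int:
            # Ask PubMed how many records a search expression retrieves.
            url = ESEARCH + "?" + urllib.parse.urlencode(
                {"db": "pubmed", "term": term, "retmode": "json"})
            with urllib.request.urlopen(url) as resp:
                return int(json.load(resp)["esearchresult"]["count"])

        # Same concept searched three ways: unquoted, quoted phrase, truncated.
        for term in ['low back pain', '"low back pain"', 'back pain*']:
            print(term, "->", pubmed_count(term))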

  9. A CONSISTENT RETRIEVAL ANALYSIS OF 10 HOT JUPITERS OBSERVED IN TRANSMISSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.

    We present a consistent optimal estimation retrieval analysis of 10 hot Jupiter exoplanets, each with transmission spectral data spanning the visible to near-infrared wavelength range. Using the NEMESIS radiative transfer and retrieval tool, we calculate a range of possible atmospheric states for WASP-6b, WASP-12b, WASP-17b, WASP-19b, WASP-31b, WASP-39b, HD 189733b, HD 209458b, HAT-P-1b, and HAT-P-12b. We find that the spectra of all 10 planets are consistent with the presence of some atmospheric aerosol; WASP-6b, WASP-12b, WASP-17b, WASP-19b, HD 189733b, and HAT-P-12b are all fit best by Rayleigh scattering aerosols, whereas WASP-31b, WASP-39b and HD 209458b are better represented by a gray cloud model. HAT-P-1b has solutions that fall into both categories. WASP-6b, HAT-P-12b, HD 189733b, and WASP-12b must have aerosol extending to low atmospheric pressures (below 0.1 mbar). In general, planets with equilibrium temperatures between 1300 and 1700 K are best represented by deeper, gray cloud layers, whereas cooler or hotter planets are better fit using high Rayleigh scattering aerosol. We find little evidence for the presence of molecular absorbers other than H2O. Retrieval methods can provide a consistent picture across a range of hot Jupiter atmospheres with existing data, and will be a powerful tool for the interpretation of James Webb Space Telescope observations.

  10. Search and dissemination in data processing. [searches performed for Aviation Technology Newsletter

    NASA Technical Reports Server (NTRS)

    Gold, C. H.; Moore, A. M.; Dodd, B.; Dittmar, V.

    1974-01-01

    Manual retrieval methods were used to complete 54 searches of interest for the General Aviation Newsletter. Subjects of search ranged from television transmission to machine tooling, Apollo moon landings, electronic equipment, and aerodynamics studies.

  11. Wearable Learning Tools.

    ERIC Educational Resources Information Center

    Bowskill, Jerry; Dyer, Nick

    1999-01-01

    Describes wearable computers, or information and communication technology devices that are designed to be mobile. Discusses how such technologies can enhance computer-mediated communications, focusing on collaborative working for learning. Describes an experimental system, MetaPark, which explores communications, data retrieval and recording, and…

  12. A PDA study management tool (SMT) utilizing wireless broadband and full DICOM viewing capability

    NASA Astrophysics Data System (ADS)

    Documet, Jorge; Liu, Brent; Zhou, Zheng; Huang, H. K.; Documet, Luis

    2007-03-01

    During the last 4 years, the IPI (Image Processing and Informatics) Laboratory has been developing a web-based Study Management Tool (SMT) application that allows radiologists, film librarians and PACS-related (Picture Archiving and Communication System) users to dynamically and remotely perform Query/Retrieve operations in a PACS network. Using a regular PDA (Personal Digital Assistant), users can remotely query a PACS archive to distribute any study to an existing DICOM (Digital Imaging and Communications in Medicine) node. This application, which has proven convenient for managing the study workflow [1, 2], has been extended to include DICOM viewing capability on the PDA. With this new feature, users can take a quick look at DICOM images, gaining mobility and convenience at the same time. In addition, we are extending the application to metropolitan-area wireless broadband networks. This feature requires smart phones that are capable of working as a PDA and have access to broadband wireless services. With the extension to wireless broadband technology and the preview of DICOM images, the Study Management Tool becomes an even more powerful tool for clinical workflow management.
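
    The Query/Retrieve operations the tool exposes follow the standard DICOM C-FIND/C-MOVE pattern. A minimal sketch of the query step using the pynetdicom library is below; the PACS hostname, port and AE titles are placeholders, and this is a generic illustration rather than the SMT's actual code.

        from pydicom.dataset import Dataset
        from pynetdicom import AE
        from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

        ae = AE(ae_title="SMT_CLIENT")
        ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

        query = Dataset()
        query.QueryRetrieveLevel = "STUDY"
        query.PatientID = "12345"      # illustrative patient ID
        query.StudyInstanceUID = ""    # empty: ask the archive to return this field

        assoc = ae.associate("pacs.example.org", 104)  # placeholder PACS host/port
        if assoc.is_established:
            responses = assoc.send_c_find(
                query, StudyRootQueryRetrieveInformationModelFind)
            for status, identifier in responses:
                if status and status.Status in (0xFF00, 0xFF01):  # pending = match
                    print(identifier.StudyInstanceUID)
            assoc.release()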

  13. Everglades Depth Estimation Network (EDEN) Applications: Tools to View, Extract, Plot, and Manipulate EDEN Data

    USGS Publications Warehouse

    Telis, Pamela A.; Henkel, Heather

    2009-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated system of real-time water-level monitoring, ground-elevation data, and water-surface elevation modeling to provide scientists and water managers with current on-line water-depth information for the entire freshwater part of the greater Everglades. To assist users in applying the EDEN data to their particular needs, a series of five EDEN tools, or applications (EDENapps), were developed. Using EDEN's tools, scientists can view the EDEN datasets of daily water-level and ground elevations, compute and view daily water depth and hydroperiod surfaces, extract data for user-specified locations, plot transects of water level, and animate water-level transects over time. Also, users can retrieve data from the EDEN datasets for analysis and display in other analysis software programs. As scientists and managers attempt to restore the natural volume, timing, and distribution of sheetflow in the wetlands, such information is invaluable. Information analyzed and presented with these tools is used to advise policy makers, planners, and decision makers of the potential effects of water management and restoration scenarios on the natural resources of the Everglades.

  14. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
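
    The MAP figures quoted above follow the standard definition of mean average precision over a set of query topics. A minimal sketch of the metric (with invented rankings and relevance judgments) is below.

        def average_precision(ranked_ids, relevant_ids):
            # Precision is accumulated at the rank of each relevant hit.
            relevant = set(relevant_ids)
            hits, precision_sum = 0, 0.0
            for rank, doc_id in enumerate(ranked_ids, start=1):
                if doc_id in relevant:
                    hits += 1
                    precision_sum += hits / rank
            return precision_sum / len(relevant) if relevant else 0.0

        def mean_average_precision(runs):
            # runs: one (ranked_ids, relevant_ids) pair per query topic.
            return sum(average_precision(r, q) for r, q in runs) / len(runs)

        runs = [(["a", "b", "c", "d"], ["a", "d"]),
                (["x", "y", "z"], ["z"])]
        print(mean_average_precision(runs))  # ~0.54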

  15. Arthroscopic suture retrievers and shuttles: a biomechanical investigation of the force required for tendon penetration and defect size.

    PubMed

    Lenz, Christopher G; Wieser, Karl; Lajtai, Georg; Meyer, Dominik C

    2015-11-17

    To compare instruments designed for arthroscopic suture handling during arthroscopic rotator cuff repair, to assess the force needed to penetrate the tendon, and to evaluate the residual defect size. Twenty-one instruments were each tested ten times on thawed sheep infraspinatus tendons. The force needed to pierce the tendon with each instrument was measured using a custom setup. Bone wax plates were used to make the perforation marks visible and to quantify the lesions each instrument created. The force to pierce a tendon ranged from 5.6 to 18.5 N/mm. Within the group of suture retrievers, the angled instruments required on average 85% higher forces than straight instruments. The lesion area ranged from 2 to 7 mm². Suture retrievers produced significantly larger lesion sizes compared with suture shuttles. For the identical task of passing a suture through a tendon, differences exist between tools in the ease of tendon penetration and the potential damage to the tendon. The design, function, and resulting lesion size may be relevant and important for surgical handling and for avoiding excess structural damage to the tendon. These results suggest that choosing the most appropriate tools for arthroscopic suture stitching influences the ease of handling and the final integrity of the tissue.

  16. Compression and fast retrieval of SNP data.

    PubMed

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
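
    Idea (i), difference-encoding SNPs within a linkage disequilibrium block against a reference SNP, can be sketched as follows. This is a schematic illustration of the principle in Python, not the authors' C++ implementation; the genotype coding (0/1/2 minor-allele counts) and the toy data are assumptions.

        import numpy as np

        def compress_block(block):
            # block: 2-D array, rows = SNPs in one LD block, columns = subjects;
            # store the first SNP in full and the others as sparse differences.
            reference = block[0]
            deltas = []
            for snp in block[1:]:
                idx = np.flatnonzero(snp != reference)
                deltas.append((idx, snp[idx]))  # where and what differs
            return reference, deltas

        def decompress_block(reference, deltas):
            rows = [reference]
            for idx, values in deltas:
                snp = reference.copy()
                snp[idx] = values
                rows.append(snp)
            return np.vstack(rows)

        block = np.array([[0, 1, 2, 0, 1, 0],
                          [0, 1, 2, 0, 2, 0],   # differs at one subject
                          [0, 1, 2, 0, 1, 0]])  # identical to the reference
        ref, deltas = compress_block(block)
        assert (decompress_block(ref, deltas) == block).all()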

  17. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564

  18. Screening tools to identify patients with complex health needs at risk of high use of health care services: A scoping review.

    PubMed

    Marcoux, Valérie; Chouinard, Maud-Christine; Diadiou, Fatoumata; Dufour, Isabelle; Hudon, Catherine

    2017-01-01

    Many people with chronic conditions have complex health needs often due to multiple chronic conditions, psychiatric comorbidities, psychosocial issues, or a combination of these factors. They are at high risk of frequent use of healthcare services. To offer these patients interventions adapted to their needs, it is crucial to be able to identify them early. The aim of this study was to find all existing screening tools that identify patients with complex health needs at risk of frequent use of healthcare services, and to highlight their principal characteristics. Our purpose was to find a short, valid screening tool to identify adult patients of all ages. A scoping review was performed on articles published between 1985 and July 2016, retrieved through a comprehensive search of the Scopus and CINAHL databases, following the methodological framework developed by Arksey and O'Malley (2005), and completed by Levac et al. (2010). Of the 3,818 articles identified, 30 were included, presenting 14 different screening tools. Seven tools were self-reported. Five targeted adult patients, and nine geriatric patients. Two tools were designed for specific populations. Four can be completed in 15 minutes or less. Most screening tools target elderly persons. The INTERMED self-assessment (IM-SA) targets adults of all ages and can be completed in less than 15 minutes. Future research could evaluate its usefulness as a screening tool for identifying patients with complex needs at risk of becoming high users of healthcare services.

  19. Millimeter-wave Imaging Radiometer (MIR) data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes the progress of the Millimeter-wave Imaging Radiometer (MIR) data processing task and the development of water vapor retrieval algorithms for the second six-month performance period. Aircraft MIR data from two 1995 field experiments were collected and processed with revised data processing software. Two revised versions of the water vapor retrieval algorithm were developed: one for executing the retrieval on a supercomputer platform, and one using pressure as the vertical coordinate. Products from other sensors were incorporated into the water vapor retrieval system in two implementations: one from the Special Sensor Microwave Imager (SSM/I), the other from the High-resolution Interferometer Sounder (HIS). Water vapor retrievals were performed for both airborne MIR data and spaceborne SSM/T-2 data from the TOGA/COARE, CAMEX-1, and CAMEX-2 field experiments. The climatology of water vapor during TOGA/COARE was examined using SSM/T-2 soundings and conventional rawinsondes.

  20. A Vertical Census of Precipitation Characteristics using Ground-based Dual-polarimetric Radar Data

    NASA Astrophysics Data System (ADS)

    Wolff, D. B.; Petersen, W. A.; Marks, D. A.; Pippitt, J. L.; Tokay, A.; Gatlin, P. N.

    2017-12-01

    Characterization of the vertical structure and variability of precipitation and the resultant microphysics is critical to providing physical validation of space-based precipitation retrievals. In support of NASA's Global Precipitation Measurement (GPM) mission Ground Validation (GV) program, NASA has invested in a state-of-the-art dual-polarimetric radar known as NPOL. NPOL is routinely deployed on the Delmarva Peninsula in support of NASA's GPM Precipitation Research Facility (PRF). NPOL has also served as the backbone of several GPM field campaigns in Oklahoma, Iowa, South Carolina and, most recently, in the Olympic Mountains of Washington state. When precipitation is present, NPOL obtains very high-resolution vertical profiles of radar observations (e.g., reflectivity (ZH) and differential reflectivity (ZDR)), from which important particle size distribution parameters such as the mass-weighted mean diameter (Dm) and the intercept parameter (Nw) are retrieved. These data are then averaged horizontally to match the nadir resolution of the dual-frequency precipitation radar (DPR; 5 km) on board the GPM satellite. The GPM DPR, Combined, and radiometer algorithms (such as GPROF) rely on functional relationships built from assumed parametric relationships and/or retrieved parameter profiles and spatial distributions of particle size distribution (PSD), water content, and hydrometeor phase within a given sample volume. Thus, the NPOL-retrieved profiles provide an excellent tool for characterizing the vertical profile structure and variability during GPM overpasses. In this study, we use many such overpass comparisons to quantify an estimate of the true sub-IFOV variability as a function of hydrometeor and rain type (convective or stratiform). This presentation will discuss the development of a relational database to help provide a census of the vertical structure of precipitation via analysis and correlation of reflectivity, differential reflectivity, mass-weighted mean drop diameter and the normalized intercept parameter of the gamma drop size distribution.

  1. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM annotations, as well as getting the details of the associated SBO terms. These web services use established standards: communications rely on SOAP (Simple Object Access Protocol) messages, and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models into their own tools, combine queries in workflows and efficiently analyse models.
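
    The SOAP/WSDL pattern described can be exercised from Python with a generic SOAP client such as zeep. The sketch below is a hedged illustration: the WSDL URL and operation name are assumptions recalled from the historical service (which has since been superseded by a REST API) and may no longer resolve.

        from zeep import Client

        # WSDL location and operation name are assumptions, not guaranteed current.
        WSDL = ("https://www.ebi.ac.uk/biomodels-main/services/"
                "BioModelsWebServices?wsdl")

        client = Client(WSDL)  # parse the WSDL, generate call stubs
        # Retrieve a curated model in SBML by its BioModels identifier.
        sbml = client.service.getModelSBMLById("BIOMD0000000001")
        print(sbml[:200])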

  2. Vector Sky Glint Corrections for Above Surface Retrieval of the Subsurface Polarized Light Field

    NASA Astrophysics Data System (ADS)

    Gilerson, A.; Foster, R.; McGilloway, A.; Ibrahim, A.; El-habashi, A.; Carrizo, C.; Ahmed, S.

    2016-02-01

    Knowledge of the underwater light field is fundamental to determining the health of the world's oceans and coastal regions. For decades, traditional remote sensing retrieval methods that rely solely on the spectral intensity of the water-leaving light have provided indicators of marine ecosystem health. As the demand for retrieval accuracy rises, use of the polarized nature of light as an additional remote sensing tool is becoming necessary. In order to observe the underwater polarized light field from above the surface (for ship, shore, or satellite applications), a method of correcting the above-water signal for the effects of polarized surface-reflected skylight is needed. For three weeks in July-August 2014, the NASA Ship Aircraft Bio-Optical Research (SABOR) cruise continuously observed the polarized radiance of the ocean and the sky using a HyperSAS-POL system. The system autonomously tracks the Sun position and the heading of the research vessel in order to maintain a fixed relative solar azimuth angle (i.e. ±90°) and thereby avoid the specular reflection of sunlight. Additionally, in-situ inherent optical properties (IOPs) were continuously acquired using a set of instrument packages modified for underway measurement, hyperspectral radiometric measurements were taken manually at all stations, and an underwater polarimeter was deployed when conditions permitted. All measurements, above and below the sea surface, were combined and compared in an effort to develop a glint (sky + Sun) correction scheme for the upwelling polarized signal from a wind-driven ocean surface and to compare it with one that assumes a flat ocean surface. Accurate retrieval of the subsurface vector light field is demonstrated through comparisons with polarized radiative transfer codes and direct measurements made by the underwater polarimeter.
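
    The flat-surface limit of such a glint correction can be sketched compactly: reflect the sky Stokes vector off the interface using the Fresnel reflection Mueller matrix and subtract it from the above-water measurement. The refractive index, viewing angle and Stokes vectors below are illustrative assumptions, not SABOR values.

    ```python
    import numpy as np

    def fresnel_mueller(theta_i_deg, n_water=1.34):
        """Mueller matrix for specular reflection off a flat air-water interface."""
        ti = np.radians(theta_i_deg)
        tt = np.arcsin(np.sin(ti) / n_water)  # Snell's law
        rs = (np.cos(ti) - n_water * np.cos(tt)) / (np.cos(ti) + n_water * np.cos(tt))
        rp = (n_water * np.cos(ti) - np.cos(tt)) / (n_water * np.cos(ti) + np.cos(tt))
        Rs, Rp, C = rs**2, rp**2, rs * rp
        return 0.5 * np.array([[Rs + Rp, Rs - Rp, 0, 0],
                               [Rs - Rp, Rs + Rp, 0, 0],
                               [0, 0, 2 * C, 0],
                               [0, 0, 0, 2 * C]])

    # Toy Stokes vectors [I, Q, U, V] for sky and total above-water radiance (assumed).
    L_sky = np.array([10.0, 2.0, 0.5, 0.0])
    L_total = np.array([5.0, 0.8, 0.1, 0.0])

    L_water = L_total - fresnel_mueller(40.0) @ L_sky  # flat-surface glint removal
    print(L_water)
    ```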

  3. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  4. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  5. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  6. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  7. Variation in Relevance Judgments and the Measurement of Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Voorhees, Ellen M.

    2000-01-01

    Discusses the test collections developed in the TREC (Text REtrieval Conference) workshops for information retrieval research and describes a study by NIST (National Institute of Standards and Technology) that verified their reliability by investigating the effect changes in the relevance assessments have on the evaluation of retrieval results.…

  8. A community resource benchmarking predictions of peptide binding to MHC-I molecules.

    PubMed

    Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielsen, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro

    2006-06-09

    Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions, mainly due to its ability to generalize even from a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation, allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
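
    The evaluation idea is easy to sketch: dichotomize measured affinities at the conventional 500 nM IC50 threshold for binders and score a predictor by ROC AUC. The data below are synthetic stand-ins, not the benchmark set.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: measured IC50 (nM) and predictor scores (higher = stronger binder).
    measured_ic50 = rng.lognormal(mean=6.0, sigma=2.0, size=500)
    predicted_score = -np.log10(measured_ic50) + rng.normal(0, 0.8, size=500)  # noisy predictor

    is_binder = measured_ic50 < 500.0  # conventional binder threshold
    auc = roc_auc_score(is_binder, predicted_score)
    print(f"ROC AUC = {auc:.3f}")
    ```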

  9. The state of the art of medical imaging technology: from creation to archive and back.

    PubMed

    Gao, Xiaohong W; Qian, Yu; Hui, Rui

    2011-01-01

    Medical imaging has woven itself into modern medicine and revolutionized the medical industry in the last 30 years. Stemming from the discovery of the X-ray by Nobel laureate Wilhelm Roentgen, radiology was born, leading to the creation of large quantities of digital images as opposed to film-based media. While this rich supply of images provides immeasurable information that would otherwise not be possible to obtain, medical images pose great challenges: they must be archived safely against corruption, loss and misuse, remain retrievable from databases of huge size with varying forms of metadata, and stay reusable as new tools for data mining and new media for data storage become available. This paper provides a summative account of the creation of medical imaging tomography, the development of image archiving systems and the innovation from existing acquired image data pools. The focus of this paper is on content-based image retrieval (CBIR), in particular for 3D images, as exemplified by our online e-learning system, MIRAGE, home to a repository of medical images spanning a variety of domains and dimensions. In terms of novelties, facilities for CBIR of 3D images, coupled with fully automatic image annotation, have been developed and implemented in the system, pointing towards versatile, flexible and sustainable medical image databases that can reap new innovations.

  10. The State of the Art of Medical Imaging Technology: from Creation to Archive and Back

    PubMed Central

    Gao, Xiaohong W; Qian, Yu; Hui, Rui

    2011-01-01

    Medical imaging has woven itself into modern medicine and revolutionized the medical industry in the last 30 years. Stemming from the discovery of the X-ray by Nobel laureate Wilhelm Roentgen, radiology was born, leading to the creation of large quantities of digital images as opposed to film-based media. While this rich supply of images provides immeasurable information that would otherwise not be possible to obtain, medical images pose great challenges: they must be archived safely against corruption, loss and misuse, remain retrievable from databases of huge size with varying forms of metadata, and stay reusable as new tools for data mining and new media for data storage become available. This paper provides a summative account of the creation of medical imaging tomography, the development of image archiving systems and the innovation from existing acquired image data pools. The focus of this paper is on content-based image retrieval (CBIR), in particular for 3D images, as exemplified by our online e-learning system, MIRAGE, home to a repository of medical images spanning a variety of domains and dimensions. In terms of novelties, facilities for CBIR of 3D images, coupled with fully automatic image annotation, have been developed and implemented in the system, pointing towards versatile, flexible and sustainable medical image databases that can reap new innovations. PMID:21915232

  11. Deciphering the Hot Giant Atmospheres Orbiting Nearby Extrasolar Systems with JWST

    NASA Astrophysics Data System (ADS)

    Afrin Badhan, Mahmuda; Batalha, Natasha; Deming, Drake; Domagal-Goldman, Shawn; Hebrard, Eric; Kopparapu, Ravi Kumar; Irwin, Patrick Gerard Joseph

    2016-10-01

    Unique and exotic planets give us an opportunity to understand how planetary systems form and evolve over their lifetime, by placing our own planetary system in the context of the vastly different extrasolar systems that are being continually discovered by present space missions. With orbital separations that are less than one-tenth of the Mercury-Sun distance, these close-in planets provide us with valuable insights about the host stellar atmosphere and planetary atmospheres subjected to their enormous stellar insolation. Observed spectroscopic signatures reveal all spectrally active species in a planet, along with information about its thermal structure and dynamics, allowing us to characterize the planet's atmosphere. NASA's upcoming missions will give us the high-resolution spectra necessary to constrain the atmospheric properties with unprecedented accuracy. However, to interpret the observed signals from exoplanetary transit events with any certainty, we need reliable atmospheric retrieval tools that can model the expected observables adequately. In my work thus far, I have built a Markov Chain Monte Carlo (MCMC) convergence scheme, with an analytical radiative equilibrium formulation for the thermal structures, within the NEMESIS atmospheric modeling tool, to allow sufficient (and efficient) exploration of the parameter space. I also augmented the opacity tables to improve the speed and reliability of retrieval models. I then utilized this upgraded version to infer the pressure-temperature (P-T) structures and volume-mixing ratios (VMRs) of major gas species in hot Jupiter dayside atmospheres, from their emission spectra. I have employed a parameterized thermal structure to retrieve plausible P-T profiles, along with altitude-invariant VMRs. Here I show my retrieval results on published datasets of HD 189733b, and compare them with both medium and high spectral resolution JWST/NIRSpec simulations. In preparation for the upcoming JWST mission, my current work expands on these efforts by exploring the observable impacts of chemistry in the hot Jupiter models and retrievals.
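
    The retrieval machinery reduces, in miniature, to a Metropolis-Hastings random walk over the parameters of a forward model. The sketch below uses a toy forward model and a Gaussian likelihood; it is not NEMESIS code, and all values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def forward_model(params, x):
        """Toy 'spectrum': two parameters standing in for P-T/abundance quantities."""
        a, b = params
        return a * np.exp(-b * x)

    # Synthetic observation with known truth and noise.
    x = np.linspace(0, 5, 40)
    y_obs = forward_model((2.0, 0.7), x) + rng.normal(0, 0.05, x.size)
    sigma = 0.05

    def log_likelihood(params):
        resid = (y_obs - forward_model(params, x)) / sigma
        return -0.5 * np.sum(resid**2)

    # Metropolis-Hastings random walk over the two parameters.
    theta = np.array([1.0, 1.0])
    logL = log_likelihood(theta)
    chain = []
    for _ in range(20000):
        proposal = theta + rng.normal(0, 0.02, size=2)
        logL_new = log_likelihood(proposal)
        if np.log(rng.uniform()) < logL_new - logL:  # accept with prob min(1, ratio)
            theta, logL = proposal, logL_new
        chain.append(theta.copy())

    chain = np.array(chain)[5000:]  # discard burn-in
    print("posterior means:", chain.mean(axis=0))
    ```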

  12. How Complementary and Alternative Medicine Practitioners Use PubMed

    PubMed Central

    Quint-Rapoport, Mia

    2007-01-01

    Background PubMed is the largest bibliographic index in the life sciences. It is freely available online and is used by professionals and the public to learn more about medical research. While primarily intended to serve researchers, PubMed provides an array of tools and services that can help a wider readership in the location, comprehension, evaluation, and utilization of medical research. Objective This study sought to establish the potential contributions made by a range of PubMed tools and services to the use of the database by complementary and alternative medicine practitioners. Methods In this study, 10 chiropractors, 7 registered massage therapists, and a homeopath (N = 18), 11 with prior research training and 7 without, were taken through a 2-hour introductory session with PubMed. The 10 PubMed tools and services considered in this study can be divided into three functions: (1) information retrieval (Boolean Search, Limits, Related Articles, Author Links, MeSH), (2) information access (Publisher Link, LinkOut, Bookshelf), and (3) information management (History, Send To, Email Alert). Participants were introduced to between six and 10 of these tools and services. The participants were asked to provide feedback on the value of each tool or service in terms of their information needs, which was ranked as positive, positive with emphasis, negative, or indifferent. Results The participants in this study expressed an interest in the three types of PubMed tools and services (information retrieval, access, and management), with less well-regarded tools including MeSH Database and Bookshelf. In terms of their comprehension of the research, the tools and services led the participants to reflect on their understanding as well as their critical reading and use of the research. There was universal support among the participants for greater access to complete articles, beyond the approximately 15% that are currently open access. The abstracts provided by PubMed were felt to be necessary in selecting literature to read but entirely inadequate for both evaluating and learning from the research. Thus, the restrictions and fees the participants faced in accessing full-text articles were points of frustration. Conclusions The study found strong indications of PubMed's potential value in the professional development of these complementary and alternative medicine practitioners in terms of engaging with and understanding research. It provides support for the various initiatives intended to increase access, including a recommendation that the National Library of Medicine tap into the published research that is being archived by authors in institutional archives and through other websites. PMID:17613489

  13. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images in the database, highlighting the visual information they share with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
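
    A minimal sketch of the bag-of-visual-words pipeline follows (local features, codebook, histogram, nearest-neighbour ranking); it illustrates the idea rather than the authors' server implementation, and the image paths are hypothetical.

    ```python
    import numpy as np
    import cv2
    from sklearn.cluster import MiniBatchKMeans

    def orb_descriptors(path):
        """Extract ORB local feature descriptors from one image."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = cv2.ORB_create(nfeatures=500).detectAndCompute(img, None)
        return desc if desc is not None else np.empty((0, 32), np.uint8)

    db_paths = ["img0.jpg", "img1.jpg", "img2.jpg"]  # hypothetical database images
    all_desc = np.vstack([orb_descriptors(p) for p in db_paths]).astype(np.float32)

    k = 64  # visual-word codebook size
    codebook = MiniBatchKMeans(n_clusters=k, random_state=0).fit(all_desc)

    def bow_histogram(path):
        """Quantize descriptors to visual words and build an L2-normalized histogram."""
        words = codebook.predict(orb_descriptors(path).astype(np.float32))
        h = np.bincount(words, minlength=k).astype(np.float64)
        return h / (np.linalg.norm(h) + 1e-12)

    db_hists = np.array([bow_histogram(p) for p in db_paths])
    query = bow_histogram("query.jpg")            # hypothetical query image
    ranking = np.argsort(db_hists @ query)[::-1]  # cosine-similarity ranking
    print([db_paths[i] for i in ranking])
    ```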

  14. In-situ Microwave Brightness Temperature Variability from Ground-based Radiometer Measurements at Dome C in Antarctica Induced by Wind-formed Features

    NASA Technical Reports Server (NTRS)

    Royer, A.; Picard, G.; Arnaud, L.; Brucker, L.; Fily, M.

    2014-01-01

    Space-borne microwave radiometers are among the most useful tools for studying snow and collecting information on the Antarctic climate. They have several advantages over other remote sensing techniques: high sensitivity to the snow properties of interest (temperature, grain size, density), sub-daily coverage in the polar regions, and observations that are independent of cloud conditions and solar illumination. Thus, microwave radiometers are widely used to retrieve information over snow-covered regions. For the Antarctic Plateau, many studies presenting retrieval algorithms or numerical simulations have assumed, explicitly or not, that subpixel-scale heterogeneity is negligible and that the retrieved properties are representative of whole pixels. In this presentation, we investigate the spatial variations of brightness temperature over a range of a few kilometers in the Dome C area (Antarctic Plateau).

  15. Users guide for information retrieval using APL

    NASA Technical Reports Server (NTRS)

    Shapiro, A.

    1974-01-01

    A Programming Language (APL) is a precise, concise, and powerful computer programming language. Several features make APL useful to managers and other potential computer users. APL is interactive; the user can therefore communicate with his program or data base in near real-time. This, coupled with APL's excellent debugging features, reduces program checkout time to minutes or hours rather than days or months. Of particular importance is the fact that APL can be utilized as a management science tool using such techniques as operations research, statistical analysis, and forecasting. The gap between the scientist and the manager could be narrowed by showing how APL can be used to do what each needs to do: retrieve information. Sometimes the information must be retrieved rapidly, and APL is ideally suited to this challenge.

  16. System for pathology categorization and retrieval in chest radiographs

    NASA Astrophysics Data System (ADS)

    Avni, Uri; Greenspan, Hayit; Konen, Eli; Sharon, Michal; Goldberger, Jacob

    2011-03-01

    In this paper we present an overview of a system we have been developing for the past several years for efficient image categorization and retrieval in large radiograph archives. The methodology is based on local patch representation of the image content, using a bag-of-visual-words approach and similarity-based categorization with a kernel-based SVM classifier. We show an application to pathology-level categorization of chest x-ray data, the most popular examination in radiology. Our study deals with pathology detection and identification of individual pathologies, including right and left pleural effusion, enlarged heart, and cases of enlarged mediastinum. The input from a radiologist provided a global label for the entire image (healthy/pathology), and the categorization was conducted on the entire image, with no need for segmentation algorithms or any geometrical rules. An automatic diagnostic-level categorization, even on such an elementary level as healthy vs pathological, provides a useful tool for radiologists on this popular and important examination. This is a first step towards similarity-based categorization, which has major clinical implications for computer-assisted diagnostics.
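
    The categorization step can be sketched as a kernel SVM over visual-word histograms. A chi-square kernel is a common choice for histogram features, though the paper's exact kernel and features are not reproduced here; the histograms below are synthetic.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import chi2_kernel

    rng = np.random.default_rng(2)

    # Synthetic visual-word histograms for 'healthy' (0) vs 'pathology' (1) images.
    n, k = 200, 64
    X = rng.dirichlet(np.ones(k), size=n)
    y = (X[:, :8].sum(axis=1) > 0.14).astype(int)  # toy labeling rule

    X_train, X_test = X[:150], X[150:]
    y_train, y_test = y[:150], y[150:]

    # Precompute the chi-square kernel, well suited to histogram features.
    K_train = chi2_kernel(X_train, X_train, gamma=1.0)
    K_test = chi2_kernel(X_test, X_train, gamma=1.0)

    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    print("accuracy:", (clf.predict(K_test) == y_test).mean())
    ```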

  17. Retrieving acoustic energy densities and local pressure amplitudes in microfluidics by holographic time-lapse imaging.

    PubMed

    Cacace, Teresa; Bianco, Vittorio; Paturzo, Melania; Memmolo, Pasquale; Vassalli, Massimo; Fraldi, Massimiliano; Mensitieri, Giuseppe; Ferraro, Pietro

    2018-06-26

    The development of techniques able to characterize and map the pressure field is crucial for the widespread use of acoustofluidic devices in biotechnology and lab-on-a-chip platforms. Acoustofluidic devices are powerful tools for precise, non-contact manipulation of microparticles and cells in microfluidics. Here, we report a full and accurate characterization of the movement of particles subjected to acoustophoresis in a microfluidic environment by holographic imaging. The particle displacement along the direction of ultrasound wave propagation, which coincides with the optical axis, is observed and investigated. Two resonance frequencies are explored, varying the amplitude of the applied signal for each. The trajectories of individual tracers, obtained from holographic measurements, are fitted with the theoretical model, allowing retrieval of the acoustic energy densities and pressure amplitudes through full holographic analysis. The absence of prior calibration, independence from object shape, and the possibility of automatic analysis make holography very appealing for applications in devices for biotechnology.
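
    A sketch of the trajectory-fitting idea, assuming the standard one-dimensional model of acoustophoresis in a half-wave resonator (acoustic radiation force balanced by Stokes drag), which admits the closed-form path fitted below. All parameter values are illustrative assumptions, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Standard 1-D acoustophoresis model: tan(k*z(t)) = tan(k*z0) * exp(t/tau).
    f0 = 2e6                 # resonance frequency (Hz), assumed
    c = 1480.0               # speed of sound in water (m/s)
    k = 2 * np.pi * f0 / c   # acoustic wavenumber (1/m)
    a = 2.5e-6               # particle radius (m), assumed
    mu = 1e-3                # dynamic viscosity of water (Pa s)
    Phi = 0.17               # acoustic contrast factor, typical for polystyrene

    def trajectory(t, z0, tau):
        return np.arctan(np.tan(k * z0) * np.exp(t / tau)) / k

    # Synthetic tracked positions (would come from the holographic time-lapse).
    rng = np.random.default_rng(3)
    t = np.linspace(0, 0.5, 100)
    z_meas = trajectory(t, 40e-6, 0.12) + rng.normal(0, 0.5e-6, t.size)

    (z0_fit, tau_fit), _ = curve_fit(trajectory, t, z_meas, p0=(30e-6, 0.1))

    # Energy density from the fitted time constant: E_ac = 3*mu / (4*Phi*k^2*a^2*tau).
    E_ac = 3 * mu / (4 * Phi * k**2 * a**2 * tau_fit)
    p_amp = np.sqrt(4 * E_ac * 1000.0 * c**2)  # from E_ac = p^2 / (4*rho*c^2)
    print(f"E_ac = {E_ac:.1f} J/m^3, p = {p_amp / 1e3:.0f} kPa")
    ```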

  18. A structured vocabulary for indexing dietary supplements in databases in the United States

    PubMed Central

    Saldanha, Leila G; Dwyer, Johanna T; Holden, Joanne M; Ireland, Jayne D.; Andrews, Karen W; Bailey, Regan L; Gahche, Jaime J.; Hardy, Constance J; Møller, Anders; Pilch, Susan M.; Roseland, Janet M

    2011-01-01

    Food composition databases are critical to assess and plan dietary intakes. Dietary supplement databases are also needed because dietary supplements make significant contributions to total nutrient intakes. However, no uniform system exists for classifying dietary supplement products and indexing their ingredients in such databases. Differing approaches to classifying these products make it difficult to retrieve or link information effectively. A consistent approach to classifying information within food composition databases led to the development of LanguaL™, a structured vocabulary. LanguaL™ is being adapted as an interface tool for classifying and retrieving product information in dietary supplement databases. This paper outlines proposed changes to the LanguaL™ thesaurus for indexing dietary supplement products and ingredients in databases. The choice of 12 of the original 14 LanguaL™ facets pertinent to dietary supplements, modifications to their scopes, and applications are described. The 12 chosen facets are: Product Type; Source; Part of Source; Physical State, Shape or Form; Ingredients; Preservation Method, Packing Medium, Container or Wrapping; Contact Surface; Consumer Group/Dietary Use/Label Claim; Geographic Places and Regions; and Adjunct Characteristics of food. PMID:22611303
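
    The faceted approach can be illustrated with a toy index: each product carries facet-term assignments, and retrieval filters on them. The facet names follow the list above; the products and term values are invented for illustration.

    ```python
    # Toy faceted index in the spirit of LanguaL(TM); products and terms invented.
    products = [
        {"name": "Fish oil softgel",
         "facets": {"Product Type": "dietary supplement",
                    "Source": "fish",
                    "Physical State, Shape or Form": "softgel",
                    "Ingredients": {"EPA", "DHA"}}},
        {"name": "Vitamin D tablet",
         "facets": {"Product Type": "dietary supplement",
                    "Source": "synthetic",
                    "Physical State, Shape or Form": "tablet",
                    "Ingredients": {"cholecalciferol"}}},
    ]

    def search(facet, term):
        """Return names of products whose facet value equals or contains the term."""
        hits = []
        for p in products:
            value = p["facets"].get(facet)
            if value == term or (isinstance(value, set) and term in value):
                hits.append(p["name"])
        return hits

    print(search("Physical State, Shape or Form", "softgel"))  # ['Fish oil softgel']
    print(search("Ingredients", "cholecalciferol"))            # ['Vitamin D tablet']
    ```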

  19. An intelligent robot for helping astronauts

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Grimm, K. A.; Pendleton, T. W.

    1994-01-01

    This paper describes the development status of a prototype supervised intelligent robot for space applications, for purposes of (1) helping the crew of a spacecraft such as the Space Station with various tasks, such as holding objects and retrieving/replacing tools and other objects from/into storage, and (2) retrieving detached objects, such as equipment or crew members, that have become separated from their spacecraft. In addition to this set of tasks in the low-Earth-orbit spacecraft environment, it is argued that certain aspects of the technology are generic in approach, thereby offering insight into intelligent robots for other tasks and environments. Candidate software architectures, and the key technical issues that enable real work to be accomplished safely and robustly in real environments, are addressed. Results of computer simulations of grasping floating objects are presented. Also described are characterization results on the usable reduced-gravity environment in an aircraft flying parabolas (to simulate weightlessness), along with results on hardware performance there. These results show it is feasible to use that environment for evaluative testing of dexterous grasping based on real-time vision of freely rotating and translating objects.

  20. Comparison of ISS, NISS, and RTS score as predictor of mortality in pediatric fall.

    PubMed

    Soni, Kapil Dev; Mahindrakar, Santosh; Gupta, Amit; Kumar, Subodh; Sagar, Sushma; Jhakal, Ashish

    2017-01-01

    An ideal trauma scoring tool for predicting outcomes in pediatric fall patients remains elusive. Our study was undertaken to identify a better predictor of mortality in pediatric fall patients. Single-center data were retrieved from a prospectively maintained trauma registry at a level 1 trauma center in New Delhi, developed as part of the multicentric project Towards Improving Trauma Care Outcomes (TITCO) in India, covering the period from 1 October 2013 to 17 February 2015. The standard anatomic scores, the Injury Severity Score (ISS) and the New Injury Severity Score (NISS), were compared with the physiologic Revised Trauma Score (RTS) using receiver operating characteristic (ROC) curves. Heart rate and RTS differed significantly between survivors and nonsurvivors. ISS, NISS, and RTS had areas under the ROC curve of 50%, 50%, and 86%, respectively, with RTS the only statistically significant discriminator. Physiologically based trauma scores (RTS) are much better predictors of in-hospital mortality than anatomically based scoring systems (ISS and NISS) for unintentional pediatric falls.
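
    The comparison is straightforward to reproduce in outline: compute RTS from the standard coded component values (each 0-4) and weights, then compare areas under the ROC curve. The cohort below is synthetic, constructed so that physiology drives mortality, loosely mimicking the paper's finding.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Revised Trauma Score from the standard coded values (0-4 each) and weights.
    def rts(gcs_code, sbp_code, rr_code):
        return 0.9368 * gcs_code + 0.7326 * sbp_code + 0.2908 * rr_code

    rng = np.random.default_rng(4)
    n = 300

    # Synthetic cohort: coded physiology and ISS; all data invented for illustration.
    gcs, sbp, rr = (rng.integers(0, 5, n) for _ in range(3))
    iss = rng.integers(1, 51, n)
    score = rts(gcs, sbp, rr)
    died = rng.uniform(0, 1, n) < 1 / (1 + np.exp(score - 4.0))  # worse RTS, higher risk

    # Lower RTS predicts death, higher ISS predicts death: orient scores accordingly.
    print("RTS AUC:", roc_auc_score(died, -score))
    print("ISS AUC:", roc_auc_score(died, iss))
    ```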
