Sample records for information retrieval software

  1. The information retrieval system of "heat and mass transfer" software complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsitsin, A.G.

    A project is discussed which is aimed at creating the International Center for certification of software complexes (SC), intended for solving various heat and mass transfer problems. Information on the experience gained in the operation of an information retrieval SC system is presented.

  2. ASIST 2001. Information in a Networked World: Harnessing the Flow. Part III: Poster Presentations.

    ERIC Educational Resources Information Center

    Proceedings of the ASIST Annual Meeting, 2001

    2001-01-01

    Topics of Poster Presentations include: electronic preprints; intranets; poster session abstracts; metadata; information retrieval; watermark images; video games; distributed information retrieval; subject domain knowledge; data mining; information theory; course development; historians' use of pictorial images; information retrieval software;…

  3. International Inventory of Software Packages in the Information Field.

    ERIC Educational Resources Information Center

    Keren, Carl, Ed.; Sered, Irina, Ed.

    Designed to provide guidance in selecting appropriate software for library automation, information storage and retrieval, or management of bibliographic databases, this inventory describes 188 computer software packages. The information was obtained through a questionnaire survey of 600 software suppliers and developers who were asked to describe…

  4. Software for Information Storage and Retrieval Tested, Evaluated and Compared: Part VI--Various Additional Programs.

    ERIC Educational Resources Information Center

    Sieverts, Eric G.; And Others

    1993-01-01

    Reports on tests evaluating nine microcomputer software packages designed for information storage and retrieval: BRS-Search, dtSearch, InfoBank, Micro-OPC, Q&A, STN-PFS, Strix, TINman, and ZYindex. Tables and narrative evaluations detail results related to security, hardware, user features, search capability, indexing, input, maintenance of files,…

  5. 42 CFR 433.111 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...

  6. 42 CFR 433.111 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...

  7. 42 CFR 433.111 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...

  8. 42 CFR 433.111 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid... management information required by the Medicaid single State agency and Federal Government for program...

  9. 42 CFR 433.111 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... management information required by the Medicaid single State agency and Federal Government for program... ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval... information retrieval system” or “system” means the system of software and hardware used to process Medicaid...

  10. Applying Hypertext Structures to Software Documentation.

    ERIC Educational Resources Information Center

    French, James C.; And Others

    1997-01-01

    Describes a prototype system for software documentation management called SLEUTH (Software Literacy Enhancing Usefulness to Humans) being developed at the University of Virginia. Highlights include information retrieval techniques, hypertext links that are installed automatically, a WAIS (Wide Area Information Server) search engine, user…

  11. HUC--A User Designed System for All Recorded Knowledge and Information.

    ERIC Educational Resources Information Center

    Hilton, Howard J.

    This paper proposes a user designed system, HUC, intended to provide a single index and retrieval system covering all recorded knowledge and information capable of being retrieved from all modes of storage, from manual to the most sophisticated retrieval system. The concept integrates terminal hardware, software, and database structure to allow…

  12. Knowledge Retrieval Solutions.

    ERIC Educational Resources Information Center

    Khan, Kamran

    1998-01-01

    Excalibur RetrievalWare offers true knowledge retrieval solutions. Its fundamental technologies, Adaptive Pattern Recognition Processing and Semantic Networks, have capabilities for knowledge discovery and knowledge management of full-text, structured and visual information. The software delivers a combination of accuracy, extensibility,…

  13. Artificial Intelligence and Information Retrieval.

    ERIC Educational Resources Information Center

    Teodorescu, Ioana

    1987-01-01

    Compares artificial intelligence and information retrieval paradigms for natural language understanding, reviews progress to date, and outlines the applicability of artificial intelligence to question answering systems. A list of principal artificial intelligence software for database front end systems is appended. (CLB)

  14. Exploring Hardware-Based Primitives to Enhance Parallel Security Monitoring in a Novel Computing Architecture

    DTIC Science & Technology

    2007-03-01

    software level retrieve state information that can inherently contain more contextual information . As a result, such mechanisms can be applied in more...ease by which state information can be gathered for monitoring purposes. For example, we consider soft security to allow for easier state retrieval ...files are to be checked and what parameters are to be verified. The independent auditor periodically retrieves information pertaining to the files in

  15. Design implications for task-specific search utilities for retrieval and re-engineering of code

    NASA Astrophysics Data System (ADS)

    Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif

    2017-05-01

    The importance of information retrieval systems in modern society is unquestionable, and both individuals and enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context, such as development language, technology framework, goal of the project, project complexity and developer's domain expertise. They also impose an additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed in order to support their work-related activities. In order to investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation has achieved promising initial results on the precision and recall performance of the system.
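
    As a rough illustration of the implicit relevance feedback idea above, the sketch below re-ranks keyword-engine results using logged retention actions. It is a minimal Python sketch under assumed data; the names (`record_retention`, `rerank`, the `alpha` blend weight) are hypothetical and not taken from the paper or its prototype.

    ```python
    from collections import defaultdict

    # Hypothetical log of retention actions: how often developers kept
    # (copied/reused) each result for a given work-task label.
    retention_counts = defaultdict(int)

    def record_retention(task, result_id):
        """Log an implicit relevance signal for a (task, result) pair."""
        retention_counts[(task, result_id)] += 1

    def rerank(task, results, alpha=0.5):
        """Blend the engine's original score with implicit feedback.

        results: list of (result_id, engine_score) from a keyword engine.
        alpha:   weight given to accumulated retention evidence.
        """
        def score(item):
            result_id, engine_score = item
            return engine_score + alpha * retention_counts[(task, result_id)]
        return sorted(results, key=score, reverse=True)

    # Example: two retention actions promote snippet "b" for the same task.
    record_retention("parse-csv", "b")
    record_retention("parse-csv", "b")
    print(rerank("parse-csv", [("a", 1.2), ("b", 1.0)]))
    ```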

  16. Software Helps Retrieve Information Relevant to the User

    NASA Technical Reports Server (NTRS)

    Mathe, Natalie; Chen, James

    2003-01-01

    The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for a platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.
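
    ARNIE itself encodes relevance in a neural network configured by a genetic algorithm; the toy Python stand-in below only illustrates the adapt-and-generalize behaviour of a relevance model driven by per-profile feedback. All names and the update rule are illustrative assumptions, not the ARNIE API.

    ```python
    # Toy stand-in for a "relevance network": per-profile weights linking
    # query terms to stored items, adapted from explicit user feedback and
    # generalized to later, partially overlapping queries.
    weights = {}  # (profile, term, item) -> float

    def feedback(profile, query_terms, item, liked, lr=0.2):
        """Nudge term->item associations up or down for one user profile."""
        for term in query_terms:
            key = (profile, term, item)
            weights[key] = weights.get(key, 0.0) + (lr if liked else -lr)

    def retrieve(profile, query_terms, items, top=3):
        """Score items by summed term associations for this profile."""
        scored = [(sum(weights.get((profile, t, it), 0.0) for t in query_terms), it)
                  for it in items]
        scored.sort(key=lambda s: -s[0])
        return [it for _, it in scored[:top]]

    feedback("ada", {"mars", "dust"}, "doc7", liked=True)
    # A later query sharing the term "mars" still benefits from the feedback:
    print(retrieve("ada", {"mars", "storm"}, ["doc3", "doc7", "doc9"]))
    ```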

  17. Beyond Information Retrieval: Ways To Provide Content in Context.

    ERIC Educational Resources Information Center

    Wiley, Deborah Lynne

    1998-01-01

    Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…

  18. Mobile medical visual information retrieval.

    PubMed

    Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning

    2012-01-01

    In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.

  19. IPAT: a freely accessible software tool for analyzing multiple patent documents with inbuilt landscape visualizer.

    PubMed

    Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T

    2015-01-01

    Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool based on a text mining algorithm that extracts specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patent and simultaneously studies various technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization will act as an excellent technology assessment tool in competitive intelligence and due diligence for forecasting future R&D.
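
    The extraction step the abstract describes (predetermined patterns applied to patent text, results written to a spreadsheet-friendly file) can be sketched as follows. IPAT is C# code; this Python sketch, its field names, and its regular-expression patterns are purely illustrative assumptions.

    ```python
    import csv
    import re

    # Illustrative patent snippets; real pages would be fetched and cleaned first.
    records = [
        "Publication number US1234567B2 Assignee: Acme Corp Filed: 2012-03-01",
        "Publication number EP7654321A1 Assignee: Beta Ltd Filed: 2013-07-15",
    ]

    # Predetermined extraction patterns (hypothetical, for illustration only).
    fields = {
        "number": re.compile(r"Publication number (\S+)"),
        "assignee": re.compile(r"Assignee: (.+?) Filed:"),
        "filed": re.compile(r"Filed: (\d{4}-\d{2}-\d{2})"),
    }

    with open("patents.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(fields))
        writer.writeheader()
        for text in records:
            row = {}
            for name, pat in fields.items():
                m = pat.search(text)
                row[name] = m.group(1) if m else ""   # blank when not found
            writer.writerow(row)
    ```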

  20. Positive facial expressions during retrieval of self-defining memories.

    PubMed

    Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad

    2017-11-14

    In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories and autobiographical retrieval was analyzed with facial analysis software. This software (Facereader) synthesizes the facial expression information (i.e., cheek, lips, muscles, eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of Self-defining memories. These findings provide valuable physiological information about the emotional experience of the past.

  1. Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software

    PubMed Central

    Williams, Linda; Grayson, Diana; Gosbee, John

    2001-01-01

    Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.

  2. Patient Safety—Incorporating Drawing Software into Root Cause Analysis Software

    PubMed Central

    Williams, Linda; Grayson, Diana; Gosbee, John

    2002-01-01

    Drawing software from Lassalle Technologies (France) designed for Visual Basic is the tool we used to standardize the creation, storage, and retrieval of flow diagrams containing information about adverse events and close calls.

  3. Specifications for Thesaurus Software.

    ERIC Educational Resources Information Center

    Milstead, Jessica L.

    1991-01-01

    Presents specifications for software that is designed to support manual development and maintenance of information retrieval thesauri. Evaluation of existing software and design of custom software is discussed, requirements for integration with larger systems and for the user interface are described, and relationships among terms are discussed.…

  4. Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.

    ERIC Educational Resources Information Center

    Pieska, K. A. O.

    1986-01-01

    Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)

  5. Information Retrieval Using ADABAS-NATURAL (with Applications for Television and Radio).

    ERIC Educational Resources Information Center

    Silbergeld, I.; Kutok, P.

    1984-01-01

    Describes use of the software ADABAS (general purpose database management system) and NATURAL (interactive programing language) in development and implementation of an information retrieval system for the National Television and Radio Network of Israel. General design considerations, files contained in each archive, search strategies, and keywords…

  6. Incorporating a Human-Computer Interaction Course into Software Development Curriculums

    ERIC Educational Resources Information Center

    Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph

    2015-01-01

    Individuals have increasing options for retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. Also, the number of software applications has significantly increased the user's capability to access data. Software applications include the traditional web site, smart device…

  7. Software Documentation for the Bartlesville Public Schools: Part One. The Bartlesville System Total Guidance Information Support System.

    ERIC Educational Resources Information Center

    Roberts, Tommy L.; And Others

    The Total Guidance Information Support System (TGISS), is an information storage and retrieval system for counselors. The total TGISS, including hardware and software, extends the counselor's capabilities by providing ready access to student information under secure conditions. The hardware required includes: (1) IBM 360/50 central processing…

  8. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
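
    A compact sketch of the evolutionary loop described above: a genetic algorithm searches for model parameters minimizing a fitness function that measures dissimilarity between observed and synthetic spectra. The linear forward model, parameter ranges, and operator choices below are invented placeholders, not the actual retrieval software.

    ```python
    import random

    def synthetic_spectrum(params, wavelengths):
        """Placeholder forward model: parameters -> synthetic spectrum."""
        a, b = params
        return [a * w + b for w in wavelengths]

    def fitness(params, wavelengths, observed):
        """Sum-of-squares dissimilarity between observed and synthetic data."""
        synth = synthetic_spectrum(params, wavelengths)
        return sum((o - s) ** 2 for o, s in zip(observed, synth))

    def evolve(wavelengths, observed, pop_size=40, generations=100):
        pop = [(random.uniform(-2, 2), random.uniform(-2, 2))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda p: fitness(p, wavelengths, observed))
            survivors = pop[: pop_size // 2]          # selection (elitist)
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = random.sample(survivors, 2)  # crossover: averaging
                child = ((p1[0] + p2[0]) / 2 + random.gauss(0, 0.05),  # mutation
                         (p1[1] + p2[1]) / 2 + random.gauss(0, 0.05))
                children.append(child)
            pop = survivors + children
        return min(pop, key=lambda p: fitness(p, wavelengths, observed))

    wl = [0.4, 0.5, 0.6, 0.7]
    obs = [1.0 * w + 0.3 for w in wl]   # "observed" data from known parameters
    print(evolve(wl, obs))              # should approach (1.0, 0.3)
    ```

    Keeping the unmutated survivors alongside the children (elitism) guarantees the best fitness found so far never regresses between generations, which is one common way such loops reach a user-specified fitness threshold.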

  9. Protein Annotators' Assistant: A Novel Application of Information Retrieval Techniques.

    ERIC Educational Resources Information Center

    Wise, Michael J.

    2000-01-01

    Protein Annotators' Assistant (PAA) is a software system which assists protein annotators in assigning functions to newly sequenced proteins. PAA employs a number of information retrieval techniques in a novel setting and is thus related to text categorization, where multiple categories may be suggested, except that in this case none of the…

  10. Development of medical data information systems

    NASA Technical Reports Server (NTRS)

    Anderson, J.

    1971-01-01

    Computerized storage and retrieval of medical information is discussed. Tasks which were performed in support of the project are: (1) flight crew health stabilization computer system, (2) medical data input system, (3) graphic software development, (4) lunar receiving laboratory support, and (5) Statos V printer/plotter software development.

  11. The Bartlesville System; TGISS Software Documentation.

    ERIC Educational Resources Information Center

    Roberts, Tommy L.; And Others

    TGISS (Total Guidance Information Support System) is an information storage and retrieval system specifically designed to meet the needs and requirements of a counselor in the Bartlesville Public School environment. The system, which is a combination of man/machine capabilities, includes the hardware and software necessary to extend the…

  12. WWW Entrez: A Hypertext Retrieval Tool for Molecular Biology.

    ERIC Educational Resources Information Center

    Epstein, Jonathan A.; Kans, Jonathan A.; Schuler, Gregory D.

    This article describes the World Wide Web (WWW) Entrez server which is based upon the National Center for Biotechnology Information's (NCBI) Entrez retrieval database and software. Entrez is a molecular sequence retrieval system that contains an integrated view of portions of Medline and all publicly available nucleotide and protein databases,…

  13. Project W-211, initial tank retrieval systems, retrieval control system software configuration management plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    RIECK, C.A.

    1999-02-23

    This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.

  14. The Organization of a Computer Software Collection Using an Information Storage and Retrieval Software Package.

    ERIC Educational Resources Information Center

    Davies, Denise M.

    1985-01-01

    Discusses design, development, and use of a database to provide organization and access to a computer software collection at the University of Hawaii School of Library Studies. Field specifications, samples of report forms, and a description of the physical organization of the software collection are included. (MBR)

  15. Database Software Selection for the Egyptian National STI Network.

    ERIC Educational Resources Information Center

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  16. Natural language processing-based COTS software and related technologies survey.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stickland, Michael G.; Conrad, Gregory N.; Eaton, Shelley M.

    Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.

  17. A hybrid approach to software repository retrieval: Blending faceted classification and type signatures

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.

    1992-01-01

    We present a user interface for software reuse repository that relies both on the informal semantics of faceted classification and the formal semantics of type signatures for abstract data types. The result is an interface providing both structural and qualitative feedback to a software reuser.

  18. Methods and Software for Building Bibliographic Data Bases.

    ERIC Educational Resources Information Center

    Daehn, Ralph M.

    1985-01-01

    This in-depth look at database management systems (DBMS) for microcomputers covers data entry, information retrieval, security, DBMS software and design, and downloading of literature search results. The advantages of in-house systems versus online search vendors are discussed, and specifications of three software packages and 14 sources are…

  19. Assessing repository technology. Where do we go from here?

    NASA Technical Reports Server (NTRS)

    Eichmann, David

    1992-01-01

    Three sample information retrieval systems, archie, autoLib, and Wide Area Information Service (WAIS), are compared with regard to their expressiveness and usefulness, first in the general context of information retrieval, and then as prospective software reuse repositories. While the representational capabilities of these systems are limited, they provide a useful foundation for future repository efforts, particularly from the perspective of repository distribution and coherent user interface design.

  20. Assessing repository technology: Where do we go from here?

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.

    1992-01-01

    Three sample information retrieval systems, archie, autoLib, and Wide Area Information Service (WAIS), are compared with regard to their expressiveness and usefulness, first in the general context of information retrieval, and then as prospective software reuse repositories. While the representational capabilities of these systems are limited, they provide a useful foundation for future repository efforts, particularly from the perspective of repository distribution and coherent user interface design.

  1. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
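
    The service answers standard XPath queries over a unified namespace. The sketch below shows that query style using Python's xml.etree, which implements an XPath subset; the document layout is an invented example, not the IPG schema.

    ```python
    import xml.etree.ElementTree as ET

    # Invented example of a unified software-information document; the real
    # IPG schema is not reproduced here.
    doc = ET.fromstring("""
    <resources>
      <host name="node1">
        <package name="gcc" version="3.4"/>
        <package name="python" version="2.3"/>
      </host>
      <host name="node2">
        <package name="gcc" version="2.95"/>
      </host>
    </resources>
    """)

    # XPath-style query (ElementTree subset): hosts providing a gcc package,
    # found by selecting matching packages and stepping up to their parents.
    for host in doc.findall(".//package[@name='gcc']/.."):
        print(host.get("name"))   # -> node1, node2
    ```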

  2. 28 CFR 802.4 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Administration COURT SERVICES AND OFFENDER SUPERVISION AGENCY FOR THE DISTRICT OF COLUMBIA DISCLOSURE OF RECORDS... proprietary interest in the information. (e) Computer software means tools by which records are created, stored, and retrieved. Normally, computer software, including source code, object code, and listings of...

  3. The JPL Library information retrieval system

    NASA Technical Reports Server (NTRS)

    Walsh, J.

    1975-01-01

    The development, capabilities, and products of the computer-based retrieval system of the Jet Propulsion Laboratory Library are described. The system handles books and documents, produces a book catalog, and provides a machine search capability. Programs and documentation are available to the public through NASA's computer software dissemination program.

  4. The Oklahoma Geographic Information Retrieval System

    NASA Technical Reports Server (NTRS)

    Blanchard, W. A.

    1982-01-01

    The Oklahoma Geographic Information Retrieval System (OGIRS) is a highly interactive data entry, storage, manipulation, and display software system for use with geographically referenced data. Although originally developed for a project concerned with coal strip mine reclamation, OGIRS is capable of handling any geographically referenced data for a variety of natural resource management applications. A special effort has been made to integrate remotely sensed data into the information system. The timeliness and synoptic coverage of satellite data are particularly useful attributes for inclusion into the geographic information system.

  5. Digital Equipment Corporation's CDROM Software and Database Publications.

    ERIC Educational Resources Information Center

    Adams, Michael Q.

    1986-01-01

    Acquaints information professionals with Digital Equipment Corporation's compact optical disk read-only-memory (CDROM) search and retrieval software and growing library of CDROM database publications (COMPENDEX, Chemical Abstracts Services). Highlights include MicroBASIS, boolean operators, range operators, word and phrase searching, proximity…

  6. An overview of the National Space Science data Center Standard Information Retrieval System (SIRS)

    NASA Technical Reports Server (NTRS)

    Shapiro, A.; Blecher, S.; Verson, E. E.; King, M. L. (Editor)

    1974-01-01

    A general overview is given of the National Space Science Data Center (NSSDC) Standard Information Retrieval System. It describes, in general terms, the information system that contains the data files and the software system that processes and manipulates the files maintained at the Data Center. Emphasis is placed on providing users with an overview of the capabilities and uses of the NSSDC Standard Information Retrieval System (SIRS). Examples given are taken from the files at the Data Center. Detailed information about NSSDC data files is documented in a set of File Users Guides, with one user's guide prepared for each file processed by SIRS. Detailed information about SIRS is presented in the SIRS Users Guide.

  7. Natural Language Processing.

    ERIC Educational Resources Information Center

    Chowdhury, Gobinda G.

    2003-01-01

    Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…

  8. Peeling the Onion: Okapi System Architecture and Software Design Issues.

    ERIC Educational Resources Information Center

    Jones, S.; And Others

    1997-01-01

    Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)

  9. Information Retrieval Research and ESPRIT.

    ERIC Educational Resources Information Center

    Smeaton, Alan F.

    1987-01-01

    Describes the European Strategic Programme of Research and Development in Information Technology (ESPRIT), and its five programs: advanced microelectronics, software technology, advanced information processing, office systems, and computer integrated manufacturing. The emphasis on logic programming and ESPRIT as the European response to the…

  10. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user friendly and simple graphical user interface that can be run via the Android platform from any location where Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.

  11. NLM microcomputer-based tutorials (for microcomputers). Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, M.

    1990-04-01

    The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN--a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases... Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K RAM memory, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or visual graphics displays.

  12. The Apache OODT Project: An Introduction

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.

    2012-12-01

    Apache OODT is a science data system framework, born over the past decade, with 100s of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components: those that deal with information integration from existing science data repositories and archives, that themselves have already-in-use business processes and models for populating those archives. Information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access, and retrieval. The other suite of services and components within Apache OODT handle population and processing of those data repositories and archives. Workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration all are part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.

  13. A Management Information System for Bare Base Civil Engineering Commanders

    DTIC Science & Technology

    1988-09-01

    initial beddown stage. The purpose of this research was to determine the feasibility of developing a microcomputer based management information system (MIS)...the software best suited to synthesize four of the categories into a prototype field MIS. Keywords: Management information system, Bare bases, Civil engineering, Data bases, Information retrieval.

  14. Automation and hypermedia technology applications

    NASA Technical Reports Server (NTRS)

    Jupin, Joseph H.; Ng, Edward W.; James, Mark L.

    1993-01-01

    This paper represents a progress report on HyLite (Hypermedia Library technology): a research and development activity to produce a versatile system as part of NASA's technology thrusts in automation, information sciences, and communications. HyLite can be used as a system or tool to facilitate the creation and maintenance of large distributed electronic libraries. The contents of such a library may be software components, hardware parts or designs, scientific data sets or databases, configuration management information, etc. Proliferation of computer use has made the diversity and quantity of information too large for any single user to sort, process, and utilize effectively. In response to this information deluge, we have created HyLite to enable the user to process relevant information into a more efficient organization for presentation, retrieval, and readability. To accomplish this end, we have incorporated various AI techniques into the HyLite hypermedia engine to facilitate parameters and properties of the system. The proposed techniques include intelligent searching tools for the libraries, intelligent retrievals, and navigational assistance based on user histories. HyLite itself is based on an earlier project, the Encyclopedia of Software Components (ESC) which used hypermedia to facilitate and encourage software reuse.

  15. P7-S Combining Workflow-Based Project Organization with Protein-Dependant Data Retrieval for the Retrieval of Extensive Proteome Information

    PubMed Central

    Glandorf, J.; Thiele, H.; Macht, M.; Vorm, O.; Podtelejnikov, A.

    2007-01-01

    In the course of a full-scale proteomics experiment, the handling of the data as well as the retrieval of the relevant information from the results is a major challenge due to the massive amount of generated data (gel images, chromatograms, and spectra) as well as associated result information (sequences, literature, etc.). To obtain meaningful information from these data, one has to filter the results in an easy way. Possibilities to do so can be based on GO terms or structural features such as transmembrane domains, involvement in certain pathways, etc. In this presentation we will show how a combination of a software package with a workflow-based result organization (Bruker ProteinScape) and a protein-centered data-mining software (Proxeon ProteinCenter) can assist in the comparison of the results from large projects, such as comparison of cross-platform results from 2D PAGE/MS with shotgun LC-ESI-MS/MS. We will present differences between different technologies and show how these differences can be easily identified and how they allow us to draw conclusions on the involved technologies.

  16. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  17. User's operating procedures. Volume 1: Scout project information programs

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Harris, D. K.

    1985-01-01

    A review of the user's operating procedures for the Scout Project Automatic Data System, called SPADS, is given. SPADS is the result of the past seven years of software development on a Prime minicomputer located at the Scout Project Office. SPADS was developed as a single entry, multiple cross reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. The instructions to operate the Scout Project Information programs in data retrieval and file maintenance via the user-friendly menu drivers are presented.

  18. User's operating procedures. Volume 3: Projects directorate information programs

    NASA Technical Reports Server (NTRS)

    Haris, C. G.; Harris, D. K.

    1985-01-01

    A review of the user's operating procedures for the Scout Project Automatic Data System, called SPADS, is presented. SPADS is the result of the past seven years of software development on a Prime minicomputer. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, three of three, provides the instructions to operate the projects directorate information programs in data retrieval and file maintenance via the user-friendly menu drivers.

  19. Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)

    NASA Technical Reports Server (NTRS)

    Wherry, D. B.

    1981-01-01

    The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.

  20. A Community Data Model for Hydrologic Observations

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used and provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services hosted by the San Diego Supercomputer center make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Standard Object Access Protocol (SOAP) capability. This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.
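
    A minimal sketch of the premise above (a relational store at the single-observation level, so cross-dimension retrieval is a plain join), using SQLite with simplified stand-in table and column names rather than the actual ODM schema.

    ```python
    import sqlite3

    # Each row of values_ is one observation, keyed to a site and a variable,
    # so queries can cut across either dimension. Names are simplified
    # stand-ins, not the actual observations data model schema.
    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE sites (site_id INTEGER PRIMARY KEY, name TEXT, lat REAL, lon REAL);
    CREATE TABLE variables (var_id INTEGER PRIMARY KEY, name TEXT, units TEXT);
    CREATE TABLE values_ (val_id INTEGER PRIMARY KEY, site_id INTEGER,
                          var_id INTEGER, ts TEXT, value REAL,
                          FOREIGN KEY(site_id) REFERENCES sites(site_id),
                          FOREIGN KEY(var_id) REFERENCES variables(var_id));
    """)
    db.execute("INSERT INTO sites VALUES (1, 'Logan River', 41.74, -111.78)")
    db.execute("INSERT INTO variables VALUES (1, 'discharge', 'm^3/s')")
    db.execute("INSERT INTO values_ VALUES (NULL, 1, 1, '2006-10-01T00:00', 1.37)")

    # Observation-level storage makes cross-dimension retrieval a plain join:
    for row in db.execute("""SELECT s.name, v.name, val.ts, val.value
                             FROM values_ val
                             JOIN sites s ON s.site_id = val.site_id
                             JOIN variables v ON v.var_id = val.var_id
                             WHERE v.name = 'discharge'"""):
        print(row)
    ```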

  1. PASSIM--an open source software system for managing information in biomedical studies.

    PubMed

    Viksna, Juris; Celms, Edgars; Opmanis, Martins; Podnieks, Karlis; Rucevskis, Peteris; Zarins, Andris; Barrett, Amy; Neogi, Sudeshna Guha; Krestyaninova, Maria; McCarthy, Mark I; Brazma, Alvis; Sarkans, Ugis

    2007-02-09

    One of the crucial aspects of day-to-day laboratory information management is collection, storage and retrieval of information about research subjects and biomedical samples. An efficient link between sample data and experiment results is absolutely imperative for a successful outcome of a biomedical study. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such LIMS indeed can bring laboratory information management to a higher level, but often implies a substantial investment of time, effort and funds, which are not always available. There is a clear need for lightweight open source systems for patient and sample information management. We present a web-based tool for submission, management and retrieval of sample and research subject data. The system secures confidentiality by separating anonymized sample information from individuals' records. It is simple and generic, and can be customised for various biomedical studies. Information can be both entered and accessed using the same web interface. User groups and their privileges can be defined. The system is open-source and is supplied with an on-line tutorial and necessary documentation. It has proven to be successful in a large international collaborative project. The presented system closes the gap between the need and the availability of lightweight software solutions for managing information in biomedical studies involving human research subjects.

  2. SaDA: From Sampling to Data Analysis—An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data

    PubMed Central

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-01-01

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is essential for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such LIMS indeed can bring laboratory information management to a higher level, but most of the time this requires a substantial investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software package named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies. PMID:26047146

  3. SaDA: From Sampling to Data Analysis-An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data.

    PubMed

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-06-03

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is essential for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such LIMS indeed can bring laboratory information management to a higher level, but most of the time this requires a substantial investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software package named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies.

  4. Tautomerism in chemical information management systems

    NASA Astrophysics Data System (ADS)

    Warr, Wendy A.

    2010-06-01

    Tautomerism has an impact on many of the processes in chemical information management systems including novelty checking during registration into chemical structure databases; storage of structures; exact and substructure searching in chemical structure databases; and depiction of structures retrieved by a search. The approaches taken by 27 different software vendors and database producers are compared. It is hoped that this comparison will act as a discussion document that could ultimately improve databases and software for researchers in the future.

  5. Computer programs: Information retrieval and data analysis, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  6. A description of the index of active Florida water data collection stations and a user's guide for station or site information retrieval using computer program Findex H578

    USGS Publications Warehouse

    Merritt, M.L.

    1977-01-01

    A computerized index of water-data collection activities and retrieval software to generate publication lists of this information was developed for Florida. This system serves a vital need in the administration of the many and diverse water-data collection activities. Previously, needed data was very difficult to assemble for use in program planning or project implementation. Largely descriptive, the report tells how a file of computer card images has been established which contains entries for all sites in Florida at which there is currently a water-data-collection activity. Entries include information such as identification number, station name, location, type of site, county, information about data collection, funding, and other pertinent details. The computer program FINDEX selectively retrieves entries and lists them in a format suitable for publication. Updating the index is done routinely. (Woodard-USGS)

  7. An introduction to information retrieval: applications in genomics

    PubMed Central

    Nadkarni, P M

    2011-01-01

    Information retrieval (IR) is the field of computer science that deals with the processing of documents containing free text, so that they can be rapidly retrieved based on keywords specified in a user’s query. IR technology is the basis of Web-based search engines, and plays a vital role in biomedical research, because it is the foundation of software that supports literature search. Documents can be indexed by both the words they contain, as well as the concepts that can be matched to domain-specific thesauri; concept matching, however, poses several practical difficulties that make it unsuitable for use by itself. This article provides an introduction to IR and summarizes various applications of IR and related technologies to genomics. PMID:12049181
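
    A minimal sketch of the word-level indexing described above: an inverted index maps each term to the documents containing it, so a keyword query reduces to a set intersection. The documents and query are illustrative.

    ```python
    from collections import defaultdict

    docs = {
        1: "gene expression in yeast",
        2: "yeast genome sequence retrieval",
        3: "protein expression analysis",
    }

    # Build the inverted index: term -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def search(query):
        """AND-semantics keyword query: intersect the posting sets."""
        postings = [index[t] for t in query.lower().split()]
        return set.intersection(*postings) if postings else set()

    print(search("yeast expression"))   # -> {1}
    ```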

  8. Improved parallel data partitioning by nested dissection with applications to information retrieval.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar

    The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
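
    A toy illustration of the underlying cost model: in parallel sparse matrix-vector multiplication, each process needs the x entries referenced by its nonzeros, and entries owned by another process must be communicated. The sketch uses a plain 1D row partition for clarity, not the paper's 2D nested-dissection method; the data and ownership maps are invented.

    ```python
    # Sparse matrix-vector multiplication y = A x, with a communication-volume
    # count: each distinct x[j] that a process needs but does not own is one
    # unit of communication. Partitioners aim to minimize this total.
    A = {0: {0: 2.0, 3: 1.0},          # sparse rows: row index -> {col: value}
         1: {1: 4.0},
         2: {2: 1.0, 3: 5.0},
         3: {0: 3.0, 3: 2.0}}
    x = [1.0, 2.0, 3.0, 4.0]

    owner_of_row = {0: 0, 1: 0, 2: 1, 3: 1}   # process owning each row of A
    owner_of_x = {0: 0, 1: 0, 2: 1, 3: 1}     # process owning each entry of x

    y = [0.0] * len(x)
    needed = set()                             # distinct (process, j) transfers
    for i, row in A.items():
        p = owner_of_row[i]
        for j, a in row.items():
            if p != owner_of_x[j]:
                needed.add((p, j))             # x[j] must be shipped to process p
            y[i] += a * x[j]

    print("y =", y)
    print("communication volume =", len(needed))
    ```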

  9. Automated reuseable components system study results

    NASA Technical Reports Server (NTRS)

    Gilroy, Kathy

    1989-01-01

    The Automated Reusable Components System (ARCS) was developed under a Phase 1 Small Business Innovative Research (SBIR) contract for the U.S. Army CECOM. The objectives of the ARCS program were: (1) to investigate issues associated with automated reuse of software components, identify alternative approaches, and select promising technologies, and (2) to develop tools that support component classification and retrieval. The approach followed was to research emerging techniques and experimental applications associated with reusable software libraries, to investigate the more mature information retrieval technologies for applicability, and to investigate the applicability of specialized technologies to improve the effectiveness of a reusable component library. Various classification schemes and retrieval techniques were identified and evaluated for potential application in an automated library system for reusable components. Strategies for library organization and management, component submittal and storage, and component search and retrieval were developed. A prototype ARCS was built to demonstrate the feasibility of automating the reuse process. The prototype was created using a subset of the classification and retrieval techniques that were investigated. The demonstration system was exercised and evaluated using reusable Ada components selected from the public domain. A requirements specification for a production-quality ARCS was also developed.

  10. Retrieval of land cover information under thin fog in Landsat TM image

    NASA Astrophysics Data System (ADS)

    Wei, Yuchun

    2008-04-01

    Thin fog, which often appears in remote sensing images of subtropical climate regions, results in low image quality and poor image mapping. It is therefore necessary to develop image processing methods to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, in the subtropical climate zone of China, was used as an example, and a workflow and method for retrieving land cover information under thin fog were built based on ENVI software and a single TM image. The basic procedure covers three parts: 1) isolating the thin fog area in the image according to the spectral differences of different bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between near-infrared and visible bands of each land cover type in the fog-free area; 3) image post-processing. The results showed that the method is simple and suitable, and can be used to improve the quality of TM image mapping effectively.
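
    A hedged Python sketch of step 2 of this workflow: fit a linear relationship between a near-infrared band and a visible band over fog-free pixels, then predict the visible values under the fog. The band arrays and fog mask below are synthetic stand-ins for real TM data, not the ENVI procedure itself.

        import numpy as np

        rng = np.random.default_rng(0)
        nir = rng.uniform(0.2, 0.6, size=(100, 100))              # NIR band (assumed TM 4)
        vis = 0.5 * nir + 0.05 + rng.normal(0, 0.01, nir.shape)   # visible band (assumed TM 3)
        fog_mask = np.zeros(nir.shape, dtype=bool)
        fog_mask[40:60, 40:60] = True                             # pixels flagged as thin fog

        # Fit on clear pixels only (NIR is assumed to penetrate thin fog).
        slope, intercept = np.polyfit(nir[~fog_mask].ravel(), vis[~fog_mask].ravel(), 1)

        # Retrieve visible-band information under the fog.
        vis_retrieved = vis.copy()
        vis_retrieved[fog_mask] = slope * nir[fog_mask] + intercept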

  11. BIBLIO: A Computer System Designed to Support the Near-Library User Model of Information Retrieval.

    ERIC Educational Resources Information Center

    Belew, Richard K.; Holland, Maurita Peterson

    1988-01-01

    Description of the development of the Information Exchange Facility, a prototype microcomputer-based personal bibliographic facility, covers software selection, user selection, overview of the system, and evaluation. The plan for an integrated system, BIBLIO, and the future role of libraries are discussed. (eight references) (MES)

  12. Installing an Integrated Information System in a Centralized Network.

    ERIC Educational Resources Information Center

    Mendelson, Andrew D.

    1992-01-01

    Many schools are looking at ways to centralize the distribution and retrieval of video, voice, and data transmissions in an integrated information system (IIS). A centralized system offers greater control of hardware and software. Describes media network planning to retrofit an Illinois high school with a fiber-optic-based IIS. (MLF)

  13. Redundant array of independent disks: practical on-line archiving of nuclear medicine image data.

    PubMed

    Lear, J L; Pratt, J P; Trujillo, N

    1996-02-01

    While various methods for long-term archiving of nuclear medicine image data exist, none support rapid on-line search and retrieval of information. We assembled a 90-Gbyte redundant array of independent disks (RAID) system using ten 9-Gbyte disk drives. The system was connected to a personal computer, and software was used to partition the array into 4-Gbyte sections. All studies (50,000) acquired over a 7-year period were archived in the system. Based on patient name/number and study date, information could be located within 20 seconds and retrieved for display and analysis in less than 5 seconds. RAID offers a practical, redundant method for long-term archiving of nuclear medicine studies that supports rapid on-line retrieval.

  14. Using the Saccharomyces Genome Database (SGD) for analysis of genomic information

    PubMed Central

    Skrzypek, Marek S.; Hirschman, Jodi

    2011-01-01

    Analysis of genomic data requires access to software tools that place the sequence-derived information in the context of biology. The Saccharomyces Genome Database (SGD) integrates functional information about budding yeast genes and their products with a set of analysis tools that facilitate exploring their biological details. This unit describes how the various types of functional data available at SGD can be searched, retrieved, and analyzed. Starting with the guided tour of the SGD Home page and Locus Summary page, this unit highlights how to retrieve data using YeastMine, how to visualize genomic information with GBrowse, how to explore gene expression patterns with SPELL, and how to use Gene Ontology tools to characterize large-scale datasets. PMID:21901739

  15. An Automated Medical Information Management System (OpScan-MIMS) in a Clinical Setting

    PubMed Central

    Margolis, S.; Baker, T.G.; Ritchey, M.G.; Alterescu, S.; Friedman, C.

    1981-01-01

    This paper describes an automated medical information management system within a clinic setting. The system includes an optically scanned data entry system (OpScan), a generalized, interactive retrieval and storage software system (Medical Information Management System, MIMS), and the use of time-sharing. The system has the advantages of minimal hardware purchase and maintenance, rapid data entry and retrieval, user-created programs, and no need for user knowledge of computer language or technology, and it is cost effective. The OpScan-MIMS system has been operational for approximately 16 months in a sexually transmitted disease clinic. The system's applications to medical audit, quality assurance, clinic management, and clinical training are demonstrated.

  16. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process massive amounts of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g., telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable, and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over RAM (Random Access Memory)-based search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described, and other SOPC solutions and design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
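
    As a software analogy for the RAM-versus-CAM comparison above (not the FPGA design itself), the Python sketch below contrasts a linear RAM-style scan with a content-addressed hash lookup; the pattern strings are invented.

        import time

        patterns = [f"pattern-{i:06d}" for i in range(200_000)]

        # RAM-style search: linear scan over memory.
        t0 = time.perf_counter()
        hit_scan = next(i for i, p in enumerate(patterns) if p == "pattern-199999")
        t_scan = time.perf_counter() - t0

        # CAM-style search: the content itself keys the address.
        cam = {p: i for i, p in enumerate(patterns)}
        t0 = time.perf_counter()
        hit_cam = cam["pattern-199999"]
        t_cam = time.perf_counter() - t0

        print(f"scan {t_scan:.6f}s vs content-addressed {t_cam:.8f}s")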

  17. Experimental evaluation of ontology-based HIV/AIDS frequently asked question retrieval system.

    PubMed

    Ayalew, Yirsaw; Moeng, Barbara; Mosweunyane, Gontlafetse

    2018-05-01

    This study presents the results of experimental evaluations of an ontology-based frequently asked question retrieval system in the domain of HIV and AIDS. The main purpose of the system is to provide answers to questions on HIV/AIDS using ontology. To evaluate the effectiveness of the frequently asked question retrieval system, we conducted two experiments. The first experiment focused on the evaluation of the quality of the ontology we developed using the OQuaRE evaluation framework which is based on software quality metrics and metrics designed for ontology quality evaluation. The second experiment focused on evaluating the effectiveness of the ontology in retrieving relevant answers. For this we used an open-source information retrieval platform, Terrier, with retrieval models BM25 and PL2. For the measurement of performance, we used the measures mean average precision, mean reciprocal rank, and precision at 5. The results suggest that frequently asked question retrieval with ontology is more effective than frequently asked question retrieval without ontology in the domain of HIV/AIDS.
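
    A short Python sketch of the evaluation measures named above, mean reciprocal rank and precision at 5, computed from ranked result lists; the runs and relevance judgments are hypothetical.

        def mean_reciprocal_rank(runs, relevant):
            rr = []
            for qid, ranked in runs.items():
                rank = next((i + 1 for i, d in enumerate(ranked) if d in relevant[qid]), None)
                rr.append(1.0 / rank if rank else 0.0)
            return sum(rr) / len(rr)

        def precision_at_k(runs, relevant, k=5):
            ps = [len(set(ranked[:k]) & relevant[qid]) / k for qid, ranked in runs.items()]
            return sum(ps) / len(ps)

        runs = {"q1": ["d3", "d1", "d7"], "q2": ["d2", "d9", "d4"]}
        relevant = {"q1": {"d1"}, "q2": {"d9", "d4"}}
        print(mean_reciprocal_rank(runs, relevant), precision_at_k(runs, relevant, k=5))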

  18. Video-Based Systems Research, Analysis, and Applications Opportunities

    DTIC Science & Technology

    1981-07-30

    as a COM software consultant, marketing its own COMTREVE software; * DatagraphiX Inc., San Diego, offers several versions of its COM recorders. AutoCOM...Metropolitan Microforms Ltd. in New York markets its MCAR system, which satisfies the need for a one- or multiple-user information retrieval and input...targeted to the market for high-speed data communications within a single facility, such as a university campus. The first commercial installations were set

  19. Agile Software Development in Defense Acquisition: A Mission Assurance Perspective

    DTIC Science & Technology

    2012-03-23

    based information retrieval system, we might say that this program works like a hive of bees, going out for pollen and bringing it back to the hive...developers. Six Sigma® is registered in the U.S. Patent and Trademark Office by Motorola. Major Areas in a Typical Software...requirements - Capturing and evaluating quality metrics, identifying common problem areas. Despite its positive impact on quality, pair programming

  20. Spatial Data Management System (SDMS)

    NASA Technical Reports Server (NTRS)

    Hutchison, Mark W.

    1994-01-01

    The Spatial Data Management System (SDMS) is a testbed for retrieval and display of spatially related material. SDMS permits the linkage of large graphical display objects with detail displays and explanations of their smaller components. SDMS combines UNIX workstations, MIT's X Window System, TCP/IP, and WAIS information retrieval technology to prototype a means of associating aggregate data linked via spatial orientation. SDMS capitalizes upon and extends previous accomplishments of the Software Technology Branch in the areas of Virtual Reality and Automated Library Systems.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  2. HOPE information system review

    NASA Astrophysics Data System (ADS)

    Suzuki, Yoshiaki; Nishiyama, Kenji; Ono, Shuuji; Fukuda, Kouin

    1992-08-01

    An overview of the review conducted on the H-2 Orbiting Plane (HOPE) is presented. A prototype model was constructed by inputting various technical information proposed by related laboratories. In particular, an operation flow was constructed that clarifies the correlations among the various analysis items, judgment criteria, technical data, and interfaces with other systems. Technical information database and retrieval systems were studied. A Macintosh personal computer was selected for information shaping because of its excellent functionality, performance, operability, and software completeness.

  3. Wireless data collection retrievals of bridge inspection/management information.

    DOT National Transportation Integrated Search

    2017-02-28

    To increase the efficiency and reliability of bridge inspections, MDOT contracted to have a 3D-model-based data entry application for mobile tablets developed to aid inspectors in the field. The 3D Bridge App is a mobile software tool designed to fac...

  4. Mobile Agents Applications.

    ERIC Educational Resources Information Center

    Martins, Rosane Maria; Chaves, Magali Ribeiro; Pirmez, Luci; Rust da Costa Carmo, Luiz Fernando

    2001-01-01

    Discussion of the need to filter and retrieve relevant information from the Internet focuses on the use of mobile agents, specific software components based on distributed artificial intelligence and integrated systems. Surveys agent technology and discusses the agent building package used to develop two applications using IBM's Aglet…

  5. Database in Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  6. Free software to analyse the clinical relevance of drug interactions with antiretroviral agents (SIMARV®) in patients with HIV/AIDS.

    PubMed

    Giraldo, N A; Amariles, P; Monsalve, M; Faus, M J

    Highly active antiretroviral therapy has extended the expected lifespan of patients with HIV/AIDS. However, the therapeutic benefits of some drugs used simultaneously with highly active antiretroviral therapy may be adversely affected by drug interactions. The goal was to design and develop free software to facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS. A comprehensive Medline/PubMed database search of drug interactions was performed. Articles that recognized any drug interactions in HIV disease were selected. The publications accessed were limited to human studies in English or Spanish, with full texts retrieved. Drug interactions were analyzed, assessed, and grouped into four levels of clinical relevance according to gravity and probability. Software to systematize the information regarding drug interactions and their clinical relevance was designed and developed. Overall, 952 different references were retrieved and 446 selected; in addition, 67 articles were selected from the citation lists of identified articles. A total of 2119 pairs of drug interactions were identified; of this group, 2006 (94.7%) were drug-drug interactions, 1982 (93.5%) had an identified pharmacokinetic mechanism, and 1409 (66.5%) were mediated by enzyme inhibition. In terms of clinical relevance, 1285 (60.6%) drug interactions were clinically significant in patients with HIV (levels 1 and 2). With this information, a software program that facilitates identification and assessment of the clinical relevance of antiretroviral drug interactions (SIMARV®) was developed. A free software package with information on 2119 pairs of antiretroviral drug interactions was designed and developed that could facilitate analysis, assessment, and clinical decision making according to the clinical relevance of drug interactions in patients with HIV/AIDS.

  7. Techniques for Soundscape Retrieval and Synthesis

    NASA Astrophysics Data System (ADS)

    Mechtley, Brandon Michael

    The study of acoustic ecology is concerned with the manner in which life interacts with its environment as mediated through sound. As such, a central focus is that of the soundscape: the acoustic environment as perceived by a listener. This dissertation examines the application of several computational tools in the realms of digital signal processing, multimedia information retrieval, and computer music synthesis to the analysis of the soundscape. Namely, these tools include a) an open source software library, Sirens, which can be used for the segmentation of long environmental field recordings into individual sonic events and compare these events in terms of acoustic content, b) a graph-based retrieval system that can use these measures of acoustic similarity and measures of semantic similarity using the lexical database WordNet to perform both text-based retrieval and automatic annotation of environmental sounds, and c) new techniques for the dynamic, realtime parametric morphing of multiple field recordings, informed by the geographic paths along which they were recorded.

  8. Implementation of a Wavefront-Sensing Algorithm

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce; Aronstein, David

    2013-01-01

    A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
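
    For illustration, the Python sketch below implements a basic Gerchberg-Saxton-style iterative-transform loop, the family of algorithms this software belongs to; it is not the GSC-14879-1 adaptive-diversity implementation, and the pupil mask and phase screen are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 64
        pupil_amp = np.zeros((n, n))
        pupil_amp[16:48, 16:48] = 1.0                 # square pupil mask (assumed)
        true_phase = rng.normal(0, 0.3, (n, n)) * pupil_amp
        focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))

        field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (n, n)))
        for _ in range(200):
            F = np.fft.fft2(field)
            F = focal_amp * np.exp(1j * np.angle(F))          # impose measured irradiance
            field = np.fft.ifft2(F)
            field = pupil_amp * np.exp(1j * np.angle(field))  # impose pupil support

        phase_estimate = np.angle(field) * pupil_amp          # recovered wavefront phase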

  9. Methods, Software and Tools for Three Numerical Applications. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E. R. Jessup

    2000-03-01

    This is a report of the results of the authors' work supported by DOE contract DE-FG03-97ER25325. They proposed to study three numerical problems: (1) the extension of the PMESC parallel programming library; (2) the development of algorithms and software for certain generalized eigenvalue and singular value decomposition (SVD) problems; and (3) the application of techniques of linear algebra to an information retrieval technique known as latent semantic indexing (LSI).
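
    For item (3), a minimal Python sketch of latent semantic indexing: a truncated SVD of a small term-document matrix with query fold-in; the counts are invented.

        import numpy as np

        # rows = terms, columns = documents (toy counts)
        A = np.array([[2, 0, 1, 0],
                      [1, 1, 0, 0],
                      [0, 2, 0, 1],
                      [0, 0, 1, 2]], dtype=float)

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 2
        doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the latent space

        # Fold a query (terms 0 and 1) into the same space and rank documents.
        q = np.array([1, 1, 0, 0], dtype=float)
        q_latent = q @ U[:, :k]
        scores = doc_vectors @ q_latent / (
            np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_latent))
        print(np.argsort(-scores))   # documents ranked by cosine similarity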

  10. IRIT at TREC 2012 Contextual Suggestion Track

    DTIC Science & Technology

    2012-11-01

    the 2nd International Conference on Software and Data Technologies, volume 3, pages 166–171. INSTICC Press, 2007. [HM09] Gilles Hubert and Josiane...geographic information retrieval systems: Evaluation framework and case study. Int. J. Digit. Libr ., pages 91–109, 2010. [PSC+12] Damien Palacio, Christian

  11. Imaging Technology in Libraries: Photo CD Offers New Possibilities.

    ERIC Educational Resources Information Center

    Beiser, Karl

    1993-01-01

    Describes Kodak's Photo CD technology, a format for the storage and retrieval of photographic images in electronic form. Highlights include current and future Photo CD formats; computer imaging technology; ownership issues; hardware for using Photo CD; software; library and information center applications, including image collections and…

  12. Teaching Information Retrieval: Lessons from Cornell.

    ERIC Educational Resources Information Center

    Stewart, Linda Guyotte; Markiewicz, James

    1986-01-01

    This article describes two separate workshops offered to college faculty during the spring 1984 semester: one in online bibliographic searching, one in using computers to manage personal files of bibliographic references (examination of existing systems, types of software available). A telephone survey evaluation conducted 3 months after sessions…

  13. The Use of Microcomputers for Information Retrieval.

    ERIC Educational Resources Information Center

    Scharff, L.; And Others

    1983-01-01

    Provides a detailed description of the microcomputer hardware and software used by the Euronet-Diane Launch Team for automating a number of functions associated with online searching. Background, login procedures, program architecture, maintenance problems, comments on lessons learned, and suggestions for avoiding difficulties with this type of…

  14. The Role of Metadata Standards in EOSDIS Search and Retrieval Applications

    NASA Technical Reports Server (NTRS)

    Pfister, Robin

    1999-01-01

    Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved, and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.

  15. A data-management system for detailed areal interpretive data

    USGS Publications Warehouse

    Ferrigno, C.F.

    1986-01-01

    A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
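
    A hedged Python sketch of the hierarchical latitude-longitude blocking idea described above: a point is addressed by the 6-second block containing it. The key layout is illustrative, not the exact USGS implementation.

        def block_key(lat_deg, lon_deg, seconds_per_block=6):
            """Return the (row, col) index of the finest-level block for a point."""
            lat_s = int(round(lat_deg * 3600))
            lon_s = int(round(abs(lon_deg) * 3600))   # assumes western longitudes
            return (lat_s // seconds_per_block, lon_s // seconds_per_block)

        # Two points less than 6 seconds apart map to the same finest-level block.
        print(block_key(38.9000, -77.0000))
        print(block_key(38.9001, -77.0001))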

  16. NSWC-NADC interactive communication links for AN/UYS-1 loadtape creation and retrieval

    NASA Astrophysics Data System (ADS)

    Greathouse, D. M.

    1984-09-01

    This report contains an alternative method of communication (interactive vs. remote batch) with the Naval Air Development Center for the creation and retrieval of AN/UYS-1 Advanced Signal Processor (ASP) operational software loadtapes. Operational software for the Digital Acoustic Sensor Simulator (DASS) program is developed and maintained at the Naval Air Development Center (NADC). The Facility for Automated Software Production (FASP), an NADC-resident software generation facility, provides the support tools necessary for data base creation, software development and maintenance, and loadtape generation. Once a loadtape file is generated at NADC, it must be retrieved via telephone transmission and placed in a format suitable for loading into the AN/UYS-1 Advanced Signal Processor (ASP).

  17. Spacecraft software training needs assessment research

    NASA Technical Reports Server (NTRS)

    Ratcliff, Shirley; Golas, Katharine

    1990-01-01

    The problems that management analysts were encountering in performing their jobs were identified, along with their causes and potential solutions. It was concluded that sophisticated training applications would provide the most effective solution to a substantial portion of the analysts' problems. The remainder could be alleviated through the introduction of tools that could help make retrieval of the needed information from the vast and complex information resources feasible.

  18. A Description of the Text Data Base System TDBS. Stockholm Papers in Library and Information Science.

    ERIC Educational Resources Information Center

    Lofstrom, Mats

    Because experience with large information retrieval (IR) and database management (DBM) systems has shown that they are not adequate for handling textual material, two Swedish companies--Paralog and AU-System Network--have joined in a venture to develop a software package which combines features from IR and DBM systems to form a Text Data…

  19. Teaching Three-Dimensional Structural Chemistry Using Crystal Structure Databases. 3. The Cambridge Structural Database System: Information Content and Access Software in Educational Applications

    ERIC Educational Resources Information Center

    Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.

    2011-01-01

    Parts 1 and 2 of this series described the educational value of experimental three-dimensional (3D) chemical structures determined by X-ray crystallography and retrieved from the crystallographic databases. In part 1, we described the information content of the Cambridge Structural Database (CSD) and discussed a representative teaching subset of…

  20. Computer Courseware Evaluations. January 1988 to December 1988. Volume VIII.

    ERIC Educational Resources Information Center

    Riome, Carol-Anne, Comp.

    The eighth in a series, this report reviews microcomputer software authorized by the Alberta (Canada) Department of Education from January 1988 through December 1988. This edition provides detailed evaluations of 40 authorized programs for teaching business education, computer literacy, databases, file management, French, information retrieval,…

  1. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  2. Catalog of Resources for Education in Ada (Trade Name) and Software Engineering (CREASE). Version 4.0.

    DTIC Science & Technology

    1986-05-01

    offering the course is a company. Name and Address of offeror: Tachyon Corporation 2725 Congress Street Suite 2H San Diego, CA 92110 Offeror’s...Background: Tachyon Corporation specializes in Ada software quality assurance, computer hosted instruction and information retrieval systems, authoring tools...easy to use (on-line help) and can look up or search for terms. Tachyon Corporation 20 COURSE OFFERINGS 2.2. Lecture/Seminar Courses 2.2.1. Company

  3. Public Access to Digital Material; A Call to Researchers: Digital Libraries Need Collaboration across Disciplines; Greenstone: Open-Source Digital Library Software; Retrieval Issues for the Colorado Digitization Project's Heritage Database; Report on the 5th European Conference on Digital Libraries, ECDL 2001; Report on the First Joint Conference on Digital Libraries.

    ERIC Educational Resources Information Center

    Kahle, Brewster; Prelinger, Rick; Jackson, Mary E.; Boyack, Kevin W.; Wylie, Brian N.; Davidson, George S.; Witten, Ian H.; Bainbridge, David; Boddie, Stefan J.; Garrison, William A.; Cunningham, Sally Jo; Borgman, Christine L.; Hessel, Heather

    2001-01-01

    These six articles discuss various issues relating to digital libraries. Highlights include public access to digital materials; intellectual property concerns; the need for collaboration across disciplines; Greenstone software for construction and presentation of digital information collections; the Colorado Digitization Project; and conferences…

  4. Vegetation Phenology Metrics Derived from Temporally Smoothed and Gap-filled MODIS Data

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Morisette, Jeff; Wolfe, Robert; Esaias, Wayne; Gao, Feng; Ederer, Greg; Nightingale, Joanne; Nickeson, Jamie E.; Ma, Pete; Pedely, Jeff

    2012-01-01

    Smoothed and gap-filled vegetation index (VI) data provide a good basis for estimating vegetation phenology metrics. The TIMESAT software was improved by incorporating ancillary information from MODIS products. A simple assessment of the association between retrieved greenup dates and ground observations indicates satisfactory results from the improved TIMESAT software. One application example shows that mapping nectar flow phenology is tractable on a continental scale using hive weight and satellite vegetation data. The phenology data product supports further research in ecology and climate change.
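
    A hedged Python sketch of TIMESAT-style processing as described above: gap-fill a vegetation index series, smooth it, and estimate a greenup date as the first half-amplitude crossing. The NDVI series and cloud gaps are synthetic, and the real TIMESAT algorithms differ.

        import numpy as np
        from scipy.signal import savgol_filter

        t = np.arange(0, 365, 8)                     # 8-day composites
        ndvi = 0.3 + 0.25 * np.sin((t - 100) / 365 * 2 * np.pi)
        ndvi[[5, 6, 20]] = np.nan                    # cloud-contaminated dates

        # Gap-fill by linear interpolation, then smooth.
        good = ~np.isnan(ndvi)
        filled = np.interp(t, t[good], ndvi[good])
        smooth = savgol_filter(filled, window_length=9, polyorder=2)

        # A simple greenup estimate: first date crossing half the seasonal amplitude.
        half = smooth.min() + 0.5 * (smooth.max() - smooth.min())
        greenup_day = t[np.argmax(smooth >= half)]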

  5. Open-source Software for Exoplanet Atmospheric Modeling

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.

  6. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract exposure-relevant information from the structured elements in the DICOM metadata of these files. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
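
    The study used a Matlab-based tool; as an illustrative alternative, the Python sketch below reads exposure-related fields from DICOM metadata with the pydicom library. The file name and the presence of the listed tags are assumptions about the input files.

        import pydicom

        ds = pydicom.dcmread("dose_report.dcm")   # hypothetical CT dose report file

        # Plain data elements can be read by keyword when present.
        for keyword in ("StudyDate", "Manufacturer", "KVP", "Exposure"):
            if keyword in ds:
                print(keyword, "=", ds.data_element(keyword).value)

        # Structured-report content lives in a nested ContentSequence tree.
        def walk(items, depth=0):
            for item in items:
                name = getattr(item, "ConceptNameCodeSequence", None)
                if name:
                    print("  " * depth + name[0].CodeMeaning)
                walk(getattr(item, "ContentSequence", []), depth + 1)

        if "ContentSequence" in ds:
            walk(ds.ContentSequence)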

  7. Medical-Information-Management System

    NASA Technical Reports Server (NTRS)

    Alterescu, Sidney; Friedman, Carl A.; Frankowski, James W.

    1989-01-01

    Medical Information Management System (MIMS) computer program interactive, general-purpose software system for storage and retrieval of information. Offers immediate assistance where manipulation of large data bases required. User quickly and efficiently extracts, displays, and analyzes data. Used in management of medical data and handling all aspects of data related to care of patients. Other applications include management of data on occupational safety in public and private sectors, handling judicial information, systemizing purchasing and procurement systems, and analyses of cost structures of organizations. Written in Microsoft FORTRAN 77.

  8. Patent information retrieval: approaching a method and analysing nanotechnology patent collaborations.

    PubMed

    Ozcan, Sercan; Islam, Nazrul

    2017-01-01

    Many challenges still remain in the processing of explicit technological knowledge documents such as patents. Given the limitations and drawbacks of the existing approaches, this research sets out to develop an improved method for searching patent databases and extracting patent information to increase the efficiency and reliability of nanotechnology patent information retrieval process and to empirically analyse patent collaboration. A tech-mining method was applied and the subsequent analysis was performed using Thomson data analyser software. The findings show that nations such as Korea and Japan are highly collaborative in sharing technological knowledge across academic and corporate organisations within their national boundaries, and China presents, in some cases, a great illustration of effective patent collaboration and co-inventorship. This study also analyses key patent strengths by country, organisation and technology.

  9. LD2SNPing: linkage disequilibrium plotter and RFLP enzyme mining for tag SNPs

    PubMed Central

    Chang, Hsueh-Wei; Chuang, Li-Yeh; Chang, Yan-Jhu; Cheng, Yu-Huei; Hung, Yu-Chen; Chen, Hsiang-Chi; Yang, Cheng-Hong

    2009-01-01

    Background Linkage disequilibrium (LD) mapping is commonly used to evaluate markers for genome-wide association studies. Most types of LD software focus strictly on LD analysis and visualization, but lack supporting services for genotyping. Results We developed a freeware called LD2SNPing, which provides a complete package of mining tools for genotyping and LD analysis environments. The software provides SNP ID- and gene-centric online retrievals for SNP information and tag SNP selection from dbSNP/NCBI and HapMap, respectively. Restriction fragment length polymorphism (RFLP) enzyme information for SNP genotype is available to all SNP IDs and tag SNPs. Single and multiple SNP inputs are possible in order to perform LD analysis by online retrieval from HapMap and NCBI. An LD statistics section provides D, D', r2, δQ, ρ, and the P values of the Hardy-Weinberg Equilibrium for each SNP marker, and Chi-square and likelihood-ratio tests for the pair-wise association of two SNPs in LD calculation. Finally, 2D and 3D plots, as well as plain-text output of the results, can be selected. Conclusion LD2SNPing thus provides a novel visualization environment for multiple SNP input, which facilitates SNP association studies. The software, user manual, and tutorial are freely available at . PMID:19500380
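
    For reference, a small Python sketch computing the pairwise LD statistics listed above (D, D', and r2) from haplotype and allele frequencies; the input frequencies are invented.

        def ld_stats(p_ab, p_a, p_b):
            """p_ab: freq of haplotype AB; p_a, p_b: allele frequencies of A and B."""
            d = p_ab - p_a * p_b
            if d >= 0:
                d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
            else:
                d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
            d_prime = d / d_max if d_max else 0.0
            r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
            return d, d_prime, r2

        print(ld_stats(p_ab=0.40, p_a=0.50, p_b=0.60))   # D=0.10, D'=0.50, r2=0.167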

  10. Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2012-01-01

    This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering-dominated atmosphere model to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales: the surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to surface properties. The software also takes advantage of an intelligent initial condition given by the solution of the neighboring pixels.
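
    A hedged Python sketch of a Levenberg-Marquardt retrieval in the same spirit, using scipy.optimize.least_squares to fit a two-parameter toy angular-reflectance model to noisy multi-angle observations; it is not the MISR modified-RPV model.

        import numpy as np
        from scipy.optimize import least_squares

        angles = np.radians([0, 15, 30, 45, 60])     # view zenith angles

        def model(params, mu):
            rho0, k = params
            return rho0 * np.cos(mu) ** k            # toy angular reflectance model

        true = (0.3, 0.8)
        obs = model(true, angles) + np.random.default_rng(2).normal(0, 0.005, angles.size)

        def residuals(params):
            return model(params, angles) - obs

        fit = least_squares(residuals, x0=[0.1, 1.0], method="lm")
        print(fit.x)   # should recover roughly (0.3, 0.8)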

  11. Web-Based Environment for Maintaining Legacy Software

    NASA Technical Reports Server (NTRS)

    Tigges, Michael; Thompson, Nelson; Orr, Mark; Fox, Richard

    2007-01-01

    Advanced Tool Integration Environment (ATIE) is the name of both a software system and a Web-based environment created by the system for maintaining an archive of legacy software and the expertise involved in developing the legacy software. ATIE can also be used in modifying legacy software and developing new software. The information that can be encapsulated in ATIE includes experts' documentation, input and output data of test cases, source code, and compilation scripts. All of this information is available within a common environment and retained in a database for ease of access and recovery by use of powerful search engines. ATIE also accommodates the embedment of supporting software that users require for their work, and even enables access to supporting commercial-off-the-shelf (COTS) software within the flow of the expert's work. The flow of work can be captured by saving the sequence of computer programs that the expert uses. A user gains access to ATIE via a Web browser. A modern Web-based graphical user interface promotes efficiency in the retrieval, execution, and modification of legacy code. Thus, ATIE saves time and money in the support of new and pre-existing programs.

  12. Programmatic access to data and information at the IRIS DMC via web services

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.

    2011-12-01

    The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata, and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use, and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g., Perl, Python, Java), command line utilities such as wget and curl, or any web browser. We anticipate these services being used for everything from simple command line access, through use in shell scripts and higher-level programming languages, to integration within complex data processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles-and-zeros convention, and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned data. By using this library, a developer will not need to learn the details of the service interfaces or understand the data formats returned. This library will be used to build the software bridge needed to request data and information from within MATLAB®. We also provide several client scripts written in Perl for the retrieval of waveform data, metadata, and earthquake catalogs using command line programs. For more information on the DMC's web services please visit http://www.iris.edu/ws/
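
    As an illustration of programmatic access to HTTP services of this kind, a Python standard-library sketch is shown below; the endpoint URL and query parameters are invented placeholders, not the documented interfaces at http://www.iris.edu/ws/.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        base = "http://service.example.org/ws-event/query"   # hypothetical endpoint
        params = {"minmagnitude": 6.0, "starttime": "2011-01-01", "format": "text"}

        # Build the query string and fetch the catalog over HTTP.
        with urlopen(base + "?" + urlencode(params)) as resp:
            catalog = resp.read().decode()
        print(catalog.splitlines()[:5])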

  13. Image/text automatic indexing and retrieval system using context vector approach

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Caid, William R.; Ren, Clara Z.; McCabe, Patrick

    1995-11-01

    Thousands of documents and images are generated daily, both on and off line, on the information superhighway and other media. Storage technology has improved rapidly to handle these data, but indexing this information is becoming very costly. HNC Software Inc. has developed a technology for automatic indexing and retrieval of free text and images. The technique, demonstrated here, is based on the concept of 'context vectors', which encode a succinct representation of the associated text and the features of sub-images. In this paper, we describe the Automated Librarian System, which was designed for free-text indexing, and the Image Content Addressable Retrieval System (ICARS), which extends the technique from the text domain into the image domain. Both systems have the ability to automatically assign indices for a new document and/or image based on content similarities in the database. ICARS also has the capability to retrieve images based on similarity of content using index terms, text descriptions, and user-generated images as queries, without performing segmentation or object recognition.
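
    A minimal Python sketch of the context-vector idea: represent each document as the normalized average of its word vectors and rank by cosine similarity to a query vector. The small random vectors below stand in for learned HNC-style context vectors.

        import numpy as np

        rng = np.random.default_rng(3)
        vocab = {w: rng.normal(size=16) for w in
                 "satellite image ocean ship harbor text library index".split()}

        def context_vector(text):
            """Average the word vectors of known words and normalize."""
            vecs = [vocab[w] for w in text.split() if w in vocab]
            v = np.mean(vecs, axis=0)
            return v / np.linalg.norm(v)

        docs = ["satellite image of ocean", "library text index"]
        doc_vecs = np.array([context_vector(d) for d in docs])
        query = context_vector("ship in harbor near ocean")
        print(doc_vecs @ query)   # cosine scores; higher = more similar content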

  14. 45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... software and hardware used: (1) To introduce, control and account for data items in providing public... undertaken, and the resources required to complete the project; (2) The preparation of an APD; (3) The preparation of a detailed project plan describing when and how the computer system will be designed and...

  15. A general UNIX interface for biocomputing and network information retrieval software.

    PubMed

    Kiong, B K; Tan, T W

    1993-10-01

    We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts and examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, invoke shell scripts and new programs simply with a basic understanding of the UNIX operating system and any text editor. This program was written in C language and uses the UNIX curses and termcap libraries. It is freely available as a tar compressed file (by anonymous FTP from nuscc.nus.sg).

  16. A qualitative study on personal information management (PIM) in clinical and basic sciences faculty members of a medical university in Iran

    PubMed Central

    Sedghi, Shahram; Abdolahi, Nida; Azimi, Ali; Tahamtan, Iman; Abdollahi, Leila

    2015-01-01

    Background: Personal Information Management (PIM) refers to the tools and activities used to save and retrieve personal information for future use. This study examined the PIM activities of faculty members of Iran University of Medical Sciences (IUMS) regarding their preferred PIM tools and four aspects of acquiring, organizing, storing, and retrieving personal information. Methods: The qualitative design was based on a phenomenology approach, and we carried out 37 interviews with clinical and basic sciences faculty members of IUMS in 2014. The participants were selected using a random sampling method. All interviews were recorded by a digital voice recorder, and then transcribed, codified, and finally analyzed using NVivo 8 software. Results: The use of PIM electronic tools (e-tools) was below expectation among the studied sample, and just 37% had reasonable knowledge of PIM e-tools such as external hard drives, flash memories, etc. However, all participants used both paper and electronic devices to store and access information. Internal mass memories (in laptops) and flash memories were the most used e-tools to save information. Most participants used "subject" (41.0%) and "file name" (33.7%) to save, organize, and retrieve their stored information. Most users preferred paper-based rather than electronic tools to keep their personal information. Conclusion: Faculty members had little knowledge about PIM techniques and tools. Those who organized personal information could more easily retrieve the stored information for future use. Greater familiarity with PIM tools, and training courses on PIM tools and techniques, are suggested. PMID:26793648

  17. Energy and Environmental Issues in Eastern Europe and Central Asia: An Annotated Guide to Information Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gant, K.S.

    2000-10-09

    Energy and environmental problems undermine the potential for sustained economic development and contribute to political and economic instability in the strategically important region surrounding the Caspian and Black Seas. Many organizations supporting efforts to resolve problems in this region have found that consensus building--a prerequisite for action--is a difficult process. Reaching agreement on priorities for investment, technical collaboration, and policy incentives depends upon informed decision-making by governments and local stakeholders. And while vast quantities of data and numerous analyses and reports are more accessible than ever, wading through the many potential sources in search of timely and relevant data is a formidable task. To facilitate more successful data searches and retrieval, this document provides annotated references to over 200 specific information sources, and over twenty primary search engines and data retrieval services, that provide relevant and timely information related to the environment, energy, and economic development around the Caspian and Black Seas. This document is an advance copy of the content that Oak Ridge National Laboratory (ORNL) plans to transfer to the web in HTML format to facilitate interactive search and retrieval of information using standard web-browser software.

  18. A Bayesian Retrieval of Greenland Ice Sheet Internal Temperature from Ultra-wideband Software-defined Microwave Radiometer (UWBRAD) Measurements

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Durand, M. T.; Jezek, K. C.; Yardim, C.; Bringer, A.; Aksoy, M.; Johnson, J. T.

    2017-12-01

    The ultra-wideband software-defined microwave radiometer (UWBRAD) is designed to provide an ice sheet internal temperature product by measuring low-frequency microwave emission. Twelve channels ranging from 0.5 to 2.0 GHz are covered by the instrument. A Greenland airborne campaign in September 2016 provided the first demonstration of ultra-wideband radiometer observations of geophysical scenes, including ice sheets. Another flight is planned for September 2017 to acquire measurements over the central ice sheet. A Bayesian framework is designed to retrieve the ice sheet internal temperature from simulated UWBRAD brightness temperature (Tb) measurements over the Greenland flight path with limited prior information about the ground. A 1-D heat-flow model, the Robin model, was used to model the ice sheet internal temperature profile from ground information. Synthetic UWBRAD Tb observations were generated via the partially coherent radiation transfer model, which uses the Robin model temperature profile and an exponential fit of ice density from borehole measurements as input, and were corrupted with noise. The effective surface temperature, the geothermal heat flux, the variance of upper-layer ice density, and the variance of fine-scale density variation in the deeper ice sheet were treated as unknown variables within the retrieval framework. Each parameter is defined over its possible range and set to be uniformly distributed. The Markov chain Monte Carlo (MCMC) approach is applied to make the unknown parameters randomly walk in the parameter space. We investigate whether the variables can be improved over their priors using the MCMC approach and contribute to the temperature retrieval. UWBRAD measurements near Camp Century from 2016 were also treated with the MCMC to examine the framework in the presence of scattering effects. The fine-scale density fluctuation is an important parameter: it is the most sensitive yet most uncertain parameter in the estimation framework, and including it greatly improved the retrieval results. The ice sheet vertical temperature profile, especially the 10-m temperature, can be well retrieved via the MCMC process. Future work will apply the Bayesian approach to UWBRAD airborne measurements.
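
    A minimal Python sketch of the random-walk Metropolis-Hastings step underlying such an MCMC retrieval, for one unknown parameter with a uniform prior; the Gaussian likelihood below stands in for the radiative transfer model.

        import numpy as np

        rng = np.random.default_rng(4)
        obs, sigma = 1.3, 0.2                  # synthetic measurement and noise level
        lo, hi = 0.0, 3.0                      # uniform prior bounds

        def log_post(theta):
            if not lo <= theta <= hi:
                return -np.inf                 # outside the prior support
            return -0.5 * ((obs - theta) / sigma) ** 2

        theta, chain = 1.0, []
        for _ in range(20_000):
            prop = theta + rng.normal(0, 0.1)  # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop                   # accept the proposal
            chain.append(theta)

        print(np.mean(chain[5000:]), np.std(chain[5000:]))  # posterior mean, spread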

  19. Partial Automation of Requirements Tracing

    NASA Technical Reports Server (NTRS)

    Hayes, Jane; Dekhtyar, Alex; Sundaram, Senthil; Vadlamudi, Sravanthi

    2006-01-01

    Requirements Tracing on Target (RETRO) is software for after-the-fact tracing of textual requirements to support independent verification and validation of software. RETRO applies one of three user-selectable information-retrieval techniques: (1) term frequency/inverse document frequency (TF/IDF) vector retrieval, (2) TF/IDF vector retrieval with simple thesaurus, or (3) keyword extraction. One component of RETRO is the graphical user interface (GUI) for use in initiating a requirements-tracing project (a pair of artifacts to be traced to each other, such as a requirements spec and a design spec). Once the artifacts have been specified and the IR technique chosen, another component constructs a representation of the artifact elements and stores it on disk. Next, the IR technique is used to produce a first list of candidate links (potential matches between the two artifact levels). This list, encoded in Extensible Markup Language (XML), is optionally processed by a filtering component designed to make the list somewhat smaller without sacrificing accuracy. Through the GUI, the user examines a number of links and returns decisions (yes, these are links; no, these are not links). Coded in XML, these decisions are provided to a "feedback processor" component that prepares the data for the next application of the IR technique. The feedback reduces the incidence of erroneous candidate links. Unlike related prior software, RETRO does not require the user to assign keywords, and automatically builds a document index.
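
    As an illustration of option (1), a short Python sketch of TF/IDF vector retrieval for tracing, using scikit-learn; the requirement and design texts are invented, and this is not the RETRO code itself.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        requirements = ["The system shall log all user authentication attempts."]
        design = ["Login module writes an audit record for each sign-on attempt.",
                  "The report generator renders monthly usage charts."]

        vec = TfidfVectorizer()
        matrix = vec.fit_transform(requirements + design)
        scores = cosine_similarity(matrix[:1], matrix[1:])  # requirement vs design elements
        print(scores)   # candidate links above a threshold go to the analyst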

  20. How to Choose a Media Retrieval System.

    ERIC Educational Resources Information Center

    Huber, Joe

    1995-01-01

    Provides guidelines for schools choosing a media retrieval system. Topics include broadband, baseband, coaxial cable, or fiber optic decisions; the control network; selecting scheduling software; presentation software; device control; control from the classroom; and a comparison of systems offered by five companies. (LRW)

  1. The SoRReL papers: Recent publications of the Software Reuse Repository Lab

    NASA Technical Reports Server (NTRS)

    Eichmann, David A. (Editor)

    1992-01-01

    Presented here are some of the papers recently published by the SoRReL. Typical titles include: Design of a Lattice-Based Faceted Classification System; A Hybrid Approach to Software Reuse Repository Retrieval; Selecting Reusable Components Using Algebraic Specifications; Neural Network-Based Retrieval from Reuse Repositories; and A Neural Net-Based Approach to Software Metrics.

  2. The Healthcare Administrator's Associate: an experiment in distributed healthcare information systems.

    PubMed Central

    Fowler, J.; Martin, G.

    1997-01-01

    The Healthcare Administrator's Associate is a collection of portable tools designed to support analysis of data retrieved via the Internet from diverse distributed healthcare information systems by means of the InfoSleuth system of distributed software agents. Development of these tools is part of an effort to enhance access to diverse and geographically distributed healthcare data in order to improve the basis upon which administrative and clinical decisions are made. PMID:9357686

  3. Conceptual design for the National Water Information System

    USGS Publications Warehouse

    Edwards, Melvin D.; Putnam, Arthur L.; Hutchison, Norman E.

    1986-01-01

    The Water Resources Division of the U.S. Geological Survey began the design and development of a National Water Information System (NWIS) in 1983. The NWIS will replace and integrate the existing data systems of the National Water Data Storage and Retrieval System, National Water Data Exchange, National Water-Use Information Program, and Water Resources Scientific Information Center. The NWIS has been designed as an interactive, distributed data system. The software system has been designed in a modular manner which integrates existing software functions and allows multiple use of software modules. The data base has been designed as a relational data model that allows integrated storage of the existing water data, water-use data, and water-data indexing information by using a common relational data base management system. The NWIS will be operated on microcomputers located in each of the Water Resources Division's District offices and many of its State, subdistrict, and field offices. The microcomputers will be linked together through a national telecommunication network maintained by the U. S. Geological Survey. The NWIS is scheduled to be placed in operation in 1990.

  4. High-level specification of a proposed information architecture for support of a bioterrorism early-warning system.

    PubMed

    Berkowitz, Murray R

    2013-01-01

    Current information systems for use in detecting bioterrorist attacks lack a consistent, overarching information architecture. An overview of the use of biological agents as weapons during a bioterrorist attack is presented. Proposed are the design, development, and implementation of a medical informatics system to mine pertinent databases, retrieve relevant data, invoke appropriate biostatistical and epidemiological software packages, and automatically analyze these data. The top-level information architecture is presented. Systems requirements and functional specifications for this level are presented. Finally, future studies are identified.

  5. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, and efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  6. Data systems and computer science programs: Overview

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  7. A biomedical information system for retrieval and manipulation of NHANES data.

    PubMed

    Mukherjee, Sukrit; Martins, David; Norris, Keith C; Jenders, Robert A

    2013-01-01

    The retrieval and manipulation of data from large public databases like the U.S. National Health and Nutrition Examination Survey (NHANES) may require sophisticated statistical software and significant expertise that may be unavailable in the university setting. In response, we have developed the Data Retrieval And Manipulation System (DReAMS), an automated information system that handles all processes of data extraction and cleaning and then joins different subsets to produce analysis-ready output. The system is a browser-based data warehouse application in which the input data from flat files or operational systems are aggregated in a structured way so that the desired data can be read, recoded, queried and extracted efficiently. The current pilot implementation of the system provides access to a limited portion of the NHANES database. We plan to increase the amount of data available through the system in the near future and to extend the techniques to other large databases from the CDU archive, with a current holding of about 53 databases.
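
    A minimal sketch of the extract-clean-join pattern the abstract describes, assuming made-up values; the column names (SEQN, RIDAGEYR, BPXSY1) follow NHANES conventions, but this is not the DReAMS code:

```python
# Illustrative sketch: join two NHANES-style subsets on the respondent
# sequence number, recode a raw column, and extract an analysis-ready cohort.
import pandas as pd

demo = pd.DataFrame({"SEQN": [1, 2, 3], "RIDAGEYR": [34, 61, 47]})
bpx  = pd.DataFrame({"SEQN": [1, 2, 3], "BPXSY1": [118, 152, 131]})

merged = demo.merge(bpx, on="SEQN", how="inner")        # join subsets
merged["hypertensive"] = merged["BPXSY1"] >= 140        # recode step
analysis_ready = merged.query("RIDAGEYR >= 40")         # extract cohort
print(analysis_ready)
```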

  8. A navigation and control system for an autonomous rescue vehicle in the space station environment

    NASA Technical Reports Server (NTRS)

    Merkel, Lawrence

    1991-01-01

    A navigation and control system was designed and implemented for an orbital autonomous rescue vehicle envisioned to retrieve astronauts or equipment in the case that they become disengaged from the space station. The rescue vehicle, termed the Extra-Vehicular Activity Retriever (EVAR), has an on-board inertial measurement unit and GPS receivers for self-state estimation, a laser range imager (LRI) and cameras for object state estimation, and a data link for reception of space station state information. The states of the retriever and objects (obstacles and the target object) are estimated by inertial state propagation which is corrected via measurements from the GPS, the LRI system, or the camera system. Kalman filters are utilized to perform sensor fusion and estimate the state propagation errors. Control actuation is performed by a Manned Maneuvering Unit (MMU). Phase plane control techniques are used to control the rotational and translational state of the retriever. The translational controller provides station-keeping or motion along either Clohessy-Wiltshire trajectories or straight-line trajectories in the LVLH frame of any sufficiently observed object or of the space station. The software was used to successfully control a prototype EVAR on an air bearing floor facility and a simulated EVAR operating in a simulated orbital environment. The designs of the navigation system and the control system are presented. Also discussed are the hardware systems and the overall software architecture.
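
    The sensor-fusion step described above can be illustrated with a one-dimensional Kalman filter that propagates a position/velocity state inertially and corrects it with noisy position fixes; this is a generic textbook sketch, not the EVAR flight code:

```python
# Minimal Kalman-filter sketch: inertial propagation of a position/velocity
# state, corrected by a noisy GPS-like position measurement. Values are
# illustrative.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # constant-velocity propagation
H = np.array([[1.0, 0.0]])               # only position is measured
Q = 1e-4 * np.eye(2)                     # process (propagation) noise
R = np.array([[0.5]])                    # measurement noise

x = np.array([[0.0], [1.0]])             # state: position, velocity
P = np.eye(2)                            # state covariance

for step in range(50):
    # Predict: inertial state propagation
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct propagation error with a position measurement
    z = np.array([[x[0, 0] + np.random.normal(0, 0.7)]])
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("estimated position/velocity:", x.ravel())
```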

  9. Architectural Design Document for the Technology Demonstration of the Joint Network Defence and Management System (JNDMS) Project

    DTIC Science & Technology

    2009-09-21

    specified by contract no. W7714-040875/001/SV. This document contains the design of the JNDMS software to the system architecture level. Other...alternative for the presentation functions. ASP, Java, ActiveX, DLL, HTML, DHTML, SOAP, .NET HTML, DHTML, XML, JScript, VBScript, SOAP, .NET...retrieved through the network, typically by a network management console. Information is contained in a Management Information Base (MIB), which is a data

  10. HARV ANSER Flight Test Data Retrieval and Processing Procedures

    NASA Technical Reports Server (NTRS)

    Yeager, Jessie C.

    1997-01-01

    Under the NASA High-Alpha Technology Program, the High Alpha Research Vehicle (HARV) was used to conduct flight tests of advanced control effectors, advanced control laws, and high-alpha design guidelines for future super-maneuverable fighters. The High Alpha Research Vehicle is a pre-production F/A-18 airplane modified with a multi-axis thrust-vectoring system for augmented pitch and yaw control power and Actuated Nose Strakes for Enhanced Rolling (ANSER) to augment body-axis yaw control power. Flight testing at the Dryden Flight Research Center (DFRC) began in July 1995 and continued until May 1996. Flight data will be utilized to evaluate control law performance and aircraft dynamics, determine aircraft control and stability derivatives using parameter identification techniques, and validate design guidelines. To accomplish these purposes, essential flight data parameters were retrieved from the DFRC data system and stored on the Dynamics and Control Branch (DCB) computer complex at Langley. This report describes the multi-step process used to retrieve and process these data and documents the results of those tasks. Documentation includes software listings, flight information, maneuver information, time intervals for which data were retrieved, lists of data parameters and definitions, and example data plots.

  11. Distributed nuclear medicine applications using World Wide Web and Java technology.

    PubMed

    Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H

    2000-01-01

    At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented, and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computed tomography (SPECT).

  12. Applying Use Cases to Describe the Role of Standards in e-Health Information Systems

    NASA Astrophysics Data System (ADS)

    Chávez, Emma; Finnie, Gavin; Krishnan, Padmanabhan

    Individual health records (IHRs) contain a person's lifetime records of their key health history and care within a health system (National E-Health Transition Authority, Retrieved Jan 12, 2009 from http://www.nehta.gov.au/coordinated-care/whats-in-iehr, 2004). This information can be processed and stored in different ways. The record should be available electronically to authorized health care providers and the individual anywhere, anytime, to support high-quality care. Many organizations provide a diversity of solutions for e-health and its services. Standards play an important role to enable these organizations to support information interchange and improve efficiency of health care delivery. However, there are numerous standards to choose from and not all of them are accessible to the software developer. This chapter proposes a framework to describe the e-health standards that can be used by software engineers to implement e-health information systems.

  13. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime minicomputer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project Office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user-friendly menu drivers.

  14. Formalizing structured file services for the data storage and retrieval subsystem of the data management system for Spacestation Freedom

    NASA Technical Reports Server (NTRS)

    Jamsek, Damir A.

    1993-01-01

    A brief example of the use of formal methods techniques in the specification of a software system is presented. The report is part of a larger effort targeted at defining a formal methods pilot project for NASA. One possible application domain that may be used to demonstrate the effective use of formal methods techniques within the NASA environment is presented. It is not intended to provide a tutorial on either formal methods techniques or the application being addressed. It should, however, provide an indication that the application being considered is suitable for a formal methods treatment by showing how such a task may be started. The particular system being addressed is the Structured File Services (SFS), which is a part of the Data Storage and Retrieval Subsystem (DSAR), which in turn is part of the Data Management System (DMS) onboard Spacestation Freedom. This is a software system that is currently under development for NASA. An informal mathematical development is presented. Section 3 contains the same development using Penelope (23), an Ada specification and verification system. The complete text of the English-version Software Requirements Specification (SRS) is reproduced in Appendix A.

  15. Data-Base Software For Tracking Technological Developments

    NASA Technical Reports Server (NTRS)

    Aliberti, James A.; Wright, Simon; Monteith, Steve K.

    1996-01-01

    Technology Tracking System (TechTracS) is a computer program developed for use in storing and retrieving information on technology, and related patent information, developed under the auspices of NASA Headquarters and NASA's field centers. Contents of the data base include multiple scanned still images and QuickTime movies as well as text. TechTracS includes word-processing, report-editing, chart-and-graph-editing, and search-editing subprograms. Extensive keyword searching capabilities enable rapid location of technologies, innovators, and companies. The system performs routine functions automatically and serves multiple users.

  16. The data operation centre tool. Architecture and population strategies

    NASA Astrophysics Data System (ADS)

    Dal Pra, Stefano; Crescente, Alberto

    2012-12-01

    Keeping track of the layout of the computing resources in a large datacenter is a complex task. DOCET is a database-backed web tool designed and implemented at INFN. It aims at providing a uniform interface to manage and retrieve needed information about one or more datacenters, such as available hardware, software and their status. Having a suitable application is, however, of little use until most of the information about the centre has been entered in DOCET's database. Manually inserting all the information from scratch is an unfeasible task. After describing DOCET's high-level architecture, its main features and current development track, we present and discuss the work done to populate the DOCET database for the INFN-T1 site by retrieving information from a heterogeneous variety of authoritative sources, such as DNS, DHCP, Quattor host profiles, etc. We then describe the work being done to integrate DOCET with some common management operations, such as adding a newly installed host to DHCP and DNS, or creating a suitable Quattor profile template for it.

  17. Mineral Resources Data System (MRDS)

    USGS Publications Warehouse

    Mason, G.T.; Arndt, R.E.

    1996-01-01

    The U.S. Geological Survey (USGS) operates the Mineral Resources Data System (MRDS), a digital system that contained 111,955 records on Sept. 1, 1995. Records describe metallic and industrial commodity deposits, mines, prospects, and occurrences in the United States and selected other countries. These records have been created over the years by USGS commodity specialists and through cooperative agreements with geological surveys of U.S. States and other countries. This CD-ROM contains the complete MRDS data base, several subsets of it, and software to allow data retrieval and display. Data retrievals are made by using GSSEARCH, a program that is included on this CD-ROM. Retrievals are made by specifying fields or any combination of the fields that provide information on deposit name, location, commodity, deposit model type, geology, mineral production, reserves, and references. A tutorial is included. Retrieved records may be printed or written to a hard disk file in four different formats: ASCII, fixed, comma-delimited, and dBASE-compatible.

  18. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches, and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578

  19. Methods for estimating magnitude and frequency of floods in Arizona, developed with unregulated and rural peak-flow data through water year 2010

    USGS Publications Warehouse

    Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.

    2014-01-01

    The regional regression equations were integrated into the U.S. Geological Survey’s StreamStats program. The StreamStats program is a national map-based web application that allows the public to easily access published flood frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means for retrieving the most up-to-date flood frequency and basin characteristic data. StreamStats is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.

  20. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their projects. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development-effort-estimation techniques is used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performance of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
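
    As a hedged sketch of the calibrate-while-estimating idea, the following fits a toy COCOMO-style power law by exhaustive parameter search over invented historical data; it is not the actual 2cee methodology or its data:

```python
# Toy calibration: effort = a * size**b, with (a, b) chosen by exhaustive
# search to minimize mean absolute percentage error on historical projects.
import itertools
import numpy as np

size_kloc = np.array([10, 25, 60, 120])       # historical project sizes
effort_pm = np.array([30, 90, 260, 600])      # observed person-months

def mape(a, b):
    pred = a * size_kloc ** b
    return np.mean(np.abs(pred - effort_pm) / effort_pm)

grid = itertools.product(np.linspace(1.0, 4.0, 61), np.linspace(0.9, 1.3, 41))
a_best, b_best = min(grid, key=lambda ab: mape(*ab))

# Report both the estimate and its (historical) estimation error.
print(f"a={a_best:.2f}, b={b_best:.2f}, error={mape(a_best, b_best):.2%}")
print("estimate for a 40 KLOC project:", a_best * 40 ** b_best, "person-months")
```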

  1. Third Molars on the Internet: A Guide for Assessing Information Quality and Readability.

    PubMed

    Hanna, Kamal; Brennan, David; Sambrook, Paul; Armfield, Jason

    2015-10-06

    Directing patients suffering from third molars (TMs) problems to high-quality online information is not only medically important, but could also enable better engagement in shared decision making. This study aimed to develop a scale that measures the scientific information quality (SIQ) of online information concerning wisdom tooth problems and to conduct a quality evaluation of online TMs resources. In addition, the study evaluated whether a specific piece of readability software (Readability Studio Professional 2012) might be reliable in measuring information comprehension, and explored predictors for the SIQ Scale. A cross-sectional sample of websites was retrieved using certain keywords and phrases, such as "impacted wisdom tooth problems", in 3 popular search engines. The retrieved websites (n=150) were filtered. The retained 50 websites were evaluated to assess their characteristics, usability, accessibility, trust, readability, SIQ, and their credibility using DISCERN and Health on the Net Code (HoNCode). Websites' mean scale scores varied significantly across website affiliation groups such as governmental, commercial, and treatment provider bodies. The SIQ Scale had good internal consistency (alpha=.85) and was significantly correlated with DISCERN (r=.82, P<.01) and HoNCode (r=.38, P<.01). Less than 25% of websites had SIQ scores above 75%. The mean readability grade (10.3, SD 1.9) was above the recommended level, and was significantly correlated with the Scientific Information Comprehension Scale (r=.45, P<.01), which provides evidence for convergent validity. Website affiliation and DISCERN were significantly associated with SIQ (P<.01) and explained 76% of the SIQ variance. The developed SIQ Scale was found to demonstrate reliability and initial validity. Website affiliation, DISCERN, and HoNCode were significant predictors of the quality of scientific information. The Readability Studio software estimates were associated with scientific information comprehensiveness measures.

  2. The Online Bioinformatics Resources Collection at the University of Pittsburgh Health Sciences Library System--a one-stop gateway to online bioinformatics databases and software tools.

    PubMed

    Chen, Yi-Bu; Chattopadhyay, Ansuman; Bergen, Phillip; Gadd, Cynthia; Tannery, Nancy

    2007-01-01

    To bridge the gap between the rising information needs of biological and medical researchers and the rapidly growing number of online bioinformatics resources, we have created the Online Bioinformatics Resources Collection (OBRC) at the Health Sciences Library System (HSLS) at the University of Pittsburgh. The OBRC, containing 1542 major online bioinformatics databases and software tools, was constructed using the HSLS content management system built on the Zope Web application server. To enhance the output of search results, we further implemented the Vivísimo Clustering Engine, which automatically organizes the search results into categories created dynamically based on the textual information of the retrieved records. As the largest online collection of its kind and the only one with advanced search results clustering, OBRC is aimed at becoming a one-stop guided information gateway to the major bioinformatics databases and software tools on the Web. OBRC is available at the University of Pittsburgh's HSLS Web site (http://www.hsls.pitt.edu/guides/genetics/obrc).

  3. EOS MLS Level 2 Data Processing Software Version 3

    NASA Technical Reports Server (NTRS)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; Wang, Shuhui; Manney, Gloria L.

    2011-01-01

    This software accepts calibrated EOS MLS microwave radiance products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori, spectroscopic, and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
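
    The master/slave chunking scheme can be mimicked on a single machine with a process pool; this generic sketch stands in for the cluster setup and is not the MLS production code:

```python
# Illustrative master/worker pattern: the master splits the measurements
# into chunks and each worker processes one chunk independently.
from multiprocessing import Pool

def retrieve_chunk(chunk):
    """Stand-in for a per-chunk retrieval (e.g., temperature estimates)."""
    return [2 * radiance for radiance in chunk]   # toy 'retrieval'

if __name__ == "__main__":
    radiances = list(range(100))
    chunks = [radiances[i:i + 10] for i in range(0, len(radiances), 10)]
    with Pool(processes=4) as pool:                 # the 'slave' instances
        results = pool.map(retrieve_chunk, chunks)  # master coordinates
    print(sum(len(r) for r in results), "profiles retrieved")
```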

  4. REFLECT: Logiciel de restitution des reflectances au sol pour l'amelioration de la qualite de l'information extraite des images satellitales a haute resolution spatiale

    NASA Astrophysics Data System (ADS)

    Bouroubi, Mohamed Yacine

    Multi-spectral satellite imagery, especially at high spatial resolution (finer than 30 m on the ground), represents an invaluable source of information for decision making in various domains connected with natural resources management, environment preservation or urban planning and management. Mapping scales may range from local (resolution finer than 5 m) to regional (resolution coarser than 5 m). The images are characterized by object reflectance in the electromagnetic spectrum, which represents the key information in many applications. However, satellite sensor measurements are also affected by parasitic inputs due to illumination and observation conditions, to the atmosphere, to topography and to sensor properties. Two questions have oriented this research. What is the best approach to retrieve surface reflectance from the measured values while taking into account these parasitic factors? Is this retrieval a sine qua non condition for reliable image information extraction in the diverse domains of application for the images (mapping, environmental monitoring, landscape change detection, resources inventory, etc.)? The goals we have delineated for this research are as follows: (1) develop software to retrieve ground reflectance while taking into account the aspects mentioned earlier; this software had to be modular enough to allow improvement and adaptation to diverse remote sensing application problems; and (2) apply this software in various contexts (urban, agricultural, forest) and analyse the results to evaluate the accuracy gain of information extracted from remote sensing imagery transformed into ground reflectance images, to demonstrate the necessity of operating in this way whatever the type of application. During this research, we have developed a tool to retrieve ground reflectance (the new version of the REFLECT software). This software is based on the formulas (and routines) of the 6S code (Second Simulation of Satellite Signal in the Solar Spectrum) and on the dark-targets method to estimate the aerosol optical depth (AOD), the most difficult factor to correct. Substantial improvements have been made to the existing models. These improvements essentially concern the aerosol properties (integration of a more recent model, improvement of the dark-target selection used to estimate the AOD), the adjacency effect, the adaptation to the most used high-resolution (Landsat TM and ETM+, all HR SPOT 1 to 5, EO-1 ALI and ASTER) and very-high-resolution (QuickBird and Ikonos) sensors, and the correction of topographic effects with a model that separates the direct and diffuse solar radiation components, including the adaptation of this model to forest canopy. Validation has shown that ground reflectance estimation with REFLECT is performed with an accuracy of approximately +/-0.01 in reflectance units (for the visible, near-infrared and middle-infrared spectral bands), even for surfaces with varying topography. This software has allowed demonstrating, through apparent-reflectance simulations, how much the parasitic factors influencing the numerical values of the images may alter the ground reflectance (errors ranging from 10 to 50%). REFLECT has also been used to examine the usefulness of ground reflectance instead of raw data for various common remote sensing applications in domains such as classification, change detection, agriculture and forestry. In most applications (multi-temporal change monitoring, use of vegetation indices, biophysical parameter estimation, etc.), image correction is a crucial step to obtain reliable results. From the computing environment standpoint, REFLECT is organized as a series of menus corresponding to the different steps of introducing input parameters, calculating gas transmittances, estimating the AOD, and finally applying the image correction, with the possibility of using a fast option which processes an image of 5000 by 5000 pixels in approximately 15 minutes. (Abstract shortened by UMI.)
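
    As a small worked example of the first link in any such correction chain, the standard conversion from at-sensor radiance to top-of-atmosphere reflectance is shown below; the full REFLECT chain (gas, aerosol, adjacency and topographic corrections) is far more involved, and the numbers here are illustrative:

```python
# Standard TOA reflectance formula: rho = pi * L * d^2 / (E_sun * cos(theta_s)).
import math

def toa_reflectance(radiance, esun, sun_zenith_deg, earth_sun_dist_au=1.0):
    """radiance in W/(m^2 sr um); esun = band solar irradiance in W/(m^2 um)."""
    return (math.pi * radiance * earth_sun_dist_au ** 2) / (
        esun * math.cos(math.radians(sun_zenith_deg)))

# e.g., a red-band pixel: L = 80, Esun ~ 1550, sun zenith 35 degrees
print(round(toa_reflectance(80.0, 1550.0, 35.0), 4))
```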

  5. QCS : a system for querying, clustering, and summarizing documents.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.

    2006-08-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best-known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence "trimming" and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.

  6. QCS: a system for querying, clustering and summarizing documents.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Schlesinger, Judith D.; O'Leary, Dianne P.

    2006-10-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best-known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design, and the value of this particular combination of modules.
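
    The pipeline's overall shape can be sketched with off-the-shelf components: truncated SVD for LSI-style indexing, k-means on normalized vectors as a stand-in for generalized spherical k-means, and a naive first-sentence extract in place of the HMM/pivoted-QR summarizer. This mirrors the design, not the QCS implementation:

```python
# Query-Cluster-Summarize shape with scikit-learn building blocks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = [
    "Tsunami warnings were issued after the earthquake.",
    "The earthquake magnitude was revised upward.",
    "New clustering methods improve document retrieval.",
    "Latent semantic indexing helps query matching.",
]

tfidf = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # LSI
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(normalize(lsi))

for cluster in sorted(set(labels)):
    members = [d for d, l in zip(docs, labels) if l == cluster]
    summary = members[0].split(".")[0] + "."     # toy extract summary
    print(f"cluster {cluster}: {summary}")
```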

  7. A novel software architecture for the provision of context-aware semantic transport information.

    PubMed

    Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D

    2015-05-26

    The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user's context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system.
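
    A hedged sketch of publishing and querying transport data as semantic triples, in the spirit of the MTO described above; the namespace and property names below are invented for illustration, with rdflib used as a generic RDF/SPARQL toolkit:

```python
# Publish transport facts as RDF triples, then retrieve them with SPARQL.
from rdflib import Graph, Namespace, Literal, RDF

MTO = Namespace("http://example.org/mto#")   # hypothetical ontology namespace
g = Graph()

g.add((MTO.stop_42, RDF.type, MTO.BusStop))
g.add((MTO.stop_42, MTO.servedBy, MTO.line_7))
g.add((MTO.line_7, MTO.mode, Literal("bus")))

# Retrieve every stop served by a bus line.
q = """
PREFIX mto: <http://example.org/mto#>
SELECT ?stop WHERE {
  ?stop a mto:BusStop ;
        mto:servedBy ?line .
  ?line mto:mode "bus" .
}
"""
for row in g.query(q):
    print(row.stop)
```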

  8. An inventory of state natural resources information systems. [including Puerto Rico and the U.S. Virgin Islands

    NASA Technical Reports Server (NTRS)

    Martinko, E. A. (Principal Investigator); Caron, L. M.; Stewart, D. S.

    1984-01-01

    Data bases and information systems developed and maintained by state agencies to support planning and management of environmental and nutural resources were inventoried for all 50 states, Puerto Rico, and U.S. Virgin Islands. The information obtained is assembled into a computerized data base catalog which is throughly cross-referecence. Retrieval is possible by code, state, data base name, data base acronym, agency, computer, GIS capability, language, specialized software, data category name, geograhic reference, data sources, and level of reliability. The 324 automated data bases identified are described.

  9. NASA Technology Takes Center Stage

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In today's fast-paced business world, there is often more information available to researchers than there is time to search through it. Data mining has become the answer to finding the proverbial "needle in a haystack," as companies must be able to quickly locate specific pieces of information from large collections of data. Perilog, a suite of data-mining tools, searches for hidden patterns in large databases to determine previously unrecognized relationships. By retrieving and organizing contextually relevant data from any sequence of terms - from genetic data to musical notes - the software can intelligently compile information about desired topics from databases.

  10. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The SPIRE system consists of software for visual analysis of primarily text-based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or ThemeScape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text-based information spaces.

  11. SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.; Bohn, S.; Crow, V.

    The SPIRE system consists of software for visual analysis of primarily text-based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or ThemeScape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text-based information spaces.
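
    The Galaxies/ThemeScape idea of projecting documents onto a spatial display can be approximated with TF-IDF vectors reduced to two dimensions; this mimics the concept only and is not SPIRE's algorithm suite:

```python
# Reduce TF-IDF document vectors to 2D so that documents with similar
# content land near each other on the display.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "nuclear waste storage and disposal",
    "disposal of hazardous nuclear material",
    "deep learning for image recognition",
    "neural networks recognize images",
]

X = TfidfVectorizer().fit_transform(docs)
coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc}")   # nearby points share themes
```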

  12. Applying the metro map to software development management

    NASA Astrophysics Data System (ADS)

    Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción

    2010-01-01

    This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking of deviations from the plan, analysis of patterns of failure detection and correction, overall assessment of change management policies, and estimation of product quality. The proposed visualization uses a metro-map metaphor along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper shows the implementation of the tool in JavaFX with data from a real project and the results of testing the tool with the aforementioned data and users attempting several information retrieval tasks. The conclusion shows the results of analyzing user response time and efficiency using the MetroMap visualization system. The utility of the tool was positively evaluated.

  13. BanTeC: a software tool for management of corneal transplantation.

    PubMed

    López-Alvarez, P; Caballero, F; Trias, J; Cortés, U; López-Navidad, A

    2005-11-01

    Until recently, all cornea information at our tissue bank was managed manually; no specific database or computer tool had been implemented to provide electronic versions of documents and medical reports. The main objective of the BanTeC project was therefore to create a computerized system to integrate and classify all the information and documents used in the center in order to facilitate management of retrieved and transplanted corneal tissues. We used the Windows platform to develop the project. Microsoft Access and Microsoft Jet Engine were used at the database level, and Data Access Objects was the chosen data access technology. In short, the BanTeC software seeks to computerize the tissue bank. All the initial stages of the development have now been completed, from specification of needs, program design and implementation of the software components, to the total integration of the final result in the real production environment. BanTeC will allow the generation of statistical reports for analysis to improve our performance.

  14. JANIS: NEA JAva-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, Nicolas; Bossant, Manuel; Cabellos, Oscar; Dupont, Emmeric; Díez, Carlos J.

    2017-09-01

    JANIS (JAva-based Nuclear Data Information System) software is developed by the OECD Nuclear Energy Agency (NEA) Data Bank to facilitate the visualization and manipulation of nuclear data, giving access to evaluated nuclear data libraries, such as ENDF, JEFF, JENDL, TENDL, etc., and also to experimental nuclear data (EXFOR) and bibliographical references (CINDA). It is available as a standalone Java program, downloadable and distributed on DVD, and also as a web application on the NEA website. One of the main new features in JANIS is the scripting capability via the command line, which notably automates plot generation and permits automatic extraction of data from the JANIS database. Recent NEA software developments rely on these JANIS features to access nuclear data; for example, the Nuclear Data Sensitivity Tool (NDaST) makes use of covariance data in BOXER and COVERX formats, which are retrieved from the JANIS database. New features added in this version of the JANIS software are described in this paper along with some examples.

  15. A-MADMAN: Annotation-based microarray data meta-analysis tool

    PubMed Central

    Bisognin, Andrea; Coppe, Alessandro; Ferrari, Francesco; Risso, Davide; Romualdi, Chiara; Bicciato, Silvio; Bortoluzzi, Stefania

    2009-01-01

    Background Publicly available datasets of microarray gene expression signals represent an unprecedented opportunity for extracting genomic relevant information and validating biological hypotheses. However, the exploitation of this exceptionally rich mine of information is still hampered by the lack of appropriate computational tools, able to overcome the critical issues raised by meta-analysis. Results This work presents A-MADMAN, an open source web application which allows the retrieval, annotation, organization and meta-analysis of gene expression datasets obtained from Gene Expression Omnibus. A-MADMAN addresses and resolves several open issues in the meta-analysis of gene expression data. Conclusion A-MADMAN allows i) the batch retrieval from Gene Expression Omnibus and the local organization of raw data files and of any related meta-information, ii) the re-annotation of samples to fix incomplete, or otherwise inadequate, metadata and to create user-defined batches of data, iii) the integrative analysis of data obtained from different Affymetrix platforms through custom chip definition files and meta-normalization. Software and documentation are available on-line at . PMID:19563634
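
    A-MADMAN itself is a web application, but the batch-retrieval step it automates can be illustrated with the separate open-source GEOparse library, which downloads a GEO series and exposes its samples and metadata for local organization; the series accession shown is an arbitrary public example, not one from the paper:

```python
# Batch-retrieve a GEO series and list its samples before re-annotation.
import GEOparse

gse = GEOparse.get_GEO(geo="GSE1563", destdir="./geo_cache")  # download series

for gsm_name, gsm in gse.gsms.items():
    title = gsm.metadata.get("title", ["?"])[0]
    print(gsm_name, "->", title)     # inspect metadata before re-annotating
```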

  16. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge-based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application-specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application-specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.
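
    A minimal sketch (with invented class and attribute names) of the structure the guide describes: a small taxonomy of software-object classes plus an associative store of instances that an end user could browse:

```python
# Taxonomy of software-object classes and an associative instance store.
from dataclasses import dataclass, field

@dataclass
class SoftwareObject:                     # root of the taxonomy
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Subroutine(SoftwareObject):         # specialization in the taxonomy
    arguments: list = field(default_factory=list)

associative_memory = {}                   # name -> object instance

def store(obj):
    associative_memory[obj.name] = obj

def browse(cls):
    """Browse all instances of a class (and its subclasses)."""
    return [o for o in associative_memory.values() if isinstance(o, cls)]

store(Subroutine("matmul", {"language": "Fortran"}, arguments=["A", "B"]))
store(SoftwareObject("input_deck", {"format": "namelist"}))
print([o.name for o in browse(SoftwareObject)])   # both instances
print([o.name for o in browse(Subroutine)])       # just 'matmul'
```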

  17. Mining biomedical images towards valuable information retrieval in biomedical and life sciences.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for the scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in published biomedical literature. In last decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images to take advantage in implementing effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.

  18. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported on the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SL4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using Python scripts implementing the Amazon EC2/S3 API via the boto library working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
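
    The control flow can be sketched with the legacy boto 2 interface the paper names; the AMI ID, bucket, and key below are placeholders, and credentials are assumed to come from the environment:

```python
# Launch an instance of the AMI, push job output to S3, then terminate.
import boto

ec2 = boto.connect_ec2()
reservation = ec2.run_instances("ami-00000000", instance_type="m1.large")
instance = reservation.instances[0]
print("launched", instance.id)

s3 = boto.connect_s3()
bucket = s3.get_bucket("atlas-job-output")          # assumed to exist
key = bucket.new_key("run0001/output.root")
key.set_contents_from_filename("output.root")       # export job output to S3

instance.terminate()                                # stop the AMI instance
```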

  19. Content analysis of cancer blog posts.

    PubMed

    Kim, Sujin

    2009-10-01

    The efficacy of user-defined subject tagging and software-generated subject tagging for describing and organizing cancer blog contents was explored. The Technorati search engine was used to search the blogosphere for cancer blog postings generated during a two-month period. Postings were mined for relevant subject concepts, and blogger-defined tags and Text Analysis Portal for Research (TAPoR) software-defined tags were generated for each message. Descriptive data were collected, and the blogger-defined tags were compared with software-generated tags. Three standard vocabularies (Opinion Templates, Basic Resource, and Medical Subject Headings [MeSH] Resource) were used to assign subject terms to the blogs, with results compared for efficacy in information retrieval. Descriptive data showed that most of the studied cancer blogs (80%) contained fewer than 500 words each. The numbers of blogger-defined tags per posting (M = 4.49 per posting) were significantly smaller than the TAPoR keywords (M = 23.55 per posting). Both blogger-defined subject tags and software-generated subject tags were often overly broad or overly narrow in focus, producing less than effective search results for those seeking to extract information from cancer blogs. Additional exploration into methods for systematically organizing cancer blog postings is necessary if blogs are to become stable and efficacious information resources for cancer patients, friends, families, or providers.

  20. Managing Written Directives: A Software Solution to Streamline Workflow.

    PubMed

    Wagner, Robert H; Savir-Baruch, Bital; Gabriel, Medhat S; Halama, James R; Bova, Davide

    2017-06-01

    A written directive is required by the U.S. Nuclear Regulatory Commission for any use of 131I above 1.11 MBq (30 μCi) and for patients receiving radiopharmaceutical therapy. This requirement has also been adopted and must be enforced by the agreement states. As the introduction of new radiopharmaceuticals increases therapeutic options in nuclear medicine, time spent on regulatory paperwork also increases. The pressure of managing these time-consuming regulatory requirements may heighten the potential for inaccurate or incomplete directive data and subsequent regulatory violations. To improve on the paper-trail method of directive management, we created a software tool using a Health Insurance Portability and Accountability Act (HIPAA)-compliant database. This software allows for secure data-sharing among physicians, technologists, and managers while saving time, reducing errors, and eliminating the possibility of loss and duplication. Methods: The software tool was developed using Visual Basic, which is part of the Visual Studio development environment for the Windows platform. Patient data are deposited in an Access database on a local HIPAA-compliant secure server or hard disk. Once a working version had been developed, it was installed at our institution and used to manage directives. Updates and modifications of the software were released regularly until no more significant problems were found with its operation. Results: The software has been used at our institution for over 2 y and has reliably kept track of all directives. All physicians and technologists use the software daily and find it superior to paper directives. They can retrieve active directives at any stage of completion, as well as completed directives. Conclusion: We have developed a software solution for the management of written directives that streamlines and structures the departmental workflow. This solution saves time, centralizes the information for all staff to share, and decreases confusion about the creation, completion, filing, and retrieval of directives. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  1. Design and Implementation of a C++ Software Package to scan for and parse Tsunami Messages issued by the Tsunami Warning Centers for Operational use at the Pacific Tsunami Warning Center

    NASA Astrophysics Data System (ADS)

    Sardina, V.

    2012-12-01

    The US Tsunami Warning Centers (TWCs) have traditionally generated their tsunami message products primarily as blocks of text then tagged with headers that identify them on each particular communications (comms) circuit. Each warning center has a primary area of responsibility (AOR) within which it has an authoritative role regarding parameters such as earthquake location and magnitude. This means that when a major tsunamigenic event occurs, the other warning centers need to quickly access the earthquake parameters issued by the authoritative warning center before issuing their message products intended for customers in their own AOR. Thus, within the operational context of the TWCs, the scientists on duty have an operational need to access the information contained in the message products issued by other warning centers as quickly as possible. As a solution to this operational problem, we designed and implemented a C++ software package that allows scanning for and parsing the entire suite of tsunami message products issued by the Pacific Tsunami Warning Center (PTWC), the West Coast and Alaska Tsunami Warning Center (WCATWC), and the Japan Meteorological Agency (JMA). The scanning and parsing classes composing the resulting C++ software package allow parsing both non-official message products (observatory messages) routinely issued by the TWCs and all official tsunami message products such as tsunami advisories, watches, and warnings. This software package currently allows scientists on duty at the PTWC to automatically retrieve the parameters contained in tsunami messages issued by WCATWC, JMA, or PTWC itself. Extension of the capabilities of the classes composing the software package would make it possible to generate XML- and CAP-compliant versions of the TWCs' message products until new messaging software natively adds these capabilities. Customers who receive the TWCs' tsunami message products could also use the package to automatically retrieve information from messages sent via any text-based communications circuit currently used by the TWCs to disseminate their tsunami message products.
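
    In the spirit of the parsing classes described above (shown here in Python for brevity, against a hypothetical bulletin layout rather than the real, evolving TWC formats), extracting the authoritative earthquake parameters might look like:

```python
# Scan a text product for earthquake parameters using regular expressions.
import re

bulletin = """TSUNAMI BULLETIN NUMBER 001
MAGNITUDE 7.8
LATITUDE 19.7 SOUTH
LONGITUDE 172.9 WEST
DEPTH 33 KM
"""

patterns = {
    "magnitude": r"MAGNITUDE\s+([\d.]+)",
    "latitude":  r"LATITUDE\s+([\d.]+\s+(?:NORTH|SOUTH))",
    "longitude": r"LONGITUDE\s+([\d.]+\s+(?:EAST|WEST))",
    "depth_km":  r"DEPTH\s+([\d.]+)\s+KM",
}

params = {name: m.group(1) for name, pat in patterns.items()
          if (m := re.search(pat, bulletin))}
print(params)
```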

  2. Environmental factor(tm) system: RCRA hazardous waste handler information (on CD-ROM). Data file

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-01

    Environmental Factor(trademark) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous wastes generation, management, and minimization by companies who are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action, or violation information, TSD status, generator and transporter status, and more. (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action. (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery. (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting. (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting.

  3. Technology test results from an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Technical Reports Server (NTRS)

    Erickson, J.; Goode, R.; Grimm, K.; Hess, C.; Norsworthy, R.; Anderson, G.; Merkel, L.; Phinney, D.

    1992-01-01

    The ground-based demonstrations of the Extra Vehicular Activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles, with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system, improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive realtime behavior requirements, are discussed.
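
    A toy rendering of the five-subsystem decomposition named above, with each function a placeholder for the corresponding EVAR subsystem:

```python
# Sense -> perceive -> world model -> reason -> act, in miniature.
def sense():
    """Stand-in for the sensing subsystem (LRI, cameras, IMU, GPS)."""
    return {"imu": (0.0, 0.0), "range_image": [9.8, 10.1, 10.0]}

def perceive(raw):
    """Estimate object states from raw sensor data."""
    return {"target": sum(raw["range_image"]) / len(raw["range_image"])}

def update_world(obs):
    """Fuse new observations into the world model."""
    return {"target_range_m": obs["target"], "obstacles": []}

def reason(world):
    """Plan the next behavior from the world model."""
    return "rendezvous" if world["target_range_m"] > 1.0 else "grapple"

def act(command):
    """Issue control actuation for the chosen behavior."""
    print("executing:", command)

# Supervised-autonomy loop.
for _ in range(3):
    act(reason(update_world(perceive(sense()))))
```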

  4. Technology test results from an intelligent, free-flying robot for crew and equipment retrieval in space

    NASA Astrophysics Data System (ADS)

    Erickson, Jon D.; Goode, R.; Grimm, K. A.; Hess, Clifford W.; Norsworthy, Robert S.; Anderson, Greg D.; Merkel, L.; Phinney, Dale E.

    1992-03-01

    The ground-based demonstrations of Extra Vehicular Activity (EVA) Retriever, a voice- supervised, intelligent, free-flying robot, are designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the space station. The EVA Retriever software is required to autonomously plan and execute a target rendezvous, grapple, and return to base while avoiding stationary and moving obstacles with subsequent object handover. The software architecture incorporates a hierarchical decomposition of the control system that is horizontally partitioned into five major functional subsystems: sensing, perception, world model, reasoning, and acting. The design provides for supervised autonomy as the primary mode of operation. It is intended to be an evolutionary system improving in capability over time and as it earns crew trust through reliable and safe operation. This paper gives an overview of the hardware, a focus on software, and a summary of results achieved recently from both computer simulations and air bearing floor demonstrations. Limitations of the technology used are evaluated. Plans for the next phase, during which moving targets and obstacles drive realtime behavior requirements, are discussed.

  5. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  6. A Collaborative Knowledge Management Process for Implementing Healthcare Enterprise Information Systems

    NASA Astrophysics Data System (ADS)

    Cheng, Po-Hsun; Chen, Sao-Jie; Lai, Jin-Shin; Lai, Feipei

    This paper illustrates a feasible health informatics domain knowledge management process that helps gather useful technology information and reduce knowledge misunderstandings among the engineers who participated in the IBM mainframe rightsizing project at National Taiwan University (NTU) Hospital. We design an asynchronous sharing mechanism to facilitate knowledge transfer, and our health informatics domain knowledge management process can be used to publish and retrieve documents dynamically. It effectively creates an acceptable discussion environment and even lessens the traditional meeting burden among development engineers. An overall description of the current software development status is presented. Then, the knowledge management implementation of health information systems is proposed.

  7. IMAT graphics manual

    NASA Technical Reports Server (NTRS)

    Stockwell, Alan E.; Cooper, Paul A.

    1991-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu-driven executive system coupled with a relational database which links commercial structures, structural dynamics, and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.

  8. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    PubMed

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an intranet and transformed via eXtensible Stylesheet Language (XSL) to be visualized in a uniform way on standard browsers. The core server operation software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.
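
    The described system serves XML records transformed to HTML via XSL stylesheets. The record's implementation is in PHP; purely to illustrate the XML-to-HTML transformation step, here is a minimal Python sketch using the lxml library, with placeholder file names:

        from lxml import etree

        # Placeholder files: an XML referral record and an XSL stylesheet
        # that renders it as HTML (names are illustrative only).
        xml_doc = etree.parse("referral.xml")
        xslt_doc = etree.parse("referral.xsl")

        # Compile the stylesheet once, then apply it to the document.
        transform = etree.XSLT(xslt_doc)
        html_doc = transform(xml_doc)

        # The serialized result can be sent to any standard browser.
        print(str(html_doc))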

  9. Computers in medicine: liability issues for physicians.

    PubMed

    Hafner, A W; Filipowicz, A B; Whitely, W P

    1989-07-01

    Physicians routinely use computers to store, access, and retrieve medical information. As computer use becomes even more widespread in medicine, failure to utilize information systems may be seen as a violation of professional custom and lead to findings of professional liability. Even when a technology is not widespread, failure to incorporate it into medical practice may give rise to liability if the technology is accessible to the physician and reduces risk to the patient. Improvement in the availability of medical information sources imposes a greater burden on the physician to keep current and to obtain informed consent from patients. To routinely perform computer-assisted literature searches for informed consent and diagnosis is 'good medicine'. Clinical and diagnostic applications of computer technology now include computer-assisted decision making with the aid of sophisticated databases. Although such systems will expand the knowledge base and competence of physicians, malfunctioning software raises a major liability question. Also, complex computer-driven technology is used in direct patient care. Defective or improperly used hardware or software can lead to patient injury, thus raising additional complicated questions of professional liability and product liability.

  10. Semantics of User Interface for Image Retrieval: Possibility Theory and Learning Techniques.

    ERIC Educational Resources Information Center

    Crehange, M.; And Others

    1989-01-01

    Discusses the need for a rich semantics for the user interface in interactive image retrieval and presents two methods for building such interfaces: possibility theory applied to fuzzy data retrieval, and a machine learning technique applied to learning the user's deep need. Prototypes developed using videodisks and knowledge-based software are…

  11. A Novel Software Architecture for the Provision of Context-Aware Semantic Transport Information

    PubMed Central

    Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D.

    2015-01-01

    The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user’s context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system. PMID:26016915

  12. A multimedia interactive education system for prostate cancer patients: development and preliminary evaluation.

    PubMed

    Diefenbach, Michael A; Butz, Brian P

    2004-01-21

    A cancer diagnosis is highly distressing. Yet, to make informed treatment choices, patients have to learn complicated disease and treatment information that is often fraught with medical and statistical terminology. Thus, patients need accurate and easy-to-understand information. To introduce the development and preliminary evaluation, through focus groups, of a novel, highly interactive multimedia education software program for patients diagnosed with localized prostate cancer. The prostate interactive education system uses the metaphor of rooms in a virtual health center (ie, reception area, a library, physician offices, group meeting room) to organize information. Text information contained in the library is tailored to a person's information-seeking preference (ie, high versus low information seeker). We conducted a preliminary evaluation through 5 separate focus groups with prostate cancer survivors (N = 18) and their spouses (N = 15). Focus group results point to the timeliness and high acceptability of the software among the target audience. Results also underscore the importance of a guide or tutor who assists in navigating the program and who responds to queries to facilitate information retrieval. Focus groups have established the validity of our approach and point to new directions to further enhance the user interface.

  13. Where's My Data - WMD

    NASA Technical Reports Server (NTRS)

    Quach, William L.; Sesplaukis, Tadas; Owen-Mankovich, Kyran J.; Nakamura, Lori L.

    2012-01-01

    WMD provides a centralized interface to access data stored in the Mission Data Processing and Control System (MPCS) GDS (Ground Data Systems) databases during MSL (Mars Science Laboratory) Testbeds and ATLO (Assembly, Test, and Launch Operations) test sessions. The MSL project organizes its data based on venue (Testbed, ATLO, Ops), with each venue's data stored on a separate database, making it cumbersome for users to access data across the various venues. WMD allows sessions to be retrieved through a Web-based search using several criteria: host name, session start date, or session ID number. Sessions matching the search criteria will be displayed and users can then select a session to obtain and analyze the associated data. The uniqueness of this software comes from its collection of data retrieval and analysis features provided through a single interface. This allows users to obtain their data and perform the necessary analysis without having to worry about where and how to get the data, which may be stored in various locations. Additionally, this software is a Web application that only requires a standard browser without additional plug-ins, providing a cross-platform, lightweight solution for users to retrieve and analyze their data. This software solves the problem of efficiently and easily finding and retrieving data from thousands of MSL Testbed and ATLO sessions. WMD allows the user to retrieve their session in as little as one mouse click, and then to quickly retrieve additional data associated with the session.
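
    The search described above spans several criteria (host name, session start date, session ID). As a minimal, assumption-level sketch of that kind of criteria-based lookup (not the actual MPCS/WMD code or schema), consider a hypothetical SQLite table sessions(session_id, host, start_date, venue):

        import sqlite3

        def find_sessions(db_path, host=None, start_date=None, session_id=None):
            """Look up test sessions by any combination of criteria.

            Hypothetical schema: sessions(session_id, host, start_date, venue).
            The real MPCS databases are not public; this only sketches the
            idea of one query interface spanning several search criteria.
            """
            clauses, params = [], []
            if host is not None:
                clauses.append("host = ?")
                params.append(host)
            if start_date is not None:
                clauses.append("start_date >= ?")
                params.append(start_date)
            if session_id is not None:
                clauses.append("session_id = ?")
                params.append(session_id)
            where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
            with sqlite3.connect(db_path) as conn:
                return conn.execute(
                    "SELECT session_id, host, start_date, venue FROM sessions" + where,
                    params,
                ).fetchall()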

  14. cPath: open source software for collecting, storing, and querying biological pathways.

    PubMed

    Cerami, Ethan G; Bader, Gary D; Gross, Benjamin E; Sander, Chris

    2006-11-13

    Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling.
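
    The web service interface mentioned above suggests a simple HTTP retrieval pattern. The sketch below is illustrative only: the endpoint URL, command, and parameter names are assumptions for the example, not the documented cPath API.

        import urllib.parse
        import urllib.request

        def fetch_pathway(base_url, query, output_format="biopax"):
            """Retrieve pathway records matching a keyword from a
            cPath-like web service. The base_url, parameter names, and
            format value are assumptions for illustration only."""
            params = urllib.parse.urlencode({
                "cmd": "search",          # hypothetical command name
                "q": query,               # free-text keyword
                "format": output_format,  # e.g. BioPAX XML
            })
            with urllib.request.urlopen(f"{base_url}?{params}") as resp:
                return resp.read().decode("utf-8")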

  15. Lidar stand-alone retrieval of atmospheric aerosol microphysical properties during SLOPE

    NASA Astrophysics Data System (ADS)

    Ortiz-Amezcua, Pablo; Samaras, Stefanos; Böckmann, Christine; Antonio Benavent-Oltra, Jose; Luis Guerrero-Rascado, Juan; Román, Roberto; Alados-Arboledas, Lucas

    2018-04-01

    Two cases from the SLOPE campaign at Granada are analyzed in terms of particle microphysical properties using novel software developed at Potsdam University. Multiwavelength Raman lidar measurements of particle extinction and backscatter coefficients, as well as linear particle depolarization ratios, are used as input for the software. The result of the retrieval is a 2-dimensional particle volume distribution as a function of radius and aspect ratio, from which the particle microphysical properties are obtained.

  16. The interactive digital video interface

    NASA Technical Reports Server (NTRS)

    Doyle, Michael D.

    1989-01-01

    A frequent complaint in the computer-oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be taken toward incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented device- and language-independent process for identifying image features on a digital video display which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The system is developed to collect, process, store and present the information provided by the radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit advanced encryption standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and the transmission of information to a central database server, is carried out over a secured Internet connection. The information stored in the central database server is shown on the web page. The users can view the web page on the Internet. A dedicated and secured web and database server (https) is used to provide information security.
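
    The record notes that collected data are encrypted with 256-bit AES before transmission. As a hedged illustration of that single step (not the system's actual code), the following uses the Python cryptography package's AES-256-GCM primitive; the payload format and key handling are invented for the example:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def encrypt_tag_reading(key: bytes, reading: bytes) -> bytes:
            """Encrypt one RFID reading with AES-256-GCM.

            key must be 32 bytes (256 bits). A fresh 12-byte nonce is
            prepended to the ciphertext so the receiver can decrypt.
            """
            nonce = os.urandom(12)
            return nonce + AESGCM(key).encrypt(nonce, reading, None)

        def decrypt_tag_reading(key: bytes, blob: bytes) -> bytes:
            nonce, ciphertext = blob[:12], blob[12:]
            return AESGCM(key).decrypt(nonce, ciphertext, None)

        # Example: a 256-bit key shared between reader host and server.
        # The payload layout below is invented for illustration.
        key = AESGCM.generate_key(bit_length=256)
        blob = encrypt_tag_reading(key, b"tag=0xA1B2;site=storage-03")
        assert decrypt_tag_reading(key, blob) == b"tag=0xA1B2;site=storage-03"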

  18. Design of a lattice-based faceted classification system

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Atkins, John

    1992-01-01

    We describe a software reuse architecture supporting component retrieval by facet classes. The facets are organized into a lattice of facet sets and facet n-tuples. The query mechanism supports precise retrieval and flexible browsing.
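
    As a rough sketch of retrieval by facet classes (an illustration under assumed facet names, not the authors' lattice implementation), components can be classified by facet n-tuples and queried by fixing some facets while leaving others open:

        # Components classified by facet n-tuples such as
        # (function, object, medium); facet names are invented here.
        components = {
            "insert_sort": ("sort", "array", "memory"),
            "merge_files": ("merge", "file", "disk"),
            "quick_sort":  ("sort", "array", "memory"),
        }

        def retrieve(query):
            """Return component names whose facet tuple agrees with every
            facet the query constrains (None = unconstrained facet)."""
            return [
                name for name, facets in components.items()
                if all(q is None or q == f for q, f in zip(query, facets))
            ]

        print(retrieve(("sort", None, None)))    # ['insert_sort', 'quick_sort']
        print(retrieve((None, "file", "disk")))  # ['merge_files']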

  19. Development of a Database Design for Serials Control in the Defense Communications Agency.

    DTIC Science & Technology

    1985-10-21

    ...effective control over the subscriptions. Studies had shown that an automated system was both possible and desirable from a technical and management...way, and to determine what software and hardware could be used to enter, store, and retrieve the information. This study showed the logical structure...

  20. The Computer: An Effective Research Assistant

    PubMed Central

    Gancher, Wendy

    1984-01-01

    The development of software packages such as data management systems and statistical packages has made it possible to process large amounts of research data. Data management systems make the organization and manipulation of such data easier. Floppy disks ease the problem of storing and retrieving records. Patient information can be kept confidential by restricting access with computer passwords linked to research files, or by using removable floppy disks. These attributes make the microcomputer essential to modern primary care research. PMID:21279042

  1. Environmental Factor(tm) system: RCRA hazardous waste handler information (on cd-rom). Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-04-01

    Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management and minimization by companies who are large quantity generators; and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information - dates of evaluation, violation, enforcement and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available for no additional charge.

  2. Environmental Factor(tm) system: RCRA hazardous waste handler information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1999-03-01

    Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management and minimization by companies who are large quantity generators; and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information -- dates of evaluation, violation, enforcement and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available for no additional charge.

  3. A Multimodal Search Engine for Medical Imaging Studies.

    PubMed

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.

  4. Third Molars on the Internet: A Guide for Assessing Information Quality and Readability

    PubMed Central

    Brennan, David; Sambrook, Paul; Armfield, Jason

    2015-01-01

    Background Directing patients suffering from third molar (TM) problems to high-quality online information is not only medically important, but also could enable better engagement in shared decision making. Objectives This study aimed to develop a scale that measures the scientific information quality (SIQ) for online information concerning wisdom tooth problems and to conduct a quality evaluation for online TM resources. In addition, the study evaluated whether a specific piece of readability software (Readability Studio Professional 2012) might be reliable in measuring information comprehension, and explored predictors for the SIQ Scale. Methods A cross-sectional sample of websites was retrieved using certain keywords and phrases such as “impacted wisdom tooth problems” using 3 popular search engines. The retrieved websites (n=150) were filtered. The retained 50 websites were evaluated to assess their characteristics, usability, accessibility, trust, readability, SIQ, and their credibility using DISCERN and Health on the Net Code (HoNCode). Results Websites’ mean scale scores varied significantly across website affiliation groups such as governmental, commercial, and treatment provider bodies. The SIQ Scale had a good internal consistency (alpha=.85) and was significantly correlated with DISCERN (r=.82, P<.01) and HoNCode (r=.38, P<.01). Less than 25% of websites had SIQ scores above 75%. The mean readability grade (10.3, SD 1.9) was above the recommended level, and was significantly correlated with the Scientific Information Comprehension Scale (r=.45, P<.01), which provides evidence for convergent validity. Website affiliation and DISCERN were significantly associated with SIQ (P<.01) and explained 76% of the SIQ variance. Conclusion The developed SIQ Scale was found to demonstrate reliability and initial validity. Website affiliation, DISCERN, and HoNCode were significant predictors for the quality of scientific information. The Readability Studio software estimates were associated with scientific information comprehensiveness measures. PMID:26443470

  5. Lessons learned from a pilot implementation of the UMLS information sources map.

    PubMed

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this new environment, which are different from those for Gopher, wide area information servers (WAIS), World-Wide-Web (WWW), and MOSAIC resources.

  6. Software for Managing Personal Files.

    ERIC Educational Resources Information Center

    Lundeen, Gerald

    1989-01-01

    Discusses the special characteristics of personal file management software and compares four microcomputer software packages: Notebook II with Bibliography and Convert, Pro-Cite with Biblio-Links, askSam, and Reference Manager. Each package is evaluated in terms of the user interface, file maintenance, retrieval capabilities, output, and…

  7. A Physical Validation Program for the GPM Mission

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2003-01-01

    The GPM mission is currently planned for start in the late 2007 - early 2008 time frame. Its main scientific goal is to help answer pressing scientific problems arising within the context of global and regional water cycling. These problems cut across a hierarchy of scales and include climate-water cycle interactions, techniques for improving weather and climate predictions, and better methods for combining observed precipitation with hydrometeorological prediction models for applications to hazardous flood-producing storms, seasonal flood and drought conditions, and fresh water resource assessments. The GPM mission will expand the scope of precipitation measurement through the use of a constellation of some 9 satellites, one of which will be an advanced TRMM-like core satellite carrying a dual-frequency Ku-Ka band precipitation radar and an advanced, multifrequency passive microwave radiometer with vertical-horizontal polarization discrimination. The other constellation members will include new dedicated satellites and co-existing operational/research satellites carrying similar (but not identical) passive microwave radiometers. The goal of the constellation is to achieve approximately 3-hour sampling at any spot on the globe -- continuously. The constellation's orbit architecture will consist of a mix of sun-synchronous and non-sun-synchronous satellites with the core satellite providing measurements of cloud-precipitation microphysical processes plus calibration-quality rainrate retrievals to be used with the other retrieval information to ensure bias-free constellation coverage. A major requirement before the retrieved rainfall information generated by the GPM mission can be used effectively by prognostic models to improve weather forecasts, hydrometeorological forecasts, and climate model reanalysis simulations is a capability to quantify the error characteristics of the retrievals. A solution to this problem has been held up in past precipitation missions by the lack of suitable error modeling systems incorporated into the validation programs and data distribution systems. An overview of how NASA intends to overcome this problem for the GPM mission using a physically-based error modeling approach within a multi-faceted validation program is described. The solution is to first identify specific user requirements and then determine the most stringent of these requirements that embodies all essential error characterization information needed by the entire user community. In the context of NASA's scientific agenda for the GPM mission, the most stringent user requirement is found within the data assimilation community. The fundamental theory of data assimilation vis-a-vis ingesting satellite precipitation information into the pre-forecast initializations is based on quantifying the conditional bias and precision errors of individual rain retrievals, and the space-time structure of the precision error (i.e., the spatial-temporal error covariance). By generating the hardware and software capability to produce this information in a near real-time fashion, and to couple the derived quantitative error properties to the actual retrieved rainrates, all key validation users can be satisfied. The talk will describe the essential components of the hardware and software systems needed to generate such near real-time error properties, as well as the various paradigm shifts needed within the validation community to produce a validation program relevant to the precipitation user community.

  8. GeneStoryTeller: a mobile app for quick and comprehensive information retrieval of human genes.

    PubMed

    Eleftheriou, Stergiani V; Bourdakou, Marilena M; Athanasiadis, Emmanouil I; Spyrou, George M

    2015-01-01

    In the last few years, mobile devices such as smartphones and tablets have become an integral part of everyday life, due to their rapid software/hardware development, as well as the increased portability they offer. Nevertheless, up to now, only a few apps capable of fast and robust access to services have been developed in the field of bioinformatics. We have developed GeneStoryTeller, a mobile application for Android platforms, where users are able to instantly retrieve information regarding any recorded human gene, derived from eight publicly available databases, as a summary story. Complementary information regarding gene-drug interactions, functional annotation and disease associations for each selected gene is also provided in the gene story. The most challenging part during the development of GeneStoryTeller was to keep a balance between storing data locally within the app and obtaining updated content dynamically via a network connection. This was accomplished with the implementation of an administrative site where data are curated and synchronized with the application, requiring minimal human intervention. © The Author(s) 2015. Published by Oxford University Press.

  9. Updates to the QBIC system

    NASA Astrophysics Data System (ADS)

    Niblack, Carlton W.; Zhu, Xiaoming; Hafner, James L.; Breuel, Tom; Ponceleon, Dulce B.; Petkovic, Dragutin; Flickner, Myron D.; Upfal, Eli; Nin, Sigfredo I.; Sull, Sanghoon; Dom, Byron E.; Yeo, Boon-Lock; Srinivasan, Savitha; Zivkovic, Dan; Penner, Mike

    1997-12-01

    QBIC™ (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project such as indexing for high-dimensional multimedia data, retrieval of gray level images, and storyboard generation suitable for video. It describes aspects of QBIC software including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.
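
    Query-by-content systems of this kind typically rank images by the distance between global feature vectors such as color histograms. The sketch below illustrates that general idea only; QBIC's actual features and high-dimensional indexing are considerably more elaborate:

        import numpy as np

        def color_histogram(image, bins=8):
            """Flattened 3-D RGB histogram, normalized to sum to 1.
            image: uint8 array of shape (H, W, 3)."""
            hist, _ = np.histogramdd(
                image.reshape(-1, 3), bins=(bins, bins, bins),
                range=((0, 256),) * 3,
            )
            hist = hist.ravel()
            return hist / hist.sum()

        def rank_by_similarity(query_img, collection):
            """Order collection items (name -> image array) by the L1
            distance between histograms; smaller means more similar."""
            q = color_histogram(query_img)
            scored = [(np.abs(q - color_histogram(img)).sum(), name)
                      for name, img in collection.items()]
            return sorted(scored)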

  10. Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.

    PubMed

    Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay

    2016-11-01

    Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.

  11. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    PubMed

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology) which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services make one step further for the researchers to simulate and understand the entirety of a biological system, by allowing them to retrieve biological models in their own tool, combine queries in workflows and efficiently analyse models.
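
    Since the services are described by a WSDL file and accessed over SOAP, a client can be sketched with a generic SOAP library. In the Python sketch below, the zeep library is real, but the WSDL URL and the operation name getModelById are placeholders, not taken from the BioModels documentation:

        from zeep import Client  # third-party SOAP client: pip install zeep

        # Placeholder WSDL location, not the documented BioModels endpoint.
        WSDL_URL = "https://example.org/biomodels/services?wsdl"

        def fetch_model(model_id: str) -> str:
            """Retrieve one curated model (e.g. as SBML) via SOAP.

            zeep reads the WSDL and exposes each declared operation on
            client.service; 'getModelById' is an assumed operation name.
            """
            client = Client(WSDL_URL)
            return client.service.getModelById(model_id)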

  12. Automated Predictive Diagnosis (APD): A 3-tiered shell for building expert systems for automated predictions and decision making

    NASA Technical Reports Server (NTRS)

    Steib, Michael

    1991-01-01

    The APD software features include: on-line help; a three-level architecture (logic environments, setup/application environment, and data environment); an explanation capability; and file handling. The kinds of experimentation and record keeping that lead to effective expert systems are facilitated by: (1) a library of inferencing modules (in the logic environment); (2) an explanation capability which reveals logic strategies to users; (3) automated file naming conventions; (4) an information retrieval system; and (5) on-line help. These aid effective use of knowledge, debugging, and experimentation. Since the APD software anticipates the logical rules becoming complicated, it is embedded in a production system language (CLIPS) to ensure the full power of the production system paradigm of CLIPS and the availability of the procedural language C. The development of the APD software is discussed, along with three example applications: a toy application, an experimental application, and an operational prototype for submarine maintenance predictions.

  13. More emotional facial expressions during episodic than during semantic autobiographical retrieval.

    PubMed

    El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis

    2016-04-01

    There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed by facial analysis software that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.

  14. Operator interface design considerations for a PACS information management system

    NASA Astrophysics Data System (ADS)

    Steinke, James E.; Nabijee, Kamal H.; Freeman, Rick H.; Prior, Fred W.

    1990-08-01

    As prototype PACS grow into fully digital departmental and hospital-wide systems, effective information storage and retrieval mechanisms become increasingly important. Thus far, designers of PACS workstations have concentrated on image communication and display functionality. The new challenge is to provide appropriate operator interface environments to facilitate information retrieval. The "Marburg Model" [1] provides a detailed analysis of the functions, control flows and data structures used in Radiology. It identifies a set of "actors" who perform information manipulation functions. Drawing on this model and its associated methodology, it is possible to identify four modes of use of information systems in Radiology: Clinical Routine, Research, Consultation, and Administration. Each mode has its own specific access requirements and views of information. An operator interface strategy appropriate for each mode will be proposed. Clinical Routine mode is the principal concern of PACS primary diagnosis workstations. In a full PACS implementation, such workstations must provide a simple and consistent navigational aid for the on-line image database, a local work list of cases to be reviewed, and easy access to information from other hospital information systems. A hierarchical method of information access is preferred because it provides the ability to start at high-level entities and iteratively narrow the scope of information from which to select subsequent operations. An implementation using hierarchical, nested software windows which fulfills such requirements shall be examined.

  15. Computerized system for the follow-up of patients with heart valve replacements.

    PubMed

    Bain, W H; Fyfe, I C; Rodger, R A

    1985-04-01

    A system is described which will accept, store, retrieve and analyze information on large numbers of patients who undergo valve replacement surgery. The purpose of the database is to yield readily available facts concerning the patient's clinical course, prosthetic valve function, length of survival, and incidence of complications. The system uses the Apple Macintosh computer, which is one of the current examples of small, desk-top microprocessors. The software for the input, editing and analysis programs has been written by a professional software writer in close collaboration with a cardiac surgeon. Its content is based on 8 years' experience of computer-based valve follow-up. The system is inexpensive and has proved easy to use in practice.

  16. The medical matters wiki: building a library Web site 2.0.

    PubMed

    Robertson, Justin; Burnham, Judy; Li, Jie; Sayed, Ellen

    2008-01-01

    New and innovative information technologies drive the ever-evolving library profession. From clay tablet to parchment scroll to manufactured paper to computer screen pixel, information storage, retrieval, and delivery methods continue to evolve, and each advance irrevocably affects the way libraries, and librarians, work. The Internet has forever altered information and library science, both in theory and practice, but even within this context the progression continues. Though ambiguously defined, Web 2.0 offers a new outlook and new software, presenting librarians with potentially invaluable new tools and methods. This paper discusses the creation, implementation, and maintenance of a Web 2.0 technology, the wiki, as a resource tool for an academic biomedical library.

  17. DICOM image quantification secondary capture (DICOM IQSC) integrated with numeric results, regions, and curves: implementation and applications in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan

    2017-03-01

    In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit, DICOM IQSC, has been developed to implement this SC-centered integration of quantitative analysis information into the routine practice of nuclear medicine. Preliminary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
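
    The general approach (embedding numeric quantification results in the header of a Secondary Capture object) can be sketched with the pydicom package. The private creator label and element offsets below are illustrative assumptions, not the tags used by the authors:

        import pydicom

        def embed_iq_results(sc_path, out_path, mean_counts, roi_area_mm2):
            """Store image-quantification results in a private block of an
            existing Secondary Capture object. Tag layout is illustrative."""
            ds = pydicom.dcmread(sc_path)
            # Reserve a private block in odd group 0x0011 under an
            # invented creator label.
            block = ds.private_block(0x0011, "IQSC_DEMO", create=True)
            block.add_new(0x01, "DS", f"{mean_counts:.2f}")   # mean ROI counts
            block.add_new(0x02, "DS", f"{roi_area_mm2:.2f}")  # ROI area in mm^2
            ds.save_as(out_path)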

  18. Phenotypic and genotypic data integration and exploration through a web-service architecture.

    PubMed

    Nuzzo, Angelo; Riva, Alberto; Bellazzi, Riccardo

    2009-10-15

    Linking genotypic and phenotypic information is one of the greatest challenges of current genetics research. The definition of an Information Technology infrastructure to support this kind of study, and in particular studies aimed at the analysis of complex traits, which require the definition of multifaceted phenotypes and the integration of genotypic information to discover the most prevalent diseases, is a paradigmatic goal of Biomedical Informatics. This paper describes the use of Information Technology methods and tools to develop a system for the management, inspection and integration of phenotypic and genotypic data. We present the design and architecture of the Phenotype Miner, a software system able to flexibly manage phenotypic information, and its extended functionalities to retrieve genotype information from external repositories and to relate it to phenotypic data. For this purpose we developed a module to allow customized data upload by the user and a SOAP-based communications layer to retrieve data from existing biomedical knowledge management tools. In this paper we also demonstrate the system functionality by an example application in which we analyze two related genomic datasets. In this paper we show how a comprehensive, integrated and automated workbench for genotype and phenotype integration can facilitate and improve the hypothesis generation process underlying modern genetic studies.

  19. A method to account for the temperature sensitivity of TCCON total column measurements

    NASA Astrophysics Data System (ADS)

    Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.

    2014-05-01

    The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier Transform Spectrometer (FTS) systems all around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been improved to achieve better and better results (e.g., ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by insufficient knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial for retrieving highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For sites in the mid-latitudes, which can have a large diurnal variation of the temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the final total column gas concentration. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere, where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply an a posteriori correction to the retrieved total column gas concentration. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36.6°N, 97.5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information about the real temperature in the atmosphere and allow us to test the effectiveness of our correction. References: Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P. O.: The Total Carbon Column Observing Network, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369, 2087-2112, 2011.

  20. Information technologies for astrophysics circa 2001

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

    It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage and retrieval off the shelf; and the linear mode of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.

  1. New Methods for Air Quality Model Evaluation with Satellite Data

    NASA Astrophysics Data System (ADS)

    Holloway, T.; Harkey, M.

    2015-12-01

    Despite major advances in the ability of satellites to detect gases and aerosols in the atmosphere, there remains significant, untapped potential to apply space-based data to air quality regulatory applications. Here, we showcase research findings geared toward increasing the relevance of satellite data to support operational air quality management, focused on model evaluation. Particular emphasis is given to nitrogen dioxide (NO2) and formaldehyde (HCHO) from the Ozone Monitoring Instrument aboard the NASA Aura satellite, and evaluation of simulations from the EPA Community Multiscale Air Quality (CMAQ) model. This work is part of the NASA Air Quality Applied Sciences Team (AQAST), and is motivated by ongoing dialog with state and federal air quality management agencies. We present the response of satellite-derived NO2 to meteorological conditions, satellite-derived HCHO:NO2 ratios as an indicator of ozone production regime, and the ability of models to capture these sensitivities over the continental U.S. In the case of NO2-weather sensitivities, we find boundary layer height, wind speed, temperature, and relative humidity to be the most important variables in determining near-surface NO2 variability. CMAQ agreed with relationships observed in satellite data, as well as in ground-based data, over most regions. However, we find that the southwest U.S. is a problem area for CMAQ, where modeled NO2 responses to insolation, boundary layer height, and other variables are at odds with the observations. Our analyses utilize software developed by our team, the Wisconsin Horizontal Interpolation Program for Satellites (WHIPS): a free, open-source program designed to make satellite-derived air quality data more usable. WHIPS interpolates level 2 satellite retrievals onto a user-defined fixed grid, in effect creating a custom-gridded level 3 satellite product. Currently, WHIPS can process the following data products: OMI NO2 (NASA retrieval); OMI NO2 (KNMI retrieval); OMI HCHO (NASA retrieval); MOPITT CO (NASA retrieval); MODIS AOD (NASA retrieval). More information at http://nelson.wisc.edu/sage/data-and-models/software.php.
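
    The gridding step WHIPS performs (mapping irregular level 2 retrieval footprints onto a fixed user-defined grid) can be approximated, very roughly, by averaging all retrievals whose pixel centers fall in each cell. The sketch below is a simplification for illustration, not the WHIPS algorithm itself:

        import numpy as np

        def grid_level2(lons, lats, values, lon_edges, lat_edges):
            """Average point retrievals into fixed grid cells.

            lons, lats, values: 1-D arrays of level 2 pixel centers and
            retrieved quantities. lon_edges/lat_edges define the target
            grid. Returns an array of shape (len(lon_edges)-1,
            len(lat_edges)-1); cells with no data are NaN."""
            sums, _, _ = np.histogram2d(lons, lats,
                                        bins=(lon_edges, lat_edges),
                                        weights=values)
            counts, _, _ = np.histogram2d(lons, lats,
                                          bins=(lon_edges, lat_edges))
            with np.errstate(invalid="ignore"):
                return np.where(counts > 0, sums / counts, np.nan)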

  2. 3DRT-MPASS

    NASA Technical Reports Server (NTRS)

    Lickly, Ben

    2005-01-01

    Data from all current JPL missions are stored in files called SPICE kernels. At present, animators who want to use data from these kernels have to either read through the kernels looking for the desired data, or write programs themselves to retrieve information about all the needed objects for their animations. In this project, methods of automating the process of importing the data from the SPICE kernels were researched. In particular, tools were developed for creating basic scenes in Maya, a 3D computer graphics software package, from SPICE kernels.
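
    For context, SPICE kernels are normally read programmatically rather than by hand. A minimal sketch using the community spiceypy wrapper (the meta-kernel file name and the body/frame choices are placeholders):

        import spiceypy as spice

        # 'meta.tm' is a placeholder meta-kernel listing the SPK/LSK
        # kernels for the mission of interest.
        spice.furnsh("meta.tm")

        et = spice.str2et("2005-06-01T00:00:00")  # UTC -> ephemeris time
        # Position of Mars relative to Earth in the J2000 frame, without
        # aberration correction; returns (position in km, light time in s).
        pos, light_time = spice.spkpos("MARS", et, "J2000", "NONE", "EARTH")
        print(pos, light_time)

        spice.kclear()  # unload all kernels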

  3. IGDS/TRAP Interface Program (ITIP). Software Design Document

    NASA Technical Reports Server (NTRS)

    Jefferys, Steve; Johnson, Wendell

    1981-01-01

    The preliminary design of the IGDS/TRAP Interface Program (ITIP) is described. The ITIP is implemented on the PDP 11/70 and interfaces directly with the Interactive Graphics Design System and the Data Management and Retrieval System. The program provides an efficient method for developing a network flow diagram. Performance requirements, operational requirements, and design requirements are discussed, along with sources and types of input and destinations and types of output. Information processing functions and data base requirements are also covered.

  4. Factor information retrieval system version 2. 0 (fire) (for microcomputers). Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    FIRE Version 2.0 contains EPA's unique recommended criteria and toxic air emission estimation factors. FIRE consists of: (1) an EPA internal repository system that contains emission factor data identified and collected, and (2) an external distribution system that contains only EPA's recommended factors. The emission factors, compiled from a review of the literature, are identified by pollutant name, CAS number, process and emission source descriptions, SIC code, SCC, and control status. The factors are rated for quality using AP-42 rating criteria.

  5. A web-based approach for electrocardiogram monitoring in the home.

    PubMed

    Magrabi, F; Lovell, N H; Celler, B G

    1999-05-01

    A Web-based electrocardiogram (ECG) monitoring service in which a longitudinal clinical record is used for the management of patients is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data is sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report along with reminders and suggestions for action is sent to the doctor and patient by email.
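
    A toy sketch of the server-side agent behavior described above (comparing a new measurement against the patient's history and emailing a summary) is given below; the deviation threshold, addresses, and mail host are invented for illustration:

        import smtplib
        from email.message import EmailMessage
        from statistics import mean, stdev

        def review_new_reading(history, new_value, doctor_email):
            """Flag a new ECG-derived value (e.g. heart rate) deviating
            more than 2 standard deviations from the patient's history,
            then email a short report. The threshold, addresses, and
            mail host are illustrative, not from the described system."""
            if len(history) >= 2 and abs(new_value - mean(history)) > 2 * stdev(history):
                msg = EmailMessage()
                msg["Subject"] = "ECG review: reading outside usual range"
                msg["From"] = "agent@clinic.example"
                msg["To"] = doctor_email
                msg.set_content(
                    f"New value {new_value} vs. history mean {mean(history):.1f}"
                )
                with smtplib.SMTP("mail.clinic.example") as smtp:
                    smtp.send_message(msg)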

  6. cPath: open source software for collecting, storing, and querying biological pathways

    PubMed Central

    Cerami, Ethan G; Bader, Gary D; Gross, Benjamin E; Sander, Chris

    2006-01-01

    Background Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. Results We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. Conclusion cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling. PMID:17101041
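
    As a sketch of third-party retrieval over such a web service, the client below fetches pathway records by keyword and returns the raw exchange-format text. The base URL and query parameter names are hypothetical placeholders, not the documented cPath interface.

        import urllib.parse
        import urllib.request

        def fetch_pathways(base_url, keyword, fmt="biopax"):
            """GET pathway records matching a keyword; returns raw exchange-format text."""
            query = urllib.parse.urlencode({"cmd": "search", "q": keyword, "format": fmt})
            with urllib.request.urlopen(f"{base_url}?{query}") as resp:
                return resp.read().decode("utf-8")

        # Hypothetical call; endpoint and parameters are placeholders:
        # records = fetch_pathways("http://example.org/cpath/webservice.do", "BRCA1")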

  7. Astronomical Software Directory Service

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Payne, Harry; Hayes, Jeffrey

    1997-01-01

    With the support of NASA's Astrophysics Data Program (NRA 92-OSSA-15), we have developed the Astronomical Software Directory Service (ASDS): a distributed, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching. Users are performing about 400 searches per month. A new aspect of our service is the inclusion of telescope and instrumentation manuals, which prompted us to change the name to the Astronomical Software and Documentation Service. ASDS was originally conceived to serve two purposes: to provide a useful Internet service in an area of expertise of the investigators (astronomical software), and as a research project to investigate various architectures for searching through a set of documents distributed across the Internet. Two of the co-investigators were then installing and maintaining astronomical software as their primary job responsibility. We felt that a service which incorporated our experience in this area would be more useful than a straightforward listing of software packages. The original concept was for a service based on the client/server model, which would function as a directory/referral service rather than as an archive. For performing the searches, we began our investigation with a decision to evaluate the Isite software from the Center for Networked Information Discovery and Retrieval (CNIDR). This software was intended as a replacement for Wide-Area Information Service (WAIS), a client/server technology for performing full-text searches through a set of documents. Isite had some additional features that we considered attractive, and we enjoyed the cooperation of the Isite developers, who were happy to have ASDS as a demonstration project. We ended up staying with the software throughout the project, making modifications to take advantage of new features as they came along, as well as influencing the software development. The Web interface to the search engine is provided by a gateway program written in C++ by a consultant to the project (A. Warnock).

  8. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MDRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. The software also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis.
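
    For illustration, one of the serum-creatinine formulas named above, the Cockcroft-Gault estimate of creatinine clearance in mL/min, can be sketched as follows. This is a teaching sketch of the standard formula, not the published software, and should be validated against a reference before any clinical use.

        def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
            """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
            crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
            return crcl * 0.85 if female else crcl

        # Example: a 60-year-old, 70 kg woman with SCr 1.0 mg/dL -> about 66 mL/min
        print(round(cockcroft_gault(60, 70, 1.0, female=True), 1))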

  9. A Multimedia Interactive Education System for Prostate Cancer Patients: Development and Preliminary Evaluation

    PubMed Central

    Butz, Brian P

    2004-01-01

    Background A cancer diagnosis is highly distressing. Yet, to make informed treatment choices patients have to learn complicated disease and treatment information that is often fraught with medical and statistical terminology. Thus, patients need accurate and easy-to-understand information. Objective To introduce the development and preliminary evaluation through focus groups of a novel highly-interactive multimedia-education software program for patients diagnosed with localized prostate cancer. Methods The prostate interactive education system uses the metaphor of rooms in a virtual health center (ie, reception area, a library, physician offices, group meeting room) to organize information. Text information contained in the library is tailored to a person's information-seeking preference (ie, high versus low information seeker). We conducted a preliminary evaluation through 5 separate focus groups with prostate cancer survivors (N = 18) and their spouses (N = 15). Results Focus group results point to the timeliness and high acceptability of the software among the target audience. Results also underscore the importance of a guide or tutor who assists in navigating the program and who responds to queries to facilitate information retrieval. Conclusions Focus groups have established the validity of our approach and point to new directions to further enhance the user interface. PMID:15111269

  10. Clinical results of HIS, RIS, PACS integration using data integration CASE tools

    NASA Astrophysics Data System (ADS)

    Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.

    1995-05-01

    Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly, with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy-to-perform changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous databases and communication protocols.

  11. Software for Optical Archive and Retrieval (SOAR) user's guide, version 4.2

    NASA Technical Reports Server (NTRS)

    Davis, Charles

    1991-01-01

    The optical disk is an emerging technology. Because it is not a magnetic medium, it offers a number of distinct advantages over the established form of storage, advantages that make it extremely attractive. They are as follows: (1) the ability to store much more data within the same space; (2) the random access characteristics of the Write Once Read Many optical disk; (3) a much longer life than that of traditional storage media; and (4) a much greater data access rate. The user's guide for the Software for Optical Archive and Retrieval (SOAR) is presented.

  12. Rapid Diagnostics of Onboard Sequences

    NASA Technical Reports Server (NTRS)

    Starbird, Thomas W.; Morris, John R.; Shams, Khawaja S.; Maimone, Mark W.

    2012-01-01

    Keeping track of sequences onboard a spacecraft is challenging. When reviewing Event Verification Records (EVRs) of sequence executions on the Mars Exploration Rover (MER), operators often found themselves wondering which version of a named sequence the EVR corresponded to. The lack of this information drastically impacts the operators' diagnostic capabilities as well as their situational awareness with respect to the commands the spacecraft has executed, since the EVRs do not provide argument values or explanatory comments. Having this information immediately available can be instrumental in diagnosing critical events and can significantly enhance the overall safety of the spacecraft. This software provides an auditing capability that can eliminate that uncertainty while diagnosing critical conditions. Furthermore, the RESTful interface provides a simple way for sequencing tools to automatically retrieve binary compiled sequence SCMFs (Space Command Message Files) on demand. It also enables developers to change the underlying database while maintaining the same interface to the existing applications. The logging capabilities are also beneficial to operators when they are trying to recall how they solved a similar problem many days earlier: this software enables automatic recovery of SCMF and RML (Robot Markup Language) sequence files directly from the command EVRs, eliminating the need for people to find and validate the corresponding sequences. To address the lack of auditing capability for sequences onboard a spacecraft during earlier missions, extensive logging support was added on the Mars Science Laboratory (MSL) sequencing server. This server is responsible for generating all MSL binary SCMFs from RML input sequences. First, the sequencing server logs every SCMF it generates into a MySQL database, along with the high-level RML file and dictionary-name inputs used to create the SCMF; the SCMF is indexed by a hash value that is automatically included in all command EVRs by the onboard flight software. Second, both the binary SCMF result and the RML input file can be retrieved simply by specifying the hash to a RESTful web interface. This interface enables command-line tools as well as large sophisticated programs to download the SCMFs and RMLs on demand from the database, enabling a vast array of tools to be built on top of it. One such command-line tool can retrieve and display RML files, or annotate a list of EVRs by interleaving them with the original sequence commands. This software has been integrated with the MSL sequencing pipeline, where it will serve sequences useful in diagnostics, debugging, and situational awareness throughout the mission.
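
    A minimal sketch of retrieval keyed by a sequence hash, assuming the third-party requests package; the endpoint shape and the choice of SHA-1 are illustrative assumptions, not the MSL server's actual scheme.

        import hashlib
        import requests  # third-party; pip install requests

        def scmf_hash(scmf_bytes):
            # Hash the compiled binary the way an EVR might reference it (SHA-1 assumed)
            return hashlib.sha1(scmf_bytes).hexdigest()

        def fetch_rml(base_url, seq_hash):
            # One GET per hash returns the originating RML source (endpoint shape assumed)
            resp = requests.get(f"{base_url}/sequences/{seq_hash}/rml", timeout=10)
            resp.raise_for_status()
            return resp.text

    Keying the lookup on a hash carried in every EVR is what lets an annotation tool map each log line back to the exact compiled sequence version, with no separate bookkeeping.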

  13. MaROS: Information Management Service

    NASA Technical Reports Server (NTRS)

    Allard, Daniel A.; Gladden, Roy E.; Wright, Jesse J.; Hy, Franklin H.; Rabideau, Gregg R.; Wallick, Michael N.

    2011-01-01

    This software is provided by the Mars Relay Operations Service (MaROS) task to a variety of Mars projects for the purpose of coordinating communications sessions between landed spacecraft assets and orbiting spacecraft assets at Mars. The Information Management Service centralizes a set of functions previously distributed across multiple spacecraft operations teams, and as such, greatly improves visibility into the end-to-end strategic coordination process. Most of the process revolves around the scheduling of communications sessions between the spacecraft during periods of time when a landed asset on Mars is geometrically visible to an orbiting spacecraft. These relay sessions are used to transfer data both to and from the landed asset via the orbiting asset on behalf of Earth-based spacecraft operators. This software component is an application process running in a Java virtual machine. The component provides all service interfaces via a Representational State Transfer (REST) protocol over HTTPS to external clients. There are two general interaction modes with the service: upload and download of data. For data upload, the service must execute logic specific to the uploaded data type and trigger any applicable calculations, including pass delivery latencies and overflight conflicts. For data download, the software must retrieve and correlate the requested information and deliver it to the requesting client. The provision of this service enables several key advancements over legacy processes and systems. For one, this service represents the first time that end-to-end relay information is correlated into a single shared repository. The software also provides the first multimission latency calculator; previous latency calculations had been performed on a mission-by-mission basis.
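
    As a sketch of one upload-time calculation the service performs, the snippet below flags overlapping relay windows ("overflight conflicts"). The (start, end)-seconds representation is an assumption for illustration, not the MaROS data model.

        from itertools import combinations

        def overflight_conflicts(windows):
            """Return every pair of (start, end) relay windows that overlap in time."""
            ordered = sorted(windows)                    # sort by start time
            return [(a, b) for a, b in combinations(ordered, 2)
                    if b[0] < a[1]]                      # b starts before a ends

        # Example: the second and third passes collide
        print(overflight_conflicts([(0, 600), (900, 1500), (1400, 2000)]))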

  14. A comparison of accuracy and computational feasibility of two record linkage algorithms in retrieving vital status information from HIV/AIDS patients registered in Brazilian public databases.

    PubMed

    de Paula, Adelzon Assis; Pires, Denise Franqueira; Filho, Pedro Alves; de Lemos, Kátia Regina Valente; Barçante, Eduardo; Pacheco, Antonio Guilherme

    2018-06-01

    While cross-referencing information from people living with HIV/AIDS (PLWHA) with the official mortality database is a critical step in monitoring the HIV/AIDS epidemic in Brazil, the accuracy of the linkage routine may compromise the validity of the final database, yielding biased epidemiological estimates. We compared the accuracy and the total runtime of two linkage algorithms applied to retrieve vital status information from PLWHA in Brazilian public databases. Nominally identified records from PLWHA were obtained from three distinct government databases. Linkage routines included an algorithm in the Python language (PLA) and the Reclink software (RlS), a probabilistic package widely used in Brazil. Records from PLWHA known to be alive were added to those from patients reported as deceased, and the data were then searched against the mortality system. Scenarios where 5% and 50% of patients had actually died were simulated, considering both complete cases and 20% missing maternal names. When complete information was available, both algorithms had comparable accuracies. In the scenario of 20% missing maternal names, PLA and RlS had sensitivities of 94.5% and 94.6% (p > 0.5), respectively; after manual review, PLA sensitivity increased to 98.4% (96.6-100.0), exceeding that for RlS (p < 0.01). PLA had a higher positive predictive value in the 5% death-proportion scenario. Manual review was intrinsically required by RlS for up to 14% of registers for people actually dead, whereas the corresponding proportion ranged from 1.5% to 2% for PLA. The lack of manual inspection did not alter PLA sensitivity when complete information was available; when incomplete data were available, PLA sensitivity increased from 94.5% to 98.4%, thus exceeding that presented by RlS (94.6%, p < 0.05). Both linkage algorithms presented interchangeable accuracies in retrieving vital status data from PLWHA. RlS had a considerably shorter runtime but intrinsically required manual review of a sizeable proportion of the matched registries; PLA required considerably more runtime but spared manual review at no expense of accuracy.
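
    A minimal sketch of the comparison step such linkage routines perform: normalize names, then score candidate pairs with a plain string similarity as a stand-in for Reclink's probabilistic weights. Field names and blending weights are illustrative assumptions.

        from difflib import SequenceMatcher
        import unicodedata

        def normalize(name):
            # Strip accents and case so "Kátia" and "KATIA" compare equal
            ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
            return " ".join(ascii_name.upper().split())

        def match_score(rec_a, rec_b):
            """Blend patient-name and maternal-name similarity; maternal may be missing."""
            score = SequenceMatcher(None, normalize(rec_a["name"]),
                                    normalize(rec_b["name"])).ratio()
            if rec_a.get("mother") and rec_b.get("mother"):
                mother = SequenceMatcher(None, normalize(rec_a["mother"]),
                                         normalize(rec_b["mother"])).ratio()
                score = 0.6 * score + 0.4 * mother
            return score  # pairs above a cut-off would go to (manual) review

        print(match_score({"name": "Maria da Silva", "mother": "Ana Silva"},
                          {"name": "MARIA DA SILVA", "mother": "ANA SILVA"}))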

  15. Neural network-based retrieval from software reuse repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.; Srinivas, Kankanahalli

    1992-01-01

    A significant hurdle confronts the software reuser attempting to select candidate components from a software repository - discriminating between those components without resorting to inspection of the implementation(s). We outline an approach to this problem based upon neural networks which avoids requiring the repository administrators to define a conceptual closeness graph for the classification vocabulary.

  16. Study and Analysis of The Robot-Operated Material Processing Systems (ROMPS)

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.

    1996-01-01

    This is a report presenting the progress of a research grant funded by NASA for work performed during 1 Oct. 1994 - 30 Sep. 1995. The report deals with the development and investigation of the potential use of software for data processing for the Robot Operated Material Processing System (ROMPS). It reports on the progress of data processing of calibration samples processed by ROMPS in space and on earth. First, data were retrieved using the I/O software and processed manually using Microsoft Excel. Then the data retrieval and processing were automated using a program written in C, which is able to read the telemetry data and produce plots of time responses of sample temperatures and other desired variables. LabVIEW was also employed to automatically retrieve and process the telemetry data.

  17. A Unified Mathematical Definition of Classical Information Retrieval.

    ERIC Educational Resources Information Center

    Dominich, Sandor

    2000-01-01

    Presents a unified mathematical definition for the classical models of information retrieval and identifies a mathematical structure behind relevance feedback. Highlights include vector information retrieval; probabilistic information retrieval; and similarity information retrieval. (Contains 118 references.) (Author/LRW)
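
    As an illustration of the kind of formulation such a unified definition covers, the classical vector model scores a document d_j against a query q by the cosine of the angle between their term-weight vectors; this is the textbook form, not necessarily the paper's notation:

        \mathrm{sim}(d_j, q) = \frac{\sum_{i=1}^{t} w_{i,j} \, w_{i,q}}{\sqrt{\sum_{i=1}^{t} w_{i,j}^{2}} \, \sqrt{\sum_{i=1}^{t} w_{i,q}^{2}}}

    Probabilistic and similarity retrieval replace this score with a relevance-probability estimate or another proximity measure over the same document and query representations.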

  18. An information gathering system for medical image inspection

    NASA Astrophysics Data System (ADS)

    Lee, Young-Jin; Bajcsy, Peter

    2005-04-01

    We present an information gathering system for medical image inspection that consists of software tools for capturing computer-centric and human-centric information. Computer-centric information includes (1) static annotations, such as (a) image drawings enclosing any selected area, a set of areas with similar colors, a set of salient points, and (b) textual descriptions associated with either image drawings or links between pairs of image drawings, and (2) dynamic (or temporal) information, such as mouse movements, zoom level changes, image panning and frame selections from an image stack. Human-centric information is represented by video and audio signals that are acquired by computer-mounted cameras and microphones. The short-term goal of the presented system is to facilitate learning of medical novices from medical experts, while the long-term goal is to data mine all information about image inspection for assisting in making diagnoses. In this work, we built basic software functionality for gathering computer-centric and human-centric information of the aforementioned variables. Next, we developed the information playback capabilities of all gathered information for educational purposes. Finally, we prototyped text-based and image template-based search engines to retrieve information from recorded annotations, for example, (a) find all annotations containing the word "blood vessels", or (b) search for similar areas to a selected image area. The information gathering system for medical image inspection reported here has been tested with images from the Histology Atlas database.

  19. Object-oriented microcomputer software for earthquake seismology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroeger, G.C.

    1993-02-01

    A suite of graphically interactive applications for the retrieval, editing and modeling of earthquake seismograms has been developed using object-oriented programming methodology and the C++ language. Retriever is an application which allows the user to search for, browse, and extract seismic data from CD-ROMs produced by the National Earthquake Information Center (NEIC). The user can restrict the date, size, location and depth of desired earthquakes and extract selected data into a variety of common seismic file formats. Reformer is an application that allows the user to edit seismic data and data headers, and perform a variety of signal processing operations on that data. Synthesizer is a program for the generation and analysis of teleseismic P and SH synthetic seismograms. The program provides graphical manipulation of source parameters, crustal structures and seismograms, as well as near real-time response in generating synthetics for arbitrary flat-layered crustal structures. All three applications use class libraries developed for implementing geologic and seismic objects and views. Standard seismogram view objects and objects that encapsulate the reading and writing of different seismic data file formats are shared by all three applications. The focal mechanism views in Synthesizer are based on a generic stereonet view object. Interaction with the native graphical user interface is encapsulated in a class library in order to simplify the porting of the software to different operating systems and application programming interfaces. The software was developed on the Apple Macintosh and is being ported to UNIX/X-Window platforms.
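
    The format-encapsulation idea described above can be sketched in a few lines (transposed here from C++ to Python, with a toy text format standing in for real seismic formats): one abstract reader interface, one concrete class per file format, so the applications never touch format details directly.

        from abc import ABC, abstractmethod

        class SeismogramReader(ABC):
            @abstractmethod
            def read(self, path):
                """Return (header_dict, samples) for one seismogram file."""

        class TextColumnReader(SeismogramReader):
            """Toy concrete format: one amplitude value per line of a text file."""
            def read(self, path):
                with open(path) as fh:
                    samples = [float(line) for line in fh if line.strip()]
                return {"format": "text", "npts": len(samples)}, samples

        def load(path, reader):
            # Applications depend only on the interface, so new formats drop in freely
            return reader.read(path)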

  20. Future-saving audiovisual content for Data Science: Preservation of geoinformatics video heritage with the TIB|AV-Portal

    NASA Astrophysics Data System (ADS)

    Löwe, Peter; Plank, Margret; Ziedorn, Frauke

    2015-04-01

    In data-driven research, access to, citation of, and preservation of the full triad consisting of journal article, research data and research software has started to become good scientific practice. To foster the adoption of this practice, the significance of software tools has to be acknowledged, since they enable scientists to harness auxiliary audiovisual content in their research work. The advent of ubiquitous computer-based audiovisual recording and corresponding Web 2.0 hosting platforms like YouTube, Slideshare and GitHub has created new ecosystems for contextual information related to scientific software and data, which continue to grow both in size and variety of content. Current Web 2.0 platforms lack capabilities for long-term archiving and scientific citation, such as persistent identifiers allowing reference to specific intervals of the overall content. The audiovisual content currently shared by scientists ranges from commented how-to demonstrations on software handling, installation and data processing, to aggregated visual analytics of the evolution of software projects over time. Such content is a crucial addition to the scientific message, as it ensures that software-based data-processing workflows can be assessed, understood and reused in the future. In the context of data-driven research, such content needs to be accessible by effective search capabilities, enabling the content to be retrieved and ensuring that the content producers receive credit for their efforts within the scientific community. Improved multimedia archiving and retrieval services for scientific audiovisual content which meet these requirements are currently being implemented by the scientific library community. This paper exemplifies the existing challenges, requirements, benefits and potential of the preservation, accessibility and citability of such audiovisual content for the Open Source communities, based on the new audiovisual web service TIB|AV-Portal of the German National Library of Science and Technology. The web-based portal allows for extended search capabilities based on enhanced metadata derived by automated video analysis. By combining state-of-the-art multimedia retrieval techniques such as speech, text, and image recognition with semantic analysis, content-based access to videos at the segment level is provided. Further, by using the open standard Media Fragment Identifier (MFID), a citable Digital Object Identifier is displayed for each video segment. In addition to the continuously growing footprint of contemporary content, the importance of vintage audiovisual information needs to be considered: this paper showcases the successful application of the TIB|AV-Portal in the preservation and provision of a newly discovered version of a GRASS GIS promotional video produced by the US Army Corps of Engineers research laboratory (US-CERL) in 1987. The video provides insight into the constraints of the very early days of the GRASS GIS project, the oldest active Free and Open Source Software (FOSS) GIS project, with over thirty years of history. GRASS itself has turned into a collaborative scientific platform, a repository of scientific peer-reviewed code, and an algorithm/knowledge hub for future generations of scientists [1]. This is a reference case for future preservation activities regarding semantically enhanced Web 2.0 content from geospatial software projects within academia and beyond.
References: [1] Chemin, Y., Petras, V., Petrasova, A., Landa, M., Gebbert, S., Zambelli, P., Neteler, M., Löwe, P.: GRASS GIS: a peer-reviewed scientific platform and future research repository, Geophysical Research Abstracts, Vol. 17, EGU2015-8314-1, 2015 (submitted)

  1. Retrieval and Validation of Zenith and Slant Path Delays From the Irish GPS Network

    NASA Astrophysics Data System (ADS)

    Hanafin, Jennifer; Jennings, S. Gerard; O'Dowd, Colin; McGrath, Ray; Whelan, Eoin

    2010-05-01

    Retrieval of atmospheric integrated water vapour (IWV) from ground-based GPS receivers and provision of this data product for meteorological applications have been the focus of a number of Europe-wide networks and projects, most recently the EUMETNET GPS water vapour programme. The results presented here are from a project to provide such information about the state of the atmosphere around Ireland for climate monitoring and improved numerical weather prediction. Two geodetic reference GPS receivers have been deployed at Valentia Observatory in Co. Kerry and Mace Head Atmospheric Research Station in Co. Galway, Ireland. These two receivers supplement the existing Ordnance Survey Ireland active network of 17 permanent ground-based receivers. A system to retrieve column-integrated atmospheric water vapour from the data provided by this network has been developed, based on the GPS Analysis at MIT (GAMIT) software package. The data quality of the zenith retrievals has been assessed using co-located radiosondes at the Valentia site and observations from a microwave profiling radiometer at the Mace Head site. Validation of the slant path retrievals requires a numerical weather prediction model, and HIRLAM (High-Resolution Limited Area Model) version 7.2, the current operational forecast model in use at Met Éireann for the region, has been used for this validation work. Results from the data processing and comparisons with the independent observations and model will be presented.

  2. Array data extractor (ADE): a LabVIEW program to extract and merge gene array data.

    PubMed

    Kurtenbach, Stefan; Kurtenbach, Sarah; Zoidl, Georg

    2013-12-01

    Large data sets from gene expression array studies are publicly available, offering information highly valuable for research across many disciplines, ranging from fundamental to clinical research. Highly advanced bioinformatics tools have been made available to researchers, but a demand persists for user-friendly software allowing researchers to quickly extract expression information for multiple genes from multiple studies. Here, we present a user-friendly LabVIEW program to automatically extract gene expression data for a list of genes from multiple normalized microarray datasets. Functionality was tested for 288 class A G protein-coupled receptors (GPCRs) and expression data from 12 studies comparing normal and diseased human hearts. Results confirmed the known regulation of the beta-1 adrenergic receptor and further indicated novel research targets. Although existing software allows for complex data analyses, the LabVIEW-based program presented here, "Array Data Extractor (ADE)", provides users with a tool to retrieve meaningful information from multiple normalized gene expression datasets in a fast and easy way. Further, the graphical programming language used in LabVIEW allows changes to be applied to the program without the need for advanced programming knowledge.

  3. Description of the U.S. Geological Survey Geo Data Portal data integration framework

    USGS Publications Warehouse

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Lucido, Jessica M.

    2012-01-01

    The U.S. Geological Survey has developed an open-standard data integration framework for working efficiently and effectively with large collections of climate and other geoscience data. A web interface accesses catalog datasets to find data services. Data resources can then be rendered for mapping and dataset metadata are derived directly from these web services. Algorithm configuration and information needed to retrieve data for processing are passed to a server where all large-volume data access and manipulation takes place. The data integration strategy described here was implemented by leveraging existing free and open source software. Details of the software used are omitted; rather, emphasis is placed on how open-standard web services and data encodings can be used in an architecture that integrates common geographic and atmospheric data.

  4. Resources, Equipment and Logistics in Support of Long-Term Monitoring at Fort Benning

    DTIC Science & Technology

    2005-08-01

    ... access to cell phone transceivers for data retrieval. Sensor manufacturers, state climatologists, and the EPA provide standards, guidelines ... have cell phone access for data retrieval and any software-driven maintenance ... sensors collect and store data every 30 minutes. Of three current stations, one has a cell phone data link for daily retrieval. (Figure 3: typical meteorological data acquisition station; Table 2 lists equipment.)

  5. Impact of a priori information on IASI ozone retrievals and trends

    NASA Astrophysics Data System (ADS)

    Barret, B.; Peiro, H.; Emili, E.; Le Flochmoën, E.

    2017-12-01

    The IASI sensor has documented atmospheric water vapor, temperature and composition since 2007. The Software for a Fast Retrieval of IASI Data (SOFRID) has been developed to retrieve O3 and CO profiles from IASI in near-real time on a global scale. Information content analyses have shown that IASI enables the quantification of O3 independently in the troposphere, the UTLS and the stratosphere. Validation studies have demonstrated that the daily to seasonal variability of tropospheric and UTLS O3 is well captured by IASI, especially in the tropics. IASI-SOFRID retrievals have also been used to document the tropospheric composition during the Asian monsoon and contributed to determining the O3 evolution during the 2008-2016 period in the framework of the TOAR project. Nevertheless, IASI-SOFRID O3 is biased high in the UTLS and in the tropical troposphere, and the 8-year O3 trends from the different IASI products are significantly different from the O3 trends from UV-Vis satellite sensors (e.g., OMI). SOFRID is based on the Optimal Estimation Method, which requires a priori information to complete the information provided by the measured thermal infrared radiances. In SOFRID-O3 v1.5, used in TOAR, the a priori consists of a single O3 profile and associated covariance matrix based on global O3 radiosoundings. Such a global a priori is characterized by a very large variability, does not represent our best knowledge of the O3 profile at a given time and location, and is biased towards northern-hemisphere middle latitudes. We have therefore implemented the possibility of using dynamical a priori data in SOFRID and performed experiments using O3 climatological data and MLS O3 analyses. We will present O3 distributions and comparisons with O3 radiosoundings from the different SOFRID-O3 retrievals. In particular, we will assess the impact of the different a priori data upon the O3 biases and trends during the IASI period.
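
    Though the abstract does not spell it out, the Optimal Estimation Method it refers to is conventionally written (following Rodgers) as the linear retrieval below, which makes explicit where the a priori enters; the notation is the standard one, not necessarily SOFRID's:

        \hat{x} = x_a + \left( K^{T} S_{\varepsilon}^{-1} K + S_{a}^{-1} \right)^{-1} K^{T} S_{\varepsilon}^{-1} \left( y - K x_a \right)

    Here y is the measured radiance vector, K the Jacobian of the forward model, S_ε the measurement-error covariance, and x_a, S_a the a priori profile and its covariance; replacing the single global (x_a, S_a) with dynamical values is exactly the change the experiments above test.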

  6. Information technologies for astrophysics circa 2001

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1991-01-01

    It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is less easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large data sets. Three limiting paradigms are as follows: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear model of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.

  7. SWMPr: An R Package for Retrieving, Organizing, and ...

    EPA Pesticide Factsheets

    The System-Wide Monitoring Program (SWMP) was implemented in 1995 by the US National Estuarine Research Reserve System. This program has provided two decades of continuous monitoring data at over 140 fixed stations in 28 estuaries. However, the increasing quantity of data provided by the monitoring network has complicated broad-scale comparisons between systems and, in some cases, prevented simple trend analysis of water quality parameters at individual sites. This article describes the SWMPr package, which provides several functions that facilitate data retrieval, organization, and analysis of time series data in the reserve estuaries. Previously unavailable functions for estuaries are also provided to estimate rates of ecosystem metabolism using the open-water method. The SWMPr package has facilitated a cross-reserve comparison of water quality trends and links quantitative information with analysis tools that have use for more generic applications to environmental time series. The manuscript describes a software package that was recently developed to retrieve, organize, and analyze monitoring data from the National Estuarine Research Reserve System. Functions are explained in detail, including recent applications for trend analysis of ecosystem metabolism.

  8. Network Penetration Testing and Research

    NASA Technical Reports Server (NTRS)

    Murphy, Brandon F.

    2013-01-01

    This paper focuses on the research and testing done on penetrating a network for security purposes. This research provides the IT security office with new methods of attacks across and against a company's network, as well as introducing them to new platforms and software that can be used to better assist with protecting against such attacks. Throughout this paper, testing and research were done on two different Linux-based operating systems for attacking and compromising a Windows-based host computer. Backtrack 5 and BlackBuntu (Linux-based penetration-testing operating systems) are two different "attacker" computers that attempt to plant viruses and/or exploits on a host Windows 7 operating system, as well as try to retrieve information from the host. On each Linux OS (Backtrack 5 and BlackBuntu) there is penetration-testing software which provides the necessary tools to create exploits that can compromise a Windows system as well as other operating systems. This paper focuses on two main methods of deploying exploits onto a host computer in order to retrieve information from a compromised system. One method of deployment for an exploit that was tested is known as a "social engineering" exploit. This type of method requires interaction from an unsuspecting user. With this user interaction, a deployed exploit may allow a malicious user to gain access to the unsuspecting user's computer as well as the network that the computer is connected to. Due to more advanced security settings and antivirus protection and detection, this method is easily identified and defended against. The second method of exploit deployment is the method mainly focused upon within this paper. This method required extensive research on the best way to compromise a security-enabled protected network. Once a network has been compromised, any and all devices connected to that network have the potential to be compromised as well. With a compromised network, computers and devices can be penetrated through deployed exploits. This paper illustrates the research done to test the ability to penetrate a network without user interaction, in order to retrieve personal information from a targeted host.

  9. Observing Crop-Height Dynamics Using a UAV

    NASA Astrophysics Data System (ADS)

    Ziliani, M. G.; Parkes, S. D.; McCabe, M.

    2017-12-01

    Retrieval of vegetation height during a growing season is a key indicator for monitoring crop status, offering insight into the forecast yield relative to previous planting cycles. Improvements in Unmanned Aerial Vehicle (UAV) technologies, supported by advances in computer vision and photogrammetry software, have enabled retrieval of crop heights with much higher spatial resolution and coverage. These methodologies retrieve a Digital Surface Map (DSM), which combines terrain and crop elements, from which a Crop Surface Map (CSM) is obtained. Here we describe an automated method for deriving high-resolution CSMs from a DSM, using RGB imagery from a UAV platform. Importantly, the approach does not require a digital terrain map (DTM). The method involves distinguishing between vegetation and bare-ground pixels using vegetation index maps from the RGB orthomosaic derived from the same flight as the DSM. We show that the absolute crop height can be extracted to within several centimeters, exploiting the data captured from a single UAV flight. In addition, the method is applied across five surveys during a maize growing cycle and compared against a terrain map constructed from a baseline UAV survey undertaken prior to crop growth. Results show that the approach is able to reproduce the observed spatial variability of the crop height within the maize field throughout the duration of the growing season. This is particularly valuable since it may be employed to detect intra-field problems (e.g., fertilizer variability, inefficiency in the irrigation system, salinity) at different stages of the season, from which remedial action can be initiated to mitigate against yield loss. The method also demonstrates that UAV imagery combined with commercial photogrammetry software can determine a CSM from a single flight without the requirement of a prior DTM. This, together with the dynamic crop-height estimation, provides useful information with which to inform precision agricultural management at the local scale.
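
    A minimal sketch of the DTM-free idea, assuming numpy and a co-registered RGB orthomosaic and DSM as arrays; the excess-green index, its threshold, and the flat-terrain median are illustrative stand-ins for the paper's vegetation-index classification, not its actual processing chain.

        import numpy as np

        def crop_height_map(dsm, rgb, exg_threshold=0.05):
            """Crop height from a DSM alone, using bare-ground pixels as the terrain reference."""
            r, g, b = (rgb[..., i].astype(float) / 255.0 for i in range(3))
            exg = 2.0 * g - r - b                 # excess-green vegetation index
            ground = exg < exg_threshold          # low ExG -> bare soil
            terrain = np.median(dsm[ground])      # adequate for a near-flat field
            heights = np.where(ground, 0.0, dsm - terrain)
            return np.clip(heights, 0.0, None)    # negative heights are noise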

  10. GeneStoryTeller: a mobile app for quick and comprehensive information retrieval of human genes

    PubMed Central

    Eleftheriou, Stergiani V.; Bourdakou, Marilena M.; Athanasiadis, Emmanouil I.; Spyrou, George M.

    2015-01-01

    In the last few years, mobile devices such as smartphones and tablets have become an integral part of everyday life, due to their rapid software/hardware development, as well as the increased portability they offer. Nevertheless, up to now, only a few apps have been developed in the field of bioinformatics that are capable of fast and robust access to services. We have developed GeneStoryTeller, a mobile application for Android platforms, where users are able to instantly retrieve information regarding any recorded human gene, derived from eight publicly available databases, as a summary story. Complementary information regarding gene–drug interactions, functional annotation and disease associations for each selected gene is also provided in the gene story. The most challenging part during the development of GeneStoryTeller was to keep a balance between storing data locally within the app and obtaining updated content dynamically via a network connection. This was accomplished with the implementation of an administrative site where data are curated and synchronized with the application, requiring minimal human intervention. Database URL: http://bioserver-3.bioacademy.gr/Bioserver/GeneStoryTeller/. PMID:26055097

  11. Information management and analysis system for groundwater data in Thailand

    NASA Astrophysics Data System (ADS)

    Gill, D.; Luckananurung, P.

    1992-01-01

    The Ground Water Division of the Thai Department of Mineral Resources maintains a large archive of groundwater data with information on some 50,000 water wells. Each well file contains information on well location, well completion, borehole geology, water levels, water quality, and pumping tests. In order to enable efficient use of this information a computer-based system for information management and analysis was created. The project was sponsored by the United Nations Development Program and the Thai Department of Mineral Resources. The system was designed to serve users who lack prior training in automated data processing. Access is through a friendly user/system dialogue. Tasks are segmented into a number of logical steps, each of which is managed by a separate screen. Selective retrieval is possible by four different methods of area definition and by compliance with user-specified constraints on any combination of database variables. The main types of outputs are: (1) files of retrieved data, screened according to users' specifications; (2) an assortment of pre-formatted reports; (3) computed geochemical parameters and various diagrams of water chemistry derived therefrom; (4) bivariate scatter diagrams and linear regression analysis; (5) posting of data and computed results on maps; and (6) hydraulic aquifer characteristics as computed from pumping tests. Data are entered directly from formatted screens. Most records can be copied directly from hand-written documents. The database-management program performs data integrity checks in real time, enabling corrections at the time of input. The system software can be grouped into: (1) database administration and maintenance—these functions are carried out by the SIR/DBMS software package; (2) user communication interface for task definition and execution control—the interface is written in the operating system command language (VMS/DCL) and in FORTRAN 77; and (3) scientific data-processing programs, written in FORTRAN 77. The system was implemented on a DEC MicroVAX II computer.

  12. Application of Rough Sets to Information Retrieval.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki

    1998-01-01

    Develops a method of rough retrieval, an application of the rough set theory to information retrieval. The aim is to: (1) show that rough sets are naturally applied to information retrieval in which categorized information structure is used; and (2) show that a fuzzy retrieval scheme is induced from the rough retrieval. (AEF)
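
    For reference, the two approximations at the heart of rough retrieval are the standard ones from rough set theory; given an equivalence relation R (the categorization) with classes [x]_R, a query set X is approximated as follows (textbook notation, not necessarily the article's):

        \underline{R}X = \{ x : [x]_R \subseteq X \}, \qquad \overline{R}X = \{ x : [x]_R \cap X \neq \emptyset \}

    Documents in the lower approximation are retrieved with certainty, those in the upper approximation only possibly, which is what induces the fuzzy retrieval scheme mentioned above.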

  13. Maximizing Accessibility to Spatially Referenced Digital Data.

    ERIC Educational Resources Information Center

    Hunt, Li; Joselyn, Mark

    1995-01-01

    Discusses some widely available spatially referenced datasets, including raster and vector datasets. Strategies for improving accessibility include: acquisition of data in a software-dependent format; reorganization of data into logical geographic units; acquisition of intelligent retrieval software; improving computer hardware; and intelligent…

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guyer, H.B.; McChesney, C.A.

    The overall primary objective of HDAR is to create a repository of historical personnel security documents and provide the functionality needed for archival and retrieval use by other software modules and application users of the DISS/ET system. The software product to be produced from this specification is the Historical Document Archival and Retrieval Subsystem. The product will provide the functionality to capture, retrieve and manage documents currently contained in the personnel security folders in DOE Operations Offices vaults at various locations across the United States. The long-term plan for DISS/ET includes the requirement to allow for capture and storage of arbitrary, currently undefined, clearance-related documents that fall outside the scope of the "cradle-to-grave" electronic processing provided by DISS/ET. However, this requirement is not within the scope of the requirements specified in this document.

  15. Automation of the CAS Document Delivery Service.

    ERIC Educational Resources Information Center

    Steensland, M. C.; Soukup, K. M.

    1986-01-01

    The automation of online order retrieval for Chemical Abstracts Service Document Delivery Service was accomplished by shifting to an order retrieval/dispatch process linked to a Unix network. The Unix-based environment, its terminal emulation, page-break, and user-friendly interface software, and later enhancements are reviewed. Resultant increase…

  16. Multimedia proceedings of the 10th Office Information Technology Conference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudson, B.

    1993-09-10

    The CD contains the handouts for all the speakers, demo software from Apple, Adobe, Microsoft, and Zylabs, and video movies of the keynote speakers. Adobe Acrobat is used to provide full-fidelity retrieval of the speakers' slides, and Apple's QuickTime for Macintosh and Windows is used for video playback. ZyIndex is included for Windows users to provide a full-text search engine for selected documents. There are separately labelled installation and operating instructions for Macintosh and Windows users and some general materials common to both sets of users.

  17. A qualitative evaluation of the crucial attributes of contextual information necessary in EHR design to support patient-centered medical home care.

    PubMed

    Weir, Charlene R; Staggers, Nancy; Gibson, Bryan; Doing-Harris, Kristina; Barrus, Robyn; Dunlea, Robert

    2015-04-16

    Effective implementation of a Primary Care Medical Home model of care (PCMH) requires integration of patients' contextual information (physical, mental, social and financial status) into an easily retrievable information source for the healthcare team and clinical decision-making. This project explored clinicians' perceptions about important attributes of contextual information for clinical decision-making, how contextual information is expressed in CPRS clinical documentation, and how clinicians in a highly computerized environment manage information flow related to these areas. A qualitative design using Cognitive Task Analyses and a modified Critical Incident Technique was employed. The study was conducted in a large VA facility with a fully implemented EHR located in the western United States. Seventeen providers working in a PCMH model of care in Primary Care, Home Based Care and Geriatrics reported on a recent difficult transition requiring contextual information for decision-making. The transcribed interviews were qualitatively analyzed for thematic development related to contextual information using an iterative process and multiple reviewers with ATLAS.ti software. Six overarching themes emerged as attributes of contextual information: informativeness, goal language, temporality, source attribution, retrieval effort, and information quality. These results indicate that specific attributes are needed in order for contextual information to fully support clinical decision-making in a Medical Home care delivery environment. Improved EHR designs are needed for ease of contextual information access, displaying linkages across time and settings, and explicit linkages to both clinician and patient goals. Implications relevant to providers' information needs, team functioning and EHR design are discussed.

  18. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  19. INTERFACING SAS TO ORACLE IN THE UNIX ENVIRONMENT

    EPA Science Inventory

    SAS is an EPA-standard data and statistical analysis software package, while ORACLE is EPA's standard database management system software package. ORACLE has the advantage over SAS in data retrieval and storage capabilities but has limited data and statistical analysis capability.

  20. Optical encryption and QR codes: secure and noise-free information retrieval.

    PubMed

    Barrera, John Fredy; Mira, Alejandro; Torroba, Roberto

    2013-03-11

    We introduce for the first time the concept of an information "container" used before a standard optical encryption procedure. The "container" selected is a QR code, which offers the main advantage of being tolerant to pollutant speckle noise. Besides, the QR code can be read by smartphones, a massively used class of device. Additionally, the QR code adds another security step to the benefits the optical methods already provide. The QR code is generated by means of software freely available worldwide. The concept development proves that the speckle noise polluting the output of normal optical encryption procedures can be avoided, making the adoption of these techniques more attractive. Actual smartphone-collected results are shown to validate our proposal.
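
    For illustration, generating such a QR "container" prior to the optical encryption step takes a couple of lines with the third-party qrcode package (with Pillow installed); the payload and filename are placeholders, and the optical encryption itself is not shown.

        import qrcode  # third-party; pip install "qrcode[pil]"

        # Encode the plaintext into the noise-tolerant "container" image
        img = qrcode.make("message to be optically encrypted")
        img.save("container.png")  # this image is what the optical setup would encrypt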

  1. Ground-Based Correction of Remote-Sensing Spectral Imagery

    NASA Technical Reports Server (NTRS)

    Adler-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander

    2007-01-01

    Software has been developed for an improved method of correcting for the atmospheric optical effects (primarily, effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, there has been developed a version of FLAASH that utilizes the retrieved atmospheric parameters to process spectral image data.

  2. Evaluation of patient satisfaction with tailored online patient education information.

    PubMed

    Atack, Lynda; Luke, Robert; Chien, Elise

    2008-01-01

    Patients are using the Internet for access to standardized health information in ever-growing numbers. Although increased access to health information can be helpful, the quality of information varies widely. All too often, the information retrieved is incomplete, inaccurate, or inappropriate. An interdisciplinary team of clinicians, librarians, software engineers, and multimedia designers developed an online patient education system that enables clinicians to "prescribe" tailored, evidence-based health information. The system provides access to text and video that patients can adapt for language, vision, and hearing preferences. Usability testing was conducted with eight patients in a usability laboratory using the "think-aloud" method, surveys, and interviews. Results indicated that patients were highly satisfied and that the site has the potential to become a valuable resource in disease management. Patients made several recommendations regarding system appearance, function, and content that will have application for other groups developing online patient education systems.

  3. Compliance program data management system for The Idaho National Engineering Laboratory/Environmental Protection Agency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzler, C.L.; Poloski, J.P.; Bates, R.A.

    1988-01-01

    The Compliance Program Data Management System (DMS) developed at the Idaho National Engineering Laboratory (INEL) validates and maintains the integrity of data collected to support the Consent Order and Compliance Agreement (COCA) between the INEL and the Environmental Protection Agency (EPA). The system uses dBase III Plus programs and dBase III Plus in an interactive mode to enter, store, validate, manage, and retrieve analytical information provided on EPA Contract Laboratory Program (CLP) forms and CLP forms modified to accommodate 40 CFR 264 Appendix IX constituent analyses. Data analysis and presentation is performed utilizing SAS, a statistical analysis software program. Archiving of data and results is performed at appropriate stages of data management. The DMS is useful for sampling and analysis programs where adherence to EPA CLP protocol, along with maintenance and retrieval of waste site investigation sampling results, is desired or requested. 3 refs.

  4. Image acquisition unit for the Mayo/IBM PACS project

    NASA Astrophysics Data System (ADS)

    Reardon, Frank J.; Salutz, James R.

    1991-07-01

    The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate with the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.

  5. A model for the electronic support of practice-based research networks.

    PubMed

    Peterson, Kevin A; Delaney, Brendan C; Arvanitis, Theodoros N; Taweel, Adel; Sandberg, Elisabeth A; Speedie, Stuart; Richard Hobbs, F D

    2012-01-01

    The principal goal of the electronic Primary Care Research Network (ePCRN) is to enable the development of an electronic infrastructure to support clinical research activities in primary care practice-based research networks (PBRNs). We describe the model that the ePCRN developed to enhance the growth and to expand the reach of PBRN research. Use cases and activity diagrams were developed from interviews with key informants from 11 PBRNs from the United States and United Kingdom. Discrete functions were identified and aggregated into logical components. Interaction diagrams were created, and an overall composite diagram was constructed describing the proposed software behavior. Software for each component was written and aggregated, and the resulting prototype application was pilot tested for feasibility. A practical model was then created by separating application activities into distinct software packages based on existing PBRN business rules, hardware requirements, network requirements, and security concerns. We present an information architecture that provides for essential interactions, activities, data flows, and structural elements necessary for providing support for PBRN translational research activities. The model describes research information exchange between investigators and clusters of independent data sites supported by a contracted research director. The model was designed to support recruitment for clinical trials, collection of aggregated anonymous data, and retrieval of identifiable data from previously consented patients across hundreds of practices. The proposed model advances our understanding of the fundamental roles and activities of PBRNs and defines the information exchange commonly used by PBRNs to successfully engage community health care clinicians in translational research activities. By describing the network architecture in a language familiar to that used by software developers, the model provides an important foundation for the development of electronic support for essential PBRN research activities.

  6. Ground-Based Microwave Radiometric Remote Sensing of the Tropical Atmosphere

    NASA Astrophysics Data System (ADS)

    Han, Yong

    A partially developed 9-channel ground-based microwave radiometer for the Department of Meteorology at Penn State was completed and tested. Complementary units were added, corrections to both hardware and software were made, and system software was corrected and upgraded. Measurements from this radiometer were used to infer tropospheric temperature, water vapor and cloud liquid water. The various weighting functions at each of the 9 channels were calculated and analyzed to estimate the sensitivities of the brightness temperatures to the desired atmospheric variables. The mathematical inversion problem, in a linear form, was viewed in terms of the theory of linear algebra. Several methods for solving the inversion problem were reviewed. Radiometric observations were conducted during the 1990 Tropical Cyclone Motion Experiment. The radiometer was installed on the island of Saipan in a tropical region. During this experiment, the radiometer was calibrated by using tipping curve and radiosonde data as well as measurements of the radiation from a blackbody absorber. A linear statistical method was first applied for the data inversion. The inversion coefficients in the equation were obtained using a large number of radiosonde profiles from Guam and a radiative transfer model. Retrievals were compared with those from local, Saipan, radiosonde measurements. Water vapor profiles, integrated water vapor, and integrated liquid water were retrieved successfully. For temperature profile retrievals, however, it was shown that the radiometric measurements with experimental noises added no more profile information to the inversion than that which was available from a climatological mean. Although successful retrievals of the geopotential heights were made, it was shown that they were determined mainly by the surface pressure measurements. The reasons why the radiometer did not contribute to the retrievals of temperature profiles and geopotential heights were discussed. A method was developed to derive the integrated water vapor and liquid water from combined radiometer and ceilometer measurements. Under certain assumptions, the cloud absorption coefficients and mean radiating temperature, used in the physical or statistical inversion equation, were determined from the measurements. It was shown that significant improvement on radiometric measurements of the integrated liquid water can be gained with this method.
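
    The linear statistical inversion described above amounts to regressing atmospheric profiles on brightness temperatures over a training ensemble (here, radiosonde profiles run through a radiative transfer model) and applying the resulting coefficients to new measurements. A minimal sketch with synthetic data; the dimensions and noise level are illustrative assumptions, not the values used in the study:

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_channels, n_levels = 500, 9, 20   # illustrative sizes

        # Synthetic training ensemble standing in for radiosonde profiles (q)
        # and radiative-transfer-simulated brightness temperatures (Tb).
        q = rng.normal(size=(n_samples, n_levels))
        forward = rng.normal(size=(n_levels, n_channels))
        Tb = q @ forward + 0.5 * rng.normal(size=(n_samples, n_channels))

        # Least-squares regression coefficients C such that q ~ Tb @ C
        C, *_ = np.linalg.lstsq(Tb - Tb.mean(0), q - q.mean(0), rcond=None)

        # Apply the coefficients to a new measurement to retrieve a profile
        q_retrieved = (Tb[0] - Tb.mean(0)) @ C + q.mean(0)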

  7. A Numerical Testbed for Remote Sensing of Aerosols, and its Demonstration for Evaluating Retrieval Synergy from a Geostationary Satellite Constellation of GEO-CAPE and GOES-R

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Ding, Shouguo; Zeng, Jing; Spurr, Robert; Liu, Xiong; Chance, Kelly; Mishchenko, Michael I.

    2014-01-01

    We present a numerical testbed for remote sensing of aerosols, together with a demonstration for evaluating retrieval synergy from a geostationary satellite constellation. The testbed combines inverse (optimal-estimation) software with a forward model containing linearized code for computing particle scattering (for both spherical and non-spherical particles), a kernel-based (land and ocean) surface bi-directional reflectance facility, and a linearized radiative transfer model for polarized radiance. Calculation of gas absorption spectra uses the HITRAN (HIgh-resolution TRANsmission molecular absorption) database of spectroscopic line parameters and other trace-species cross-sections. The outputs of the testbed include not only the Stokes 4-vector elements and their sensitivities (Jacobians) with respect to the aerosol single-scattering and physical parameters (such as size and shape parameters, refractive index, and plume height), but also DFS (Degrees of Freedom for Signal) values for retrieval of these parameters. This testbed can be used as a tool to provide an objective assessment of the aerosol information content that can be retrieved for any constellation of (planned or real) satellite sensors and for any combination of algorithm design factors (in terms of wavelengths, viewing angles, and radiance and/or polarization to be measured or used). We summarize the components of the testbed, including the derivation and validation of analytical formulae for the Jacobian calculations. Benchmark calculations from the forward model are documented. In the context of NASA's Decadal Survey mission GEO-CAPE (GEOstationary Coastal and Air Pollution Events), we demonstrate the use of the testbed to conduct a feasibility study of using polarization measurements in and around the O2 A band for the retrieval of aerosol height information from space, as well as to assess the potential improvement in the retrieval of fine- and coarse-mode aerosol optical depth (AOD) through the synergistic use of two future geostationary satellites, GOES-R (Geostationary Operational Environmental Satellite R-series) and TEMPO (Tropospheric Emissions: Monitoring of Pollution). Strong synergy between GOES-R and TEMPO is found, especially in their characterization of surface bi-directional reflectance, which can potentially improve the AOD retrieval to the accuracy required by GEO-CAPE.
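
    The DFS values mentioned above are the standard optimal-estimation diagnostic: with Jacobian K, measurement-error covariance Se, and prior covariance Sa, the averaging kernel is A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K, and DFS = trace(A). A minimal sketch of that computation (the matrices themselves would come from the testbed's forward model; this is the generic formula, not the testbed's code):

        import numpy as np

        def averaging_kernel(K, Se, Sa):
            """A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K for Jacobian K."""
            Se_inv = np.linalg.inv(Se)
            gain = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa)) @ K.T @ Se_inv
            return gain @ K

        def dfs(K, Se, Sa):
            """Degrees of freedom for signal: trace of the averaging kernel."""
            return np.trace(averaging_kernel(K, Se, Sa))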

  8. Adjusted Levenberg-Marquardt method application to methane retrieval from IASI/METOP spectra

    NASA Astrophysics Data System (ADS)

    Khamatnurova, Marina; Gribanov, Konstantin

    2016-04-01

    The Levenberg-Marquardt method [1] with an iteratively adjusted parameter and simultaneous evaluation of averaging kernels, together with a technique for parameter selection, was developed and applied to the retrieval of methane vertical profiles in the atmosphere from IASI/METOP spectra. The retrieved methane vertical profiles are then used to calculate the total atmospheric column amount. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) [2] are taken as the initial guess for the retrieval algorithm. Surface temperature and vertical profiles of temperature and humidity are retrieved before the methane vertical profile retrieval for each selected spectrum. A modified version of the FIRE-ARMS software package [3] was used for the numerical experiments. To adjust parameters and validate the method, we used ECMWF MACC reanalysis data [4]. Methane columnar values retrieved from cloudless IASI spectra demonstrate good agreement with MACC columnar values. The comparison is performed for IASI spectra measured in May 2012 over Western Siberia. Application of the method to current IASI/METOP measurements is discussed. 1. Ma C., Jiang L. Some Research on Levenberg-Marquardt Method for the Nonlinear Equations // Applied Mathematics and Computation. 2007. V. 184. P. 1032-1040. 2. http://www.esrl.noaa.gov/psd 3. Gribanov K.G., Zakharov V.I., Tashkun S.A., Tyuterev Vl.G. A New Software Tool for Radiative Transfer Calculations and its Application to IMG/ADEOS Data // JQSRT. 2001. V. 68. № 4. P. 435-451. 4. http://www.ecmwf.int/
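
    The abstract does not give the exact update formula, so the sketch below shows one common damped (Levenberg-Marquardt) form of the optimal-estimation iteration, with gamma as the iteratively adjusted parameter; treat it as an illustration of the technique rather than the authors' exact scheme:

        import numpy as np

        def lm_step(x, xa, y, F, K, Se_inv, Sa_inv, gamma):
            """One damped optimal-estimation iteration (illustrative form).

            x: current state; xa: a priori state; y: measured spectrum;
            F: forward-model spectrum at x; K: Jacobian dF/dx;
            gamma: the iteratively adjusted damping parameter.
            """
            lhs = K.T @ Se_inv @ K + (1.0 + gamma) * Sa_inv
            rhs = K.T @ Se_inv @ (y - F) - Sa_inv @ (x - xa)
            return x + np.linalg.solve(lhs, rhs)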

  9. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    PubMed

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design: Preliminary evaluation of a new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be derived from the postoperative images of the existing cases without adhering to any classification scheme.
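
    At its core, content-based retrieval of this kind reduces to nearest-neighbor search over per-case feature vectors, returning every stored case that falls within the variability limits set for the query. A minimal sketch with hypothetical scalar features; real CBIR features would be computed from the radiographs themselves:

        import numpy as np

        # Hypothetical per-case descriptors: [main Cobb angle, secondary Cobb
        # angle, apex level, curve direction].
        database = np.array([[52.0, 30.0, 9.0, 1.0],
                             [48.0, 35.0, 8.0, 1.0],
                             [65.0, 20.0, 10.0, -1.0]])

        def retrieve(query, db, tolerance=10.0):
            """Indices of stored cases within `tolerance` of the query vector."""
            dist = np.linalg.norm(db - query, axis=1)
            return [int(i) for i in np.argsort(dist) if dist[i] <= tolerance]

        print(retrieve(np.array([50.0, 32.0, 9.0, 1.0]), database))  # -> [0, 1]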

  10. Enhanced Information Retrieval Using AJAX

    NASA Astrophysics Data System (ADS)

    Kachhwaha, Rajendra; Rajvanshi, Nitin

    2010-11-01

    Information retrieval deals with the representation, storage, and organization of, and access to, information items; the representation and organization of information items should provide the user with easy access to the information. With the rapid development of the Internet, large amounts of digitally stored information are readily available on the World Wide Web. This information is so vast that it becomes increasingly difficult and time consuming for users to find the information relevant to their needs. The explosive growth of information on the Internet has greatly increased the need for information retrieval systems, yet most search engines still use conventional information retrieval techniques. An information system needs to implement sophisticated pattern-matching tools to determine content at a faster rate. AJAX has recently emerged as a tool with which the information retrieval process can be made faster, so that information reaches the user at a faster pace than with conventional retrieval systems.

  11. COSMIC: Software catalog 1991 edition diskette format

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The PC edition of the annual COSMIC Software Catalog contains descriptions of the over 1,200 computer programs available for use within the United States as of January 1, 1991. By using the PC version of the catalog, it is possible to conduct extensive searches of the software inventory for programs that meet specific criteria. Elements such as program keywords, hardware specifications, source code languages, and title acronyms can be used as the basis of such searches. After isolating those programs of most interest, the user can either view the records on the monitor or generate a hardcopy listing of all information on those packages. In addition to the program elements that the user can search on, information such as total program size, distribution media, and program price, as well as extensive abstracts of each program, is also available. Another useful feature of the catalog allows for the retention of programs that meet certain search criteria between individual sessions of using the catalog. This allows users to save the information on those programs that are of interest to them in different areas of application; they can then recall a specific collection of programs for information retrieval or further search reduction. In addition, this version of the catalog is adaptable to a network/shared-resource environment, allowing multiple users access to a single copy of the catalog database simultaneously.
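
    The search behavior described, filtering the inventory by keyword, language, and other elements, and retaining hits between sessions, can be illustrated in a few lines of Python; the record fields and sample entries below are hypothetical, not actual COSMIC records:

        # Hypothetical catalog records and a search mirroring the features above.
        catalog = [
            {"title": "TRAJ2", "keywords": {"trajectory", "orbit"},
             "language": "FORTRAN", "price_usd": 500},
            {"title": "THERM1", "keywords": {"thermal", "vacuum"},
             "language": "FORTRAN", "price_usd": 900},
        ]

        def search(programs, keyword=None, language=None):
            hits = programs
            if keyword is not None:
                hits = [p for p in hits if keyword in p["keywords"]]
            if language is not None:
                hits = [p for p in hits if p["language"] == language]
            return hits

        saved = search(catalog, keyword="trajectory")  # retained for later recall
        print([p["title"] for p in saved])             # ['TRAJ2']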

  12. PHASS99: A software program for retrieving and decoding the radiometric ages of igneous rocks from the international database IGBADAT

    NASA Astrophysics Data System (ADS)

    Al-Mishwat, Ali T.

    2016-05-01

    PHASS99 is a FORTRAN program designed to retrieve and decode radiometric and other physical age information of igneous rocks contained in the international database IGBADAT (Igneous Base Data File). In the database, ages are stored in a proprietary format using mnemonic representations. The program can handle up to 99 ages per igneous rock specimen and caters to forty radiometric age systems. The radiometric age alphanumeric strings assigned to each specimen description in the database consist of four components: the numeric age and its exponential modifier, a four-character mnemonic method identification, a two-character mnemonic name of the analysed material, and the reference number in the rock group bibliography vector. For each specimen, the program searches for radiometric age strings, extracts them, parses them, decodes the different age components, and converts them to high-level English equivalents. IGBADAT and similarly structured files are used for input. The output includes three files: a flat raw ASCII text file containing the retrieved radiometric age information, a generic spreadsheet-compatible file for data import to spreadsheets, and an error file. PHASS99 builds on the old TSTPHA (Test Physical Age) decoder program and greatly expands its capabilities. PHASS99 is simple, user friendly, fast, efficient, and does not require users to have knowledge of programming.
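
    The exact IGBADAT string layout is proprietary, but the four components named above suggest a straightforward decoder. The sketch below assumes a hypothetical fixed layout (numeric age with exponent, four-character method mnemonic, two-character material mnemonic, two-digit reference number); the layout and all mnemonics are invented for illustration:

        import re

        # Invented mnemonic tables, for illustration only.
        METHODS = {"KARA": "K-Ar", "RBSR": "Rb-Sr", "UPBZ": "U-Pb (zircon)"}
        MATERIALS = {"WR": "whole rock", "BI": "biotite", "ZR": "zircon"}
        AGE = re.compile(r"(?P<age>\d+(?:\.\d+)?E\d+)"
                         r"(?P<method>[A-Z]{4})(?P<material>[A-Z]{2})(?P<ref>\d{2})")

        def decode(s):
            m = AGE.fullmatch(s)
            if m is None:
                raise ValueError(f"unrecognized age string: {s!r}")
            return {"age_years": float(m["age"]),
                    "method": METHODS.get(m["method"], m["method"]),
                    "material": MATERIALS.get(m["material"], m["material"]),
                    "reference": int(m["ref"])}

        print(decode("250E6KARAWR03"))
        # {'age_years': 250000000.0, 'method': 'K-Ar',
        #  'material': 'whole rock', 'reference': 3}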

  13. Heterogeneous Software System Interoperability Through Computer-Aided Resolution of Modeling Differences

    DTIC Science & Technology

    2002-06-01

    techniques for addressing the software component retrieval problem. Steigerwald [Ste91] introduced the use of algebraic specifications for defining the...provided in terms of a specification written using Luqi’s Prototype Specification Description Language (PSDL) [LBY88] augmented with an algebraic

  14. Quantitative Microbial Risk Assessment Tutorial: Installation of Software for Watershed Modeling in Support of QMRA

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling:• SDMProjectBuilder (which includes the Microbial Source Module as part...

  15. Protecting software agents from malicious hosts using quantum computing

    NASA Astrophysics Data System (ADS)

    Reisner, John; Donkor, Eric

    2000-07-01

    We evaluate how quantum computing can be applied to security problems for software agents. Agent-based computing, which merges technological advances in artificial intelligence and mobile computing, is a rapidly growing domain, especially in applications such as electronic commerce, network management, information retrieval, and mission planning. System security is one of the more eminent research areas in agent-based computing, and the specific problem of protecting a mobile agent from a potentially hostile host is one of the most difficult of these challenges. In this work, we describe our agent model, and discuss the capabilities and limitations of classical solutions to the malicious host problem. Quantum computing may be extremely helpful in addressing the limitations of classical solutions to this problem. This paper highlights some of the areas where quantum computing could be applied to agent security.

  16. Activities of information retrieval in Daicel Corporation : The roles and efforts of information retrieval team

    NASA Astrophysics Data System (ADS)

    Yamazaki, Towako

    In order to stabilize and improve the quality of its information retrieval service, the information retrieval team of Daicel Corporation has worked on standard operating procedures, an interview sheet for information retrieval, a structured format for search reports, and search expressions for some of Daicel's technological fields. These activities and efforts also promote skill sharing and the handing down of skills between searchers. In addition, skill improvements are needed not only for each searcher individually, but also for the information retrieval team as a whole as searchers take on new roles.

  17. Data Visualization in Information Retrieval and Data Mining (SIG VIS).

    ERIC Educational Resources Information Center

    Efthimiadis, Efthimis

    2000-01-01

    Presents abstracts that discuss using data visualization for information retrieval and data mining, including immersive information space and spatial metaphors; spatial data using multi-dimensional matrices with maps; TREC (Text Retrieval Conference) experiments; users' information needs in cartographic information retrieval; and users' relevance…

  18. Infrared Sky Imager (IRSI) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Victor R.

    2016-04-01

    The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.
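
    In its simplest form, a fractional-sky-cover retrieval classifies each calibrated infrared pixel against a clear-sky brightness-temperature threshold and reports the cloudy fraction. The actual IRSI algorithm is more sophisticated; this is only a minimal sketch with an assumed threshold:

        import numpy as np

        def fractional_sky_cover(tb_image, clear_sky_tb):
            """Fraction of pixels warmer than the clear-sky threshold (cloudy)."""
            return float((tb_image > clear_sky_tb).mean())

        tb = np.array([[230.0, 265.0],
                       [240.0, 270.0]])                      # brightness temps, K
        print(fractional_sky_cover(tb, clear_sky_tb=250.0))  # 0.5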

  19. User Interface Composition with COTS-UI and Trading Approaches: Application for Web-Based Environmental Information Systems

    NASA Astrophysics Data System (ADS)

    Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés

    Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real time. In recent times, special interest has been paid to the globalization of information through a common vocabulary (i.e., ontologies) and to the standardized way in which information is retrieved on the Web (i.e., powerful search engines and intelligent software agents). These same principles of globalization and standardization should also apply to the user interfaces of WIS, but those interfaces are still built on traditional development paradigms. In this paper we present an approach to reducing this globalization/standardization gap in the generation of WIS user interfaces by using a real-time "bottom-up" composition perspective with COTS-interface components (interface widgets) and trading services.

  20. Quantitative Microbial Risk Assessment Tutorial Installation of Software for Watershed Modeling in Support of QMRA - Updated 2017

    EPA Science Inventory

    This tutorial provides instructions for accessing, retrieving, and downloading the following software to install on a host computer in support of Quantitative Microbial Risk Assessment (QMRA) modeling: • QMRA Installation • SDMProjectBuilder (which includes the Microbial ...

  1. GSC configuration management plan

    NASA Technical Reports Server (NTRS)

    Withers, B. Edward

    1990-01-01

    The tools and methods used for the configuration management of the artifacts (including software and documentation) associated with the Guidance and Control Software (GCS) project are described. The GCS project is part of a software error studies research program. Three implementations of GCS are being produced in order to study the fundamental characteristics of the software failure process. The Code Management System (CMS) is used to track and retrieve versions of the documentation and software. Application of the CMS for this project is described and the numbering scheme is delineated for the versions of the project artifacts.

  2. Connectionist Interaction Information Retrieval.

    ERIC Educational Resources Information Center

    Dominich, Sandor

    2003-01-01

    Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and activation spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results as regards computational complexity and includes…

  3. A simple procedure for retrieval of a cement-retained implant-supported crown: a case report.

    PubMed

    Buzayan, Muaiyed Mahmoud; Mahmood, Wan Adida; Yunus, Norsiah Binti

    2014-02-01

    Retrieval of cement-retained implant prostheses can be more demanding than retrieval of screw-retained prostheses. This case report describes a simple and predictable procedure to locate the abutment screw access openings of cement-retained implant-supported crowns in cases of fractured ceramic veneer. A conventional periapical radiographic image was captured using a digital camera, transferred to a computer, and manipulated using Microsoft Word software to estimate the location of the abutment screw access.

  4. The software and algorithms for hyperspectral data processing

    NASA Astrophysics Data System (ADS)

    Shyrayeva, Anhelina; Martinov, Anton; Ivanov, Victor; Katkovsky, Leonid

    2017-04-01

    Hyperspectral remote sensing is widely used for collecting and processing information about the Earth's surface objects. Hyperspectral data are combined to form a three-dimensional (x, y, λ) data cube. The Department of Aerospace Research of the Institute of Applied Physical Problems of the Belarusian State University presents a general model of software for hyperspectral image data analysis and processing. The software runs in the Windows XP/7/8/8.1/10 environment on any personal computer. The complex has been written in C++ using the Qt framework and OpenGL for graphical data visualization. The software has a flexible structure that consists of a set of independent plugins. Each plugin is compiled as a Qt plugin and represents a Windows dynamic-link library (DLL). Plugins can be categorized in terms of data reading types, data visualization (3D, 2D, 1D), and data processing. The software has various built-in functions for statistical and mathematical analysis and signal processing, such as a direct smoothing function for moving averages, the Savitzky-Golay smoothing technique, RGB correction, histogram transformation, and atmospheric correction. The software provides two of the authors' engineering techniques for the solution of the atmospheric correction problem: an iterative method for refining spectral albedo parameters using libRadtran, and an analytical least-squares method. The main advantages of these methods are a high processing rate (several minutes for 1 GB of data) and a low relative error in albedo retrieval (less than 15%). The software also supports work with spectral libraries, region of interest (ROI) selection, and spectral analysis such as cluster-type image classification and automatic comparison of hypercube spectra, by a similarity criterion, against spectral libraries, and vice versa. The software deals with different kinds of spectral information in order to identify and distinguish spectrally unique materials. Finally, the following advantages should be noted: fast and low-memory hypercube manipulation, a user-friendly interface, modularity, and expandability.
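
    The smoothing functions named above are standard signal-processing operations applied along the spectral axis of the (x, y, λ) cube. A minimal Python sketch of the same operations (the software itself is C++/Qt; the array sizes and filter settings here are illustrative assumptions):

        import numpy as np
        from scipy.signal import savgol_filter

        cube = np.random.rand(64, 64, 200)   # synthetic (x, y, lambda) hypercube

        # Savitzky-Golay smoothing along the spectral axis
        smoothed = savgol_filter(cube, window_length=11, polyorder=3, axis=2)

        # Direct moving-average smoothing along the same axis, for comparison
        kernel = np.ones(5) / 5.0
        moving_avg = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 2, cube)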

  5. Software Framework for Peer Data-Management Services

    NASA Technical Reports Server (NTRS)

    Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.

  6. Array data extractor (ADE): a LabVIEW program to extract and merge gene array data

    PubMed Central

    2013-01-01

    Background: Large data sets from gene expression array studies are publicly available, offering information highly valuable for research across many disciplines ranging from fundamental to clinical research. Highly advanced bioinformatics tools have been made available to researchers, but a demand persists for user-friendly software allowing researchers to quickly extract expression information for multiple genes from multiple studies. Findings: Here, we present a user-friendly LabVIEW program to automatically extract gene expression data for a list of genes from multiple normalized microarray datasets. Functionality was tested for 288 class A G protein-coupled receptors (GPCRs) and expression data from 12 studies comparing normal and diseased human hearts. Results confirmed known regulation of a beta 1 adrenergic receptor and further indicate novel research targets. Conclusions: Although existing software allows for complex data analyses, the LabVIEW-based program presented here, "Array Data Extractor (ADE)", provides users with a tool to retrieve meaningful information from multiple normalized gene expression datasets in a fast and easy way. Further, the graphical programming language used in LabVIEW allows applying changes to the program without the need for advanced programming knowledge. PMID:24289243
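
    The core extract-and-merge operation is easy to illustrate outside LabVIEW. A minimal pandas sketch under assumed file and column names (hypothetical; the real inputs are normalized microarray tables):

        import pandas as pd

        genes = ["ADRB1", "ADRB2", "CHRM2"]       # class A GPCRs of interest
        paths = ["study1.csv", "study2.csv"]      # columns: gene, expression

        frames = []
        for path in paths:
            df = pd.read_csv(path)
            frames.append(df[df["gene"].isin(genes)].assign(study=path))

        merged = pd.concat(frames, ignore_index=True)
        table = merged.pivot_table(index="gene", columns="study",
                                   values="expression")
        print(table)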

  7. Archiving a Software Development Project

    DTIC Science & Technology

    2013-04-01

    an ongoing monitoring system that identifies attempts and requests for retrieval, and ensures that the attempts and requests cannot proceed without...Intelligence Division Peter Fisher has worked as a consultant, systems analyst, software developer and project manager in Australia, Holland, the USA...4 3.1.3 DRMS – Defence Records Management System

  8. Listen up, eye movements play a role in verbal memory retrieval.

    PubMed

    Scholz, Agnes; Mehlhorn, Katja; Krems, Josef F

    2016-01-01

    People fixate on blank spaces if visual stimuli previously occupied these regions of space. This so-called "looking at nothing" (LAN) phenomenon is said to be a part of information retrieval from internal memory representations, but the exact nature of the relationship between LAN and memory retrieval is unclear. While evidence exists for an influence of LAN on memory retrieval for visuospatial stimuli, evidence for verbal information is mixed. Here, we tested the relationship between LAN behavior and memory retrieval in an episodic retrieval task where verbal information was presented auditorily during encoding. When participants were allowed to gaze freely during subsequent memory retrieval, LAN occurred, and it was stronger for correct than for incorrect responses. When eye movements were manipulated during memory retrieval, retrieval performance was higher when participants fixated on the area associated with to-be-retrieved information than when fixating on another area. Our results provide evidence for a functional relationship between LAN and memory retrieval that extends to verbal information.

  9. Topological Aspects of Information Retrieval.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1998-01-01

    Discusses topological aspects of theoretical information retrieval, including retrieval topology; similarity topology; pseudo-metric topology; document spaces as topological spaces; Boolean information retrieval as a subsystem of any topological system; and proofs of theorems. (LRW)

  10. Front End Software for Online Database Searching Part 1: Definitions, System Features, and Evaluation.

    ERIC Educational Resources Information Center

    Hawkins, Donald T.; Levy, Louise R.

    1985-01-01

    This initial article in series of three discusses barriers inhibiting use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…

  11. Information Retrieval in Biomedical Research: From Articles to Datasets

    ERIC Educational Resources Information Center

    Wei, Wei

    2017-01-01

    Information retrieval techniques have been applied to biomedical research for a variety of purposes, such as textual document retrieval and molecular data retrieval. As biomedical research evolves over time, information retrieval is also constantly facing new challenges, including the growing number of available data, the emerging new data types,…

  12. Ground-based microwave radiometric remote sensing of the tropical atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yong.

    1992-01-01

    A partially developed 9-channel ground-based microwave radiometer for the Department of Meteorology at Penn State was completed and tested. Complementary units were added, corrections to both hardware and software were made, and system software was corrected and upgraded. Measurements from this radiometer were used to infer tropospheric temperature, water vapor and cloud liquid water. The various weighting functions at each of the 9 channels were calculated and analyzed to estimate the sensitivities of the brightness temperatures to the desired atmospheric variables. The mathematical inversion problem, in a linear form, was viewed in terms of the theory of linear algebra. Several methods for solving the inversion problem were reviewed. Radiometric observations were conducted during the 1990 Tropical Cyclone Motion Experiment. The radiometer was installed on the island of Saipan in a tropical region. The radiometer was calibrated using tipping curve and radiosonde data as well as measurements of the radiation from a blackbody absorber. A linear statistical method was applied for the data inversion. The inversion coefficients in the equation were obtained using a large number of radiosonde profiles from Guam and a radiative transfer model. Retrievals were compared with those from local, Saipan, radiosonde measurements. Water vapor profiles, integrated water vapor, and integrated liquid water were retrieved successfully. For temperature profile retrievals, however, the radiometric measurements with experimental noise added no more profile information to the inversion than that which was available from a climatological mean. Although successful retrievals of the geopotential heights were made, they were determined mainly by the surface pressure measurements. A method was developed to derive the integrated water vapor and liquid water from combined radiometer and ceilometer measurements. Significant improvement in radiometric measurements of the integrated liquid water can be gained with this method.

  13. Information content of thermal infrared and microwave bands for simultaneous retrieval of cirrus ice water path and particle effective diameter

    NASA Astrophysics Data System (ADS)

    Bell, A.; Tang, G.; Yang, P.; Wu, D.

    2017-12-01

    Due to their high spatial and temporal coverage, cirrus clouds have a profound role in regulating the Earth's energy budget. Variability in their radiative, geometric, and microphysical properties can pose significant uncertainties in global climate model simulations if not adequately constrained. Thus, the development of retrieval methodologies able to accurately retrieve ice cloud properties, and to present the associated uncertainties, is essential. The effectiveness of cirrus cloud retrievals relies on accurate a priori understanding of ice radiative properties, as well as the current state of the atmosphere. Recent studies have implemented information content analyses prior to retrievals to quantify the amount of information that should be expected on the parameters to be retrieved, as well as the relative contribution of information provided by particular measurement channels. Through such analysis, retrieval algorithms can be designed to maximize the information in the measurements, and therefore to ensure enough information is present to retrieve ice cloud properties. In this study, we present such an information content analysis to quantify the amount of information to be expected in retrievals of cirrus ice water path and particle effective diameter using sub-millimeter and thermal infrared radiometry. Preliminary results show these bands to be sensitive to changes in ice water path and effective diameter, and thus lend confidence in their ability to simultaneously retrieve these parameters. Further quantification of the sensitivity and the information provided by these bands can then be used to design an optimal retrieval scheme. While this information content analysis is employed on a theoretical retrieval combining simulated radiance measurements, the methodology is in general applicable to any instrument or retrieval approach.
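
    A common metric in information content analyses of this kind (not necessarily the one used in this study) is the Shannon information content, H = (1/2) ln(|Sa| / |S_hat|), where Sa is the prior covariance and S_hat the posterior covariance; it measures how much the measurements shrink the prior uncertainty. A minimal sketch, with the Jacobian and covariances assumed given from the simulated radiances:

        import numpy as np

        def shannon_information_content(K, Se, Sa):
            """H = 0.5 * (ln|Sa| - ln|S_hat|), S_hat the posterior covariance."""
            S_hat = np.linalg.inv(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa))
            _, logdet_prior = np.linalg.slogdet(Sa)
            _, logdet_post = np.linalg.slogdet(S_hat)
            return 0.5 * (logdet_prior - logdet_post)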

  14. 15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...

  15. 15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...

  16. 15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...

  17. 15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...

  18. 15 CFR 950.9 - Computerized Environmental Data and Information Retrieval Service.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Information Retrieval Service. 950.9 Section 950.9 Commerce and Foreign Trade Regulations Relating to Commerce... Computerized Environmental Data and Information Retrieval Service. The Environmental Data Index (ENDEX... computerized, information retrieval service provides a parallel subject-author-abstract referral service. A...

  19. External Data and Attribute Hyperlink Programs for Promis*e(Registered Trademark)

    NASA Technical Reports Server (NTRS)

    Derengowski, Rich; Gruel, Andrew

    2001-01-01

    External Data and Attribute Hyperlink are computer programs that can be added to Promis*e(trademark) which is a commercial software system that automates routine tasks in the design (including drawing schematic diagrams) of electrical control systems. The programs were developed under the Stennis Space Center's (SSC) Dual Use Technology Development Program to provide capabilities for SSC's BMCS configuration management system which uses Promis*e(trademark). The External Data program enables the storage and management of information in an external database linked to a drawing. Changes can be made either in the database or on the drawing. Information that originates outside Promis*e(trademark) can be stored in custom fields that can be added to the database. Although this information is not available in Promis*e(trademark) printed drawings, it can be associated with symbols in the drawings, and can be retrieved through the drawings when the software is running. The Attribute Hyperlink program enables the addition of hyperlink information as attributes of symbols. This program enables the formation of a direct hyperlink between a schematic diagram and an Internet site or a file on a compact disk, on the user's hard drive, or on another computer on a network to which the user's computer is connected. The user can then obtain information directly related to the part (e.g., maintenance, or troubleshooting information) associated with the hyperlink.

  20. Metrinome: Continuous Monitoring and Security Validation of Distributed Systems

    DTIC Science & Technology

    2014-03-01

    Integration into the SDLC (Software Development Life Cycle), Retrieved Nov 06 2013, https://www.owasp.org/images/f/f6/Integration_into_the_SDLC.ppt [2...assessment as part of the software development life cycle, current approaches suffer from a number of shortcomings that limit their application in...with assessing security and correct functionality. Second, integrated and end-to-end testing and experimentation is often postponed until software

  1. KARL: A Knowledge-Assisted Retrieval Language. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1985-01-01

    Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, relationships between entities, and basic English language syntax and semantics to translate the user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that will assist in improving the human-machine interface will be needed and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development into the emerging field of natural language database query systems.
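
    The proposed pipeline (intelligent dictionary, syntax and semantic verification, formal query generation) can be illustrated at toy scale. The sketch below translates one restricted-English pattern into SQL; the grammar, vocabulary, and schema are all hypothetical and far simpler than KARL's design:

        import re

        # Toy vocabulary and schema; KARL's dictionary and grammar are far richer.
        ENTITIES = {"missions": "MISSION", "experiments": "EXPERIMENT"}
        FIELDS = {"launched": "launch_year", "started": "start_year"}

        def translate(request):
            m = re.fullmatch(r"list (\w+) (\w+) after (\d{4})", request.lower())
            if m is None or m[1] not in ENTITIES or m[2] not in FIELDS:
                raise ValueError("request outside the restricted grammar")
            return f"SELECT * FROM {ENTITIES[m[1]]} WHERE {FIELDS[m[2]]} > {m[3]}"

        print(translate("List missions launched after 1985"))
        # SELECT * FROM MISSION WHERE launch_year > 1985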

  2. 45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...

  3. 45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...

  4. Graph-Based Interactive Bibliographic Information Retrieval Systems

    ERIC Educational Resources Information Center

    Zhu, Yongjun

    2017-01-01

    In the big data era, we have witnessed the explosion of scholarly literature. This explosion has imposed challenges to the retrieval of bibliographic information. Retrieval of intended bibliographic information has become challenging due to the overwhelming search results returned by bibliographic information retrieval systems for given input…

  5. 45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...

  6. New technology for using meteorological information in forest insect pest forecast and warning systems.

    PubMed

    Qin, Jiang-Lin; Yang, Xiu-Hao; Yang, Zhong-Wu; Luo, Ji-Tong; Lei, Xiu-Feng

    2017-12-01

    Near-surface air temperature and rainfall are major weather factors affecting forest insect dynamics. Recent developments in remote sensing retrieval and geographic information system spatial analysis techniques enable the utilization of weather factors to significantly enhance forest pest forecasting and warning systems. The current study focused on building forest pest digital data structures as a platform for correlation analysis between weather conditions and forest pest dynamics, for better pest forecasting and warning systems using the new technologies. The study dataset contained 3 353 425 small polygons with 174 defined attributes covering 95 counties of Guangxi province of China, which currently registers 292 forest pest species. Field data acquisition and information transfer systems were established with four software licences that provided a 15-fold improvement compared with the systems currently used in China. Nine technical specifications were established, including codes for forest districts, pest species, and host tree species, and standard practices for forest pest monitoring and information management. Attributes can easily be searched using ArcGIS 9.3 and/or the free QGIS 2.16 software. Small polygons with pest-relevant attributes are a new, technologically advanced tool for precision farming and detailed forest insect pest management. © 2017 Society of Chemical Industry.

  7. In search of meta-knowledge

    NASA Technical Reports Server (NTRS)

    Lopez, Antonio M., Jr.

    1993-01-01

    Development of an Intelligent Information System (IIS) involves application of numerous artificial intelligence (AI) paradigms and advanced technologies. The National Aeronautics and Space Administration (NASA) is interested in an IIS that can automatically collect, classify, store and retrieve data, as well as develop, manipulate and restructure knowledge regarding the data and its application (Campbell et al., 1987, p.3). This interest stems in part from a NASA initiative in support of the interagency Global Change Research program. NASA's space data problems are so large and varied that scientific researchers will find it almost impossible to access the most suitable information from a software system if meta-information (metadata and meta-knowledge) is not embedded in that system. Even if more, faster, larger hardware is used, new innovative software systems will be required to organize, link, maintain, and properly archive the Earth Observing System (EOS) data that is to be stored and distributed by the EOS Data and Information System (EOSDIS) (Dozier, 1990). Although efforts are being made to specify the metadata that will be used in EOSDIS, meta-knowledge specification issues are not clear. With the expectation that EOSDIS might evolve into an IIS, this paper presents certain ideas on the concept of meta-knowledge and demonstrates how meta-knowledge might be represented in a pixel classification problem.

  8. AST: World Coordinate Systems in Astronomy

    NASA Astrophysics Data System (ADS)

    Berry, David S.; Warren-Smith, Rodney F.

    2014-04-01

    The AST library provides a comprehensive range of facilities for attaching world coordinate systems to astronomical data, for retrieving and interpreting that information in a variety of formats, including FITS-WCS, and for generating graphical output based on it. Core projection algorithms are provided by WCSLIB (ascl:1108.003) and astrometry is provided by the PAL (ascl:1606.002) and SOFA (ascl:1403.026) libraries. AST bindings are available in Python (pyast), Java (JNIAST) and Perl (Starlink::AST). AST is used as the plotting and astrometry library in DS9 and GAIA, and is distributed separately and as part of the Starlink software collection.

  9. [Analysis on acupuncture related articles published in periodicals in science citation index (SCI) in 2008].

    PubMed

    Wang, Chao; He, Wen-Ju; Guo, Yi

    2010-09-01

    Acupuncture-related articles published in periodicals in the Science Citation Index (SCI) in 2008 were summarized and analyzed. About 583 articles were collected by information retrieval using "acupuncture" and "in 2008" as keywords in the Web of Science database. These papers were summarized and analyzed with respect to country, language, subject category, literature type, publication source, impact factor, research method, acupoints, disease category, and needling method, using Excel software combined with manual sorting of the literature, with the aim of providing a reference for domestic acupuncture research.

  10. National Aeronautics and Space Administration Manned Spacecraft Center data base requirements study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study was conducted to evaluate the types of data that the Manned Spacecraft Center (MSC) should automate in order to make available essential management and technical information to support MSC's various functions and missions. In addition, the software and hardware capabilities to best handle the storage and retrieval of this data were analyzed. Based on the results of this study, recommendations are presented for a unified data base that provides a cost effective solution to MSC's data automation requirements. The recommendations are projected through a time frame that includes the earth orbit space station.

  11. Automated documentation generator for advanced protein crystal growth

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Provancha, Anna; Chattam, David

    1994-01-01

    To achieve an environment less dependent on the flow of paper, automated techniques of data storage and retrieval must be utilized. This software system, 'Automated Payload Experiment Tool,' seeks to provide a knowledge-based, hypertext environment for the development of NASA documentation. Once developed, the final system should be able to guide a Principal Investigator through the documentation process in a more timely and efficient manner, while supplying more accurate information to the NASA payload developer. The current system is designed for the development of the Science Requirements Document (SRD), the Experiment Requirements Document (ERD), the Project Plan, and the Safety Requirements Document.

  12. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

    The utility of web browsers for general purpose computing, long anticipated, is only now coming into fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals, organizes and presents information in a modern feed-like interface, provides access to a growing library of plugins that process these data - typically on a connected High Performance Compute Cluster, allows for easy data sharing between users and instances of ChRIS and provides powerful 3D visualization and real time collaboration.

  13. NASA Tech Briefs, October 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Cryogenic Temperature-Gradient Foam/Substrate Tensile Tester; Flight Test of an Intelligent Flight-Control System; Slat Heater Boxes for Thermal Vacuum Testing; System for Testing Thermal Insulation of Pipes; Electrical-Impedance-Based Ice-Thickness Gauges; Simulation System for Training in Laparoscopic Surgery; Flasher Powered by Photovoltaic Cells and Ultracapacitors; Improved Autoassociative Neural Networks; Toroidal-Core Microinductors Biased by Permanent Magnets; Using Correlated Photons to Suppress Background Noise; Atmospheric-Fade-Tolerant Tracking and Pointing in Wireless Optical Communication; Curved Focal-Plane Arrays Using Back-Illuminated High-Purity Photodetectors; Software for Displaying Data from Planetary Rovers; Software for Refining or Coarsening Computational Grids; Software for Diagnosis of Multiple Coordinated Spacecraft; Software Helps Retrieve Information Relevant to the User; Software for Simulating a Complex Robot; Software for Planning Scientific Activities on Mars; Software for Training in Pre-College Mathematics; Switching and Rectification in Carbon-Nanotube Junctions; Scandia-and-Yttria-Stabilized Zirconia for Thermal Barriers; Environmentally Safer, Less Toxic Fire-Extinguishing Agents; Multiaxial Temperature- and Time-Dependent Failure Model; Cloverleaf Vibratory Microgyroscope with Integrated Post; Single-Vector Calibration of Wind-Tunnel Force Balances; Microgyroscope with Vibrating Post as Rotation Transducer; Continuous Tuning and Calibration of Vibratory Gyroscopes; Compact, Pneumatically Actuated Filter Shuttle; Improved Bearingless Switched-Reluctance Motor; Fluorescent Quantum Dots for Biological Labeling; Growing Three-Dimensional Corneal Tissue in a Bioreactor; Scanning Tunneling Optical Resonance Microscopy; The Micro-Arcsecond Metrology Testbed; Detecting Moving Targets by Use of Soliton Resonances; and Finite-Element Methods for Real-Time Simulation of Surgery.

  14. Term Relevance Weights in On-Line Information Retrieval

    ERIC Educational Resources Information Center

    Salton, G.; Waldstein, R. K.

    1978-01-01

    Term relevance weighting systems in interactive information retrieval are reviewed. An experiment in which information retrieval users ranked query terms in decreasing order of presumed importance prior to actual search and retrieval is described. (Author/KP)

  15. 42 CFR 433.116 - FFP for operation of mechanized claims processing and information retrieval systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...

  16. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  17. 42 CFR 433.116 - FFP for operation of mechanized claims processing and information retrieval systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...

  18. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  19. 42 CFR 433.116 - FFP for operation of mechanized claims processing and information retrieval systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...

  20. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  1. 42 CFR 433.116 - FFP for operation of mechanized claims processing and information retrieval systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to paragraph (j) of...

  2. 7 CFR 277.18 - Establishment of an Automated Data Processing (ADP) and Information Retrieval System.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) and Information Retrieval System. 277.18 Section 277.18 Agriculture Regulations of the Department of... Data Processing (ADP) and Information Retrieval System. (a) Scope and application. This section... costs of planning, design, development or installation of ADP and information retrieval systems if the...

  3. Development of an electronic radiation oncology patient information management system.

    PubMed

    Mandal, Abhijit; Asthana, Anupam Kumar; Aggarwal, Lalit Mohan

    2008-01-01

    The quality of patient care is critically influenced by the availability of accurate information and its efficient management. Radiation oncology consists of many information components, for example there may be information related to the patient (e.g., profile, disease site, stage, etc.), to people (radiation oncologists, radiological physicists, technologists, etc.), and to equipment (diagnostic, planning, treatment, etc.). These different data must be integrated. A comprehensive information management system is essential for efficient storage and retrieval of the enormous amounts of information. A radiation therapy patient information system (RTPIS) has been developed using open source software. PHP and JavaScript were used as the programming languages, MySQL as the database, and HTML and CSS as the design tools. This system uses standard web browsing technology with a WAMP5 server. Any user having a unique user ID and password can access this RTPIS. The user ID and password are issued separately to each individual according to the person's job responsibilities and accountability, so that users are able to access only the data related to their job responsibilities. With this system, authenticated users are able to use a simple web browsing procedure to gain instant access. All types of users in the radiation oncology department should find it user-friendly. Maintenance of the system will not require large human resources or space. The file storage and retrieval process should be satisfactory, unique, uniform, and easily accessible, with adequate data protection. There is very little possibility of unauthorized handling with this system, and minimal risk of loss or accidental destruction of information.
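
    The role-scoped access described above can be made concrete with a short sketch. The actual system is PHP/MySQL; the Python below is only a hedged illustration of the idea, and every user ID, role, and field name in it is invented for the example.

```python
# Minimal sketch (not the authors' PHP/MySQL code) of role-based access:
# each user ID maps to a role, and each role may see only the record
# fields tied to its duties. All names here are hypothetical.

ROLE_FIELDS = {
    "radiation_oncologist": {"profile", "disease_site", "stage", "plan"},
    "physicist":            {"plan", "equipment", "dose"},
    "technologist":         {"schedule", "equipment"},
}

USERS = {"amandal": "radiation_oncologist", "tech01": "technologist"}

def visible_fields(user_id: str, record: dict) -> dict:
    """Return only the fields the user's role permits."""
    role = USERS.get(user_id)
    if role is None:
        raise PermissionError("unknown user")
    allowed = ROLE_FIELDS[role]
    return {k: v for k, v in record.items() if k in allowed}

record = {"profile": "...", "disease_site": "larynx", "schedule": "Mon", "dose": "60 Gy"}
print(visible_fields("tech01", record))   # -> {'schedule': 'Mon'}
```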

  4. System of experts for intelligent data management (SEIDAM)

    NASA Technical Reports Server (NTRS)

    Goodenough, David G.; Iisaka, Joji; Fung, KO

    1993-01-01

    A proposal is being developed to conduct research and development on a system of expert systems for intelligent data management (SEIDAM). CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.

  5. System of Experts for Intelligent Data Management (SEIDAM)

    NASA Technical Reports Server (NTRS)

    Goodenough, David G.; Iisaka, Joji; Fung, KO

    1992-01-01

    It is proposed to conduct research and development on a system of expert systems for intelligent data management (SEIDAM). CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.

  6. Integrating query of relational and textual data in clinical databases: a case study.

    PubMed

    Fisk, John M; Mutalik, Pradeep; Levin, Forrest W; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Results are relevance-ranked using either "total documents per patient" or "report type weighting." Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers.
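
    A hedged sketch may help illustrate the two relevance-ranking modes quoted above ("total documents per patient" versus "report type weighting"). The report types and weights below are invented for illustration, not the mart's actual schema.

```python
from collections import defaultdict

# Illustrative weights per report type; the real system's values are not given.
REPORT_WEIGHTS = {"discharge_summary": 3.0, "radiology": 2.0, "progress_note": 1.0}

def rank_patients(hits, mode="total"):
    """hits: iterable of (patient_id, report_type) pairs matching a query."""
    scores = defaultdict(float)
    for patient_id, report_type in hits:
        scores[patient_id] += 1.0 if mode == "total" else REPORT_WEIGHTS.get(report_type, 1.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

hits = [("p1", "progress_note")] * 3 + [("p2", "discharge_summary"), ("p2", "radiology")]
print(rank_patients(hits, mode="total"))     # p1 first: 3 matching documents vs. 2
print(rank_patients(hits, mode="weighted"))  # p2 first: 5.0 vs. 3.0
```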

  7. A data management approach to quality assurance using colorectal carcinoma reports from two institutions as a model.

    PubMed

    Sorge, John P; Harmon, C Reid; Sherman, Susan M; Baillie, E Eugene

    2005-07-01

    We used data management software to compare pathology report data concerning regional lymph node sampling for colorectal carcinoma from 2 institutions using different dissection methods. Data were retrieved from 2 disparate anatomic pathology information systems for all cases of colorectal carcinoma in 2003 involving the ascending and descending colon. Initial sorting of the data included overall lymph node recovery to assess differences between the dissection methods at the 2 institutions. Additional segregation of the data was used to challenge the application's capability of accurately addressing the complexity of the process. This software approach can be used to evaluate data from disparate computer systems, and we demonstrate how an automated function can enable institutions to compare internal pathologic assessment processes and the results of those comparisons. The use of this process has future implications for pathology quality assurance in other areas.

  8. Taking it to the streets: recording medical outreach data on personal digital assistants.

    PubMed

    Buck, David S; Rochon, Donna; Turley, James P

    2005-01-01

    Carrying hundreds of patient files in a suitcase makes medical street outreach to the homeless clumsy and difficult. Healthcare for the Homeless--Houston (HHH) began a case study under the assumption that tracking patient information with a personal digital assistant (PDA) would greatly simplify the process. Equipping clinicians with custom-designed software loaded onto Palm V Handheld Computers (palmOne, Inc, Milpitas, CA), HHH assessed how this type of technology augmented medical care during street outreach to the homeless in a major metropolitan area. Preliminary evidence suggests that personal digital assistants free clinicians to focus on building relationships instead of recreating documentation during patient encounters. However, the limits of the PDA for storing and retrieving data made it impractical in the long term. This outcome precipitated a new study to test the feasibility of tablet personal computers loaded with a custom-designed software application specific to the needs of homeless street patients.

  9. Regarding retrievals of methane in the atmosphere from IASI/Metop spectra and their comparison with ground-based FTIR measurements data

    NASA Astrophysics Data System (ADS)

    Khamatnurova, M. Yu.; Gribanov, K. G.; Zakharov, V. I.; Rokotyan, N. V.; Imasu, R.

    2017-11-01

    An algorithm for retrieving the atmospheric methane distribution from IASI spectra has been developed. This paper studies the feasibility of the Levenberg-Marquardt method for retrieving the atmospheric methane total column amount from spectra measured by IASI/METOP, modified for the case in which a priori covariance matrices for methane vertical profiles are lacking. The method and algorithm were implemented in a software package together with iterative estimation of a posteriori covariance matrices and averaging kernels for each individual retrieval, which allows retrieval quality selection using the properties of both types of matrices. Methane (XCH4) retrieval by the Levenberg-Marquardt method from IASI/METOP spectra is presented in this work. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) were taken as the initial guess. Surface temperature and vertical profiles of air temperature and humidity are retrieved before the methane vertical profile retrieval. Data retrieved from ground-based measurements at the Ural Atmospheric Station and the standard L2/IASI product were used to verify the method and the results of methane retrieval from IASI/METOP spectra.
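
    For readers unfamiliar with the Levenberg-Marquardt step at the core of such a retrieval, the toy sketch below fits a single scalar (a stand-in for the CH4 column) to a synthetic spectrum. The forward model, channel kernel, and all numbers are invented; the paper's actual forward model is a full radiative-transfer calculation.

```python
import numpy as np
from scipy.optimize import least_squares

channels = np.linspace(0.0, 1.0, 50)
kernel = np.exp(-5.0 * channels)          # invented per-channel sensitivity to CH4

def forward(xch4):
    """Stand-in 'simulated spectrum' for a given methane column (ppm)."""
    return 1.0 + xch4 * kernel

true_xch4 = 1.8                           # illustrative truth
rng = np.random.default_rng(0)
y_obs = forward(true_xch4) + rng.normal(0.0, 0.01, channels.size)

def residuals(x):
    return forward(x[0]) - y_obs

fit = least_squares(residuals, x0=[1.0], method="lm")  # Levenberg-Marquardt step
print(f"retrieved XCH4 = {fit.x[0]:.3f} ppm")
```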

  10. The role of retrieval practice in memory and analogical problem-solving.

    PubMed

    Hostetter, Autumn B; Penix, Elizabeth A; Norman, Mackenzie Z; Batsell, W Robert; Carr, Thomas H

    2018-05-01

    Retrieval practice (e.g., testing) has been shown to facilitate long-term retention of information. In two experiments, we examine whether retrieval practice also facilitates use of the practised information when it is needed to solve analogous problems. When retrieval practice was not limited to the information most relevant to the problems (Experiment 1), it improved memory for the information a week later compared with copying or rereading the information, although we found no evidence that it improved participants' ability to apply the information to the problems. In contrast, when retrieval practice was limited to only the information most relevant to the problems (Experiment 2), we found that retrieval practice enhanced memory for the critical information, the ability to identify the schematic similarities between the two sources of information, and the ability to apply that information to solve an analogous problem after a hint was given to do so. These results suggest that retrieval practice, through its effect on memory, can facilitate application of information to solve novel problems but has minimal effects on spontaneous realisation that the information is relevant.

  11. Use of information-retrieval languages in automated retrieval of experimental data from long-term storage

    NASA Technical Reports Server (NTRS)

    Khovanskiy, Y. D.; Kremneva, N. I.

    1975-01-01

    Problems and methods of automating information retrieval operations in a data bank used for long-term storage and retrieval of data from scientific experiments are discussed. Existing information retrieval languages are analyzed along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

  12. Evidence synthesis software.

    PubMed

    Park, Sophie Elizabeth; Thomas, James

    2018-06-07

    It can be challenging to decide which evidence synthesis software to choose when doing a systematic review. This article discusses some of the important questions to consider in relation to the chosen method and synthesis approach. Software can support researchers in a range of ways across a range of review conditions: for example, by facilitating contemporaneous collaboration across time and geographical space, providing in-built bias assessment tools, and supporting line-by-line coding for qualitative textual analysis. EPPI-Reviewer is review software for research synthesis managed by the EPPI-Centre, UCL Institute of Education. EPPI-Reviewer includes text mining automation technologies, and Version 5 supports data sharing and re-use across the systematic review community. Open source software will soon be released, and the EPPI-Centre will continue to offer the software as a cloud-based service. The software is offered via a subscription, with a one-month (extendible) trial available and volume discounts for 'site licences'; it is free to use for Cochrane and Campbell reviews. The next EPPI-Reviewer version is being built in collaboration with the National Institute for Health and Care Excellence, using 'surveillance' of newly published research to support 'living' iterative reviews. This is achieved using a combination of machine learning and traditional information retrieval technologies to identify the type of research each new publication describes and determine its relevance for a particular review, domain or guideline. While the amount of available knowledge and research is constantly increasing, the ways in which software can support the focus and relevance of data identification are also developing fast. Software advances are maximising the opportunities for the production of relevant and timely reviews. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. A System for Information Management in BioMedical Studies—SIMBioMS

    PubMed Central

    Krestyaninova, Maria; Zarins, Andris; Viksna, Juris; Kurbatova, Natalja; Rucevskis, Peteris; Neogi, Sudeshna Guha; Gostev, Mike; Perheentupa, Teemu; Knuuttila, Juha; Barrett, Amy; Lappalainen, Ilkka; Rung, Johan; Podnieks, Karlis; Sarkans, Ugis; McCarthy, Mark I; Brazma, Alvis

    2009-01-01

    Summary: SIMBioMS is a web-based open source software system for managing data and information in biomedical studies. It provides a solution for the collection, storage, management and retrieval of information about research subjects and biomedical samples, as well as experimental data obtained using a range of high-throughput technologies, including gene expression, genotyping, proteomics and metabonomics. The system can easily be customized and has proven to be successful in several large-scale multi-site collaborative projects. It is compatible with emerging functional genomics data standards and provides data import and export in accepted standard formats. Protocols for transferring data to durable archives at the European Bioinformatics Institute have been implemented. Availability: The source code, documentation and initialization scripts are available at http://simbioms.org. Contact: support@simbioms.org; mariak@ebi.ac.uk PMID:19633095

  14. [Information exchange via internet--possibilities, limits, future].

    PubMed

    Schmiedl, S; Geishauser, M; Klöppel, M; Biemer, E

    1998-01-01

    Today, the exchange of information in the Internet is dominated by the WWW and e-mail. Discussion groups like mailing lists and newsgroups also permit communication in groups. Information retrieval becomes a crucial challenge in using the Internet. In the field of medicine, three more aspects are of special importance: privacy, legal requirements, and the necessity of transferring large amounts of data. For these problems, today's Internet doesn't provide a sufficient solution yet. Future developments will not only improve the existing services, but also lead to fundamental changes in the transfer technologies: Safer data transfer is to be ensured by new encrypting software together with the planned transfer protocol IPv6. Introducing the new transfer mode ATM will lead to better and resource saving transmission. Computer, telephone and TV networks will grow together, resulting in convergence of media.

  15. Mathematics and Information Retrieval.

    ERIC Educational Resources Information Center

    Salton, Gerald

    1979-01-01

    Examines the main mathematical approaches to information retrieval, including both algebraic and probabilistic models, and describes difficulties which impede formalization of information retrieval processes. A number of developments are covered where new theoretical understandings have directly led to improved retrieval techniques and operations.…

  16. Digital Preservation in Open-Source Digital Library Software

    ERIC Educational Resources Information Center

    Madalli, Devika P.; Barve, Sunita; Amin, Saiful

    2012-01-01

    Digital archives and digital library projects are being initiated all over the world for materials of different formats and domains. To organize, store, and retrieve digital content, many libraries as well as archiving centers are using either proprietary or open-source software. While it is accepted that print media can survive for centuries with…

  17. 42 CFR 433.127 - Termination of FFP for failure to provide access to claims processing and information retrieval...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...

  18. 42 CFR 433.116 - FFP for operation of mechanized claims processing and information retrieval systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and information retrieval systems. 433.116 Section 433.116 Public Health CENTERS FOR MEDICARE... FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.116 FFP for operation of mechanized claims processing and information retrieval systems. (a) Subject to 42 CFR 433.113(c...

  19. 42 CFR 433.127 - Termination of FFP for failure to provide access to claims processing and information retrieval...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...

  20. 42 CFR 433.127 - Termination of FFP for failure to provide access to claims processing and information retrieval...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...

  1. 42 CFR 433.127 - Termination of FFP for failure to provide access to claims processing and information retrieval...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...

  2. 42 CFR 433.127 - Termination of FFP for failure to provide access to claims processing and information retrieval...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... claims processing and information retrieval systems. 433.127 Section 433.127 Public Health CENTERS FOR... PROGRAMS STATE FISCAL ADMINISTRATION Mechanized Claims Processing and Information Retrieval Systems § 433.127 Termination of FFP for failure to provide access to claims processing and information retrieval...

  3. On Information Retrieval (IR) Systems: Revisiting Their Development, Evaluation Methodologies, and Assumptions (SIGs LAN, ED).

    ERIC Educational Resources Information Center

    Stirling, Keith

    2000-01-01

    Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…

  4. Synthesizer: Expediting synthesis studies from context-free data with information retrieval techniques.

    PubMed

    Gandy, Lisa M; Gumm, Jordan; Fertig, Benjamin; Thessen, Anne; Kennish, Michael J; Chavan, Sameer; Marchionni, Luigi; Xia, Xiaoxin; Shankrit, Shambhavi; Fertig, Elana J

    2017-01-01

    Scientists have unprecedented access to a wide variety of high-quality datasets. These datasets, which are often independently curated, commonly use unstructured spreadsheets to store their data. Standardized annotations are essential to perform synthesis studies across investigators, but are often not used in practice. Therefore, accurately combining records in spreadsheets from differing studies requires tedious and error-prone human curation. These efforts result in a significant time and cost barrier to synthesis research. We propose an information retrieval inspired algorithm, Synthesize, that merges unstructured data automatically based on both column labels and values. Application of the Synthesize algorithm to cancer and ecological datasets had high accuracy (on the order of 85-100%). We further implement Synthesize in an open source web application, Synthesizer (https://github.com/lisagandy/synthesizer). The software accepts input as spreadsheets in comma separated value (CSV) format, visualizes the merged data, and outputs the results as a new spreadsheet. Synthesizer includes an easy to use graphical user interface, which enables the user to finish combining data and obtain perfect accuracy. Future work will allow detection of units to automatically merge continuous data and application of the algorithm to other data formats, including databases.
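
    The label-plus-values matching idea behind Synthesize can be sketched briefly. This is not the published implementation; the equal weighting of header similarity and value overlap, and the toy tables, are assumptions made for illustration.

```python
import difflib

def column_score(name_a, values_a, name_b, values_b):
    """Combine header-label similarity with value overlap (equal weights assumed)."""
    label_sim = difflib.SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    set_a, set_b = set(values_a), set(values_b)
    value_sim = len(set_a & set_b) / max(len(set_a | set_b), 1)
    return 0.5 * label_sim + 0.5 * value_sim

table_a = {"Species": ["cod", "hake"], "Len_cm": ["31", "45"]}
table_b = {"species_name": ["cod", "hake"], "length": ["31", "44"]}

for col_a, vals_a in table_a.items():
    best = max(table_b, key=lambda col_b: column_score(col_a, vals_a, col_b, table_b[col_b]))
    print(col_a, "->", best)   # Species -> species_name, Len_cm -> length
```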

  5. Synthesizer: Expediting synthesis studies from context-free data with information retrieval techniques

    PubMed Central

    Gumm, Jordan; Fertig, Benjamin; Thessen, Anne; Kennish, Michael J.; Chavan, Sameer; Marchionni, Luigi; Xia, Xiaoxin; Shankrit, Shambhavi; Fertig, Elana J.

    2017-01-01

    Scientists have unprecedented access to a wide variety of high-quality datasets. These datasets, which are often independently curated, commonly use unstructured spreadsheets to store their data. Standardized annotations are essential to perform synthesis studies across investigators, but are often not used in practice. Therefore, accurately combining records in spreadsheets from differing studies requires tedious and error-prone human curation. These efforts result in a significant time and cost barrier to synthesis research. We propose an information retrieval inspired algorithm, Synthesize, that merges unstructured data automatically based on both column labels and values. Application of the Synthesize algorithm to cancer and ecological datasets had high accuracy (on the order of 85–100%). We further implement Synthesize in an open source web application, Synthesizer (https://github.com/lisagandy/synthesizer). The software accepts input as spreadsheets in comma separated value (CSV) format, visualizes the merged data, and outputs the results as a new spreadsheet. Synthesizer includes an easy to use graphical user interface, which enables the user to finish combining data and obtain perfect accuracy. Future work will allow detection of units to automatically merge continuous data and application of the algorithm to other data formats, including databases. PMID:28437440

  6. Anatomy of an Extensible Open Source PACS.

    PubMed

    Valente, Frederico; Silva, Luís A Bastião; Godinho, Tiago Marques; Costa, Carlos

    2016-06-01

    The conception and deployment of cost effective Picture Archiving and Communication Systems (PACS) is a concern for small to medium medical imaging facilities, research environments, and developing countries' healthcare institutions. Financial constraints and the specificity of these scenarios contribute to a low adoption rate of PACS in those environments. Furthermore, with the advent of ubiquitous computing and new initiatives to improve healthcare information technologies and data sharing, such as IHE and XDS-i, a PACS must adapt quickly to changes. This paper describes Dicoogle, a software framework that enables developers and researchers to quickly prototype and deploy new functionality taking advantage of the embedded Digital Imaging and Communications in Medicine (DICOM) services. This full-fledged implementation of a PACS archive is very amenable to extension due to its plugin-based architecture and out-of-the-box functionality, which enables the exploration of large DICOM datasets and associated metadata. These characteristics make the proposed solution very interesting for prototyping, experimentation, and bridging functionality with deployed applications. Besides being an advanced mechanism for data discovery and retrieval based on DICOM object indexing, it enables the detection of inconsistencies in an institution's data and processes. Several use cases have benefited from this approach such as radiation dosage monitoring, Content-Based Image Retrieval (CBIR), and the use of the framework as support for classes targeting software engineering for clinical contexts.

  7. The joint methane profiles retrieval approach from GOSAT TIR and SWIR spectra

    NASA Astrophysics Data System (ADS)

    Zadvornykh, Ilya V.; Gribanov, Konstantin G.; Zakharov, Vyacheslav I.; Imasu, Ryoichi

    2017-11-01

    In this paper we present a method, using methane as an example, that allows more accurate retrieval of greenhouse gases in the Earth's atmosphere. Using the new version of the FIRE-ARMS software, supplemented with the VLIDORT vector radiative transfer model, we carried out joint methane retrieval from TIR (Thermal Infrared Range) and SWIR (Short-Wavelength Infrared Range) GOSAT spectra using the optimal estimation method. MACC reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF), supplemented by data from aircraft measurements of the HIPPO experiment, were used as a statistical ensemble.
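
    The optimal estimation update used in such joint retrievals has a standard linear (Rodgers-style) form; the sketch below stacks a "TIR" and a "SWIR" channel into one measurement vector to suggest the joint step. All matrices and numbers are illustrative, not GOSAT values.

```python
import numpy as np

K = np.array([[0.8, 0.1],     # "TIR" channel sensitivity to two CH4 layers
              [0.2, 0.9]])    # "SWIR" channel sensitivity
Se = np.diag([0.01, 0.01])    # measurement-noise covariance
Sa = np.diag([0.25, 0.25])    # a priori covariance
xa = np.array([1.8, 1.8])     # a priori CH4 state (ppm), illustrative
y = np.array([1.7, 2.0])      # stacked TIR+SWIR "measurements"

Se_inv, Sa_inv = np.linalg.inv(Se), np.linalg.inv(Sa)
S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
x_hat = xa + S_hat @ K.T @ Se_inv @ (y - K @ xa)   # retrieved state
print(x_hat, np.sqrt(np.diag(S_hat)))              # estimate and 1-sigma errors
```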

  8. Transparent Information Systems through Gateways, Front Ends, Intermediaries, and Interfaces.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    1986-01-01

    Provides overview of design requirements for transparent information retrieval (implies that user sees through complexity of retrieval activities sequence). Highlights include need for transparent systems; history of transparent retrieval research; information retrieval functions (automated converters, routers, selectors, evaluators/analyzers);…

  9. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory-2 (OCO-2) mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic data points or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using the surrogate goal of mean monthly standard deviation (MMS), the aim is to reduce the scatter of retrieved CO2 rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of MMS provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competing methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
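
    The surrogate objective is easy to state in code: score a candidate filter by the mean of the monthly standard deviations of the CO2 values that survive it. The sketch below uses synthetic data and a single invented quality feature, not OCO-2 retrievals.

```python
import numpy as np

rng = np.random.default_rng(1)
months = rng.integers(0, 12, 1000)
xco2 = 400.0 + rng.normal(0.0, 2.0, 1000)          # synthetic retrievals (ppm)
quality = rng.uniform(0.0, 1.0, 1000)              # single invented input feature
bad = quality > 0.8
xco2[bad] += rng.normal(0.0, 8.0, bad.sum())       # noisy soundings to be filtered

def mean_monthly_std(values, month, keep):
    """Mean of the per-month standard deviations of the kept soundings."""
    stds = [values[keep & (month == m)].std()
            for m in range(12) if (keep & (month == m)).sum() > 1]
    return float(np.mean(stds))

keep_all = np.ones(xco2.size, dtype=bool)
print("unfiltered MMS:", mean_monthly_std(xco2, months, keep_all))
print("filtered MMS:  ", mean_monthly_std(xco2, months, quality <= 0.8))
```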

  10. BARTTest: Community-Standard Atmospheric Radiative-Transfer and Retrieval Tests

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Himes, Michael D.; Cubillos, Patricio E.; Blecic, Jasmina; Challener, Ryan C.

    2018-01-01

    Atmospheric radiative transfer (RT) codes are used both to predict planetary and brown-dwarf spectra and in retrieval algorithms to infer atmospheric chemistry, clouds, and thermal structure from observations. Observational plans, theoretical models, and scientific results depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. The community needs a suite of test calculations with analytically, numerically, or at least community-verified results. We therefore present the Bayesian Atmospheric Radiative Transfer Test Suite, or BARTTest. BARTTest has four categories of tests: analytically verified RT tests of simple atmospheres (single line in single layer, line blends, saturation, isothermal, multiple line-list combination, etc.), community-verified RT tests of complex atmospheres, synthetic retrieval tests on simulated data with known answers, and community-verified real-data retrieval tests. BARTTest is open-source software intended for community use and further development. It is available at https://github.com/ExOSPORTS/BARTTest. We propose this test suite as a standard for verifying atmospheric RT and retrieval codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G, NASA Astrophysics Data Analysis Program grant NNX13AF38G, and NASA Exoplanets Research Program grant NNX17AB62G.
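
    As an illustration of the analytically verified category (the isothermal single-layer case), the sketch below checks a closed-form emission calculation at its two limits. This is not BARTTest code; it only shows the kind of closed form such a test can verify against.

```python
import numpy as np

def planck(T, nu):
    """Planck spectral radiance B_nu(T) in SI units."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def emergent(Ts, Tlayer, tau, nu):
    """Closed form: I = B(Ts) e^(-tau) + B(Tlayer) (1 - e^(-tau))."""
    return planck(Ts, nu) * np.exp(-tau) + planck(Tlayer, nu) * (1.0 - np.exp(-tau))

nu = 3.0e13                     # Hz, roughly the 10-micron window
for tau in (0.0, 1.0, 50.0):
    print(tau, emergent(300.0, 250.0, tau, nu))
# tau -> 0 recovers B(300 K); large tau recovers B(250 K) - the limits a test verifies.
```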

  11. Grasping objects autonomously in simulated KC-135 zero-g

    NASA Technical Reports Server (NTRS)

    Norsworthy, Robert S.

    1994-01-01

    The KC-135 aircraft was chosen for simulated zero-gravity testing of the Extravehicular Activity Helper/Retriever (EVAHR). A software simulation of the EVAHR hardware, KC-135 flight dynamics, collision detection, and grasp impact dynamics has been developed to integrate and test the EVAHR software prior to flight testing on the KC-135. The EVAHR software will perform target pose estimation, tracking, and motion estimation for rigid, freely rotating, polyhedral objects. Manipulator grasp planning and trajectory control software has also been developed to grasp targets while avoiding collisions.

  12. CARDS: A blueprint and environment for domain-specific software reuse

    NASA Technical Reports Server (NTRS)

    Wallnau, Kurt C.; Solderitsch, Anne Costa; Smotherman, Catherine

    1992-01-01

    CARDS (Central Archive for Reusable Defense Software) exploits advances in domain analysis and domain modeling to identify, specify, develop, archive, retrieve, understand, and reuse domain-specific software components. An important element of CARDS is to provide visibility into the domain model artifacts produced by, and services provided by, commercial computer-aided software engineering (CASE) technology. The use of commercial CASE technology is important to provide rich, robust support for the varied roles involved in a reuse process. We refer to this kind of use of knowledge representation systems as supporting 'knowledge-based integration.'

  13. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global-scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future) ZEUS sites to simulate arrival time data between each source and ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
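
    The Monte Carlo procedure described above is straightforward to sketch on a flat-Earth toy: perturb arrival-time differences with N(0, 20 microsecond) noise and examine the scatter of re-located sources. The receiver layout, trial source, and solver below are illustrative, not the ZEAL implementation.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8                                             # propagation speed, m/s
receivers = np.array([[0.0, 0.0], [1e6, 0.0],
                      [0.0, 1e6], [1e6, 1e6]])        # invented 2-D site layout (m)

def arrival_times(src):
    return np.linalg.norm(receivers - src, axis=1) / C

def atd_residuals(src, t_obs):
    t = arrival_times(src)
    return (t[1:] - t[0]) - (t_obs[1:] - t_obs[0])    # differences vs. first site

rng = np.random.default_rng(0)
truth = np.array([4e5, 7e5])
errors = []
for _ in range(100):                                  # 100 noisy trials, as in the text
    t_obs = arrival_times(truth) + rng.normal(0.0, 20e-6, len(receivers))
    fit = least_squares(atd_residuals, x0=np.array([5e5, 5e5]), args=(t_obs,))
    errors.append(np.linalg.norm(fit.x - truth))
print(f"median location error = {np.median(errors) / 1e3:.1f} km")
```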

  14. Incorporating the APS Catalog of the POSS I and Image Archive in ADS

    NASA Technical Reports Server (NTRS)

    Humphreys, Roberta M.

    1998-01-01

    The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds, and the FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 GB of data. A set of Web query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. The URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers under IRIX 5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
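
    The subplate lookup can be sketched as follows. The real APS index format is not described here beyond "900 subplates" and byte offsets, so the 30 x 30 grid, the index dictionary, and the file layout below are invented stand-ins.

```python
SUBPLATE_GRID = 30               # 30 x 30 = 900 subplates per digitized plate

def subplate_id(x_frac, y_frac):
    """Map fractional plate coordinates in [0, 1) to a subplate number."""
    col = min(int(x_frac * SUBPLATE_GRID), SUBPLATE_GRID - 1)
    row = min(int(y_frac * SUBPLATE_GRID), SUBPLATE_GRID - 1)
    return row * SUBPLATE_GRID + col

# hypothetical index: subplate number -> (byte offset, length) in the pixel file
index = {subplate_id(0.42, 0.73): (1_048_576, 65_536)}

def fetch(path, x_frac, y_frac):
    """Seek straight to the stored subplate and return its compressed bytes."""
    offset, length = index[subplate_id(x_frac, y_frac)]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)    # run-length-compressed pixels, per the text

print(subplate_id(0.42, 0.73))   # which of the 900 subplates holds this position
```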

  15. Location Based Service in Indoor Environment Using Quick Response Code Technology

    NASA Astrophysics Data System (ADS)

    Hakimpour, F.; Zare Zardiny, A.

    2014-10-01

    Today, with the extensive use of intelligent mobile phones, increased screen sizes, and the enrichment of mobile phones with Global Positioning System (GPS) technology, location based services have been considered by public users more than ever. Based on their position, users can receive the desired information from different LBS providers. Any LBS system generally includes five main parts: mobile devices, a communication network, a positioning system, a service provider and a data provider. Many advances have been made in each of these parts; however, user positioning, especially in indoor environments, remains an essential and critical issue in LBS. It is well known that GPS performs too poorly inside buildings to provide usable indoor positioning. On the other hand, current indoor positioning technologies such as RFID or WiFi networks need different hardware and software infrastructures. In this paper, we propose a new method to overcome these challenges using Quick Response (QR) Code technology. A QR Code is a 2D barcode with a matrix structure consisting of black modules arranged in a square grid. Scanning and retrieving data from a QR Code is possible with different camera-enabled mobile phones simply by installing barcode reader software. This paper reviews the capabilities of QR Code technology and then discusses the advantages of using QR Codes in an Indoor LBS (ILBS) system in comparison to other technologies. Finally, some prospects of using QR Codes are illustrated through the implementation of a scenario. The most important advantages of using this new technology in ILBS are easy implementation, lower cost, quick data retrieval, the possibility of printing the QR Code on different products, and no need for complicated hardware and software infrastructures.
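
    Because the QR payload itself can carry the position, the positioning step reduces to decoding plus a lookup, as the hedged sketch below shows. The JSON payload format is an assumption of this example, not the paper's specification.

```python
import json

def position_from_qr(payload: str):
    """Turn a decoded QR payload into (building, floor, (x, y)) - format assumed."""
    info = json.loads(payload)
    return info["bldg"], info["floor"], (info["x"], info["y"])

# the phone's barcode reader supplies the decoded string;
# no extra positioning hardware or network infrastructure is needed
print(position_from_qr('{"bldg": "A", "floor": 2, "x": 12.5, "y": 3.0}'))
```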

  16. Improve Biomedical Information Retrieval using Modified Learning to Rank Methods.

    PubMed

    Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao

    2016-06-14

    In recent years, the number of biomedical articles has increased exponentially, making it difficult for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can deal with the problem automatically, providing users with the needed information. However, it is a great challenge to apply these technologies directly to biomedical retrieval because of the abundance of domain-specific terminology. To enhance biomedical retrieval, we propose a novel framework based on learning to rank, a family of state-of-the-art information retrieval techniques that has proved effective in many information retrieval tasks. In the proposed framework, we attempt to tackle the problem of abundant terminology by constructing ranking models that focus not only on retrieving the most relevant documents but also on diversifying the search results to increase the completeness of the resulting list for a given query. In model training, we propose two novel document labeling strategies and combine several traditional retrieval models as learning features. We also investigate the usefulness of different learning to rank approaches in our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.
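
    A pointwise stand-in can illustrate the framework's core move of combining traditional retrieval-model scores as learning features. The features, labels, and the choice of logistic regression below are illustrative; the paper itself evaluates several learning to rank approaches.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# one row per query-document pair: [BM25, TF-IDF cosine, LM log-probability]
X = np.array([[12.1, 0.8, -4.2],
              [ 3.4, 0.2, -7.9],
              [ 9.8, 0.7, -5.0],
              [ 1.1, 0.1, -9.3]])
y = np.array([1, 0, 1, 0])            # 1 = judged relevant (synthetic labels)

ranker = LogisticRegression().fit(X, y)
scores = ranker.decision_function(X)  # learned combination of the model scores
print(np.argsort(-scores))            # documents in ranked order
```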

  17. Competitive retrieval is not a prerequisite for forgetting in the retrieval practice paradigm.

    PubMed

    Camp, Gino; Dalm, Sander

    2016-09-01

    Retrieving information from memory can lead to forgetting of other, related information. The inhibition account of this retrieval-induced forgetting effect predicts that this form of forgetting occurs when competition arises between the practiced information and the related information, leading to inhibition of the related information. In the standard retrieval practice paradigm, a retrieval practice task is used in which participants retrieve the items based on a category-plus-stem cue (e.g., FRUIT-or___). In the current experiment, participants instead generated the target based on a cue in which the first 2 letters of the target were transposed (e.g., FRUIT-roange). This noncompetitive task also induced forgetting of unpracticed items from practiced categories. This finding is inconsistent with the inhibition account, which asserts that the forgetting effect depends on competitive retrieval. We argue that interference-based accounts of forgetting and the context-based account of retrieval-induced forgetting can account for this result. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Individual Differences in Working Memory Capacity Predict Retrieval-Induced Forgetting

    ERIC Educational Resources Information Center

    Aslan, Alp; Bauml, Karl-Heinz T.

    2011-01-01

    Selectively retrieving a subset of previously studied information enhances memory for the retrieved information but causes forgetting of related, nonretrieved information. Such retrieval-induced forgetting (RIF) has often been attributed to inhibitory executive-control processes that supposedly suppress the nonretrieved items' memory…

  19. Current Research into Chemical and Textual Information Retrieval at the Department of Information Studies, University of Sheffield.

    ERIC Educational Resources Information Center

    Lynch, Michael F.; Willett, Peter

    1987-01-01

    Discusses research into chemical information and document retrieval systems at the University of Sheffield. Highlights include the use of cluster analysis methods for document retrieval and drug design, representation and searching of files of generic chemical structures, and the application of parallel computer hardware to information retrieval.…

  20. Natural language processing in pathology: a scoping review.

    PubMed

    Burger, Gerard; Abu-Hanna, Ameen; de Keizer, Nicolette; Cornet, Ronald

    2016-07-22

    Encoded pathology data are key for medical registries and analyses, but pathology information is often expressed as free text. We reviewed and assessed the use of NLP (natural language processing) for encoding pathology documents. Papers addressing NLP in pathology were retrieved from PubMed, Association for Computing Machinery (ACM) Digital Library and Association for Computational Linguistics (ACL) Anthology. We reviewed and summarised the study objectives; NLP methods used and their validation; software implementations; the performance on the dataset used and any reported use in practice. The main objectives of the 38 included papers were encoding and extraction of clinically relevant information from pathology reports. Common approaches were word/phrase matching, probabilistic machine learning and rule-based systems. Five papers (13%) compared different methods on the same dataset. Four papers did not specify the method(s) used. Eighteen of the 26 studies that reported F-measure, recall or precision reported values of over 0.9. Proprietary software was the most frequently mentioned category (14 studies); General Architecture for Text Engineering (GATE) was the most applied architecture overall. Practical system use was reported in four papers. Most papers used expert annotation validation. Different methods are used in NLP research in pathology, and good performances, that is, high precision and recall and high retrieval/removal rates, are reported for all of these. Lack of validation and of shared datasets precludes performance comparison. More comparative analysis and validation are needed to provide better insight into the performance and merits of these methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  1. Contextual Information Drives the Reconsolidation-Dependent Updating of Retrieved Fear Memories

    PubMed Central

    Jarome, Timothy J; Ferrara, Nicole C; Kwapis, Janine L; Helmstetter, Fred J

    2015-01-01

    Stored memories enter a temporary state of vulnerability following retrieval known as ‘reconsolidation', a process that can allow memories to be modified to incorporate new information. Although reconsolidation has become an attractive target for treatment of memories related to traumatic past experiences, we still do not know what new information triggers the updating of retrieved memories. Here, we used biochemical markers of synaptic plasticity in combination with a novel behavioral procedure to determine what was learned during memory reconsolidation under normal retrieval conditions. We eliminated new information during retrieval by manipulating animals' training experience and measured changes in proteasome activity and GluR2 expression in the amygdala, two established markers of fear memory lability and reconsolidation. We found that eliminating new contextual information during the retrieval of memories for predictable and unpredictable fear associations prevented changes in proteasome activity and glutamate receptor expression in the amygdala, indicating that this new information drives the reconsolidation of both predictable and unpredictable fear associations on retrieval. Consistent with this, eliminating new contextual information prior to retrieval prevented the memory-impairing effects of protein synthesis inhibitors following retrieval. These results indicate that under normal conditions, reconsolidation updates memories by incorporating new contextual information into the memory trace. Collectively, these results suggest that controlling contextual information present during retrieval may be a useful strategy for improving reconsolidation-based treatments of traumatic memories associated with anxiety disorders such as post-traumatic stress disorder. PMID:26062788

  2. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  3. Toward an Episodic Context Account of Retrieval-Based Learning: Dissociating Retrieval Practice and Elaboration

    ERIC Educational Resources Information Center

    Lehman, Melissa; Smith, Megan A.; Karpicke, Jeffrey D.

    2014-01-01

    We tested the predictions of 2 explanations for retrieval-based learning; while the elaborative retrieval hypothesis assumes that the retrieval of studied information promotes the generation of semantically related information, which aids in later retrieval (Carpenter, 2009), the episodic context account proposed by Karpicke, Lehman, and Aue (in…

  4. Using Induction to Refine Information Retrieval Strategies

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Pell, Barney; Kedar, Smadar

    1994-01-01

    Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
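
    The refinement loop lends itself to a compact sketch: represent each query/retrieval pair by a few features, label it with the user's relevance feedback, and induce a decision tree that tightens the strategy. The features and data below are invented for illustration, not the paper's engineering-design corpus.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# each retrieval: [heuristic strategy id, crude index-match score]
X = np.array([[0, 0.9], [0, 0.4], [1, 0.8], [1, 0.3], [2, 0.7], [2, 0.2]])
y = np.array([1, 0, 1, 0, 0, 0])          # user feedback: was the document relevant?

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[0, 0.85], [2, 0.90]]))   # keep strategy-0 hits, prune strategy 2
```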

  5. Hypothesis-confirming information search strategies and computerized information-retrieval systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, S.M.

    A recent trend in information-retrieval systems technology is the development of on-line information retrieval systems. One objective of these systems has been to attempt to enhance decision effectiveness by allowing users to preferentially seek information, thereby facilitating the reduction or elimination of information overload. These systems do not necessarily lead to more-effective decision making, however. Recent research in information-search strategy suggests that when users are seeking information subsequent to forming initial beliefs, they may preferentially seek information to confirm these beliefs. It seems that effective computer-based decision support requires an information retrieval system capable of: (a) retrieving a subset of all available information, in order to reduce information overload, and (b) supporting an information search strategy that considers all relevant information, rather than merely hypothesis-confirming information. An information retrieval system with an expert component (i.e., a knowledge-based DSS) should be able to provide these capabilities. Results of this study are inconclusive; there was neither strong confirmatory evidence nor strong disconfirmatory evidence regarding the effectiveness of the KBDSS.

  6. 46 CFR 520.6 - Retrieval of information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 9 2012-10-01 2012-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...

  7. 46 CFR 520.6 - Retrieval of information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...

  8. 46 CFR 520.6 - Retrieval of information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 9 2014-10-01 2014-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...

  9. 46 CFR 520.6 - Retrieval of information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 9 2013-10-01 2013-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...

  10. Mathematical, Logical, and Formal Methods in Information Retrieval: An Introduction to the Special Issue.

    ERIC Educational Resources Information Center

    Crestani, Fabio; Dominich, Sandor; Lalmas, Mounia; van Rijsbergen, Cornelis Joost

    2003-01-01

    Discusses the importance of research on the use of mathematical, logical, and formal methods in information retrieval to help enhance retrieval effectiveness and clarify underlying concepts of information retrieval. Highlights include logic; probability; spaces; and future research needs. (Author/LRW)

  11. 46 CFR 520.6 - Retrieval of information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 9 2011-10-01 2011-10-01 false Retrieval of information. 520.6 Section 520.6 Shipping FEDERAL MARITIME COMMISSION REGULATIONS AFFECTING OCEAN SHIPPING IN FOREIGN COMMERCE CARRIER AUTOMATED TARIFFS § 520.6 Retrieval of information. (a) General. Tariffs systems shall present retrievers with the...

  12. Information Retrieval: A Sequential Learning Process.

    ERIC Educational Resources Information Center

    Bookstein, Abraham

    1983-01-01

    Presents decision-theoretic models which intrinsically include retrieval of multiple documents, whereby the system responds to a request by presenting documents to the patron in sequence, gathering feedback, and using that information to modify future retrievals. Document independence model, set retrieval model, sequential retrieval model, learning model,…
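
    A toy version of the sequential, feedback-driven idea (not Bookstein's formal decision-theoretic models) might look like the following, with invented documents and a simple multiplicative weight update standing in for the formal learning rule.

```python
docs = {
    "d1": {"retrieval", "probability"},
    "d2": {"retrieval", "feedback"},
    "d3": {"indexing", "probability"},
}
weights = {"retrieval": 1.0, "probability": 1.0, "feedback": 1.0, "indexing": 1.0}

def score(terms):
    return sum(weights[t] for t in terms)

seen = set()
for _ in range(len(docs)):
    # present the current best unseen document, then learn from the judgment
    best = max((d for d in docs if d not in seen), key=lambda d: score(docs[d]))
    seen.add(best)
    relevant = best != "d3"                      # stand-in for the patron's feedback
    for t in docs[best]:
        weights[t] *= 1.5 if relevant else 0.5   # simple multiplicative update
    print(best, relevant, weights)
```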

  13. TRMM Common Microphysics Products: A Tool for Evaluating Spaceborne Precipitation Retrieval Algorithms

    NASA Technical Reports Server (NTRS)

    Kingsmill, David E.; Yuter, Sandra E.; Hobbs, Peter V.; Rangno, Arthur L.; Heymsfield, Andrew J.; Stith, Jeffrey L.; Bansemer, Aaron; Haggerty, Julie A.; Korolev, Alexei V.

    2004-01-01

    A customized product for analysis of microphysics data collected from aircraft during field campaigns in support of the TRMM program is described. These Common Microphysics Products (CMP's) are designed to aid in evaluation of TRMM spaceborne precipitation retrieval algorithms. Information needed for this purpose (e.g., particle size spectra and habit, liquid and ice water content) was derived using a common processing strategy on the wide variety of microphysical instruments and raw native data formats employed in the field campaigns. The CMP's are organized into an ASCII structure to allow easy access to the data for those less familiar with and without the tools to accomplish microphysical data processing. Detailed examples of the CMP show its potential and some of its limitations. This approach may be a first step toward developing a generalized microphysics format and an associated community-oriented, non-proprietary software package for microphysics data processing, initiatives that would likely broaden community access to and use of microphysics datasets.

  14. Developing an ontological explosion knowledge base for business continuity planning purposes.

    PubMed

    Mohammadfam, Iraj; Kalatpour, Omid; Golmohammadi, Rostam; Khotanlou, Hasan

    2013-01-01

    Industrial accidents are among the best-known challenges to business continuity. Many organisations have lost their reputation following devastating accidents. To manage the risks of such accidents, it is necessary to accumulate sufficient knowledge regarding their roots, causes and preventive techniques. The required knowledge might be obtained through various approaches, including databases. Unfortunately, many databases are hampered by (among other things) static data presentation, a lack of semantic features, and the inability to present accident knowledge as discrete domains. This paper proposes the use of the Protégé software to develop a knowledge base for the domain of explosion accidents. Such a structure has a greater capability to improve information retrieval than common accident databases. To accomplish this goal, a knowledge management process model was followed. The ontological explosion knowledge base (EKB) was built for further applications, including process accident knowledge retrieval and risk management. The paper shows how the EKB's semantic features enable users to overcome some of the search constraints of existing accident databases.
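
    The semantic-retrieval capability described above can be illustrated with a small triple store. The sketch below uses Python and rdflib with a hypothetical namespace and invented class and property names (Explosion, hasCause, preventedBy); it is only an illustration of the idea, not the EKB schema the authors built in Protégé.

      from rdflib import Graph, Namespace, RDF, RDFS, Literal

      EKB = Namespace("http://example.org/ekb#")   # hypothetical namespace
      g = Graph()

      # A small taxonomy: dust explosions are a kind of explosion.
      g.add((EKB.Explosion, RDFS.subClassOf, EKB.IndustrialAccident))
      g.add((EKB.DustExplosion, RDFS.subClassOf, EKB.Explosion))

      # One accident instance with a cause and a preventive technique.
      g.add((EKB.accident_42, RDF.type, EKB.DustExplosion))
      g.add((EKB.accident_42, EKB.hasCause, EKB.StaticDischarge))
      g.add((EKB.accident_42, EKB.preventedBy, EKB.Grounding))
      g.add((EKB.accident_42, RDFS.label, Literal("Grain silo dust explosion")))

      # Semantic retrieval: the subclass path finds the instance even though
      # it was typed as DustExplosion, which a flat keyword search would miss.
      query = """
          PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
          PREFIX ekb:  <http://example.org/ekb#>
          SELECT ?accident ?cause WHERE {
              ?accident a/rdfs:subClassOf* ekb:Explosion ;
                        ekb:hasCause ?cause .
          }
      """
      for accident, cause in g.query(query):
          print(accident, cause)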

  15. Status of the NPP and J1 NOAA Unique Combined Atmospheric Processing System (NUCAPS): recent algorithm enhancements geared toward validation and near real time users applications.

    NASA Astrophysics Data System (ADS)

    Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.

    2017-12-01

    The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm for retrieving thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (with about a 1.5-hour delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedentedly low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm lies in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. Firstly, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Secondly, NUCAPS has a modular, namelist-driven design: it can process multiple hyperspectral infrared sounders (on Aqua, NPP, MetOp and the JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and a synergistic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements in the delivery of the NUCAPS full-spectral-resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. Test cases will be shown on the use of NPP and MetOp NUCAPS under pre-convective, capping inversion and dry-layer intrusion events.

  16. A Multi-Discipline Approach to Digitizing Historic Seismograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Andrew

    2016-04-07

    Retriever Technology has developed and made freely available a seismogram digitization software package called SKATE (Seismogram Kit for Automatic Trace Extraction). We have developed an extensive set of algorithms that process seismogram image files, provide editing tools, and output time series data. The software is available online, free of charge, at seismo.redfish.com. To demonstrate its speed and cost effectiveness, we have processed over 30,000 images.
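
    The abstract does not publish SKATE's algorithms, but the core task (turning a scanned trace into a time series) can be sketched minimally. The Python fragment below is our illustration only: it extracts a single dark trace by taking the darkest pixel in each image column and assumes a uniform paper speed; real seismogram digitization must also handle curvature, crossing traces, and time marks.

      import numpy as np
      from PIL import Image

      def extract_trace(path, dt=1.0):
          """Return (times, amplitudes) for the darkest trace in a grayscale scan."""
          img = np.asarray(Image.open(path).convert("L"), dtype=float)
          rows = img.argmin(axis=0)             # darkest pixel in each column
          amplitudes = rows.mean() - rows       # deflection relative to the mean baseline
          times = np.arange(img.shape[1]) * dt  # assumes uniform paper speed
          return times, amplitudes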

  17. Supporting geoscience with graphical-user-interface Internet tools for the Macintosh

    NASA Astrophysics Data System (ADS)

    Robin, Bernard

    1995-07-01

    This paper describes a suite of Macintosh graphical-user-interface (GUI) software programs that can be used in conjunction with the Internet to support geoscience education. These software programs allow science educators to access and retrieve a large body of resources from an increasing number of network sites, taking advantage of the intuitive, simple-to-use Macintosh operating system. With these tools, educators can easily locate, download, and exchange not only text files but also sound resources, video movie clips, and software application files from their desktop computers. Another major advantage of these software tools is that they are available at no cost and may be distributed freely. The following GUI software tools are described, including examples of how they can be used in an educational setting: ∗ Eudora—an e-mail program ∗ NewsWatcher—a newsreader ∗ TurboGopher—a Gopher program ∗ Fetch—a software application for easy File Transfer Protocol (FTP) ∗ NCSA Mosaic—a worldwide hypertext browsing program. An explosive growth of online archives is currently underway as new electronic sites are added continuously to the Internet. Many of these resources may be of interest to science educators, who will find they can share not only ASCII text files but also graphic image files, sound resources, QuickTime movie clips, and hypermedia projects with colleagues from locations around the world. These powerful, yet simple-to-learn GUI software tools are providing a revolution in how knowledge can be accessed, retrieved, and shared.

  18. Data Discretization for Novel Relationship Discovery in Information Retrieval.

    ERIC Educational Resources Information Center

    Benoit, G.

    2002-01-01

    Describes an information retrieval, visualization, and manipulation model which offers the user multiple ways to exploit the retrieval set, based on weighted query terms, via an interactive interface. Outlines the mathematical model and describes an information retrieval application built on the model to search structured and full-text files.…

  19. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2012-09-01

    ...the Gerchberg-Saxton algorithm in the Fresnel diffraction regime, and is much more robust against image noise than the TIE-based method. For details... We developed efficient coding with the software modules for image registration, flat-field correction, and phase retrieval. In addition, we... X, Liu H. 2010. Performance analysis of the attenuation-partition based iterative phase retrieval algorithm for in-line phase-contrast imaging
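
    For orientation, the Gerchberg-Saxton iteration named above alternates between two planes, enforcing the measured magnitude in each while keeping the evolving phase. The numpy sketch below shows the classic Fourier-regime form; the report's Fresnel-regime variant replaces the FFT pair with Fresnel propagators and is not reproduced here.

      import numpy as np

      def gerchberg_saxton(obj_mag, det_mag, n_iter=200):
          """Recover the object-plane phase from magnitudes measured in the
          object plane (obj_mag) and the detector plane (det_mag)."""
          phase = np.zeros_like(obj_mag)                  # initial phase guess
          for _ in range(n_iter):
              field = obj_mag * np.exp(1j * phase)
              det = np.fft.fft2(field)
              det = det_mag * np.exp(1j * np.angle(det))  # enforce detector magnitude
              field = np.fft.ifft2(det)
              phase = np.angle(field)                     # keep phase, reimpose obj_mag
          return phase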

  20. Visual working memory buffers information retrieved from visual long-term memory.

    PubMed

    Fukuda, Keisuke; Woodman, Geoffrey F

    2017-05-16

    Human memory is thought to consist of long-term storage and short-term storage mechanisms, the latter known as working memory. Although it has long been assumed that information retrieved from long-term memory is represented in working memory, we lack neural evidence for this and need neural measures that allow us to watch this retrieval into working memory unfold with high temporal resolution. Here, we show that human electrophysiology can be used to track information as it is brought back into working memory during retrieval from long-term memory. Specifically, we found that the retrieval of information from long-term memory was limited to just a few simple objects' worth of information at once, and elicited a pattern of neurophysiological activity similar to that observed when people encode new information into working memory. Our findings suggest that working memory is where information is buffered when being retrieved from long-term memory and reconcile current theories of memory retrieval with classic notions about the memory mechanisms involved.

  1. Visual working memory buffers information retrieved from visual long-term memory

    PubMed Central

    Fukuda, Keisuke; Woodman, Geoffrey F.

    2017-01-01

    Human memory is thought to consist of long-term storage and short-term storage mechanisms, the latter known as working memory. Although it has long been assumed that information retrieved from long-term memory is represented in working memory, we lack neural evidence for this and need neural measures that allow us to watch this retrieval into working memory unfold with high temporal resolution. Here, we show that human electrophysiology can be used to track information as it is brought back into working memory during retrieval from long-term memory. Specifically, we found that the retrieval of information from long-term memory was limited to just a few simple objects’ worth of information at once, and elicited a pattern of neurophysiological activity similar to that observed when people encode new information into working memory. Our findings suggest that working memory is where information is buffered when being retrieved from long-term memory and reconcile current theories of memory retrieval with classic notions about the memory mechanisms involved. PMID:28461479

  2. Databank Software for the 1990s and Beyond--Part 1: The User's Wish List.

    ERIC Educational Resources Information Center

    Basch, Reva

    1990-01-01

    Describes desired software enhancements identified by the Southern California Online Users Group in the areas of search language, database selection, document retrieval and display, user interface, customer support, and cost and economic issues. The need to prioritize these wishes and to determine whether features should reside in the mainframe or…

  3. Full-field optical coherence tomography used for security and document identity

    NASA Astrophysics Data System (ADS)

    Chang, Shoude; Mao, Youxin; Sherif, Sherif; Flueraru, Costel

    2006-09-01

    Optical coherence tomography (OCT) is an emerging technology for high-resolution cross-sectional imaging of 3D structures. In past years, OCT systems have been used mainly for medical, especially ophthalmological, diagnostics. Because an OCT system is by nature capable of exploring the internal features of an object, we apply OCT technology to directly retrieve the 2D information pre-stored in a multiple-layer information carrier. The standard depth resolution of an OCT system is at the micrometer level. If a 20 mm by 20 mm sampling area with a 1024 x 1024 CCD array is used in an OCT system with 10 μm depth resolution, an information carrier with a volume of 20 mm x 20 mm x 2 mm could contain 200 megapixels of images: the 2 mm depth accommodates about 200 resolvable layers, each imaged at roughly one megapixel. Because of its tiny size and large information volume, the information carrier, together with its OCT retrieval system, will have potential applications in document security and object identification. In addition, as the information carrier can be made of low-scattering transparent material, the signal-to-noise ratio will be improved dramatically; as a consequence, the specific hardware and complicated software can also be greatly simplified. Owing to the absence of scanning along the X-Y axes, full-field OCT could be the simplest and most economical imaging system for extracting information from such a multilayer information carrier. In this paper, the design and implementation of a full-field OCT system are described and the related algorithms are introduced. In our experiments, a four-layer information carrier is used, containing four layers of image patterns: two text images and two fingerprint images. The extracted tomography images of each layer are also provided.

  4. Portable open-path optical remote sensing (ORS) Fourier transform infrared (FTIR) instrumentation miniaturization and software for point and click real-time analysis

    NASA Astrophysics Data System (ADS)

    Zemek, Peter G.; Plowman, Steven V.

    2010-04-01

    Advances in hardware have miniaturized the emissions spectrometer and associated optics, rendering them easily deployed in the field. Such systems are also suitable for vehicle mounting and can provide high-quality data and concentration information in minutes. Advances in software have accompanied this hardware evolution, enabling the development of portable point-and-click OP-FTIR systems that weigh less than 16 lbs. These systems are ideal for first responders, military, law enforcement, forensics, and screening applications using optical remote sensing (ORS) methodologies. With canned methods and interchangeable detectors, the new generation of OP-FTIR technology is coupled to the latest forward reference-type model software to provide point-and-click operation. These software models have been established for some time. However, refined user-friendly models that use active, passive, and solar occultation methodologies now allow the user to quickly field-screen and quantify plumes, fence-lines, and combustion incident scenarios at high temporal resolution. Synthetic background generation is now redundant, as the models use highly accurate instrument line shape (ILS) convolutions and several other parameters, in conjunction with radiative transfer model databases, to fit a single calibration spectrum to collected sample spectra. Data retrievals are performed directly on single-beam spectra using non-linear classical least squares (NLCLS). Typically, the HITRAN line database is used to generate the initial calibration spectrum contained within the software.

  5. A semantic medical multimedia retrieval approach using ontology information hiding.

    PubMed

    Guo, Kehua; Zhang, Shigeng

    2013-01-01

    Searching useful information from unstructured medical multimedia data has been a difficult problem in information retrieval. This paper reports an effective semantic medical multimedia retrieval approach which can reflect the users' query intent. Firstly, semantic annotations will be given to the multimedia documents in the medical multimedia database. Secondly, the ontology that represented semantic information will be hidden in the head of the multimedia documents. The main innovations of this approach are cross-type retrieval support and semantic information preservation. Experimental results indicate a good precision and efficiency of our approach for medical multimedia retrieval in comparison with some traditional approaches.
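
    The paper's exact embedding format is not described in this abstract. As a loose illustration of hiding an annotation in the head of a multimedia document, the Python/Pillow sketch below stores an ontology fragment in a PNG text chunk and reads it back without touching the pixel data; the file names, chunk key, and annotation string are all invented.

      from PIL import Image
      from PIL.PngImagePlugin import PngInfo

      annotation = '{"modality": "MRI", "region": "brain", "finding": "lesion"}'

      meta = PngInfo()
      meta.add_text("ontology", annotation)     # embed the annotation in the header
      Image.open("scan.png").save("scan_annotated.png", pnginfo=meta)

      # Retrieval side: read the annotation back from the file header.
      print(Image.open("scan_annotated.png").text["ontology"])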

  6. A Prototype System for Retrieval of Gene Functional Information

    PubMed Central

    Folk, Lillian C.; Patrick, Timothy B.; Pattison, James S.; Wolfinger, Russell D.; Mitchell, Joyce A.

    2003-01-01

    Microarrays allow researchers to gather data about the expression patterns of thousands of genes simultaneously. Statistical analysis can reveal which genes show statistically significant results. Making biological sense of those results requires the retrieval of functional information about the genes thus identified, typically a manual gene-by-gene retrieval of information from various on-line databases. For experiments generating thousands of genes of interest, retrieval of functional information can become a significant bottleneck. To address this issue, we are currently developing a prototype system to automate the process of retrieval of functional information from multiple on-line sources. PMID:14728346

  7. Modeling and investigative studies of Jovian low frequency emissions

    NASA Technical Reports Server (NTRS)

    Menietti, J. D.; Green, James L.; Six, N. Frank; Gulkis, S.

    1986-01-01

    Jovian decametric (DAM) and hectometric (HOM) emissions were first observed over the entire spectrum by the Voyager 1 and 2 flybys of the planet. They display unusual arc-like structures on frequency-versus-time spectrograms. Software for modeling the Jovian plasma and magnetic field environment was developed. In addition, an extensive library of programs was developed for the retrieval of Voyager Planetary Radio Astronomy (PRA) data in both the high and low frequency bands from new noise-free, recalibrated data tapes. This software allows the option of retrieving data sorted with respect to particular sub-Io longitudes. This has proven to be invaluable in the analyses of the data. Graphics routines were also developed to display the data on color spectrograms.

  8. Modeling and investigative studies of Jovian low frequency emissions

    NASA Astrophysics Data System (ADS)

    Menietti, J. D.; Green, James L.; Six, N. Frank; Gulkis, S.

    1986-08-01

    Jovian decametric (DAM) and hectometric (HOM) emissions were first observed over the entire spectrum by the Voyager 1 and 2 flybys of the planet. They display unusual arc-like structures on frequency-versus-time spectrograms. Software for modeling the Jovian plasma and magnetic field environment was developed. In addition, an extensive library of programs was developed for the retrieval of Voyager Planetary Radio Astronomy (PRA) data in both the high and low frequency bands from new noise-free, recalibrated data tapes. This software allows the option of retrieving data sorted with respect to particular sub-Io longitudes. This has proven to be invaluable in the analyses of the data. Graphics routines were also developed to display the data on color spectrograms.

  9. COM3/369: Knowledge-based Information Systems: A new approach for the representation and retrieval of medical information

    PubMed Central

    Mann, G; Birkmann, C; Schmidt, T; Schaeffler, V

    1999-01-01

    Introduction: Present solutions for the representation and retrieval of medical information from online sources are not very satisfying. Either the retrieval process lacks precision and completeness, or the representation does not support the update and maintenance of the represented information. Most efforts are currently put into improving the combination of search engines and HTML-based documents. However, due to the current shortcomings of methods for natural language understanding, there are clear limitations to this approach. Furthermore, this approach does not solve the maintenance problem. At least for medical information exceeding a certain complexity, approaches that rely on structured knowledge representation and corresponding retrieval mechanisms seem to be required. Methods: Knowledge-based information systems are based on the following fundamental ideas. The representation of information is based on ontologies that define the structure of the domain's concepts and their relations. Views on domain models are defined and represented as retrieval schemata. Retrieval schemata can be interpreted as canonical query types focussing on specific aspects of the provided information (e.g. diagnosis- or therapy-centred views). Based on these retrieval schemata, it can be decided which parts of the information in the domain model must be represented explicitly and formalised to support the retrieval process. Propositional logic is used as the representation language. All other information can be represented in a structured but informal way using text, images, etc. Layout schemata are used to assign layout information to retrieved domain concepts. Depending on the target environment, HTML or XML can be used. Results: Based on this approach, two knowledge-based information systems have been developed. The 'Ophthalmologic Knowledge-based Information System for Diabetic Retinopathy' (OKIS-DR) provides information on diagnoses, findings, examinations, guidelines, and reference images related to diabetic retinopathy. OKIS-DR uses combinations of findings to specify the information that must be retrieved. The second system focuses on nutrition-related allergies and intolerances. Information on a patient's allergies and intolerances is used to retrieve general information on the specified combination of allergies and intolerances. As a special feature, the system generates tables showing food types and products that are or are not tolerated by patients. Evaluation by external experts and user groups showed that the described approach of knowledge-based information systems increases the precision and completeness of knowledge retrieval. Due to the structured and non-redundant representation of information, maintenance and updating of the information can be simplified. Both systems are available as WWW-based online knowledge bases and CD-ROMs (cf. http://mta.gsf.de topic: products).

  10. An Electronic Tree Inventory for Arboriculture Management

    NASA Astrophysics Data System (ADS)

    Tait, Roger J.; Allen, Tony J.; Sherkat, Nasser; Bellett-Travers, Marcus D.

    The integration of Global Positioning System (GPS) technology into mobile devices provides them with an awareness of their physical location. This geospatial context can be employed in a wide range of applications including locating nearby places of interest as well as guiding emergency services to incidents. In this research, a GPS-enabled Personal Digital Assistant (PDA) is used to create a computerised tree inventory for the management of arboriculture. Using the General Packet Radio Service (GPRS), GPS information and arboreal image data are sent to a web-server. An office-based PC running customised Geographical Information Software (GIS) then automatically retrieves the GPS tagged image data for display and analysis purposes. The resulting application allows an expert user to view the condition of individual trees in greater detail than is possible using remotely sensed imagery.

  11. Quantification of micro-CT images of textile reinforcements

    NASA Astrophysics Data System (ADS)

    Straumit, Ilya; Lomov, Stepan V.; Wevers, Martine

    2017-10-01

    The VoxTex software (KU Leuven) employs 3D image processing that uses local directionality information, retrieved through analysis of the local structure tensor. The processing results in a 3D voxel array, with each voxel carrying information on (1) the material type (matrix; yarn/ply, with identification of the yarn/ply in the reinforcement architecture; void) and (2) the fibre direction for fibrous yarns/plies. Knowledge of the material phase volume and the known characterisation of the textile structure allows assigning (3) a fibre volume fraction to the voxels. This basic voxel model can be further used for different types of material analysis: internal geometry and characterisation of defects; permeability; micromechanics; meso-FE voxel models. Apart from the voxel-based analysis, approaches to reconstruction of the yarn paths are presented.
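
    For intuition, the structure-tensor step can be sketched in 2D: smooth the outer products of the image gradients and take the orientation of the tensor's minor eigenvector as the local fibre direction. The Python sketch below is a simplified stand-in; VoxTex itself operates on 3D voxel data.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def fibre_orientation(img, sigma=2.0):
          """Local fibre angle (radians) from the 2D structure tensor."""
          gy, gx = np.gradient(img.astype(float))
          Jxx = gaussian_filter(gx * gx, sigma)
          Jxy = gaussian_filter(gx * gy, sigma)
          Jyy = gaussian_filter(gy * gy, sigma)
          # The major-eigenvector angle of [[Jxx, Jxy], [Jxy, Jyy]] is the
          # dominant gradient direction; fibres run perpendicular to it.
          return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy) + np.pi / 2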

  12. piscope - A Python based software package for the analysis of volcanic SO2 emissions using UV SO2 cameras

    NASA Astrophysics Data System (ADS)

    Gliss, Jonas; Stebel, Kerstin; Kylling, Arve; Solvejg Dinger, Anna; Sihler, Holger; Sudbø, Aasmund

    2017-04-01

    UV SO2 cameras have become a common method for monitoring SO2 emission rates from volcanoes. Scattered solar UV radiation is measured in two wavelength windows, typically around 310 nm and 330 nm (distinct/weak SO2 absorption), using interference filters. The data analysis comprises the retrieval of plume background intensities (to calculate plume optical densities), the camera calibration (to convert optical densities into SO2 column densities), and the retrieval of gas velocities within the plume as well as of plume distances. SO2 emission rates are then typically retrieved along a projected plume cross section, for instance a straight line perpendicular to the plume propagation direction. Today, several alternatives exist for most of the required analysis steps, due to ongoing developments and improvements of the measurement technique. We present piscope, a cross-platform, open-source software toolbox for the analysis of UV SO2 camera data. The code is written in the Python programming language and emerged from the idea of a common analysis platform incorporating a selection of the most prevalent methods found in the literature. piscope includes several routines for plume background retrieval and routines for cell- and DOAS-based camera calibration, including two individual methods to identify the DOAS field of view (shape and position) within the camera images. Gas velocities can be retrieved either from an optical flow analysis or using signal cross-correlation. A correction for signal dilution (due to atmospheric scattering) can be performed based on topographic features in the images. The latter requires distance retrievals to the topographic features used for the correction; these distances can be retrieved automatically on a pixel basis using intersections of individual pixel viewing directions with the local topography. The main features of piscope are presented based on a dataset recorded at Mt. Etna, Italy, in September 2015.
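
    As a rough illustration of the calibration step shared by such toolboxes (this is not the piscope API, and all numbers are invented), apparent absorbances from the on/off-band image pair can be converted to SO2 column densities with a slope fitted to calibration-cell data:

      import numpy as np

      def apparent_absorbance(on, off, on_bg, off_bg):
          """Plume optical density from on-band (~310 nm) and off-band (~330 nm)
          images, each normalised by its plume-free background."""
          return -np.log(on / on_bg) + np.log(off / off_bg)

      # Calibration cells of known SO2 column density [molec cm^-2] and the
      # optical densities they produce in this camera (illustrative values).
      cell_columns = np.array([0.0, 4.5e17, 1.3e18])
      cell_taus = np.array([0.0, 0.10, 0.29])
      slope = np.polyfit(cell_taus, cell_columns, 1)[0]

      # Synthetic 64x64 image pair standing in for real camera frames.
      on_bg = off_bg = np.full((64, 64), 1000.0)
      on_img = on_bg * np.exp(-0.15)
      off_img = off_bg * np.exp(-0.02)

      tau = apparent_absorbance(on_img, off_img, on_bg, off_bg)
      so2_column = slope * tau                  # per-pixel SO2 column density
      print(so2_column.mean())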

  13. SPIRES (Stanford Public Information Retrieval System) 1970-71 Annual Report.

    ERIC Educational Resources Information Center

    Parker, Edwin B.

    SPIRES (Stanford Public Information REtrieval System) is a computer information storage and retrieval system being developed at Stanford University with funding from the National Science Foundation. SPIRES has two major goals: to provide a user-oriented, interactive, on-line retrieval system for a variety of researchers at Stanford; and to support…

  14. Retrieval Cues on Tests: A Strategy for Helping Students Overcome Retrieval Failure

    ERIC Educational Resources Information Center

    Gallagher, Kristel M.

    2017-01-01

    Students often struggle to recall information on tests, frequently claiming to experience a "retrieval failure" of learned information. Thus, the retrieval of information from memory may be a roadblock to student success. I propose a relatively simple adjustment to the wording of test items to help eliminate this potential barrier.…

  15. Comparing the quality of accessing medical literature using content-based visual and textual information retrieval

    NASA Astrophysics Data System (ADS)

    Müller, Henning; Kalpathy-Cramer, Jayashree; Kahn, Charles E., Jr.; Hersh, William

    2009-02-01

    Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual, but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.

  16. Supporting the Loewenstein occupational therapy cognitive assessment using distributed user interfaces.

    PubMed

    Tesoriero, Ricardo; Gallud Lazaro, Jose A; Altalhi, Abdulrahman H

    2017-02-01

    Improve the quantity and quality of information obtained from traditional Loewenstein Occupational Therapy Cognitive Assessment Battery systems, in order to monitor the evolution of patients' rehabilitation process and to compare different rehabilitation therapies. The system replaces traditional artefacts with virtual versions of them to take advantage of cutting-edge interaction technology. The system is defined as a Distributed User Interface (DUI) supported by a display ecosystem including mobile devices as well as multi-touch surfaces. Owing to the heterogeneity of the devices involved, the software technology is based on a client-server architecture using the Web as the software platform. The system provides therapists with information that is not available (or is very difficult to gather) using traditional technologies (i.e. response-time measurements, object tracking, information storage and retrieval facilities, etc.). The use of DUIs allows therapists to gather information that is unavailable with traditional assessment methods and to adapt the system to the patient's profile, increasing the range of patients who are able to take this assessment. Implications for rehabilitation: Using a Distributed User Interface environment to carry out LOTCAs improves the quality of the information gathered during the rehabilitation assessment. The system captures physical data on the patient's interaction during the assessment to improve analysis of the rehabilitation process, allows professionals to adapt the assessment procedure to create different versions according to the patient's profile, and improves the availability of patients' profile information to therapists.

  17. The digital anatomist information system and its use in the generation and delivery of Web-based anatomy atlases.

    PubMed

    Brinkley, J F; Bradley, S W; Sundsten, J W; Rosse, C

    1997-12-01

    Advances in network and imaging technology, coupled with the availability of 3-D datasets such as the Visible Human, provide a unique opportunity for developing information systems in anatomy that can deliver relevant knowledge directly to the clinician, researcher or educator. A software framework is described for developing such a system within a distributed architecture that includes spatial and symbolic anatomy information resources, Web and custom servers, and authoring and end-user client programs. The authoring tools have been used to create 3-D atlases of the brain, knee and thorax that are used both locally and throughout the world. For the one and a half year period from June 1995-January 1997, the on-line atlases were accessed by over 33,000 sites from 94 countries, with an average of over 4000 "hits" per day, and 25,000 hits per day during peak exam periods. The atlases have been linked to by over 500 sites, and have received at least six unsolicited awards by outside rating institutions. The flexibility of the software framework has allowed the information system to evolve with advances in technology and representation methods. Possible new features include knowledge-based image retrieval and tutoring, dynamic generation of 3-D scenes, and eventually, real-time virtual reality navigation through the body. Such features, when coupled with other on-line biomedical information resources, should lead to interesting new ways for managing and accessing structural information in medicine. Copyright 1997 Academic Press.

  18. OPEN-SOURCE SOFTWARE IN DENTISTRY: A SYSTEMATIC REVIEW.

    PubMed

    Chruściel-Nogalska, Małgorzata; Smektała, Tomasz; Tutak, Marcin; Sporniak-Tutak, Katarzyna; Olszewski, Raphael

    2017-01-01

    Technological development and the need for electronic health records management have resulted in the need for a computer with dedicated, commercial software in daily dental practice. An alternative to commercial software may be open-source solutions. This study therefore reviewed the current literature on the availability and use of open-source software (OSS) in dentistry. A comprehensive database search was performed on February 1, 2017. Only articles published in peer-reviewed journals with a focus on the use or description of OSS were retrieved. The level of evidence, according to the Oxford EBM Centre Levels of Evidence Scale, was classified for all studies. Experimental studies underwent additional quality-of-reporting assessment. The screening and evaluation process yielded twenty-one studies from the 1,940 articles found, 10 of them experimental studies. None of the articles provided level 1 evidence, and only one study was considered high quality following quality assessment. Twenty-six different OSS programs were described in the included studies, of which ten were used for image visualization, five for healthcare records management, four for education processes, one for remote consultation and simulation, and six for general purposes. Our analysis revealed that the dental literature on OSS consists of scarce, incomplete, and methodologically low-quality information.

  19. Software for analysis of chemical mixtures--composition, occurrence, distribution, and possible toxicity

    USGS Publications Warehouse

    Scott, Jonathon C.; Skach, Kenneth A.; Toccalino, Patricia L.

    2013-01-01

    The composition, occurrence, distribution, and possible toxicity of chemical mixtures in the environment are research concerns of the U.S. Geological Survey and others. The presence of specific chemical mixtures may serve as indicators of natural phenomena or human-caused events. Chemical mixtures may also have ecological, industrial, geochemical, or toxicological effects. Chemical-mixture occurrences vary by analyte composition and concentration. Four related computer programs have been developed by the National Water-Quality Assessment Program of the U.S. Geological Survey for research of chemical-mixture compositions, occurrences, distributions, and possible toxicities. The compositions and occurrences are identified for the user-supplied data, and therefore the resultant counts are constrained by the user’s choices for the selection of chemicals, reporting limits for the analytical methods, spatial coverage, and time span for the data supplied. The distribution of chemical mixtures may be spatial, temporal, and (or) related to some other variable, such as chemical usage. Possible toxicities optionally are estimated from user-supplied benchmark data. The software for the analysis of chemical mixtures described in this report is designed to work with chemical-analysis data files retrieved from the U.S. Geological Survey National Water Information System but can also be used with appropriately formatted data from other sources. Installation and usage of the mixture software are documented. This mixture software was designed to function with minimal changes on a variety of computer-operating systems. To obtain the software described herein and other U.S. Geological Survey software, visit http://water.usgs.gov/software/.

  20. Generic Information Can Retrieve Known Biological Associations: Implications for Biomedical Knowledge Discovery

    PubMed Central

    van Haagen, Herman H. H. B. M.; 't Hoen, Peter A. C.; Mons, Barend; Schultes, Erik A.

    2013-01-01

    Motivation: Weighted semantic networks built from text-mined literature can be used to retrieve known protein-protein or gene-disease associations, and have been shown to anticipate associations years before they are explicitly stated in the literature. Our text-mining system recognizes over 640,000 biomedical concepts: some are specific (i.e., names of genes or proteins), others generic (e.g., ‘Homo sapiens’). Generic concepts may play important roles in automated information retrieval, extraction, and inference but may also result in concept overload and confound retrieval and reasoning with low-relevance or even spurious links. Here, we attempted to optimize the retrieval performance for protein-protein interactions (PPI) by filtering generic concepts (node filtering) or links to generic concepts (edge filtering) from a weighted semantic network. First, we defined metrics based on network properties that quantify the specificity of concepts. Then, using these metrics, we systematically filtered generic information from the network while monitoring retrieval performance of known protein-protein interactions. We also systematically filtered specific information from the network (inverse filtering), and assessed the retrieval performance of networks composed of generic information alone. Results: Filtering generic or specific information induced a two-phase response in retrieval performance: initially the effects of filtering were minimal, but beyond a critical threshold network performance suddenly drops. Contrary to expectations, networks composed exclusively of generic information demonstrated retrieval performance comparable to unfiltered networks that also contain specific concepts. Furthermore, an analysis using individual generic concepts demonstrated that they can effectively support the retrieval of known protein-protein interactions. For instance, the concept “binding” is indicative for PPI retrieval and the concept “mutation abnormality” is indicative for gene-disease associations. Conclusion: Generic concepts are important for information retrieval and cannot be removed from semantic networks without negative impact on retrieval performance. PMID:24260124
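
    The paper defines its specificity metrics from network properties; as a toy stand-in, node degree can serve (generic concepts tend to be highly connected), and node filtering then looks like the following networkx sketch with invented data:

      import networkx as nx

      G = nx.Graph()
      G.add_weighted_edges_from([
          ("BRCA1", "TP53", 0.3), ("BRCA1", "RAD51", 0.7), ("TP53", "MDM2", 0.8),
          ("Homo sapiens", "BRCA1", 0.9), ("Homo sapiens", "TP53", 0.9),
          ("Homo sapiens", "RAD51", 0.9), ("Homo sapiens", "MDM2", 0.9),
      ])

      def filter_generic(G, keep_fraction=0.8):
          """Drop the most generic (highest-degree) nodes, keeping the rest."""
          ranked = sorted(G.degree, key=lambda kv: kv[1])   # most specific first
          keep = {n for n, _ in ranked[: int(len(ranked) * keep_fraction)]}
          return G.subgraph(keep).copy()

      H = filter_generic(G)
      print(sorted(H.nodes))   # the generic "Homo sapiens" node is filtered out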

  1. Web information retrieval based on ontology

    NASA Astrophysics Data System (ADS)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional IR model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval systems is that they typically retrieve information without an explicitly defined domain of interest from the user, so that a great deal of irrelevant information is returned and the user is burdened with picking useful answers out of irrelevant results. To tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms through the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  2. An Abstraction-Based Data Model for Information Retrieval

    NASA Astrophysics Data System (ADS)

    McAllister, Richard A.; Angryk, Rafal A.

    Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entry point to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.
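
    A minimal sketch of the indexing idea follows, using NLTK's WordNet interface (requires the nltk package and its wordnet corpus; the two-document corpus is invented). Each term is indexed together with every lemma on its hypernym paths, so a query for a more abstract concept ("canine") retrieves a document that only mentions "dog".

      from collections import defaultdict
      from nltk.corpus import wordnet as wn   # after: nltk.download("wordnet")

      def abstraction_terms(word):
          """The word plus all lemmas on the hypernym paths of its synsets."""
          terms = {word}
          for synset in wn.synsets(word):
              for path in synset.hypernym_paths():
                  terms.update(lemma.name() for s in path for lemma in s.lemmas())
          return terms

      index = defaultdict(set)                # abstraction term -> document ids
      docs = {1: ["dog", "barked"], 2: ["cat", "slept"]}
      for doc_id, words in docs.items():
          for w in words:
              for t in abstraction_terms(w):
                  index[t].add(doc_id)

      print(index["canine"])                  # -> {1}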

  3. StreptomycesInforSys: A web-enabled information repository

    PubMed Central

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, built on phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, yet it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information, with combinations of search options, to aid in the efficient screening of new isolates. This will help in the preliminary categorization of isolates into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to serve user queries effectively. The use of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is expected to reduce running and maintenance costs. Availability: www.sis.biowaves.org PMID:23275736

  4. StreptomycesInforSys: A web-enabled information repository.

    PubMed

    Jain, Chakresh Kumar; Gupta, Vidhi; Gupta, Ashvarya; Gupta, Sanjay; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Sarethy, Indira P

    2012-01-01

    Members of Streptomyces produce 70% of natural bioactive products. A considerable amount of information, based on a polyphasic approach, is available for the classification of Streptomyces. This information, built on phenotypic, genotypic and bioactive-component production profiles, is crucial for pharmacological screening programmes, yet it is scattered across various journals, books and other resources, many of which are not freely accessible. The designed database incorporates polyphasic typing information, with combinations of search options, to aid in the efficient screening of new isolates. This will help in the preliminary categorization of isolates into appropriate groups. It is a free relational database compatible with existing operating systems. A cross-platform technology with the XAMPP web server has been used to develop and manage the database and to serve user queries effectively. The use of PHP, a platform-independent scripting language embedded in HTML, and the database management software MySQL facilitates dynamic information storage and retrieval. The user-friendly, open and flexible freeware stack (PHP, MySQL and Apache) is expected to reduce running and maintenance costs. www.sis.biowaves.org.

  5. The 1980 land cover for the Puget Sound region

    NASA Technical Reports Server (NTRS)

    Shinn, R. D.; Westerlund, F. V.; Eby, J. R.

    1982-01-01

    Both LANDSAT imagery and the video information communications and retrieval software were used to develop a land cover classification of the Puget Sound region of Washington. Planning agencies within the region were provided with a highly accurate land cover map registered to the 1980 census tracts, which could subsequently be incorporated as one data layer in a multi-layer data base. Many historical activities related to previous land cover mapping studies conducted in the Puget Sound region are summarized. Valuable insight is provided into conducting a project with a large community of users and into establishing user confidence in a multi-purpose land cover map derived from LANDSAT.

  6. Spacelab data management subsystem phase B study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The Spacelab data management system is described. The data management subsystem (DMS) integrates the avionics equipment into an operational system by providing the computations, logic, signal flow, and interfaces needed to effectively command, control, monitor, and check out the experiment and subsystem hardware. Also, the DMS collects/retrieves experiment data and other information by recording and by command of the data relay link to ground. The major elements of the DMS are the computer subsystem, data acquisition and distribution subsystem, controls and display subsystem, onboard checkout subsystem, and software. The results of the DMS portion of the Spacelab Phase B Concept Definition Study are analyzed.

  7. Java and its future in biomedical computing.

    PubMed Central

    Rodgers, R P

    1996-01-01

    Java, a new object-oriented computing language related to C++, is receiving considerable attention due to its use in creating network-sharable, platform-independent software modules (known as "applets") that can be used with the World Wide Web. The Web has rapidly become the most commonly used information-retrieval tool associated with the global computer network known as the Internet, and Java has the potential to further accelerate the Web's application to medical problems. Java's potentially wide acceptance due to its Web association and its own technical merits also suggests that it may become a popular language for non-Web-based, object-oriented computing. PMID:8880677

  8. 4D reconstruction of the past: the image retrieval and 3D model construction pipeline

    NASA Astrophysics Data System (ADS)

    Hadjiprocopis, Andreas; Ioannides, Marinos; Wenzel, Konrad; Rothermel, Mathias; Johnsons, Paul S.; Fritsch, Dieter; Doulamis, Anastasios; Protopapadakis, Eftychios; Kyriakaki, Georgia; Makantasis, Kostas; Weinlinger, Guenther; Klein, Michael; Fellner, Dieter; Stork, Andre; Santos, Pedro

    2014-08-01

    One of the main characteristics of the Internet era we are living in, is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web3.0 standard. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from thousands of images floating around the web. This paper aims to provide an update of our progress in designing and implementing a pipeline for searching, filtering and retrieving photographs from Open Access Image Repositories and social media sites and using these images to build accurate 3D models of archaeological monuments as well as enriching multimedia of cultural / archaeological interest with metadata and harvesting the end products to EUROPEANA. We provide details of how our implemented software searches and retrieves images of archaeological sites from Flickr and Picasa repositories as well as strategies on how to filter the results, on two levels; a) based on their built-in metadata including geo-location information and b) based on image processing and clustering techniques. We also describe our implementation of a Structure from Motion pipeline designed for producing 3D models using the large collection of 2D input images (>1000) retrieved from Internet Repositories.

  9. PGMapper: a web-based tool linking phenotype to genes.

    PubMed

    Xiong, Qing; Qiu, Yuhui; Gu, Weikuan

    2008-04-01

    With the availability of whole genome sequences for many species, linkage analysis, positional cloning and microarrays are gradually becoming powerful tools for investigating the links between phenotype and genotype or genes. However, in these methods, the causative genes underlying a quantitative trait locus or a disease are usually located within a large genomic region or a large set of genes. Examining the function of every gene is very time consuming and requires retrieving and integrating information from multiple databases or genome resources. PGMapper is a software tool for automatically matching phenotype to genes from a defined genome region or a group of given genes by combining the mapping information from the Ensembl database with gene function information from the OMIM and PubMed databases. PGMapper is currently available for candidate gene search in human, mouse, rat, zebrafish and 12 other species. Available online at http://www.genediscovery.org/pgmapper/index.jsp.

  10. A rudimentary database for three-dimensional objects using structural representation

    NASA Technical Reports Server (NTRS)

    Sowers, James P.

    1987-01-01

    A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.

  11. Tracking-Data-Conversion Tool

    NASA Technical Reports Server (NTRS)

    Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for the exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services enabling one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.

  12. Compression and fast retrieval of SNP data.

    PubMed

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
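
    Idea (i) can be illustrated in a few lines: within a linkage disequilibrium block, most SNPs differ from a reference SNP at only a few subjects, so storing just the differing positions and values compresses well. The numpy sketch below is our illustration, not the authors' C++ implementation:

      import numpy as np

      block = np.array([                 # rows = SNPs, columns = subjects (0/1/2)
          [0, 1, 2, 0, 1, 2, 0, 0],      # reference SNP
          [0, 1, 2, 0, 1, 2, 0, 1],      # differs at one subject
          [0, 1, 2, 1, 1, 2, 0, 0],      # differs at one subject
      ])
      ref = block[0]

      # Encode each non-reference SNP as (positions, values) of its differences.
      encoded = [(np.flatnonzero(snp != ref), snp[snp != ref]) for snp in block[1:]]

      # Decode: copy the reference, then patch the stored positions.
      decoded = []
      for positions, values in encoded:
          row = ref.copy()
          row[positions] = values
          decoded.append(row)
      assert all((r == s).all() for r, s in zip(decoded, block[1:]))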

  13. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564

  14. Storage and retrieval of digital images in dermatology.

    PubMed

    Bittorf, A; Krejci-Papa, N C; Diepgen, T L

    1995-11-01

    Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome the inherent limitations of those storage media with respect to the number of images stored, display, and available search parameters, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. The database was designed using the Entity-Relationship approach and implemented on a PC-Windows platform using MS Access and MS Visual Basic. A Sparc 10 workstation running the CERN Hypertext Transfer Protocol Daemon (httpd) 3.0 pre 6 software served as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was thus constructed which allows for multi-parameter searches and world-wide remote access.

  15. Multimodal medical information retrieval with unsupervised rank fusion.

    PubMed

    Mourão, André; Martins, Flávio; Magalhães, João

    2015-01-01

    Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. Copyright © 2014 Elsevier Ltd. All rights reserved.
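
    The entry does not spell out the novel fusion algorithm, so the sketch below shows a representative unsupervised rank-fusion method, reciprocal rank fusion, merging one visual and one textual ranking (the document IDs are invented):

      from collections import defaultdict

      def reciprocal_rank_fusion(rankings, k=60):
          """rankings: ranked doc-id lists, e.g. one per modality."""
          scores = defaultdict(float)
          for ranking in rankings:
              for rank, doc in enumerate(ranking, start=1):
                  scores[doc] += 1.0 / (k + rank)
          return sorted(scores, key=scores.get, reverse=True)

      visual = ["case12", "case7", "case3"]
      textual = ["case7", "case9", "case12"]
      print(reciprocal_rank_fusion([visual, textual]))  # case7 first, then case12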

  16. A Semantic Medical Multimedia Retrieval Approach Using Ontology Information Hiding

    PubMed Central

    Guo, Kehua; Zhang, Shigeng

    2013-01-01

    Searching for useful information in unstructured medical multimedia data has been a difficult problem in information retrieval. This paper reports an effective semantic medical multimedia retrieval approach which can reflect users' query intent. First, semantic annotations are given to the multimedia documents in the medical multimedia database. Second, the ontology representing the semantic information is hidden in the header of the multimedia documents. The main innovations of this approach are cross-type retrieval support and semantic information preservation. Experimental results indicate a good precision and efficiency of our approach for medical multimedia retrieval in comparison with some traditional approaches. PMID:24082915

  17. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    DTIC Science & Technology

    2014-03-27

    [DTIC record; the abstract was garbled in extraction and only fragments are recoverable.] The report presents large-scale image retrieval based on hierarchical k-means clustering with MapReduce on Hadoop; results for ...million images running on 20 virtual machines are shown. Subject terms: Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop.
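
    As background to the title, here is a minimal single-machine sketch of hierarchical k-means (a vocabulary tree over image descriptors), assuming scikit-learn is available; the thesis itself distributes this computation with MapReduce on Hadoop.

    ```python
    # Single-machine sketch of hierarchical k-means (vocabulary tree) over
    # image feature vectors; the thesis runs this at scale via MapReduce.
    import numpy as np
    from sklearn.cluster import KMeans

    def build_tree(features, branching=4, depth=3):
        """Recursively cluster feature vectors into a tree of centroids."""
        if depth == 0 or len(features) < branching:
            return None
        km = KMeans(n_clusters=branching, n_init=10).fit(features)
        children = [build_tree(features[km.labels_ == i], branching, depth - 1)
                    for i in range(branching)]
        return {"centroids": km.cluster_centers_, "children": children}

    tree = build_tree(np.random.rand(1000, 128))  # e.g. 128-D descriptors
    ```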

  18. An Evaluation of On-Line Information Retrieval System Techniques.

    ERIC Educational Resources Information Center

    Wolfe, Theodore

    This report presents a review and evaluation of three remote access on-line information retrieval systems and some ideas on what the capabilities of an ideal on-line information retrieval system should be. The three systems reviewed are the DDC Remote On-Line Retrieval System, the National Aeronautics and Space Administration RECON System, and the…

  19. Requirements for SPIRES II. An External Specification for the Stanford Public Information Retrieval System.

    ERIC Educational Resources Information Center

    Parker, Edwin B.

    SPIRES (Stanford Public Information Retrieval System) is a computerized information storage and retrieval system intended for use by students and faculty members who have little knowledge of computers but who need rapid and sophisticated retrieval and analysis. The functions and capabilities of the system from the user's point of view are…

  20. Knowledge and utilization of computer-software for statistics among Nigerian dentists.

    PubMed

    Chukwuneke, F N; Anyanechi, C E; Obiakor, A O; Amobi, O; Onyejiaka, N; Alamba, I

    2013-01-01

    The use of computer software for statistical analysis has transformed health information and data, simplifying access, storage, retrieval and analysis in the field of research. This survey was therefore carried out to assess the level of knowledge and utilization of computer software for statistical analysis among dental researchers in eastern Nigeria. Questionnaires on the use of computer software for statistical analysis were randomly distributed to 65 practicing dental surgeons with more than 5 years of experience in the tertiary academic hospitals in eastern Nigeria. The focus was on: years of clinical experience; research work experience; and knowledge and application of computer software for data processing and statistical analysis. Sixty-two (62/65; 95.4%) of these questionnaires were returned anonymously and used in our data analysis. Twenty-nine (29/62; 46.8%) respondents had 5-10 years of clinical experience, none of whom had completed the specialist training programme. Thirty-three (33/62; 53.2%) practitioners had more than 10 years of clinical experience, of whom 15 (15/33; 45.5%) were specialists, representing 24.2% (15/62) of the total number of respondents. All 15 specialists were actively involved in research activities, but only five (5/15; 33.3%) could use statistical software unaided. This study has identified poor utilization of computer software for statistical analysis among dental researchers in eastern Nigeria, strongly associated with lack of exposure to such software early enough, especially during undergraduate training. This calls for the introduction of computer training programmes into the dental curriculum so that practitioners develop the habit of using computer software for their research.

  1. Graph Visualization for RDF Graphs with SPARQL-EndPoints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Bond, Nathaniel

    2014-07-11

    RDF graphs are hard to visualize as triples. This software module is a web interface that connects to a SPARQL endpoint and retrieves graph data that the user can explore interactively and seamlessly. The software, written in Python and JavaScript, has been tested to work on displays ranging from smartphone screens to large-scale displays such as EVEREST.
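
    As an illustration of the kind of endpoint access the module builds on, here is a minimal Python sketch that queries a public SPARQL endpoint with the SPARQLWrapper package; the endpoint and query are examples, not the module's own code.

    ```python
    # Minimal SPARQL-endpoint retrieval, assuming the SPARQLWrapper package;
    # DBpedia is used here only as a convenient public example endpoint.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
    endpoint.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    for row in results["results"]["bindings"]:
        print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
    ```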

  2. Objective and Item Banking Computer Software and Its Use in Comprehensive Achievement Monitoring.

    ERIC Educational Resources Information Center

    Schriber, Peter E.; Gorth, William P.

    The current emphasis on objectives and test item banks for constructing more effective tests is being augmented by increasingly sophisticated computer software. Items can be catalogued in numerous ways for retrieval. The items as well as instructional objectives can be stored and test forms can be selected and printed by the computer. It is also…

  3. The astronaut and the banana peel: An EVA retriever scenario

    NASA Technical Reports Server (NTRS)

    Shapiro, Daniel G.

    1989-01-01

    To prepare for the problem of accidents in Space Station activities, the Extravehicular Activity Retriever (EVAR) robot is being constructed, whose purpose is to retrieve astronauts and tools that float free of the Space Station. Advanced Decision Systems is at the beginning of a project to develop research software capable of guiding EVAR through the retrieval process. This involves addressing problems in machine vision, dexterous manipulation, real time construction of programs via speech input, and reactive execution of plans despite the mishaps and unexpected conditions that arise in uncontrolled domains. The problem analysis phase of this work is presented. An EVAR scenario is used to elucidate major domain and technical problems. An overview of the technical approach to prototyping an EVAR system is also presented.

  4. Feedback increases benefits but not costs of retrieval practice: Retrieval-induced forgetting is strength independent.

    PubMed

    Tempel, Tobias; Frings, Christian

    2018-04-01

    We examined how the provision of feedback affected two separate effects of retrieval practice: strengthening of practiced information and forgetting of related, unpracticed information. Feedback substantially increased recall of retrieval-practiced items. This unsurprising result shows once again that restudy opportunities boost the benefits of testing. In contrast, retrieval-induced forgetting was unaffected by the manipulation and occurred in equal size with or without feedback. These findings demonstrate strength independence of retrieval-induced forgetting and thus support a theoretical account assuming that an inhibitory mechanism causes retrieval-induced forgetting. According to this theory, inhibition resolves competition that arises during retrieval attempts but is unrelated to the consequences of retrieval practice concerning practiced items. The present results match these assumptions and contradict the theoretical alternative that blocking by strengthened information might explain retrieval-induced forgetting. We discuss our findings against the background of previous studies.

  5. Critical Design Decisions of The Planck LFI Level 1 Software

    NASA Astrophysics Data System (ADS)

    Morisset, N.; Rohlfs, R.; Türler, M.; Meharga, M.; Binko, P.; Beck, M.; Frailis, M.; Zacchei, A.

    2010-12-01

    The PLANCK satellite, with two on-board instruments, a Low Frequency Instrument (LFI) and a High Frequency Instrument (HFI), was launched on May 14th, 2009 on an Ariane 5. The ISDC Data Centre for Astrophysics in Versoix, Switzerland has developed and maintains the Planck LFI Level 1 software for the Data Processing Centre (DPC) in Trieste, Italy. The main tasks of the Level 1 processing are to retrieve the daily available scientific and housekeeping (HK) data of the LFI instrument, the Sorption Cooler and the 4K Cooler from the Mission Operation Centre (MOC) in Darmstadt; to sort them by time and by type (detector, observing mode, etc.); to extract the spacecraft attitude information from auxiliary files; to flag the data according to several criteria; and to archive the resulting Time Ordered Information (TOI), which will then be used to produce maps of the sky in different spectral bands. The output of the Level 1 software is a set of TOI files in FITS format, later ingested into the Data Management Component (DMC) database. This software has been used during different phases of the LFI instrument development. We started by reusing some ISDC components for the LFI Qualification Model (QM) and completely reworked the software for the Flight Model (FM). This was motivated by critical design decisions taken jointly with the DPC. The main questions were: (a) the choice of the data format: FITS or DMC? (b) the design of the pipelines: use of the Planck Process Coordinator (ProC) or a simple Perl script? (c) do we adapt the existing QM software or do we restart from scratch? The timeline and available manpower were also important issues to take into account. We present here the orientation of our choices and discuss their pertinence based on the experience of the final pre-launch tests and the start of real Planck LFI operations.
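
    To illustrate the TOI-in-FITS output format, here is a minimal Python sketch that writes a time-ordered binary table with astropy; the column names are hypothetical and this is not the ISDC Level 1 implementation.

    ```python
    # Hypothetical time-ordered-information (TOI) table written to FITS,
    # assuming the astropy package; not the ISDC Level 1 software itself.
    import numpy as np
    from astropy.io import fits

    time = np.linspace(0.0, 60.0, 1000)       # sample times [s]
    signal = np.random.normal(size=1000)      # detector samples
    flag = np.zeros(1000, dtype=np.int16)     # quality flags

    cols = fits.ColDefs([
        fits.Column(name="TIME", format="D", array=time),
        fits.Column(name="SIGNAL", format="E", array=signal),
        fits.Column(name="FLAG", format="I", array=flag),
    ])
    fits.BinTableHDU.from_columns(cols, name="TOI").writeto(
        "toi_example.fits", overwrite=True)
    ```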

  6. The identification of clinically important elements within medical journal abstracts: Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, Duration and Results (PECODR).

    PubMed

    Dawes, Martin; Pluye, Pierre; Shea, Laura; Grad, Roland; Greenberg, Arlene; Nie, Jian-Yun

    2007-01-01

    Information retrieval in primary care is becoming more difficult as the volume of medical information held in electronic databases expands. The lexical structure of this information might permit automatic indexing and improved retrieval. To determine the possibility of identifying the key elements of clinical studies, namely Patient-Population-Problem, Exposure-Intervention, Comparison, Outcome, Duration and Results (PECODR), from abstracts of medical journals. We used a convenience sample of 20 synopses from the journal Evidence-Based Medicine (EBM) and their matching original journal article abstracts obtained from PubMed. Three independent primary care professionals identified PECODR-related extracts of text. Rules were developed to define each PECODR element and the selection process of characters, words, phrases and sentences. From the extracts of text related to PECODR elements, potential lexical patterns that might help identify those elements were proposed and assessed using NVivo software. A total of 835 PECODR-related text extracts containing 41,263 individual text characters were identified from 20 EBM journal synopses. There were 759 extracts in the corresponding PubMed abstracts containing 31,947 characters. PECODR elements were found in nearly all abstracts and synopses with the exception of duration. There was agreement on 86.6% of the extracts from the 20 EBM synopses and 85.0% on the corresponding PubMed abstracts. After consensus this rose to 98.4% and 96.9% respectively. We found potential text patterns in the Comparison, Outcome and Results elements of both EBM synopses and PubMed abstracts. Some phrases and words are used frequently and are specific for these elements in both synopses and abstracts. Results suggest a PECODR-related structure exists in medical abstracts and that there might be lexical patterns specific to these elements. More sophisticated computer-assisted lexical-semantic analysis might refine these results, and pave the way to automating PECODR indexing, and improve information retrieval in primary care.
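
    The study's actual selection rules are not reproduced in the abstract; as a hedged illustration of what lexical patterns for the Results element might look like, consider a few regular expressions of the kind such an indexer could use (the patterns below are invented, not the study's rules).

    ```python
    # Invented lexical patterns for spotting Results-element sentences;
    # illustrative only, not the rules developed in the PECODR study.
    import re

    RESULTS_PATTERNS = [
        r"\b95%\s*(?:CI|confidence interval)\b",
        r"\bp\s*[<=>]\s*0?\.\d+",
        r"\b(?:relative risk|odds ratio|hazard ratio)\b",
    ]

    def looks_like_results(sentence):
        return any(re.search(p, sentence, re.IGNORECASE)
                   for p in RESULTS_PATTERNS)

    print(looks_like_results("The odds ratio was 1.8 (95% CI 1.2 to 2.7)."))
    ```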

  7. TCGA-assembler 2: software pipeline for retrieval and processing of TCGA/CPTAC data.

    PubMed

    Wei, Lin; Jin, Zhilin; Yang, Shengjie; Xu, Yanxun; Zhu, Yitan; Ji, Yuan

    2018-05-01

    The Cancer Genome Atlas (TCGA) program has produced huge amounts of cancer genomics data, providing unprecedented opportunities for research. In 2014, we developed TCGA-Assembler, a software pipeline for the retrieval and processing of public TCGA data. In 2016, TCGA data were transferred from the TCGA data portal to the Genomic Data Commons (GDC), which is supported by a different set of data storage and retrieval mechanisms. In addition, new proteomics data for TCGA samples have been generated by the Clinical Proteomic Tumor Analysis Consortium (CPTAC) program, which were not available for download through TCGA-Assembler. It is desirable to acquire and integrate data from both GDC and CPTAC. We developed TCGA-Assembler 2 (TA2) to automatically download and integrate data from GDC and CPTAC. We made substantial improvements to the functionality of TA2 to enhance user experience and software performance. TA2, together with its previous version, has helped more than 2000 researchers from 64 countries to access and utilize TCGA and CPTAC data in their research. The availability of TA2 will continue to allow existing and new users to conduct reproducible research based on TCGA and CPTAC data. http://www.compgenome.org/TCGA-Assembler/ or https://github.com/compgenome365/TCGA-Assembler-2. zhuyitan@gmail.com or koaeraser@gmail.com. Supplementary data are available at Bioinformatics online.
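
    For readers who want to see the underlying service TA2 automates, here is a minimal sketch of querying the public GDC API directly with Python's requests package; it only illustrates the data source and is not TA2 itself.

    ```python
    # Direct query of the public GDC files endpoint, assuming the requests
    # package; shown to illustrate the service TA2 wraps, not TA2 itself.
    import json
    import requests

    filters = {
        "op": "=",
        "content": {"field": "cases.project.project_id", "value": "TCGA-BRCA"},
    }
    params = {
        "filters": json.dumps(filters),
        "fields": "file_name,data_category",
        "format": "JSON",
        "size": "5",
    }
    resp = requests.get("https://api.gdc.cancer.gov/files", params=params)
    for hit in resp.json()["data"]["hits"]:
        print(hit["file_name"], hit["data_category"])
    ```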

  8. Assimilation of SMOS Retrieved Soil Moisture into the Land Information System

    NASA Technical Reports Server (NTRS)

    Blankenship, Clay B.; Case, Jonathan L.; Zavodsky, Bradley T.

    2014-01-01

    Soil moisture is a crucial variable for weather prediction because of its influence on evaporation and surface heat fluxes. It is also of critical importance for drought and flood monitoring and prediction and for public health applications such as monitoring vector-borne diseases. Land surface modeling benefits greatly from regular updates with soil moisture observations via data assimilation. Satellite remote sensing is the only practical observation type for this purpose in most areas due to its worldwide coverage. The newest operational satellite sensor for soil moisture is the Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument aboard the Soil Moisture and Ocean Salinity (SMOS) satellite. The NASA Short-term Prediction Research and Transition Center (SPoRT) has implemented the assimilation of SMOS soil moisture observations into the NASA Land Information System (LIS), an integrated modeling and data assimilation software platform. We present results from assimilating SMOS observations into the Noah 3.2 land surface model within LIS. The SMOS MIRAS is an L-band radiometer launched by the European Space Agency in 2009, from which we assimilate Level 2 retrievals [1] into LIS-Noah. The measurements are sensitive to soil moisture concentration in roughly the top 2.5 cm of soil. The retrievals have a target volumetric accuracy of 4% at a resolution of 35-50 km. Sensitivity is reduced where precipitation, snow cover, frozen soil, or dense vegetation is present. Due to the satellite's polar orbit, the instrument achieves global coverage twice daily at most mid- and low-latitude locations, with only small gaps between swaths.
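
    The abstract does not detail the assimilation mathematics; as a minimal textbook illustration of how an observation updates a model background, here is a scalar optimal-interpolation (Kalman-style) update, with error variances assumed for the example rather than taken from the SPoRT/LIS configuration.

    ```python
    # Scalar optimal-interpolation update for one soil moisture value;
    # the error variances are assumed for illustration only.

    def update(background, observation, var_b, var_o):
        """Blend background and observation; returns (analysis, gain)."""
        gain = var_b / (var_b + var_o)
        return background + gain * (observation - background), gain

    analysis, k = update(background=0.25, observation=0.20,
                         var_b=0.04**2, var_o=0.04**2)  # m^3/m^3 errors
    print(f"analysis={analysis:.3f}, gain={k:.2f}")     # equal errors: gain 0.5
    ```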

  9. The effects of retrieval ease on health issue judgments: implications for campaign strategies.

    PubMed

    Chang, Chingching

    2010-12-01

    This paper examines the effects of retrieving information about a health ailment on judgments of the perceived severity of the disease and self-efficacy regarding prevention and treatment. The literature on metacognition suggests that recall tasks render two types of information accessible: the retrieved content, and the subjective experience of retrieving the content. Both types of information can influence judgments. Content-based thinking models hold that the more instances of an event people can retrieve, the higher they will estimate the frequency of the event to be. In contrast, experience-based thinking models suggest that when people experience difficulty in retrieving information regarding an event, they rate the event as less likely to occur. In the first experiment, ease of retrieval was manipulated by asking participants to list either a high or low number of consequences of an ailment. As expected, retrieval difficulty resulted in lower perceived disease severity. In the second experiment, ease of retrieval was manipulated by varying the number of disease prevention or treatment measures participants attempted to list. As predicted, retrieval difficulty resulted in lower self-efficacy regarding prevention and treatment. In experiment three, when information regarding a health issue was made accessible by exposure to public service announcements (PSAs), ease-of-retrieval effects were attenuated. Finally, in experiment four, exposure to PSAs encouraged content-based judgments when the issue was of great concern.

  10. Designing an information search interface for younger and older adults.

    PubMed

    Pak, Richard; Price, Margaux M

    2008-08-01

    The present study examined Web-based information retrieval as a function of age for two information organization schemes: hierarchical organization and one organized around tags or keywords. Older adults' performance in information retrieval tasks has traditionally been lower compared with younger adults'. The current study examined the degree to which information organization moderated age-related performance differences on an information retrieval task. The theory of fluid and crystallized intelligence may provide insight into different kinds of information architectures that may reduce age-related differences in computer-based information retrieval performance. Fifty younger (18-23 years of age) and 50 older (55-76 years of age) participants browsed a Web site for answers to specific questions. Half of the participants browsed the hierarchically organized system (taxonomy), which maintained a one-to-one relationship between menu link and page, whereas the other half browsed the tag-based interface, with a many-to-one relationship between menu and page. This difference was expected to interact with age-related differences in fluid and crystallized intelligence. Age-related differences in information retrieval performance persisted; however, a tag-based retrieval interface reduced age-related differences, as compared with a taxonomical interface. Cognitive aging theory can lead to interface interventions that reduce age-related differences in performance with technology. In an information retrieval paradigm, older adults may be able to leverage their increased crystallized intelligence to offset fluid intelligence declines in a computer-based information search task. More research is necessary, but the results suggest that information retrieval interfaces organized around keywords may reduce age-related differences in performance.

  11. A Note about Information Science Research.

    ERIC Educational Resources Information Center

    Salton, Gerard

    1985-01-01

    Discusses the relationship between information science research and practice and briefly describes current research on 10 topics in information retrieval literature: vector processing retrieval strategy, probabilistic retrieval models, inverted file procedures, relevance feedback, Boolean query formulations, front-end procedures, citation…

  12. Environmental Information Management For Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Giriprakash, P.

    2011-01-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007 and released in early 2008. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.

  13. Data from selected U.S. Geological Survey National Stream Water Quality Monitoring Networks

    USGS Publications Warehouse

    Alexander, Richard B.; Slack, James R.; Ludtke, Amy S.; Fitzgerald, Kathleen K.; Schertz, Terry L.

    1998-01-01

    A nationally consistent and well-documented collection of water quality and quantity data compiled during the past 30 years for streams and rivers in the United States is now available on CD-ROM and accessible over the World Wide Web. The data include measurements from two U.S. Geological Survey (USGS) national networks for 122 physical, chemical, and biological properties of water collected at 680 monitoring stations from 1962 to 1995, quality assurance information that describes the sample collection agencies, laboratories, analytical methods, and estimates of laboratory measurement error (bias and variance), and information on selected cultural and natural characteristics of the station watersheds. The data are easily accessed via user-supplied software including Web browser, spreadsheet, and word processor, or may be queried and printed according to user-specified criteria using the supplied retrieval software on CD-ROM. The water quality data serve a variety of scientific uses including research and educational applications related to trend detection, flux estimation, investigations of the effects of the natural environment and cultural sources on water quality, and the development of statistical methods for designing efficient monitoring networks and interpreting water resources data.

  14. Developmental Differences in the Use of Retrieval Cues to Describe Episodic Information in Memory.

    ERIC Educational Resources Information Center

    Ackerman, Brian P.; Rathburn, Jill

    1984-01-01

    Examines reasons why second and fourth grade students use cues relatively ineffectively to retrieve episodic information. Four experiments tested the hypothesis that retrieval cue effectiveness varies with the extent to which cue information describes event information in memory. Results showed that problems of discriminability and…

  15. Infectious Cognition: Risk Perception Affects Socially Shared Retrieval-Induced Forgetting of Medical Information.

    PubMed

    Coman, Alin; Berry, Jessica N

    2015-12-01

    When speakers selectively retrieve previously learned information, listeners often concurrently, and covertly, retrieve their memories of that information. This concurrent retrieval typically enhances memory for mentioned information (the rehearsal effect) and impairs memory for unmentioned but related information (socially shared retrieval-induced forgetting, SSRIF), relative to memory for unmentioned and unrelated information. Building on research showing that anxiety leads to increased attention to threat-relevant information, we explored whether concurrent retrieval is facilitated in high-anxiety real-world contexts. Participants first learned category-exemplar facts about meningococcal disease. Following a manipulation of perceived risk of infection (low vs. high risk), they listened to a mock radio show in which some of the facts were selectively practiced. Final recall tests showed that the rehearsal effect was equivalent between the two risk conditions, but SSRIF was significantly larger in the high-risk than in the low-risk condition. Thus, the tendency to exaggerate consequences of news events was found to have deleterious consequences. © The Author(s) 2015.

  16. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than serving as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase), where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
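
    The nested-to-relational transformation at the heart of SORTEZ can be illustrated with a toy example; the record layout and table shapes below are hypothetical, not NCBI's actual ASN.1 schema.

    ```python
    # Toy flattening of a nested record into relational rows, the kind of
    # transformation SORTEZ automates; the schema here is invented.

    record = {
        "accession": "X12345",
        "title": "Example sequence entry",
        "citations": [
            {"pmid": 101, "year": 1993},
            {"pmid": 202, "year": 1994},
        ],
    }

    # Parent table: one row per top-level entity.
    sequence_row = (record["accession"], record["title"])

    # Child table: the nested list becomes one row per element, keyed back
    # to the parent by accession (a foreign key in SQL terms).
    citation_rows = [(record["accession"], c["pmid"], c["year"])
                     for c in record["citations"]]

    print(sequence_row)
    print(citation_rows)
    ```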

  17. Collaborative Information Retrieval Method among Personal Repositories

    NASA Astrophysics Data System (ADS)

    Kamei, Koji; Yukawa, Takashi; Yoshida, Sen; Kuwabara, Kazuhiro

    In this paper, we describe a collaborative information retrieval method among personal repositories and an implementation of the method on a personal agent framework. We propose a framework for personal agents that aims to enable the sharing and exchange of information resources that are distributed unevenly among individuals. The kernel of the personal agent framework is an RDF (Resource Description Framework)-based information repository for storing, retrieving and manipulating privately collected information, such as documents the user read and/or wrote, email he/she exchanged, web pages he/she browsed, etc. The repository also collects annotations to information resources that describe relationships among information resources, and records of interaction between the user and information resources. Since the information resources in a personal repository and their structure are personalized, information retrieval from other users' repositories is an important application of the personal agent. A vector space model with a personalized concept base is employed as the information retrieval mechanism in a personal repository. Since a personalized concept base is constructed from the information resources in a personal repository, it reflects its user's knowledge and interests. On the other hand, this leads to a problem when querying other users' personal repositories: simply transferring query requests does not produce desirable results. To solve this problem, we propose a query equalization scheme based on a relevance feedback method for collaborative information retrieval between personalized concept bases. In this paper, we describe an implementation of the collaborative information retrieval method and its user interface on the personal agent framework.
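
    The paper's query equalization scheme builds on relevance feedback; as a generic illustration of feedback-based query adjustment (not the authors' concept-base equalization), here is the classic Rocchio update.

    ```python
    # Classic Rocchio relevance feedback: shown as the generic technique
    # the query equalization scheme builds on, not the authors' method.
    import numpy as np

    def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """Move the query toward relevant and away from nonrelevant vectors."""
        q = alpha * np.asarray(query, dtype=float)
        if relevant:
            q += beta * np.mean(relevant, axis=0)
        if nonrelevant:
            q -= gamma * np.mean(nonrelevant, axis=0)
        return q

    q = rocchio([1, 0, 1], relevant=[[1, 1, 1]], nonrelevant=[[0, 1, 0]])
    print(q)  # adjusted query in the shared concept space
    ```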

  18. Improving biomedical information retrieval by linear combinations of different query expansion techniques.

    PubMed

    Abdulla, Ahmed AbdoAziz Ahmed; Lin, Hongfei; Xu, Bo; Banbhrani, Santosh Kumar

    2016-07-25

    Biomedical literature retrieval is becoming increasingly complex, and there is a fundamental need for advanced information retrieval systems. Information Retrieval (IR) programs scour unstructured materials such as text documents in large reserves of data that are usually stored on computers. IR is concerned with the representation, storage, and organization of information items, as well as with access to them. One of the main problems in IR is determining which documents are relevant to the user's needs and which are not. Under the current regime, users cannot construct queries precisely enough to retrieve particular pieces of data from large reserves, and basic information retrieval systems produce low-quality search results. In the system proposed in this paper, we present a new technique for refining information retrieval searches to better represent the user's information need, enhancing retrieval performance by applying different query expansion techniques and combining them linearly, where each combination merges the results of two expansion techniques at a time. Query expansions expand the search query, for example, by finding synonyms and reweighting original terms. They provide significantly more focused, particularized search results than do basic search queries. Retrieval performance is measured by variants of MAP (Mean Average Precision); according to our experimental results, the best combination of query expansions enhances the retrieved documents and outperforms our baseline by 21.06 %, and even outperforms a previous study by 7.12 %. We propose several query expansion techniques and their linear combinations to make user queries more cognizable to search engines and to produce higher-quality search results.
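
    The linear combination the abstract describes reduces to a weighted sum of the document scores produced by two expansion techniques; a minimal sketch, with made-up scores and weight, is shown below.

    ```python
    # Linear combination of two query expansion runs, two at a time, as the
    # abstract describes; scores and the weight lam are made up.

    def combine(scores_a, scores_b, lam=0.5):
        """score(d) = lam * score_a(d) + (1 - lam) * score_b(d)."""
        docs = set(scores_a) | set(scores_b)
        return {d: lam * scores_a.get(d, 0.0) + (1 - lam) * scores_b.get(d, 0.0)
                for d in docs}

    synonym_run = {"d1": 0.9, "d2": 0.4}    # expansion via synonyms
    feedback_run = {"d1": 0.5, "d3": 0.8}   # expansion via feedback terms
    fused = combine(synonym_run, feedback_run, lam=0.6)
    print(sorted(fused, key=fused.get, reverse=True))
    ```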

  19. A big data geospatial analytics platform - Physical Analytics Integrated Repository and Services (PAIRS)

    NASA Astrophysics Data System (ADS)

    Hamann, H.; Jimenez Marianno, F.; Klein, L.; Albrecht, C.; Freitag, M.; Hinds, N.; Lu, S.

    2015-12-01

    A major challenge in leveraging big geospatial data sets is the ability to quickly integrate multiple data sources into physical and statistical models and to run these models in real time. A geospatial data platform called Physical Analytics Integrated Repository and Services (PAIRS) is developed on top of an open-source hardware and software stack to manage terabytes of data. A new data interpolation and regridding scheme is implemented in which any geospatial data layer can be associated with a set of global grids whose resolution doubles for consecutive layers. Each pixel on the PAIRS grid has an index that combines location and time stamp. The indexing allows quick access to data sets that are part of the global data layers, retrieving only the data of interest. PAIRS takes advantage of a parallel processing framework (Hadoop) in a cloud environment to digest, curate, and analyze the data sets while remaining robust and stable. The data are stored in a distributed NoSQL database (HBase) across multiple servers; data upload and retrieval are parallelized, with the original analytics task broken up into smaller areas/volumes, analyzed independently, and then reassembled for the original geographical area. The differentiating aspect of PAIRS is its ability to accelerate model development across large geographical regions and spatial resolutions ranging from 0.1 m up to hundreds of kilometers. System performance is benchmarked on real-time automated data ingestion and retrieval of MODIS and Landsat data layers. The data layers are curated for sensor error, verified for correctness, and analyzed statistically to detect local anomalies. Multi-layer queries enable PAIRS to filter different data layers based on specific conditions (e.g., analyzing the flooding risk of a property based on topography, the soil's ability to hold water, and forecasted precipitation) or to retrieve information about locations that share similar weather and vegetation patterns during extreme weather events such as heat waves.
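
    The combined location/time pixel index the abstract mentions is commonly built by bit interleaving (a z-order curve); the sketch below is a hedged illustration of that generic technique, not PAIRS's actual index layout.

    ```python
    # Space-time key via bit interleaving (z-order curve); a generic
    # technique shown for illustration, not PAIRS's published layout.

    def interleave(x, y, t, bits=16):
        """Pack grid column x, row y and time step t into one integer key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((t >> i) & 1) << (3 * i + 2)
        return key

    print(hex(interleave(x=1023, y=511, t=42)))
    ```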

  20. 42 CFR 433.110 - Basis, purpose, and applicability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and Information Retrieval Systems § 433.110 Basis, purpose, and applicability. (a) This subpart... information retrieval systems and for the operation of certain systems. Additional HHS regulations and CMS... conditions on mechanized claims processing and information retrieval systems (including eligibility...

  1. 42 CFR 433.110 - Basis, purpose, and applicability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and Information Retrieval Systems § 433.110 Basis, purpose, and applicability. (a) This subpart... information retrieval systems and for the operation of certain systems. Additional HHS regulations and CMS... conditions on mechanized claims processing and information retrieval systems (including eligibility...

  2. 42 CFR 433.110 - Basis, purpose, and applicability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and Information Retrieval Systems § 433.110 Basis, purpose, and applicability. (a) This subpart... information retrieval systems and for the operation of certain systems. Additional HHS regulations and CMS... conditions on mechanized claims processing and information retrieval systems (including eligibility...

  3. 42 CFR 433.110 - Basis, purpose, and applicability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and Information Retrieval Systems § 433.110 Basis, purpose, and applicability. (a) This subpart... information retrieval systems and for the operation of certain systems. Additional HHS regulations and CMS... conditions on mechanized claims processing and information retrieval systems (including eligibility...

  4. A tutorial on information retrieval: basic terms and concepts

    PubMed Central

    Zhou, Wei; Smalheiser, Neil R; Yu, Clement

    2006-01-01

    This informal tutorial is intended for investigators and students who would like to understand the workings of information retrieval systems, including the most frequently used search engines: PubMed and Google. Having a basic knowledge of the terms and concepts of information retrieval should improve the efficiency and productivity of searches. As well, this knowledge is needed in order to follow current research efforts in biomedical information retrieval and text mining that are developing new systems not only for finding documents on a given topic, but extracting and integrating knowledge across documents. PMID:16722601

  5. Millimeter-wave Imaging Radiometer (MIR) data processing and development of water vapor retrieval algorithms

    NASA Technical Reports Server (NTRS)

    Chang, L. Aron

    1995-01-01

    This document describes progress on the Millimeter-wave Imaging Radiometer (MIR) data processing task and the development of water vapor retrieval algorithms during the second six-month performance period. Aircraft MIR data from two 1995 field experiments were collected and processed with revised data processing software. Two revised versions of the water vapor retrieval algorithm were developed: one for executing the retrieval on a supercomputer platform, and one using pressure as the vertical coordinate. Products from other sensors were incorporated into the water vapor retrieval system in two implementations: one from the Special Sensor Microwave Imager (SSM/I), the other from the High-resolution Interferometer Sounder (HIS). Water vapor retrievals were performed for both airborne MIR data and spaceborne SSM/T-2 data from the TOGA/COARE, CAMEX-1, and CAMEX-2 field experiments. The climatology of water vapor during TOGA/COARE was examined using SSM/T-2 soundings and conventional rawinsondes.

  6. Full extraction methods to retrieve effective refractive index and parameters of a bianisotropic metamaterial based on material dispersion models

    NASA Astrophysics Data System (ADS)

    Hsieh, Feng-Ju; Wang, Wei-Chih

    2012-09-01

    This paper discusses two improved methods for retrieving the effective refractive indices, impedances, and material properties, such as permittivity (ɛ) and permeability (μ), of metamaterials. The first method, modified from Kong's retrieval method, allows effective constitutive parameters to be solved over all frequencies, including the anti-resonant band where the imaginary parts of ɛ or μ are negative. The second method is based on genetic algorithms and the optimization of properly defined goal functions to retrieve the parameters of the Drude and Lorentz dispersion models. Equations for the effective refractive index and impedance at arbitrary reference planes are derived. Split-ring-resonator/rod-based metamaterials operating at terahertz frequencies are designed and investigated with the proposed methods. The retrieved material properties and parameters are used to regenerate S-parameters, which are compared with simulation results generated by the CST Microwave Studio software.
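
    For reference, the two dispersion models named in the abstract are commonly written as follows (assuming the e^{-iωt} sign convention; the parameters are the unknowns a genetic algorithm would fit):

    ```latex
    % Standard Drude and Lorentz dispersion models (e^{-i\omega t} convention)
    \begin{align}
      \varepsilon_{\mathrm{Drude}}(\omega) &=
        \varepsilon_\infty - \frac{\omega_p^2}{\omega^2 + i\gamma\omega},\\
      \varepsilon_{\mathrm{Lorentz}}(\omega) &=
        \varepsilon_\infty + \frac{\Delta\varepsilon\,\omega_0^2}
                                  {\omega_0^2 - \omega^2 - i\gamma\omega}
    \end{align}
    % \omega_p: plasma frequency, \omega_0: resonance frequency,
    % \gamma: damping rate, \Delta\varepsilon: oscillator strength.
    ```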

  7. CLAES Product Improvement by Use of the GSFC Data Assimilation System (DAS)

    NASA Technical Reports Server (NTRS)

    Kumer, J. B.; Douglass, Anne (Technical Monitor)

    2000-01-01

    This report presents the Cryogenic Limb Array Etalon Spectrometer (CLAES) product improvement by use of the GSFC Data Assimilation System (DAS). The first task is to plug line of sight gradients derived from the CTM for 2/20/92 into the forward model of our retrieval software (RSW) in order to assess the impact on the retrieved quantities. The reporting period covers 12 May 2000 - 21 December 2000.

  8. Software Application Profile: Opal and Mica: open-source software solutions for epidemiological data management, harmonization and dissemination

    PubMed Central

    Doiron, Dany; Marcon, Yannick; Fortier, Isabel; Burton, Paul; Ferretti, Vincent

    2017-01-01

    Motivation: Improving the dissemination of information on existing epidemiological studies and facilitating the interoperability of study databases are essential to maximizing the use of resources and accelerating improvements in health. To address this, Maelstrom Research proposes Opal and Mica, two inter-operable open-source software packages providing out-of-the-box solutions for epidemiological data management, harmonization and dissemination. Implementation: Opal and Mica are two standalone but inter-operable web applications written in Java, JavaScript and PHP. They provide web services and modern user interfaces to access them. General features: Opal allows users to import, manage, annotate and harmonize study data. Mica is used to build searchable web portals disseminating study and variable metadata. When used conjointly, Mica users can securely query and retrieve summary statistics on geographically dispersed Opal servers in real-time. Integration with the DataSHIELD approach allows conducting more complex federated analyses involving statistical models. Availability: Opal and Mica are open-source and freely available at [www.obiba.org] under a General Public License (GPL) version 3, and the metadata models and taxonomies that accompany them are available under a Creative Commons licence. PMID:29025122

  9. SkZpipe: A Python3 module to produce efficiently PSF-fitting photometry with DAOPHOT, and much more

    NASA Astrophysics Data System (ADS)

    Mauro, F.

    2017-07-01

    In an era characterized by big sky surveys and the availability of large amounts of photometric data, it is important for astronomers to have tools to process their data in an efficient, accurate and easy way, minimizing reduction time. We present SkZpipe, a Python3 module designed mainly to process generic data, performing point-spread function (PSF) fitting photometry with the DAOPHOT suite (Stetson 1987). The software has already demonstrated its accuracy and efficiency with the adaptation VVV-SkZ_pipeline (Mauro et al. 2013) for the "VISTA Variables in the Vía Láctea" ESO survey, showing how it can stand in for the user, avoiding repetitive interaction in all operations while retaining the power and accuracy of the DAOPHOT suite and relieving astronomers of the burden of data processing. This software provides not only a pipeline, but also all the tools needed to easily run each atomic step of the photometric procedure, to match the results, and to retrieve information from FITS headers and the internal instrumental database. We plan to add support for other photometric software in the future.

  10. A Logic Basis for Information Retrieval.

    ERIC Educational Resources Information Center

    Watters, C. R.; Shepherd, M. A.

    1987-01-01

    Discusses the potential of recent work in artificial intelligence, especially expert systems, for the development of more effective information retrieval systems. Highlights include the role of an expert bibliographic retrieval system and a prototype expert retrieval system, PROBIB-2, that uses MicroProlog to provide deductive reasoning…

  11. The semantic representation of event information depends on the cue modality: an instance of meaning-based retrieval.

    PubMed

    Karlsson, Kristina; Sikström, Sverker; Willander, Johan

    2013-01-01

    The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.

  13. Development of GNSS PWV information management system for very short-term weather forecast in the Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Park, Han-Earl; Yoon, Ha Su; Yoo, Sung-Moon; Cho, Jungho

    2017-04-01

    Over the past decade, the Global Navigation Satellite System (GNSS) has been in the spotlight as a meteorological research tool. The Korea Astronomy and Space Science Institute (KASI) developed a GNSS precipitable water vapor (PWV) information management system to apply PWV to practical applications, such as very short-term weather forecasting. The system consists of a DPR, DRS, and TEV, which are divided functionally. The DPR processes GNSS data using the Bernese GNSS software and then retrieves PWV from the zenith total delay (ZTD) with a mean temperature equation optimized for the Korean Peninsula. The DRS collects data from eighty permanent GNSS stations in the southern part of the Korean Peninsula and provides the PWV retrieved from GNSS data to users. The TEV provides redundancy for the DPR. The whole process is performed in near real time, with a delay of ten minutes. The validity of the GNSS PWV was demonstrated by comparison with radiosonde data. In a numerical weather prediction experiment, the GNSS PWV was used as the initial value of the Weather Research & Forecasting (WRF) model for a heavy rainfall event. As a result, we found that the forecasting capability of the WRF is improved by data assimilation of GNSS PWV.
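
    The ZTD-to-PWV step follows a standard conversion; here is a hedged Python sketch in the style of Bevis et al. (1992), using the generic Bevis mean-temperature relation rather than the KASI system's Korea-optimized equation, which is not given in the abstract.

    ```python
    # Standard ZTD -> PWV conversion (Bevis et al. 1992 constants); the
    # KASI system uses a Korea-optimized Tm equation not reproduced here.
    K2_PRIME = 22.1   # K / hPa
    K3 = 3.739e5      # K^2 / hPa
    RV = 461.5        # J / (kg K), gas constant of water vapor
    RHO_W = 1000.0    # kg / m^3, density of liquid water

    def pwv_mm(ztd_m, zhd_m, surface_temp_k):
        """Convert zenith total delay (m) to precipitable water vapor (mm)."""
        zwd = ztd_m - zhd_m                   # zenith wet delay [m]
        tm = 70.2 + 0.72 * surface_temp_k     # Bevis mean temperature [K]
        pi = 1e8 / (RHO_W * RV * (K3 / tm + K2_PRIME))  # ~0.15, unitless
        return pi * zwd * 1000.0              # wet delay [m] -> PWV [mm]

    print(round(pwv_mm(ztd_m=2.45, zhd_m=2.30, surface_temp_k=288.0), 1))
    ```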

  14. Evaluation of a mandatory quality assurance data capture in anesthesia: a secure electronic system to capture quality assurance information linked to an automated anesthesia record.

    PubMed

    Peterfreund, Robert A; Driscoll, William D; Walsh, John L; Subramanian, Aparna; Anupama, Shaji; Weaver, Melissa; Morris, Theresa; Arnholz, Sarah; Zheng, Hui; Pierce, Eric T; Spring, Stephen F

    2011-05-01

    Efforts to assure high-quality, safe, clinical care depend upon capturing information about near-miss and adverse outcome events. Inconsistent or unreliable information capture, especially for infrequent events, compromises attempts to analyze events in quantitative terms, understand their implications, and assess corrective efforts. To enhance reporting, we developed a secure, electronic, mandatory system for reporting quality assurance data linked to our electronic anesthesia record. We used the capabilities of our anesthesia information management system (AIMS) in conjunction with internally developed, secure, intranet-based, Web application software. The application is implemented with a backend allowing robust data storage, retrieval, data analysis, and reporting capabilities. We customized a feature within the AIMS software to create a hard stop in the documentation workflow before the end of anesthesia care time stamp for every case. The software forces the anesthesia provider to access the separate quality assurance data collection program, which provides a checklist for targeted clinical events and a free text option. After completing the event collection program, the software automatically returns the clinician to the AIMS to finalize the anesthesia record. The number of events captured by the departmental quality assurance office increased by 92% (95% confidence interval [CI] 60.4%-130%) after system implementation. The major contributor to this increase was the new electronic system. This increase has been sustained over the initial 12 full months after implementation. Under our reporting criteria, the overall rate of clinical events reported by any method was 471 events out of 55,382 cases or 0.85% (95% CI 0.78% to 0.93%). The new system collected 67% of these events (95% confidence interval 63%-71%). We demonstrate the implementation in an academic anesthesia department of a secure clinical event reporting system linked to an AIMS. The system enforces entry of quality assurance information (either no clinical event or notification of a clinical event). System implementation resulted in capturing nearly twice the number of events at a relatively steady case load. © 2011 International Anesthesia Research Society

  15. Information systems: the key to evidence-based health practice.

    PubMed Central

    Rodrigues, R. J.

    2000-01-01

    Increasing prominence is being given to the use of best current evidence in clinical practice and health services and programme management decision-making. The role of information in evidence-based practice (EBP) is discussed, together with questions of how advanced information systems and technology (IS&T) can contribute to the establishment of a broader perspective for EBP. The author examines the development, validation and use of a variety of sources of evidence and knowledge that go beyond the well-established paradigm of research, clinical trials, and systematic literature review. Opportunities and challenges in the implementation and use of IS&T and knowledge management tools are examined for six application areas: reference databases, contextual data, clinical data repositories, administrative data repositories, decision support software, and Internet-based interactive health information and communication. Computerized and telecommunications applications that support EBP follow a hierarchy in which systems, tasks and complexity range from reference retrieval and the processing of relatively routine transactions, to complex "data mining" and rule-driven decision support systems. PMID:11143195

  16. Exploiting salient semantic analysis for information retrieval

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui

    2016-11-01

    Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections; experimental results show that the proposed models consistently outperform existing Wikipedia-based retrieval methods.
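
    The combination step is a standard linear interpolation of the two language models; a minimal sketch with toy probabilities (not the paper's estimates) is shown below.

    ```python
    # Interpolating a bag-of-words language model with a concept-based one,
    # the general combination strategy described; probabilities are toys.

    def interpolate(p_bow, p_concept, lam=0.7):
        """P(w|d) = lam * P_bow(w|d) + (1 - lam) * P_concept(w|d)."""
        words = set(p_bow) | set(p_concept)
        return {w: lam * p_bow.get(w, 0.0) + (1 - lam) * p_concept.get(w, 0.0)
                for w in words}

    p_bow = {"imaging": 0.02, "radiology": 0.0}        # term statistics only
    p_concept = {"imaging": 0.015, "radiology": 0.01}  # concept layer adds mass
    print(interpolate(p_bow, p_concept))
    ```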

  17. A Survey of Stemming Algorithms in Information Retrieval

    ERIC Educational Resources Information Center

    Moral, Cristian; de Antonio, Angélica; Imbert, Ricardo; Ramírez, Jaime

    2014-01-01

    Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…
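
    As a quick illustration of what stemming does, here is the classic Porter algorithm applied via NLTK, one of the algorithms such surveys typically cover:

    ```python
    # Suffix-stripping with NLTK's Porter stemmer; all four variants of
    # "retrieve" collapse to the same stem, which is what aids retrieval.
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()
    for word in ["retrieval", "retrieving", "retrieved", "retrieves"]:
        print(word, "->", stemmer.stem(word))  # each prints 'retriev'
    ```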

  18. Disposal of Information Seeking and Retrieval Research: Replacement with a Radical Proposition

    ERIC Educational Resources Information Center

    Budd, John M.; Anstaett, Ashley

    2013-01-01

    Introduction: Research and theory on the topics of information seeking and retrieval have been plagued by some fundamental problems for several decades. Many of the difficulties spring from mechanistic and instrumental thinking and modelling. Method: Existing models of information retrieval and information seeking are examined for efficacy in a…

  19. Adult Age Differences in Accessing and Retrieving Information from Long-Term Memory.

    ERIC Educational Resources Information Center

    Petros, Thomas V.; And Others

    1983-01-01

    Investigated adult age differences in accessing and retrieving information from long-term memory. Results showed that older adults (N=26) were slower than younger adults (N=35) at feature extraction, lexical access, and accessing category information. The age deficit was proportionally greater when retrieval of category information was required.…

  20. Intelligent Information Retrieval: An Introduction.

    ERIC Educational Resources Information Center

    Gauch, Susan

    1992-01-01

    Discusses the application of artificial intelligence to online information retrieval systems and describes several systems: (1) CANSEARCH, from MEDLINE; (2) Intelligent Interface for Information Retrieval (I3R); (3) Gauch's Query Reformulation; (4) Environmental Pollution Expert (EP-X); (5) PLEXUS (gardening); and (6) SCISOR (corporate…

  1. Knowledge-Based Information Retrieval.

    ERIC Educational Resources Information Center

    Ford, Nigel

    1991-01-01

    Discussion of information retrieval focuses on theoretical and empirical advances in knowledge-based information retrieval. Topics discussed include the use of natural language for queries; the use of expert systems; intelligent tutoring systems; user modeling; the need for evaluation of system effectiveness; and examples of systems, including…

  2. PMD2HD--a web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection.

    PubMed

    Bohne-Lang, Andreas; Lang, Elke; Taube, Anke

    2005-06-27

    Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.

  3. Intelligent Information Retrieval: Diagnosing Information Need. Part I. The Theoretical Framework for Developing an Intelligent IR Tool.

    ERIC Educational Resources Information Center

    Cole, Charles

    1998-01-01

    Suggests that the principles underlying the procedure used by doctors to diagnose a patient's disease are useful in the design of intelligent information-retrieval systems because the task of the doctor is conceptually similar to the computer or human intermediary's task in information retrieval: to draw out the user's query/information need.…

  4. All-Union Conference on Information Retrieval Systems and Automatic Processing of Scientific and Technical Information, 3rd, Moscow, 1967, Transactions. (Selected Articles).

    ERIC Educational Resources Information Center

    Air Force Systems Command, Wright-Patterson AFB, OH. Foreign Technology Div.

    The role and place of the machine in scientific and technical information is explored including: basic trends in the development of information retrieval systems; preparation of engineering and scientific cadres with respect to mechanization and automation of information works; the logic of descriptor retrieval systems; the 'SETKA-3' automated…

  5. 32 CFR 310.10 - General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...

  6. 32 CFR 310.10 - General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...

  7. 32 CFR 310.10 - General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...

  8. 32 CFR 310.10 - General.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... DoD Component. (b) Retrieval practices. (1) Records in a group of records that MAY be retrieved by a... automated (Information Technology) system that is capable of being manipulated to retrieve information about... to this part, retrieval policies and practices shall be evaluated. If DoD Component policy is to...

  9. Screening_mgmt: a Python module for managing screening data.

    PubMed

    Helfenstein, Andreas; Tammela, Päivi

    2015-02-01

    High-throughput screening is an established technique in drug discovery and, as such, has also found its way into academia. High-throughput screening generates a considerable amount of data, which is why specific software is used for its analysis and management. The commercially available software packages are often beyond the financial limits of small-scale academic laboratories and, furthermore, lack the flexibility to fulfill certain user-specific requirements. We have developed a Python module, screening_mgmt, which is a lightweight tool for flexible data retrieval, analysis, and storage for different screening assays in one central database. The module reads custom-made analysis scripts and plotting instructions, and it offers a graphical user interface to import, modify, and display the data in a uniform manner. During the test phase, we used this module for the management of 10,000 data points of various origins. It has provided a practical, user-friendly tool for sharing and exchanging information between researchers. © 2014 Society for Laboratory Automation and Screening.
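
    The module itself is not reproduced here, but its core idea, one central database plus user-supplied analysis scripts, can be sketched as follows. The class and method names are hypothetical illustrations, not the actual screening_mgmt API.

    ```python
    # Hypothetical sketch of a lightweight screening-data store in the spirit of
    # screening_mgmt: one central SQLite database plus user-supplied analysis
    # hooks. Names (ScreeningStore, add, query) are illustrative, not the
    # module's real API.
    import sqlite3

    class ScreeningStore:
        def __init__(self, path="screening.db"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS results "
                "(assay TEXT, compound TEXT, raw REAL, normalized REAL)"
            )

        def add(self, assay, compound, raw, analysis):
            # 'analysis' is a user-supplied function, mirroring the custom-made
            # analysis scripts that screening_mgmt reads.
            self.db.execute(
                "INSERT INTO results VALUES (?, ?, ?, ?)",
                (assay, compound, raw, analysis(raw)),
            )
            self.db.commit()

        def query(self, assay):
            return self.db.execute(
                "SELECT compound, normalized FROM results WHERE assay = ?",
                (assay,),
            ).fetchall()

    store = ScreeningStore()
    store.add("kinase-A", "CPD-001", 1532.0, lambda raw: raw / 2000.0)
    print(store.query("kinase-A"))
    ```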

  10. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    USGS Publications Warehouse

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
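
    As an illustration of the kind of statistic SWToolbox computes, the 7Q10 can be approximated from a daily discharge series as the annual minimum of the 7-day moving average, evaluated at the 10-year recurrence level. The sketch below substitutes a simple empirical quantile for the log-Pearson Type III fit the real tools use, and generates synthetic data in place of an NWIS retrieval.

    ```python
    # Approximate 7Q10: annual minima of the 7-day moving average of daily
    # discharge, then the empirical 10-year (p = 0.1 nonexceedance) quantile.
    # SWToolbox fits a frequency distribution instead; this is a rough sketch.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    days = pd.date_range("1990-01-01", "2019-12-31", freq="D")
    flow = pd.Series(rng.lognormal(mean=3.0, sigma=0.5, size=len(days)),
                     index=days)  # synthetic stand-in for gaged discharge

    seven_day = flow.rolling(window=7).mean()                   # 7-day mean
    annual_min = seven_day.groupby(seven_day.index.year).min()  # minimum per year

    q7_10 = annual_min.quantile(0.1)  # exceeded in ~9 of 10 years
    print(f"Approximate 7Q10: {q7_10:.2f}")
    ```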

  11. Automatic software correction of residual aberrations in reconstructed HRTEM exit waves of crystalline samples

    DOE PAGES

    Ophus, Colin; Rasool, Haider I.; Linck, Martin; ...

    2016-11-30

    We develop an automatic and objective method to measure and correct residual aberrations in atomic-resolution HRTEM complex exit waves for crystalline samples aligned along a low-index zone axis. Our method uses the approximate rotational point symmetry of a column of atoms or single atom to iteratively calculate a best-fit numerical phase plate for this symmetry condition, and does not require information about the sample thickness or precise structure. We apply our method to two experimental focal series reconstructions, imaging a β-Si3N4 wedge with O and N doping, and a single-layer graphene grain boundary. We use peak and lattice fitting to evaluate the precision of the corrected exit waves. We also apply our method to the exit wave of a Si wedge retrieved by off-axis electron holography. In all cases, the software correction of the residual aberration function improves the accuracy of the measured exit waves.

  12. Indoor Navigation by People with Visual Impairment Using a Digital Sign System

    PubMed Central

    Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan

    2013-01-01

    There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156

  13. Integrating Query of Relational and Textual Data in Clinical Databases: A Case Study

    PubMed Central

    Fisk, John M.; Mutalik, Pradeep; Levin, Forrest W.; Erdos, Joseph; Taylor, Caroline; Nadkarni, Prakash

    2003-01-01

    Objectives: The authors designed and implemented a clinical data mart composed of an integrated information retrieval (IR) and relational database management system (RDBMS). Design: Using commodity software, which supports interactive, attribute-centric text and relational searches, the mart houses 2.8 million documents that span a five-year period and supports basic IR features such as Boolean searches, stemming, and proximity and fuzzy searching. Measurements: Results are relevance-ranked using either “total documents per patient” or “report type weighting.” Results: Non-curated medical text has a significant degree of malformation with respect to spelling and punctuation, which creates difficulties for text indexing and searching. Presently, the IR facilities of RDBMS packages lack the features necessary to handle such malformed text adequately. Conclusion: A robust IR+RDBMS system can be developed, but it requires integrating RDBMSs with third-party IR software. RDBMS vendors need to make their IR offerings more accessible to non-programmers. PMID:12509355
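
    A present-day analogue of the IR+RDBMS integration described above can be sketched with SQLite's FTS5 extension, which adds Boolean and proximity search to a relational store. The schema is illustrative, and FTS5 availability depends on how the local SQLite library was compiled.

    ```python
    # Sketch of attribute-centric text + relational search in one store, using
    # SQLite's FTS5 extension (availability depends on the SQLite build).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE reports (id INTEGER PRIMARY KEY, patient TEXT)")
    db.execute("CREATE VIRTUAL TABLE report_text USING fts5(body)")

    db.execute("INSERT INTO reports VALUES (1, 'patient-42')")
    db.execute("INSERT INTO report_text(rowid, body) VALUES (1, "
               "'chest x-ray shows no acute cardiopulmonary process')")

    # Proximity search on the text index, joined back to relational attributes.
    rows = db.execute(
        "SELECT r.patient FROM reports r "
        "JOIN report_text ON report_text.rowid = r.id "
        "WHERE report_text MATCH 'NEAR(chest process, 8)'"
    ).fetchall()
    print(rows)  # [('patient-42',)]
    ```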

  14. NASA Access Mechanism: Lessons learned document

    NASA Technical Reports Server (NTRS)

    Burdick, Lisa; Dunbar, Rick; Duncan, Denise; Generous, Curtis; Hunter, Judy; Lycas, John; Taber-Dudas, Ardeth

    1994-01-01

    The six-month beta test of the NASA Access Mechanism (NAM) prototype was completed on June 30, 1993. This report documents the lessons learned from the use of this Graphical User Interface to NASA databases such as the NASA STI Database, outside databases, Internet resources, and peers in the NASA R&D community. Design decisions, such as the use of XWindows software, a client-server distributed architecture, and use of the NASA Science Internet, are explained. Users' reactions to the interface and suggestions for design changes are reported, as are the changes made by the software developers based on new technology for information discovery and retrieval. The lessons learned section also reports reactions from the public, both at demonstrations and in response to articles in the trade press and journals. Recommendations are included for future versions, such as a World Wide Web (WWW) and Mosaic based interface to heterogeneous databases, and NAM-Lite, a version which allows customization to include utilities provided locally at NASA Centers.

  15. Automatic software correction of residual aberrations in reconstructed HRTEM exit waves of crystalline samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ophus, Colin; Rasool, Haider I.; Linck, Martin

    We develop an automatic and objective method to measure and correct residual aberrations in atomic-resolution HRTEM complex exit waves for crystalline samples aligned along a low-index zone axis. Our method uses the approximate rotational point symmetry of a column of atoms or single atom to iteratively calculate a best-fit numerical phase plate for this symmetry condition, and does not require information about the sample thickness or precise structure. We apply our method to two experimental focal series reconstructions, imaging a β-Si3N4 wedge with O and N doping, and a single-layer graphene grain boundary. We use peak and lattice fitting to evaluate the precision of the corrected exit waves. We also apply our method to the exit wave of a Si wedge retrieved by off-axis electron holography. In all cases, the software correction of the residual aberration function improves the accuracy of the measured exit waves.

  16. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.

    PubMed

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter

    2015-01-01

    Obtaining large-scale sequence alignments in a fast and flexible way is an important step in the analysis of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often not fast enough, limited to dedicated tasks, or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
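
    For reference, the scoring recurrence that PaSWAS parallelizes on GPGPUs is the standard Smith-Waterman dynamic program, sketched below for a single pair of sequences; the match, mismatch, and gap scores are common defaults, not PaSWAS's parameters.

    ```python
    # Single-pair Smith-Waterman local alignment score, the recurrence that
    # PaSWAS evaluates in parallel on graphics hardware. Scores are generic.
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
        # H[i][j] = best local alignment score ending at a[:i] and b[:j]
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                # Local alignment: scores never drop below zero.
                H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman_score("TGTTACGG", "GGTTGACTA"))  # best local score
    ```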

  17. Identification of misspelled words without a comprehensive dictionary using prevalence analysis.

    PubMed

    Turchin, Alexander; Chu, Julia T; Shubina, Maria; Einbinder, Jonathan S

    2007-10-11

    Misspellings are common in medical documents and can be an obstacle to information retrieval. We evaluated an algorithm to identify misspelled words through analysis of their prevalence in a representative body of text. We evaluated the algorithm's accuracy of identifying misspellings of 200 anti-hypertensive medication names on 2,000 potentially misspelled words randomly selected from narrative medical documents. Prevalence ratios (the frequency of the potentially misspelled word divided by the frequency of the non-misspelled word) in physician notes were computed by the software for each of the words. The software results were compared to the manual assessment by an independent reviewer. Area under the ROC curve for identification of misspelled words was 0.96. Sensitivity, specificity, and positive predictive value were 99.25%, 89.72% and 82.9% for the prevalence ratio threshold (0.32768) with the highest F-measure (0.903). Prevalence analysis can be used to identify and correct misspellings with high accuracy.
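
    The prevalence-ratio idea is simple enough to sketch directly. In the sketch below, the corpus and the pairing of candidates to drug names via edit distance are toy stand-ins for the paper's physician-note corpus; the threshold of 0.32768 is the cutoff the paper reports.

    ```python
    # Sketch of prevalence-ratio misspelling detection: flag a candidate as a
    # misspelling when freq(candidate)/freq(dictionary word) falls below a
    # threshold (0.32768 is the paper's best cutoff; the corpus here is a toy).
    from collections import Counter
    import difflib

    corpus = ("lisinopril lisinopril lisinopril lisinopril lysinopril "
              "atenolol atenolol atenolol").split()
    freq = Counter(corpus)
    known_drugs = ["lisinopril", "atenolol"]

    THRESHOLD = 0.32768

    for word, count in freq.items():
        if word in known_drugs:
            continue
        # Pair the candidate with its closest correctly spelled drug name.
        match = difflib.get_close_matches(word, known_drugs, n=1)
        if match:
            ratio = count / freq[match[0]]
            if ratio < THRESHOLD:
                print(f"{word!r} looks like a misspelling of {match[0]!r} "
                      f"(prevalence ratio {ratio:.3f})")
    ```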

  18. A software system for the simulation of chest lesions

    NASA Astrophysics Data System (ADS)

    Ryan, John T.; McEntee, Mark; Barrett, Saoirse; Evanoff, Michael; Manning, David; Brennan, Patrick

    2007-03-01

    We report on the development of a novel software tool for the simulation of chest lesions. This software tool was developed for use in our study to attain optimal ambient lighting conditions for chest radiology, which involved 61 consultant radiologists from the American Board of Radiology. Because of its success, we intend to use the same tool for future studies. The software has two main functions: the simulation of lesions and the retrieval of information for ROC (Receiver Operating Characteristic) and JAFROC (Jack-Knife Free Response ROC) analysis. The simulation layer operates by randomly selecting an image from a bank of reportedly normal chest x-rays. A random location is then generated for each lesion, which is checked against a reference lung-map. If the location is within the lung fields, as derived from the lung-map, a lesion is superimposed. Lesions are also randomly selected from a bank of manually created chest lesion images. A blending algorithm determines the intensity levels that allow the lesion to sit naturally within the chest x-ray. The same software was used to run a study for all 61 radiologists. A sequence of images is displayed in random order; half of these images had simulated lesions, ranging from subtle to obvious, and half were normal. The operator then selects locations where he/she thinks lesions exist and grades each lesion accordingly. We found this software to be very effective in this study and intend to use the same principles in future studies.
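
    The placement-and-blending step can be sketched as follows. The array sizes, the crude lung map, and the additive blend are illustrative stand-ins, not the study's actual blending algorithm.

    ```python
    # Sketch of the lesion-placement loop: draw random locations, accept only
    # those inside the lung map, then blend the lesion into the radiograph.
    # The additive blend is a stand-in for the study's intensity matching.
    import numpy as np

    rng = np.random.default_rng(7)
    chest = rng.uniform(0.2, 0.8, size=(256, 256))  # stand-in normal chest x-ray
    lung_map = np.zeros((256, 256), dtype=bool)
    lung_map[40:200, 30:110] = True                 # crude lung-field region

    # Gaussian blob as a stand-in for a manually created lesion image.
    lesion = np.exp(-((np.indices((15, 15)) - 7) ** 2).sum(axis=0) / 18.0)

    while True:
        y, x = rng.integers(0, 256 - 15, size=2)    # random candidate location
        if lung_map[y + 7, x + 7]:                  # centre must be in the lungs
            break

    patch = chest[y:y + 15, x:x + 15]
    # Scale lesion contrast to the local intensity so it "sits naturally".
    chest[y:y + 15, x:x + 15] = np.clip(
        patch + 0.15 * patch.mean() * lesion, 0.0, 1.0)
    print(f"Lesion superimposed at ({y}, {x})")
    ```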

  19. User guide to Exploration and Graphics for RivEr Trends (EGRET) and dataRetrieval: R packages for hydrologic data

    USGS Publications Warehouse

    Hirsch, Robert M.; De Cicco, Laura A.

    2015-01-01

    Evaluating long-term changes in river conditions (water quality and discharge) is an important use of hydrologic data. To carry out such evaluations, the hydrologist needs tools to facilitate several key steps in the process: acquiring the data records from a variety of sources, structuring it in ways that facilitate the analysis, processing the data with routines that extract information about changes that may be happening, and displaying findings with graphical techniques. A pair of tightly linked R packages, called dataRetrieval and EGRET (Exploration and Graphics for RivEr Trends), have been developed for carrying out each of these steps in an integrated manner. They are designed to easily accept data from three sources: U.S. Geological Survey hydrologic data, U.S. Environmental Protection Agency (EPA) STORET data, and user-supplied flat files. The dataRetrieval package not only serves as a “front end” to the EGRET package, it can also be used to easily download many types of hydrologic data and organize it in ways that facilitate many other hydrologic applications. The EGRET package has components oriented towards the description of long-term changes in streamflow statistics (high flow, average flow, and low flow) as well as changes in water quality. For the water-quality analysis, it uses Weighted Regressions on Time, Discharge and Season (WRTDS) to describe long-term trends in both concentration and flux. EGRET also creates a wide range of graphical presentations of the water-quality data and of the WRTDS results. This report serves as a user guide to these two R packages, providing detailed guidance on installation and use of the software, documentation of the analysis methods used, as well as guidance on some of the kinds of questions and approaches that the software can facilitate.
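
    The acquisition step that dataRetrieval wraps is an ordinary web-service call. A minimal Python equivalent against the USGS NWIS daily-values service is sketched below; the site number is an example gage, and parameter code 00060 denotes daily discharge in cubic feet per second.

    ```python
    # Minimal sketch of the retrieval step that the dataRetrieval R package
    # wraps: a call to the USGS NWIS daily-values web service.
    # Requires the third-party 'requests' package.
    import requests

    resp = requests.get(
        "https://waterservices.usgs.gov/nwis/dv/",
        params={
            "format": "json",
            "sites": "01491000",      # example gage; substitute your own
            "parameterCd": "00060",   # daily mean discharge
            "startDT": "2010-01-01",
            "endDT": "2010-12-31",
        },
        timeout=30,
    )
    resp.raise_for_status()
    series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
    print(series[0])  # {'value': ..., 'qualifiers': [...], 'dateTime': ...}
    ```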

  20. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.

    1988-01-01

    The ground-based demonstration of EVA Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out, (2) searches for and acquires the target, (3) plans and executes a rendezvous while continuously tracking the target, (4) avoids stationary and moving obstacles, (5) reaches for and grapples the target, (6) returns to transfer the object, and (7) returns to base.

  1. An intelligent, free-flying robot

    NASA Technical Reports Server (NTRS)

    Reuter, G. J.; Hess, C. W.; Rhoades, D. E.; Mcfadin, L. W.; Healey, K. J.; Erickson, J. D.; Phinney, Dale E.

    1989-01-01

    The ground based demonstration of the extensive extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.

  2. The electronic, 'paperless' medical office; has it arrived?

    PubMed

    Gates, P; Urquhart, J

    2007-02-01

    Modern information technology offers efficiencies in medical practice, with a reduction in secretarial time in maintaining, filing and retrieving the paper medical record. Electronic requesting of investigations allows tracking of outstanding results. Less storage space is required and telephone calls from pharmacies, pathology and medical imaging service providers to clarify the hand-written request are abolished. Voice recognition software reduces secretarial typing time per letter. These combined benefits can lead to significantly reduced costs and improved patient care. The paperless office is possible, but requires commitment and training of all staff; it is preferable but not absolutely essential that at least one member of the practice has an interest and some expertise in computers. More importantly, back-up from information technology providers and back-up of the electronic data are absolutely crucial and a paperless environment should not be considered without them.

  3. Using SIR (Scientific Information Retrieval System) for data management during a field program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tichler, J.L.

    As part of the US Department of Energy's program, PRocessing of Emissions by Clouds and Precipitation (PRECP), a team of scientists from four laboratories conducted a study in north central New York State to characterize the chemical and physical processes occurring in winter storms. Sampling took place from three aircraft, two instrumented motor homes, and a network of 26 surface precipitation sampling sites. Data management personnel were part of the field program, using a portable IBM PC-AT computer to enter information as it became available during the field study. Having the same database software on the field computer and on the cluster of VAX 11/785 computers in use aided database development and the transfer of data between machines. 2 refs., 3 figs., 5 tabs.

  4. Parallel interactive retrieval of item and associative information from event memory.

    PubMed

    Cox, Gregory E; Criss, Amy H

    2017-09-01

    Memory contains information about individual events (items) and combinations of events (associations). Despite the fundamental importance of this distinction, it remains unclear exactly how these two kinds of information are stored and whether different processes are used to retrieve them. We use both model-independent qualitative properties of response dynamics and quantitative modeling of individuals to address these issues. Item and associative information are not independent and they are retrieved concurrently via interacting processes. During retrieval, matching item and associative information mutually facilitate one another to yield an amplified holistic signal. Modeling of individuals suggests that this kind of facilitation between item and associative retrieval is a ubiquitous feature of human memory. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. The Asymmetrical Effects of Divided Attention on Encoding and Retrieval Processes: A Different View Based on an Interference with the Episodic Register

    PubMed Central

    Guez, Jonathan; Naveh-Benjamin, Moshe

    2013-01-01

    In this study, we evaluate the conceptualization of encoding and retrieval processes established in previous studies that used a divided attention (DA) paradigm. These studies indicated that there were considerable detrimental effects of DA at encoding on later memory performance, but only minimal effects, if any, on divided attention at retrieval. We suggest that this asymmetry in the effects of DA on memory can be due, at least partially, to a confound between the memory phase (encoding and retrieval) and the memory requirements of the task (memory “for” encoded information versus memory “at” test). To control for this confound, we tested memory for encoded information and for retrieved information by introducing a second test that assessed memory for the retrieved information from the first test. We report the results of four experiments that use measures of memory performance, retrieval latency, and performance on the concurrent task, all of which consistently show that DA at retrieval strongly disrupts later memory for the retrieved episode, similarly to the effects of DA at encoding. We suggest that these symmetrical disruptive effects of DA at encoding and retrieval on later retrieval reflect a disruption of an episodic buffer (EB) or episodic register component (ER), rather than a failure of encoding or retrieval operations per se. PMID:24040249

  6. The asymmetrical effects of divided attention on encoding and retrieval processes: a different view based on an interference with the episodic register.

    PubMed

    Guez, Jonathan; Naveh-Benjamin, Moshe

    2013-01-01

    In this study, we evaluate the conceptualization of encoding and retrieval processes established in previous studies that used a divided attention (DA) paradigm. These studies indicated that there were considerable detrimental effects of DA at encoding on later memory performance, but only minimal effects, if any, on divided attention at retrieval. We suggest that this asymmetry in the effects of DA on memory can be due, at least partially, to a confound between the memory phase (encoding and retrieval) and the memory requirements of the task (memory "for" encoded information versus memory "at" test). To control for this confound, we tested memory for encoded information and for retrieved information by introducing a second test that assessed memory for the retrieved information from the first test. We report the results of four experiments that use measures of memory performance, retrieval latency, and performance on the concurrent task, all of which consistently show that DA at retrieval strongly disrupts later memory for the retrieved episode, similarly to the effects of DA at encoding. We suggest that these symmetrical disruptive effects of DA at encoding and retrieval on later retrieval reflect a disruption of an episodic buffer (EB) or episodic register component (ER), rather than a failure of encoding or retrieval operations per se.

  7. Harnessing user generated multimedia content in the creation of collaborative classification structures and retrieval learning games

    NASA Astrophysics Data System (ADS)

    Borchert, Otto Jerome

    This paper describes the Classification, Identification, and Retrieval-based Collaborative Learning Environment (CIRCLE), a software tool to assist groups of people in the classification and identification of real-world objects. A thorough literature review identified current pedagogical theories that were synthesized into a series of five tasks: gathering, elaboration, classification, identification, and reinforcement through game play. This approach is detailed as part of an included peer-reviewed paper. Motivation is increased through formative and summative gamification: earning points for completing important portions of the tasks and playing retrieval-learning-based games, respectively; this aspect is also covered in an included peer-reviewed conference proceedings paper. Collaboration is integrated into the experience through specific tasks and communication media. Implementation focused on a REST-based client-server architecture. The client is a series of web-based interfaces to complete each of the tasks, support formal classroom interaction through faculty accounts and student tracking, and provide a module for peers to help each other. The server, developed using an in-house JavaMOO platform, stores relevant project data and serves it through a series of messages implemented as a JavaScript Object Notation Application Programming Interface (JSON API). Through a series of two beta tests and two experiments, it was discovered that the second task, elaboration, requires considerable support. While students were able to properly suggest experiments and make observations, the subtask of cleaning the data for use in CIRCLE required extra support. When supplied with more structured data, students were enthusiastic about the classification and identification tasks, showing marked improvement in usability scores and in open-ended survey responses. CIRCLE tracks a variety of educationally relevant variables, facilitating support for instructors and researchers. Future work will revolve around material development, software refinement, and theory building. Curricula, lesson plans, and instructional materials need to be created to seamlessly integrate CIRCLE into a variety of courses. Further refinement of the software will focus on improving the elaboration interface and developing further game templates to add to the motivation and retrieval-learning aspects of the software. Data gathered from CIRCLE experiments can be used to develop and strengthen theories on teaching and learning.
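
    The client-server exchange described above can be illustrated with a single JSON message. The endpoint and field names below are invented for illustration, since the abstract does not publish the JSON API schema.

    ```python
    # Hypothetical sketch of one CIRCLE-style exchange: the web client POSTs a
    # gathered observation as JSON to the JavaMOO-backed server. The endpoint
    # and field names are invented for illustration.
    import json
    import urllib.request

    observation = {
        "task": "gathering",
        "object_id": "leaf-042",
        "attributes": {"margin": "serrated", "venation": "pinnate"},
        "points": 10,  # formative gamification: points for finishing a subtask
    }

    req = urllib.request.Request(
        "http://localhost:8080/api/observations",   # hypothetical endpoint
        data=json.dumps(observation).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # with urllib.request.urlopen(req) as resp:    # requires a running server
    #     print(json.load(resp))
    print(req.full_url, req.data)
    ```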

  8. 42 CFR 433.138 - Identifying liable third parties.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Integration with the State mechanized claims processing and information retrieval system. Basic requirement—Development of an action plan. (1) If a State has a mechanized claims processing and information retrieval... processing and information retrieval system. (2) The action plan must describe the actions and methodologies...

  9. 42 CFR 433.138 - Identifying liable third parties.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Integration with the State mechanized claims processing and information retrieval system. Basic requirement—Development of an action plan. (1) If a State has a mechanized claims processing and information retrieval... processing and information retrieval system. (2) The action plan must describe the actions and methodologies...

  10. 42 CFR 433.110 - Basis, purpose, and applicability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... and Information Retrieval Systems § 433.110 Basis, purpose, and applicability. (a) This subpart... information retrieval systems and for the operation of certain systems. Additional HHS regulations and CMS... mechanized claims processing and information retrieval system or if the system fails to meet certain...

  11. Advanced Feedback Methods in Information Retrieval.

    ERIC Educational Resources Information Center

    Salton, G.; And Others

    1985-01-01

    In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…

  12. Retrieval monitoring is influenced by information value: the interplay between importance and confidence on false memory.

    PubMed

    McDonough, Ian M; Bui, Dung C; Friedman, Michael C; Castel, Alan D

    2015-10-01

    The perceived value of information can influence one's motivation to successfully remember that information. This study investigated how information value can affect memory search and evaluation processes (i.e., retrieval monitoring). In Experiment 1, participants studied unrelated words associated with low, medium, or high values. Subsequent memory tests required participants to selectively monitor retrieval for different values. False memory effects were smaller when searching memory for high-value than low-value words, suggesting that people more effectively monitored more important information. In Experiment 2, participants studied semantically-related words, and the need for retrieval monitoring was reduced at test by using inclusion instructions (i.e., endorsement of any word related to the studied words) compared with standard instructions. Inclusion instructions led to increases in false recognition for low-value, but not for high-value words, suggesting that under standard-instruction conditions retrieval monitoring was less likely to occur for important information. Experiment 3 showed that words retrieved with lower confidence were associated with more effective retrieval monitoring, suggesting that the quality of the retrieved memory influenced the degree and effectiveness of monitoring processes. Ironically, unless encouraged to do so, people were less likely to carefully monitor important information, even though people want to remember important memories most accurately. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Advanced laser stratospheric monitoring systems analyses

    NASA Technical Reports Server (NTRS)

    Larsen, J. C.

    1984-01-01

    This report describes the software support supplied by Systems and Applied Sciences Corporation for the study of Advanced Laser Stratospheric Monitoring Systems Analyses under contract No. NAS1-15806. This report discusses improvements to the Langley spectroscopic data base, development of LHS instrument control software, and data analysis and validation software. The effects of diurnal variations on the retrieved concentrations of NO, NO2, and ClO from a space- and balloon-borne measurement platform are discussed, along with the selection of optimum IF channels for sensing stratospheric species from space.

  14. Query-Time Optimization Techniques for Structured Queries in Information Retrieval

    ERIC Educational Resources Information Center

    Cartright, Marc-Allen

    2013-01-01

    The use of information retrieval (IR) systems is evolving towards larger, more complicated queries. Both the IR industrial and research communities have generated significant evidence indicating that in order to continue improving retrieval effectiveness, increases in retrieval model complexity may be unavoidable. From an operational perspective,…

  15. The Development of Online Information Retrieval Services in the People's Republic of China.

    ERIC Educational Resources Information Center

    Xiaocun, Lu

    1986-01-01

    Assesses the promotion and development of online information retrieval in China. Highlights include opening of the first online retrieval center at China Overseas Building Development Company Limited; establishment and activities of a cooperative network; online retrieval seminars; telecommunication lines and terminal installations; and problems…

  16. CHERNOLITTM. Chernobyl Bibliographic Search System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, F., Jr.; Kennedy, R.A.; Mahaffey, J.A.

    1992-03-02

    The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.

  17. Chernobyl Bibliographic Search System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, Jr, F.; Kennedy, R. A.; Mahaffey, J. A.

    1992-05-11

    The Chernobyl Bibliographic Search System (Chernolit TM) provides bibliographic data in a usable format for research studies relating to the Chernobyl nuclear accident that occurred in the former Ukrainian Republic of the USSR in 1986. Chernolit TM is a portable and easy to use product. The bibliographic data is provided under the control of a graphical user interface so that the user may quickly and easily retrieve pertinent information from the large database. The user may search the database for occurrences of words, names, or phrases; view bibliographic references on screen; and obtain reports of selected references. Reports may be viewed on the screen, printed, or accumulated in a folder that is written to a disk file when the user exits the software. Chernolit TM provides a cost-effective alternative to multiple, independent literature searches. Forty-five hundred references concerning the accident, including abstracts, are distributed with Chernolit TM. The data contained in the database were obtained from electronic literature searches and from requested donations from individuals and organizations. These literature searches interrogated the Energy Science and Technology database (formerly DOE ENERGY) of the DIALOG Information Retrieval Service. Energy Science and Technology, provided by the U.S. DOE, Washington, D.C., is a multi-disciplinary database containing references to the world's scientific and technical literature on energy. All unclassified information processed at the Office of Scientific and Technical Information (OSTI) of the U.S. DOE is included in the database. In addition, information on many documents has been manually added to Chernolit TM. Most of this information was obtained in response to requests for data sent to people and/or organizations throughout the world.

  18. Query Expansion for Noisy Legal Documents

    DTIC Science & Technology

    2008-11-01

    The indexed text for this report consists of reference-list fragments rather than an abstract; the recoverable citations are: G. Salton (ed.), The SMART Retrieval System: Experiments in Automatic Document Processing, 1971; H. Schutze and J. Pedersen, on term co-occurrence; The Lemur Project, Language Modeling and Information Retrieval, http://www.lemurproject.org; J. Baron, D. Lewis, and D. Oard, TREC 2006 legal track overview; J. Rocchio, Relevance feedback in information retrieval, in The SMART Retrieval System, 1971.

  19. When your face describes your memories: facial expressions during retrieval of autobiographical memories.

    PubMed

    El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis

    2018-05-11

    Thanks to the current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding emotional facial expressions observed during the retrieval of autobiographical memories. This review describes the research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). Also, this study demonstrates the variations of facial expressions according to specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiologic characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review paper demonstrates how the evaluation of facial expressions during autobiographical retrieval may help understand the functioning and dysfunctioning of autobiographical memory.

  20. An automatic method for retrieving and indexing catalogues of biomedical courses.

    PubMed

    Maojo, Victor; de la Calle, Guillermo; García-Remesal, Miguel; Bankauskaite, Vaida; Crespo, Jose

    2008-11-06

    Although information about Biomedical Informatics education and courses is widely available on different websites, it is usually not exhaustive and is difficult to keep up to date. We propose a new methodology based on information retrieval techniques for automatically extracting, indexing, and retrieving information about educational offers. A web application has been developed to make such information available in an inventory of courses and educational offers.

  1. 36 CFR 1194.31 - Functional performance criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...

  2. 42 CFR 433.138 - Identifying liable third parties.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... processing and information retrieval system. Basic requirement—Development of an action plan. (1) If a State has a mechanized claims processing and information retrieval system approved by CMS under subpart C of... plan must be integrated with the mechanized claims processing and information retrieval system. (2) The...

  3. 36 CFR 1194.31 - Functional performance criteria.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...

  4. 36 CFR 1194.31 - Functional performance criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... information retrieval that does not require user vision shall be provided, or support for assistive technology... and information retrieval that does not require visual acuity greater than 20/70 shall be provided in... information retrieval that does not require user hearing shall be provided, or support for assistive...

  5. User-Centric Multi-Criteria Information Retrieval

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Zhang, Yi

    2009-01-01

    Information retrieval models usually represent content only, and not other considerations such as authority, cost, and recency. How could multiple criteria be utilized in information retrieval, and how would it affect the results? In our experiments, using multiple user-centric criteria always produced better results than using a single criterion.
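
    A weighted combination is the simplest way to realize multi-criteria retrieval. The criteria and weights below are illustrative, not those used in the paper's experiments.

    ```python
    # Sketch of user-centric multi-criteria ranking: combine a content-
    # relevance score with authority and recency. Weights are illustrative.
    from datetime import date

    def score(doc, weights=(0.6, 0.2, 0.2)):
        w_content, w_authority, w_recency = weights
        age_years = (date.today() - doc["published"]).days / 365.25
        recency = 1.0 / (1.0 + age_years)  # newer documents score higher
        return (w_content * doc["content_sim"]
                + w_authority * doc["authority"]
                + w_recency * recency)

    docs = [
        {"id": "a", "content_sim": 0.9, "authority": 0.2,
         "published": date(2001, 5, 1)},
        {"id": "b", "content_sim": 0.7, "authority": 0.9,
         "published": date(2008, 5, 1)},
    ]
    for d in sorted(docs, key=score, reverse=True):
        print(d["id"], round(score(d), 3))
    ```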

  6. 42 CFR 433.138 - Identifying liable third parties.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... processing and information retrieval system. Basic requirement—Development of an action plan. (1) If a State has a mechanized claims processing and information retrieval system approved by CMS under subpart C of... plan must be integrated with the mechanized claims processing and information retrieval system. (2) The...

  7. 42 CFR 433.138 - Identifying liable third parties.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... processing and information retrieval system. Basic requirement—Development of an action plan. (1) If a State has a mechanized claims processing and information retrieval system approved by CMS under subpart C of... plan must be integrated with the mechanized claims processing and information retrieval system. (2) The...

  8. Information Retrieval (SPIRES) and Library Automation (BALLOTS) at Stanford University.

    ERIC Educational Resources Information Center

    Ferguson, Douglas, Ed.

    At Stanford University, two major projects have been involved jointly in library automation and information retrieval since 1968: BALLOTS (Bibliographic Automation of Large Library Operations) and SPIRES (Stanford Physics Information Retrieval System). In early 1969, two prototype applications were activated using the jointly developed systems…

  9. An Expressive and Efficient Language for XML Information Retrieval.

    ERIC Educational Resources Information Center

    Chinenyanga, Taurai Tapiwa; Kushmerick, Nicholas

    2002-01-01

    Discusses XML and information retrieval and describes a query language, ELIXIR (expressive and efficient language for XML information retrieval), with a textual similarity operator that can be used for similarity joins. Explains the algorithm for answering ELIXIR queries to generate intermediate relational data. (Author/LRW)

  10. Scaling Up High-Value Retrieval to Medium-Volume Data

    NASA Astrophysics Data System (ADS)

    Cunningham, Hamish; Hanbury, Allan; Rüger, Stefan

    We summarise the scientific work presented at the first Information Retrieval Facility Conference [3] and argue that high-value retrieval with medium-volume data, exemplified by patent search, is a thriving topic in a multidisciplinary area that sits between Information Retrieval, Natural Language Processing and Semantic Web Technologies. We analyse the parameters that condition choices of retrieval technology for different sizes and values of document space, and we present the patent document space and some of its characteristics for retrieval work.

  11. Northwestern University Schizophrenia Data and Software Tool (NUSDAST)

    PubMed Central

    Wang, Lei; Kogan, Alex; Cobia, Derin; Alpert, Kathryn; Kolasny, Anthony; Miller, Michael I.; Marcus, Daniel

    2013-01-01

    The schizophrenia research community has invested substantial resources in collecting, managing and sharing large neuroimaging datasets. As part of this effort, our group has collected high resolution magnetic resonance (MR) datasets from individuals with schizophrenia, their non-psychotic siblings, healthy controls and their siblings. This effort has resulted in a growing resource, the Northwestern University Schizophrenia Data and Software Tool (NUSDAST), an NIH-funded data sharing project to stimulate new research. This resource resides on XNAT Central, and it contains neuroimaging (MR scans, landmarks and surface maps for deep subcortical structures, and FreeSurfer cortical parcellation and measurement data), cognitive (cognitive domain scores for crystallized intelligence, working memory, episodic memory, and executive function), clinical (demographic, sibling relationship, SAPS and SANS psychopathology), and genetic (20 polymorphisms) data, collected from more than 450 subjects, most with 2-year longitudinal follow-up. A neuroimaging mapping, analysis and visualization software tool, CAWorks, is also part of this resource. Moreover, in making our existing neuroimaging data along with the associated meta-data and computational tools publicly accessible, we have established a web-based information retrieval portal that allows the user to efficiently search the collection. This research-ready dataset meaningfully combines neuroimaging data with other relevant information, and it can help advance neuroimaging research. It is our hope that this effort will help to overcome some of the commonly recognized technical barriers in advancing neuroimaging research, such as lack of local organization and standard descriptions. PMID:24223551

  12. Northwestern University Schizophrenia Data and Software Tool (NUSDAST).

    PubMed

    Wang, Lei; Kogan, Alex; Cobia, Derin; Alpert, Kathryn; Kolasny, Anthony; Miller, Michael I; Marcus, Daniel

    2013-01-01

    The schizophrenia research community has invested substantial resources in collecting, managing and sharing large neuroimaging datasets. As part of this effort, our group has collected high resolution magnetic resonance (MR) datasets from individuals with schizophrenia, their non-psychotic siblings, healthy controls and their siblings. This effort has resulted in a growing resource, the Northwestern University Schizophrenia Data and Software Tool (NUSDAST), an NIH-funded data sharing project to stimulate new research. This resource resides on XNAT Central, and it contains neuroimaging (MR scans, landmarks and surface maps for deep subcortical structures, and FreeSurfer cortical parcellation and measurement data), cognitive (cognitive domain scores for crystallized intelligence, working memory, episodic memory, and executive function), clinical (demographic, sibling relationship, SAPS and SANS psychopathology), and genetic (20 polymorphisms) data, collected from more than 450 subjects, most with 2-year longitudinal follow-up. A neuroimaging mapping, analysis and visualization software tool, CAWorks, is also part of this resource. Moreover, in making our existing neuroimaging data along with the associated meta-data and computational tools publicly accessible, we have established a web-based information retrieval portal that allows the user to efficiently search the collection. This research-ready dataset meaningfully combines neuroimaging data with other relevant information, and it can help advance neuroimaging research. It is our hope that this effort will help to overcome some of the commonly recognized technical barriers in advancing neuroimaging research, such as lack of local organization and standard descriptions.

  13. Near-Earth Object Survey Simulation Software

    NASA Astrophysics Data System (ADS)

    Naidu, Shantanu P.; Chesley, Steven R.; Farnocchia, Davide

    2017-10-01

    There is a significant interest in Near-Earth objects (NEOs) because they pose an impact threat to Earth, offer valuable scientific information, and are potential targets for robotic and human exploration. The number of NEO discoveries has been rising rapidly over the last two decades with over 1800 being discovered last year, making the total number of known NEOs >16000. Pan-STARRS and the Catalina Sky Survey are currently the most prolific NEO surveys, having discovered >1600 NEOs between them in 2016. As next generation surveys such as Large Synoptic Survey Telescope (LSST) and the proposed Near-Earth Object Camera (NEOCam) become operational in the next decade, the discovery rate is expected to increase tremendously. Coordination between various survey telescopes will be necessary in order to optimize NEO discoveries and create a unified global NEO discovery network. We are collaborating on a community-based, open-source software project to simulate asteroid surveys to facilitate such coordination and develop strategies for improving discovery efficiency. Our effort so far has focused on development of a fast and efficient tool capable of accepting user-defined asteroid population models and telescope parameters such as a list of pointing angles and camera field-of-view, and generating an output list of detectable asteroids. The software takes advantage of the widely used and tested SPICE library and architecture developed by NASA’s Navigation and Ancillary Information Facility (Acton, 1996) for saving and retrieving asteroid trajectories and camera pointing. Orbit propagation is done using OpenOrb (Granvik et al. 2009) but future versions will allow the user to plug in a propagator of their choice. The software allows the simulation of both ground-based and space-based surveys. Performance is being tested using the Grav et al. (2011) asteroid population model and the LSST simulated survey “enigma_1189”.
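
    The detectability test at the heart of such a simulator reduces to simple geometry: an object is detected when it falls within a pointing's field of view and is brighter than the camera's limiting magnitude. The sketch below is a schematic stand-in for the SPICE/OpenOrb pipeline, with all numbers invented.

    ```python
    # Schematic stand-in for the survey simulator's detectability test: an
    # object is detected if it lies within the camera's field of view of the
    # boresight and is brighter than the limiting magnitude. Numbers invented.
    import numpy as np

    def angular_separation(ra1, dec1, ra2, dec2):
        """Great-circle separation in radians between two sky positions."""
        return np.arccos(np.clip(
            np.sin(dec1) * np.sin(dec2)
            + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1.0, 1.0))

    fov_radius = np.radians(1.75)   # e.g. a 3.5-degree-wide field
    limiting_mag = 24.5             # LSST-like single-visit depth
    boresight = (np.radians(150.0), np.radians(-20.0))

    # (ra, dec, apparent magnitude) for three hypothetical asteroids at one
    # epoch; a real simulator would propagate orbits (OpenOrb) and read
    # pointings and trajectories via SPICE kernels.
    asteroids = np.array([(np.radians(150.5), np.radians(-19.8), 22.1),
                          (np.radians(152.0), np.radians(-22.5), 21.0),
                          (np.radians(150.2), np.radians(-20.1), 25.3)])

    for ra, dec, mag in asteroids:
        sep = angular_separation(ra, dec, *boresight)
        detected = sep < fov_radius and mag < limiting_mag
        print(f"sep={np.degrees(sep):.2f} deg, mag={mag}: detected={detected}")
    ```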

  14. Starbase Data Tables: An ASCII Relational Database for Unix

    NASA Astrophysics Data System (ADS)

    Roll, John

    2011-11-01

    Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
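
    The flavor of command-line relational operators over ASCII tables can be conveyed in a few lines. The operator names below are generic illustrations, not Starbase's actual command set.

    ```python
    # Sketch of relational operators over ASCII tab-separated tables, in the
    # spirit of Starbase; 'select' here is a generic illustration, not the
    # actual Starbase command set.
    import csv, io

    stars = "name\tmag\nVega\t0.03\nDeneb\t1.25\nAltair\t0.77\n"

    def read_table(text):
        return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

    def select(rows, predicate):
        return [r for r in rows if predicate(r)]

    bright = select(read_table(stars), lambda r: float(r["mag"]) < 1.0)
    print([r["name"] for r in bright])  # ['Vega', 'Altair']
    ```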

  15. Strain energy release rate as a function of temperature and preloading history utilizing the edge delamination fatigue test method

    NASA Technical Reports Server (NTRS)

    Zimmerman, Richard S.; Adams, Donald F.

    1989-01-01

    Static laminate and tension-tension fatigue tests of IM7/8551-7 composite materials were performed. The Edge Delamination Test (EDT) was utilized to evaluate the effect of temperature and preloading history on the critical strain energy release rate. Static and fatigue testing was performed at room temperature and 180 F (82 C). Three preloading schemes were used to precondition fatigue test specimens prior to performing the normal tension-tension fatigue EDT testing. Computer software was written to perform all fatigue testing while monitoring the dynamic modulus to detect the onset of delamination and to record the test information for later retrieval and reduction.

  16. A microprocessor-based control system for the Vienna PDS microdensitometer

    NASA Technical Reports Server (NTRS)

    Jenkner, H.; Stoll, M.; Hron, J.

    1984-01-01

    The Motorola Exorset 30 system, based on a Motorola 6809 microprocessor which serves as the control processor for the microdensitometer, is presented. User communication and instrument control are implemented in this system; data transmission to a host computer is provided via standard interfaces. The Vienna PDS system (VIPS) software was developed in BASIC and M6809 assembler. It provides efficient user interaction via function keys and argument input in a menu-oriented environment. All parameters can be stored on, and retrieved from, minifloppy disks, making it possible to set up large scanning tasks. Extensive user information includes continuously updated status and coordinate displays, as well as a real-time graphic display during scanning.

  17. Help: first aid issues.

    PubMed

    Granitoff, N; Whitaker, I Y; Diccini, S; Goncalves, V C; Marin, H F

    1995-01-01

    First aid is the initial and immediate care given to a victim outside the hospital environment, with the purpose of preserving life and avoiding worsening of the condition until he/she receives qualified assistance. Providing immediate aid requires calm and, above all, knowledge of what should and should not be done in each situation. The chances that a victim will receive early treatment from bystanders, before being treated by health professionals, are large. In Brazil, however, the number of people prepared to act as first aid helpers in situations of life-threatening accidents or sudden illness is still very small; access to this information, and the possibility of reviewing it whenever necessary, together with exercises on simulated cases, may contribute greatly to its assimilation. Informatics has been shown to be an extremely useful tool in the development of educational software, considering its multiplicity of resources: it motivates an interactive experience, provides individualized teaching that respects the user's own rhythm and desired level of complexity, and develops the user's capacity for solving problems through simulated situations. Given the ever-increasing use of the computer as a means of spreading information in schools, enterprises, and even households, and the advantages of educational software regarding storage and retrieval of information when needed, we proposed the creation of an interactive teaching software package. This software is being developed using Storyboard Live. The methodology is the following: literature review, selection of images, development of the program, and application tests. The initial selected issues are: assessment of the victim, cardiorespiratory arrest and resuscitation, airway obstruction, wounds, and hemorrhages. After using the program, the user should be able to resolve hypothetical situations with minimal initial care and maximal physical comfort for a victim of accident or sudden illness.

  18. PeptidePicker: a scientific workflow with web interface for selecting appropriate peptides for targeted proteomics experiments.

    PubMed

    Mohammed, Yassene; Domański, Dominik; Jackson, Angela M; Smith, Derek S; Deelder, André M; Palmblad, Magnus; Borchers, Christoph H

    2014-06-25

    One challenge in Multiple Reaction Monitoring (MRM)-based proteomics is to select the most appropriate surrogate peptides to represent a target protein. We present here a software package to automatically generate these most appropriate surrogate peptides for an LC/MRM-MS analysis. Our method integrates information about the proteins, their tryptic peptides, and the suitability of these peptides for MRM which is available online in UniProtKB, NCBI's dbSNP, ExPASy, PeptideAtlas, PRIDE, and GPMDB. The scoring algorithm reflects our knowledge in choosing the best candidate peptides for MRM, based on the uniqueness of the peptide in the targeted proteome, its physicochemical properties, and whether it has previously been observed. The modularity of the workflow allows further extension and additional selection criteria to be incorporated. We have developed a simple Web interface where the researcher provides the protein accession number, the subject organism, and peptide-specific options. Currently, the software is designed for human and mouse proteomes, but additional species can easily be added. Our software improved the peptide selection by eliminating human error, considering multiple data sources and all of the isoforms of the protein, and resulted in faster peptide selection - approximately 50 proteins per hour compared to 8 per day. Compiling a list of optimal surrogate peptides for target proteins to be analyzed by LC/MRM-MS has been a cumbersome process, in which expert researchers retrieved information from different online repositories and used their own reasoning to find the most appropriate peptides. Our scientific workflow automates this process by integrating information from different data sources including UniProt, Global Proteome Machine, NCBI's dbSNP, and PeptideAtlas, simulating the researchers' reasoning, and incorporating their knowledge of how to select the best proteotypic peptides for an MRM analysis. The developed software can help to standardize the selection of peptides, eliminate human error, and increase productivity. Copyright © 2014 Elsevier B.V. All rights reserved.
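
    The scoring idea, combining uniqueness, physicochemical suitability, and prior observation, can be sketched as a weighted sum. The weights and feature tests below are invented for illustration and are not PeptidePicker's actual scoring function.

    ```python
    # Hypothetical sketch of surrogate-peptide scoring in the spirit of
    # PeptidePicker: uniqueness in the proteome, simple physicochemical checks,
    # and prior observation. Weights and rules are invented for illustration.
    def score_peptide(seq, proteome_count, observed_before):
        unique = 1.0 if proteome_count == 1 else 0.0
        good_length = 1.0 if 7 <= len(seq) <= 20 else 0.0
        # Methionine and cysteine complicate MRM assays (oxidation,
        # modification), so penalize peptides containing them.
        no_problem_residues = 1.0 if not set(seq) & {"M", "C"} else 0.0
        observed = 1.0 if observed_before else 0.0
        return (0.4 * unique + 0.2 * good_length
                + 0.2 * no_problem_residues + 0.2 * observed)

    candidates = [("LVNELTEFAK", 1, True), ("MCDEFGHIK", 3, False)]
    for seq, count, seen in sorted(candidates,
                                   key=lambda c: score_peptide(*c),
                                   reverse=True):
        print(seq, round(score_peptide(seq, count, seen), 2))
    ```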

  19. Post-Flight Data Analysis Tool

    NASA Technical Reports Server (NTRS)

    George, Marina

    2018-01-01

    A software tool that facilitates the retrieval and analysis of post-flight data. This allows our team and other teams to effectively and efficiently analyze and evaluate post-flight data in order to certify commercial providers.

  20. Interfering Effects of Retrieval in Learning New Information

    ERIC Educational Resources Information Center

    Finn, Bridgid; Roediger, Henry L., III

    2013-01-01

    In 7 experiments, we explored the role of retrieval in associative updating, that is, in incorporating new information into an associative memory. We tested the hypothesis that retrieval would facilitate incorporating a new contextual detail into a learned association. Participants learned 3 pieces of information--a person's face, name, and…
