ERIC Educational Resources Information Center
Brown, Lauren E.; Dubois, Alain; Shepard, Donald B.
2008-01-01
Retrieval efficiencies of paper-based references in journals and other serials containing 10 scientific names of fossil amphibians were determined for seven major search engines. Retrievals were compared to the number of references obtained covering the period 1895-2006 by a Comprehensive Search. The latter was primarily a traditional…
Figure mining for biomedical research.
Rodriguez-Esteban, Raul; Iossifov, Ivan
2009-08-15
Figures from biomedical articles contain valuable information difficult to reach without specialized tools. Currently, there is no search engine that can retrieve specific figure types. This study describes a retrieval method that takes advantage of principles in image understanding, text mining and optical character recognition (OCR) to retrieve figure types defined conceptually. A search engine was developed to retrieve tables and figure types to aid computational and experimental research. http://iossifovlab.cshl.edu/figurome/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CARRO CA
2011-07-15
This Hazard and Operability (HAZOP) study addresses the Sludge Treatment Project (STP) Engineered Container Retrieval and Transfer System (ECRTS) preliminary design for retrieving sludge from underwater engineered containers located in the 105-K West (KW) Basin, transferring the sludge as a sludge-water slurry (hereafter referred to as 'slurry') to a Sludge Transport and Storage Container (STSC) located in a Modified KW Basin Annex, and preparing the STSC for transport to T Plant using the Sludge Transport System (STS). There are six underwater engineered containers located in the KW Basin that, at the time of sludge retrieval, will contain an estimated volume of 5.2 m³ of KW Basin floor and pit sludge, 18.4 m³ of 105-K East (KE) Basin floor, pit, and canister sludge, and 3.5 m³ of settler tank sludge. The KE and KW Basin sludge consists of fuel corrosion products (including metallic uranium, and fission and activation products), small fuel fragments, iron and aluminum oxide, sand, dirt, operational debris, and biological debris. The settler tank sludge consists of sludge generated by the washing of KE and KW Basin fuel in the Primary Clean Machine. A detailed description of the origin of sludge and its chemical and physical characteristics can be found in HNF-41051, Preliminary STP Container and Settler Sludge Process System Description and Material Balance. In summary, the ECRTS retrieves sludge from the engineered containers and hydraulically transfers it as a slurry into an STSC positioned within a trailer-mounted STS cask located in a Modified KW Basin Annex. The slurry is allowed to settle within the STSC to concentrate the solids and clarify the supernate. After a prescribed settling period the supernate is decanted. The decanted supernate is filtered through a sand filter and returned to the basin.
Subsequent batches of slurry are added to the STSC, settled, and excess supernate removed until the prescribed quantity of sludge is collected. The sand filter is then backwashed into the STSC. The STSC and STS cask are then inerted and transported to T Plant.
Roogle: an information retrieval engine for clinical data warehouse.
Cuggia, Marc; Garcelon, Nicolas; Campillo-Gimenez, Boris; Bernicot, Thomas; Laurent, Jean-François; Garin, Etienne; Happe, André; Duvauferrier, Régis
2011-01-01
A large amount of relevant information is contained in the reports stored in electronic patient records and their associated metadata. Roogle is a project aiming at developing information retrieval engines adapted to these reports and designed for clinicians. The system consists of a data warehouse (full-text reports and structured data) imported from two different hospital information systems. Information retrieval is performed using metadata-based semantic search and full-text search methods (as in Google). Applications include biomarker identification in a translational approach, searches for specific cases, constitution of cohorts, professional practice evaluation, and quality control assessment.
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies, or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Second, the tools for searching databases are usually not adapted to the properties of images. HTML pages do not allow complex searches. Establishing a more comfortable retrieval mechanism therefore involves the use of a higher-level programming platform such as JAVA. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using JAVA objects as the primary storage containers, which are then stored in a JAVA-controlled ORACLE8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach, multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
Computer aided systems human engineering: A hypermedia tool
NASA Technical Reports Server (NTRS)
Boff, Kenneth R.; Monk, Donald L.; Cody, William J.
1992-01-01
The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.
A unified architecture for biomedical search engines based on semantic web technologies.
Jalali, Vahid; Matash Borujerdi, Mohammad Reza
2011-04-01
There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
An advanced search engine for patent analytics in medicinal chemistry.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Patent collections contain a substantial amount of medical-related knowledge, but existing tools have been reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search to retrieve relevant patents given a short query, and a related-patent search to retrieve similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is of prime importance to cope with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced rather mixed results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition, and experts appreciated its usability.
Till, Benedikt; Niederkrotenthaler, Thomas
2014-08-01
The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P < .001, Austria: P < .05) and fewer protective characteristics (United States: P < .001, Austria: P < .001) compared to the term suicide. Help-related search terms (e.g., suicidal thoughts) yielded more websites with protective characteristics (United States: P = .07, Austria: P < .01). Websites retrieved with U.S. search engines generally had more protective characteristics (P < .001) than searches with Austrian search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary.
An introduction to information retrieval: applications in genomics
Nadkarni, P M
2011-01-01
Information retrieval (IR) is the field of computer science that deals with the processing of documents containing free text, so that they can be rapidly retrieved based on keywords specified in a user’s query. IR technology is the basis of Web-based search engines, and plays a vital role in biomedical research, because it is the foundation of software that supports literature search. Documents can be indexed by both the words they contain, as well as the concepts that can be matched to domain-specific thesauri; concept matching, however, poses several practical difficulties that make it unsuitable for use by itself. This article provides an introduction to IR and summarizes various applications of IR and related technologies to genomics. PMID:12049181
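The keyword-based indexing that underlies such search engines can be sketched with a minimal inverted index; the documents and queries below are invented for illustration and the sketch omits the ranking and thesaurus-based concept matching the article discusses:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, *keywords):
    """Return ids of documents containing every keyword (AND query)."""
    sets = [index.get(w.lower(), set()) for w in keywords]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "information retrieval for genomics literature",
    2: "web search engines index free text",
    3: "genomics literature search",
}
index = build_inverted_index(docs)
print(search(index, "genomics", "literature"))  # → {1, 3}
```

Because the index maps words to postings up front, a query touches only the small sets for its keywords rather than scanning every document, which is what makes keyword retrieval fast.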
NASA Technical Reports Server (NTRS)
2012-01-01
The NASA Thesaurus contains the authorized NASA subject terms used to index and retrieve materials in the NASA Aeronautics and Space Database (NA&SD) and NASA Technical Reports Server (NTRS). The scope of this controlled vocabulary includes not only aerospace engineering, but all supporting areas of engineering and physics, the natural space sciences (astronomy, astrophysics, planetary science), Earth sciences, and the biological sciences. The NASA Thesaurus Data File contains all valid terms and hierarchical relationships, USE references, and related terms in machine-readable form. The Data File is available in the following formats: RDF/SKOS, RDF/OWL, ZThes-1.0, and CSV/TXT.
An ontology-based search engine for protein-protein interactions.
Park, Byungkyu; Han, Kyungsook
2010-01-18
Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
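The arithmetic behind this engine can be sketched as follows: each GO term is assigned a distinct prime, a protein's annotation set is encoded as the product of its terms' primes, and a search condition is tested by divisibility. The proteins, GO terms, and interaction list below are invented, and the paper's actual "modified Gödel number" scheme and its handling of more specific terms in the ontology are not reproduced:

```python
# Assign a distinct prime to each (invented) GO term.
PRIMES = [2, 3, 5, 7, 11, 13]
go_terms = ["GO:0005634", "GO:0003677", "GO:0006355", "GO:0005737"]
prime_of = dict(zip(go_terms, PRIMES))

def encode(annotations):
    """Encode a set of GO annotations as a product of distinct primes."""
    n = 1
    for term in annotations:
        n *= prime_of[term]
    return n

def satisfies(code, condition):
    """A protein meets a condition if the primes of all condition
    terms divide its code, i.e. it carries all those annotations."""
    return all(code % prime_of[t] == 0 for t in condition)

proteins = {
    "P1": encode({"GO:0005634", "GO:0003677"}),
    "P2": encode({"GO:0005737"}),
    "P3": encode({"GO:0005634", "GO:0006355"}),
}
interactions = {"Q": ["P1", "P2", "P3"]}  # partners of query protein Q

def search_partners(query, condition):
    """Keep only the partners whose encoded annotations satisfy the
    GO-term search condition."""
    return [p for p in interactions[query]
            if satisfies(proteins[p], condition)]

print(search_partners("Q", {"GO:0005634"}))  # → ['P1', 'P3']
```

The point of the encoding is that an entire annotation set collapses to one integer, so a conjunctive GO condition becomes a single divisibility test instead of a set comparison.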
Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.
ERIC Educational Resources Information Center
Pathak, Praveen; Gordon, Michael
1999-01-01
Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…
Font adaptive word indexing of modern printed documents.
Marinai, Simone; Marino, Emanuele; Soda, Giovanni
2006-08-01
We propose an approach for the word-level indexing of modern printed documents that are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or by relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles, without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self-Organizing Maps (SOM) to perform unsupervised character clustering, the definition of a suitable vector-based word representation whose size depends on the word aspect ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.
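The run-time alignment step can be sketched as an edit distance over sequences of character-cluster ids (a word becomes the sequence of SOM cluster ids of its character images): insertions and deletions absorb broken or touching characters. The cluster-id sequences below are invented, and the real system's SOM clustering and vector word representation are not reproduced:

```python
def alignment_cost(query, indexed):
    """Levenshtein distance between two sequences of character-cluster
    ids; insertions/deletions tolerate broken or touching characters."""
    m, n = len(query), len(indexed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if query[i - 1] == indexed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Invented cluster-id sequences: w2 is the same word as the query but
# with one character broken into two fragments (an extra id).
query_word = [4, 17, 9, 2]
indexed_words = {"w1": [4, 17, 9, 2], "w2": [4, 17, 17, 9, 2], "w3": [8, 1, 3]}
ranked = sorted(indexed_words,
                key=lambda w: alignment_cost(query_word, indexed_words[w]))
print(ranked)  # → ['w1', 'w2', 'w3']
```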
WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.
Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno
2012-09-01
The WORDGRAPH helps writers visually choose phrases while writing a text. It checks the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as a WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of the three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that the WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.
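Wildcard phrase queries of this kind can be sketched by translating the pattern into a regular expression and ranking matching phrases by corpus frequency. The tiny phrase-frequency table below is invented and stands in for a web-scale n-gram corpus; NETSPEAK's actual probabilistic index is not reproduced:

```python
import re

# Invented phrase frequencies standing in for an n-gram corpus.
phrases = {
    "waiting for the results": 920,
    "waiting for the bus": 310,
    "waiting for a miracle": 55,
}

def wildcard_search(pattern, phrases):
    """Translate a wildcard pattern ('?' = exactly one word, '*' = any
    span of words) into a regex and rank the matches by frequency."""
    regex = re.escape(pattern)
    regex = regex.replace(r"\?", r"\S+").replace(r"\*", r".*")
    rx = re.compile(r"^" + regex + r"$")
    hits = [(p, f) for p, f in phrases.items() if rx.match(p)]
    return sorted(hits, key=lambda x: -x[1])

print(wildcard_search("waiting for the ?", phrases))
# → [('waiting for the results', 920), ('waiting for the bus', 310)]
```

Ranking by frequency is what makes the tool useful for checking commonness: the top result is the phrasing writers most often choose.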
García-Remesal, Miguel; Maojo, Victor; Crespo, José
2010-01-01
In this paper we present a knowledge engineering approach to automatically recognize and extract genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. The latter are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated using a test set of 211 full-text articles in PDF format containing 3134 genetic sequences, for which we achieved 87.76% precision and 97.70% recall. This method can facilitate different research tasks, including text mining, information extraction, and information retrieval research dealing with large collections of documents containing genetic sequences.
Analyzing Medical Image Search Behavior: Semantics and Prediction of Query Results.
De-Arteaga, Maria; Eggel, Ivan; Kahn, Charles E; Müller, Henning
2015-10-01
Log files of information retrieval systems that record user behavior have been used to improve the outcomes of retrieval systems, understand user behavior, and predict events. In this article, a log file of the ARRS GoldMiner search engine containing 222,005 consecutive queries is analyzed. Time stamps are available for each query, as well as masked IP addresses, which makes it possible to identify queries from the same person. This article describes the ways in which physicians (or Internet searchers interested in medical images) search, and proposes potential improvements through suggested query modifications. For example, many queries contain only a few terms and are therefore not specific; others contain spelling mistakes or non-medical terms that likely lead to poor or empty results. One of the goals of this report is to predict the number of results a query will have, since such a model allows search engines to automatically propose query modifications in order to avoid result lists that are empty or too large. This prediction is made based on characteristics of the query terms themselves. Prediction of empty results has an accuracy above 88%, and thus can be used to automatically modify the query to avoid empty result sets for a user. The semantic analysis and the data on reformulations made by users in the past can aid the development of better search systems, particularly to improve results for novice users. This paper therefore offers important insights into how people search and how to use this knowledge to improve the performance of specialized medical search engines.
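A predictor of empty result sets built from query-term features can be sketched as below. The vocabulary and the decision rule are invented for illustration only; the article derives its predictor from real query-log statistics rather than a hand-written heuristic:

```python
# Invented stand-in for the indexed vocabulary of a medical image
# search engine.
medical_vocabulary = {"fracture", "tibia", "pneumonia", "chest", "mri", "knee"}

def predict_empty(query):
    """Heuristic sketch: a query is likely to return no results if it
    contains an out-of-vocabulary term (e.g. a misspelling or a
    non-medical word), or if it conjoins many terms and is therefore
    over-specified."""
    terms = query.lower().split()
    oov = [t for t in terms if t not in medical_vocabulary]
    return bool(oov) or len(terms) > 4

print(predict_empty("knee mri"))             # → False
print(predict_empty("pneumonia xyzgraphy"))  # → True
```

A search engine using such a prediction could, before running the query, drop or respell the offending term and suggest the modified query to the user.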
Health search engine with e-document analysis for reliable search results.
Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine
2006-01-01
After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions, and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and reformulation of the query, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, explanation of the original functionalities, and adaptability to the audience. Thanks to the evaluation outcomes, WRAPIN is now in production on the HON web site (http://www.healthonnet.org), free of charge. Intended for citizens, it is a good alternative to general-purpose search engines when the user looks for trustworthy health and medical information or wants to automatically check the doubtful content of a Web page.
CDAPubMed: a browser extension to retrieve EHR-based biomedical literature.
Perez-Rey, David; Jimenez-Castellanos, Ana; Garcia-Remesal, Miguel; Crespo, Jose; Maojo, Victor
2012-04-05
Over the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs. We have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination. CDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. 
It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems.
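The query-building step of such a tool can be sketched by AND-combining MeSH terms extracted from an EHR into a PubMed search expression and an E-utilities ESearch URL. The patient features below are invented; the `[MeSH Terms]` field tag and the ESearch endpoint are standard PubMed query syntax, but this is not CDAPubMed's actual implementation:

```python
from urllib.parse import urlencode

# Standard NCBI E-utilities search endpoint.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(mesh_terms):
    """AND-combine MeSH terms into one PubMed search expression,
    narrowing the result set with each added patient feature."""
    return " AND ".join(f'"{t}"[MeSH Terms]' for t in mesh_terms)

def build_esearch_url(mesh_terms, retmax=20):
    """Assemble the ESearch URL for the combined query (no request
    is sent here)."""
    params = {"db": "pubmed",
              "term": build_pubmed_query(mesh_terms),
              "retmax": retmax}
    return EUTILS + "?" + urlencode(params)

# Invented patient features extracted from an EHR.
terms = ["Breast Neoplasms", "Tamoxifen"]
print(build_pubmed_query(terms))
# → "Breast Neoplasms"[MeSH Terms] AND "Tamoxifen"[MeSH Terms]
```

Each additional EHR-derived term intersects the result set, which is how adding ten patient features can shrink 200,000 citations to fewer than ten.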
Indexing and Retrieval for the Web.
ERIC Educational Resources Information Center
Rasmussen, Edie M.
2003-01-01
Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…
DIALOG for Electrical Engineers. CTHB Publikation Nr 29 (1982).
ERIC Educational Resources Information Center
Fjallbrant, Nancy
This manual provides electrical and electronic engineers with an introduction to online information retrieval as implemented on the DIALOG information retrieval system. Sections cover: (1) the development of computerized information retrieval; (2) its advantages; (3) the equipment needed, DIALOG hours of availability, methods of access, and cost…
Synchronisation Technique of Data Recorded on a Multichannel Tape Recorder,
1984-01-01
A portable, self-contained, electronic digital unit, termed Data Synchroniser, was designed and developed by the Engineering Development Establishment (EDE), Maribyrnong, for the synchronisation of data recorded on a multichannel tape recorder (J.D. Dickens; report AD-A139 570).
GenderMedDB: an interactive database of sex and gender-specific medical literature.
Oertelt-Prigione, Sabine; Gohlke, Björn-Oliver; Dunkel, Mathias; Preissner, Robert; Regitz-Zagrosek, Vera
2014-01-01
Searches for sex- and gender-specific publications are complicated by the absence of a specific algorithm within search engines and by the lack of adequate archives to collect the retrieved results. We previously addressed this issue by initiating the first systematic archive of medical literature containing sex- and/or gender-specific analyses. This initial collection has now been greatly enlarged and re-organized as a free user-friendly database with multiple functions: GenderMedDB (http://gendermeddb.charite.de). GenderMedDB retrieves the included publications from the PubMed database. Manuscripts containing sex- and/or gender-specific analyses are continuously screened, and the relevant findings are organized systematically into disciplines and diseases. Publications are furthermore classified by research type, subject, and participant numbers. More than 11,000 abstracts are currently included in the database, after screening more than 40,000 publications. The main functions of the database include searches by publication data or content analysis based on pre-defined classifications. In addition, registered users can upload relevant publications, access descriptive publication statistics, and interact in an open user forum. Overall, GenderMedDB offers the advantages of a discipline-specific search engine as well as the functions of a participative tool for the gender medicine community.
Sarrouti, Mourad; Ouatik El Alaoui, Said
2017-04-01
Passage retrieval, the identification of top-ranked passages that may contain the answer to a given biomedical question, is a crucial component of any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge that has been widely studied over recent decades; however, it still requires further effort in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on sentence splitting with Stanford CoreNLP, passage length, a probabilistic information retrieval (IR) model, and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts of the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute similarity scores between the biomedical question and each candidate passage and keep the N top-ranked ones. Experimental evaluations performed on the large standard datasets provided by the BioASQ challenge show that the proposed method performs well compared with current state-of-the-art methods, outperforming them by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method that can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
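The BM25 scoring step described in the abstract can be sketched as follows. This is a minimal illustration in Python, assuming plain stemmed tokens stand in for the paper's combined word/UMLS-concept features; the parameter values are the common BM25 defaults, not necessarily those used in the study.

```python
import math
from collections import Counter

def bm25_rank(question_tokens, passages, k1=1.2, b=0.75, top_n=3):
    """Rank candidate passages against a question with BM25.

    `passages` is a list of token lists (stemmed words; the paper also
    mixes in UMLS concept IDs as additional features).
    Returns the indices of the top_n passages, best first.
    """
    N = len(passages)
    avgdl = sum(len(p) for p in passages) / N
    # document frequency of each term over the candidate passages
    df = Counter()
    for p in passages:
        df.update(set(p))
    scores = []
    for i, p in enumerate(passages):
        tf = Counter(p)
        score = 0.0
        for t in set(question_tokens):
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(p) / avgdl))
        scores.append((score, i))
    scores.sort(reverse=True)
    return [i for _, i in scores[:top_n]]
```

A passage sharing more (and rarer) terms with the question scores higher; keeping the N best corresponds to the final step of the method.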
Engineering Data Compendium. Human Perception and Performance, Volume 2
NASA Technical Reports Server (NTRS)
Boff, Kenneth R. (Editor); Lincoln, Janet E. (Editor)
1988-01-01
The concept underlying the Engineering Data Compendium was the product of a Research and Development program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 2, which contains sections on Information Storage and Retrieval, Spatial Awareness, Perceptual Organization, and Attention and Allocation of Resources.
A comparison of Boolean-based retrieval to the WAIS system for retrieval of aeronautical information
NASA Technical Reports Server (NTRS)
Marchionini, Gary; Barlow, Diane
1994-01-01
An evaluation of an information retrieval system using a Boolean-based retrieval engine with an inverted file architecture, and of WAIS, which uses a vector-based engine, was conducted. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database, which was mounted on a WAIS server and also available through Dialog File 108, which served as the Boolean-based system (BBS). High-recall and high-precision searches were done in the BBS, and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above the precision values for high-recall BBS searches and consistently below the precision values for high-precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than distributed by rank. Advantages and limitations of both types of systems are discussed.
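The contrast between the two engine types can be illustrated with a toy sketch (not the NASA systems themselves): a Boolean engine returns an unranked set of documents matching the query logic, while a vector-based engine such as WAIS ranks every document by similarity.

```python
import math
from collections import Counter

def boolean_and(query_terms, docs):
    """Boolean AND retrieval: the unranked set of doc indices
    containing every query term (inverted-file style matching)."""
    return {i for i, d in enumerate(docs) if set(query_terms) <= set(d)}

def vector_rank(query_terms, docs):
    """Vector-space retrieval: all doc indices ranked by cosine
    similarity to the query, best first."""
    q = Counter(query_terms)
    def cos(d):
        dv = Counter(d)
        dot = sum(q[t] * dv[t] for t in q)
        norm = (math.sqrt(sum(v * v for v in q.values())) *
                math.sqrt(sum(v * v for v in dv.values())))
        return dot / norm if norm else 0.0
    return sorted(range(len(docs)), key=lambda i: cos(docs[i]), reverse=True)
```

The Boolean set is all-or-nothing, which is why recall and precision trade off sharply; the vector ranking degrades gracefully, returning partial matches lower in the list.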
Griffon, Nicolas; Kerdelhué, Gaétan; Hamek, Saliha; Hassler, Sylvain; Boog, César; Lamy, Jean-Baptiste; Duclos, Catherine; Venot, Alain; Darmoni, Stéfan J
2014-10-01
Doc'CISMeF (DC) is a semantic search engine used to find resources in CISMeF-BP, a quality controlled health gateway, which gathers guidelines available on the internet in French. Visualization of Concepts in Medicine (VCM) is an iconic language that may ease information retrieval tasks. This study aimed to describe the creation and evaluation of an interface integrating VCM in DC in order to make this search engine much easier to use. Focus groups were organized to suggest ways to enhance information retrieval tasks using VCM in DC. A VCM interface was created and improved using the ergonomic evaluation approach. 20 physicians were recruited to compare the VCM interface with the non-VCM one. Each evaluator answered two different clinical scenarios in each interface. The ability and time taken to select a relevant resource were recorded and compared. A usability analysis was performed using the System Usability Scale (SUS). The VCM interface contains a filter based on icons, and icons describing each resource according to focus group recommendations. Some ergonomic issues were resolved before evaluation. Use of VCM significantly increased the success of information retrieval tasks (OR=11; 95% CI 1.4 to 507). Nonetheless, it took significantly more time to find a relevant resource with VCM interface (101 vs 65 s; p=0.02). SUS revealed 'good' usability with an average score of 74/100. VCM was successfully implemented in DC as an option. It increased the success rate of information retrieval tasks, despite requiring slightly more time, and was well accepted by end-users. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
NASA Thesaurus. Volumes 1 and 2; Hierarchical Listing with Definitions; Rotated Term Display
NASA Technical Reports Server (NTRS)
2012-01-01
The NASA Thesaurus contains the authorized subject terms by which the documents in the NASA STI Databases are indexed and retrieved. The scope of this controlled vocabulary includes not only aerospace engineering, but all supporting areas of engineering and physics, the natural space sciences (astronomy, astrophysics, planetary science), Earth sciences, and to some extent, the biological sciences. Volume 1 - Hierarchical Listing With Definitions contains over 18,400 subject terms, 4,300 definitions, and more than 4,500 USE cross references. The Hierarchical Listing presents full hierarchical structure for each term along with 'related term' lists, and can serve as an orthographic authority. Volume 2 - Rotated Term Display is a ready-reference tool which provides over 52,700 additional 'access points' to the thesaurus terminology. It contains the postable and nonpostable terms found in the Hierarchical Listing arranged in a KWIC (key-word-in-context) index. This CD-ROM version of the NASA Thesaurus is in PDF format and is updated to the current year of purchase.
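A rotated (KWIC) term display of the kind Volume 2 provides can be generated mechanically from the term list. A minimal sketch, assuming every word of a term serves as an access point (the real thesaurus distinguishes postable and nonpostable terms):

```python
def kwic_index(terms):
    """Build a KWIC (key-word-in-context) rotated index.

    Each word of a multi-word term becomes an access point; the result
    is a list of (keyword, full term) pairs sorted by keyword, so one
    term appears once per word it contains.
    """
    entries = []
    for term in terms:
        for word in term.split():
            entries.append((word, term))
    return sorted(entries)
```

Rotation is what turns roughly 18,400 terms into the 52,700+ access points mentioned above: a four-word term contributes four index entries.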
Engineering Analysis Using a Web-based Protocol
NASA Technical Reports Server (NTRS)
Schoeffler, James D.; Claus, Russell W.
2002-01-01
This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
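The XML-encapsulation idea can be sketched with Python's standard library. The element names below are illustrative placeholders, not the schema actually used by the LAPIN framework:

```python
import xml.etree.ElementTree as ET

def encode_case(name, params):
    """Wrap one analysis case (name + input parameters) as an XML string."""
    root = ET.Element("analysisCase", name=name)
    for key, value in params.items():
        ET.SubElement(root, "param", name=key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def decode_case(xml_text):
    """Recover the case name and parameter dict from the stored XML."""
    root = ET.fromstring(xml_text)
    return root.get("name"), {p.get("name"): p.text for p in root.iter("param")}
```

Round-tripping application data through a text format like this is what lets a standard web browser store, retrieve, and resubmit an analysis without any tool-specific client.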
Mobile medical visual information retrieval.
Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning
2012-01-01
In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.
1977-11-01
A Comparative Evaluation of the Thesaurus of Engineering and Scientific Terms and the DDC Retrieval and Indexing Terminology
A comparative evaluation has been undertaken of the DDC Retrieval and Indexing Terminology (DRIT) and the Thesaurus of Engineering and Scientific Terms.
Biomedical information retrieval across languages.
Daumke, Philipp; Markü, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger
2007-06-01
This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
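The subword idea can be illustrated with a greedy longest-match segmenter. The inventory below is a hypothetical toy lexicon, not the authors' six-language subword resource:

```python
def segment(word, subwords):
    """Greedy longest-match split of a word into lexicon subwords.

    Returns the list of subwords covering the whole word, or None if
    the word cannot be fully covered by the inventory.
    """
    word = word.lower()
    result, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest match first
            if word[i:j] in subwords:
                result.append(word[i:j])
                i = j
                break
        else:
            return None
    return result
```

Because compound medical terms like "gastroenteritis" decompose into a few reusable fragments, a small subword inventory can cover a large vocabulary, which is the coverage argument made in the abstract.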
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Gallagher, Mary C.
1985-01-01
A large number of large-scale bibliographic Information Storage and Retrieval (IS&R) systems exist, containing valuable data of interest to a wide variety of research applications. These systems are not used to capacity because the end users, i.e., the researchers, have not been trained in the techniques of accessing them. This thesis describes the development of a transportable, university-level course in methods of querying on-line interactive Information Storage and Retrieval systems as a solution to this problem. The course was designed to instruct upper-division science and engineering students, enabling these end users to access such systems directly. It is designed to be taught by instructors who are not specialists in either computer science or research skills, and it is independent of any particular IS&R system or computer hardware. The project is sponsored by NASA and conducted by the University of Southwestern Louisiana and Southern University.
Publications - Search Help | Alaska Division of Geological & Geophysical
General hints: by default, the search engine retrieves all publications. If the publication's title is known, enter those words in the title input box and the search engine will look for all of them. Publication year: the search engine retrieves all publication years by default; select one publication year to narrow the search.
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end users will be able to request movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
System engineering approach to GPM retrieval algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres.
With the No and Do calculated at each bin, the rain rate can then be calculated based on a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.
PubMed vs. HighWire Press: a head-to-head comparison of two medical literature search engines.
Vanhecke, Thomas E; Barnes, Michael A; Zimmerman, Janet; Shoichet, Sandor
2007-09-01
PubMed and HighWire Press are both useful medical literature search engines available for free to anyone on the internet. We measured retrieval accuracy, number of results generated, retrieval speed, features and search tools on HighWire Press and PubMed using the quick search features of each. We found that using HighWire Press resulted in a higher likelihood of retrieving the desired article and higher number of search results than the same search on PubMed. PubMed was faster than HighWire Press in delivering search results regardless of search settings. There are considerable differences in search features between these two search engines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
GEUTHER J; CONRAD EA; RHOADARMER D
2009-08-24
The Sludge Treatment Project (STP) is considering two different concepts for the retrieval, loading, transport and interim storage of the K Basin sludge. The two design concepts under consideration are: (1) Hydraulic Loading Concept - In the hydraulic loading concept, the sludge is retrieved from the Engineered Containers directly into the Sludge Transport and Storage Container (STSC) while located in the STS cask in the modified KW Basin Annex. The sludge is loaded via a series of transfer, settle, decant, and filtration return steps until the STSC sludge transportation limits are met. The STSC is then transported to T Plant and placed in storage arrays in the T Plant canyon cells for interim storage. (2) Small Canister Concept - In the small canister concept, the sludge is transferred from the Engineered Containers (ECs) into a settling vessel. After settling and decanting, the sludge is loaded underwater into small canisters. The small canisters are then transferred to the existing Fuel Transport System (FTS) where they are loaded underwater into the FTS Shielded Transfer Cask (STC). The STC is raised from the basin, placed into the Cask Transfer Overpack (CTO), and loaded onto the trailer in the KW Basin Annex for transport to T Plant. At T Plant, the CTO is removed from the transport trailer and placed on the canyon deck. The CTO and STC are opened and the small canisters are removed using the canyon crane and placed into an STSC. The STSC is closed and placed in storage arrays in the T Plant canyon cells for interim storage. The purpose of this cost estimate is to provide a comparison of the two concepts described.
Design implications for task-specific search utilities for retrieval and re-engineering of code
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif
2017-05-01
The importance of information retrieval systems is unquestionable in the modern society and both individuals as well as enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context such as development language, technology framework, goal of the project, project complexity and developer's domain expertise. They also impose additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed in order to support their work-related activities. In order to investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation has achieved promising initial results on the precision and recall performance of the system.
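An implicit relevance feedback loop driven by retention actions can be sketched as follows. The action types and their weights here are hypothetical illustrations, not the study's measured behaviours:

```python
# Weights for observed retention actions (hypothetical values).
ACTION_WEIGHTS = {"copy": 3.0, "bookmark": 2.0, "dwell_long": 1.0}

def implicit_scores(events):
    """Aggregate retention actions into per-result relevance scores.

    `events` is a list of (result_id, action) pairs logged while a
    developer works; no explicit ratings are required.
    """
    scores = {}
    for result_id, action in events:
        scores[result_id] = scores.get(result_id, 0.0) + ACTION_WEIGHTS.get(action, 0.0)
    return scores

def rerank(results, events):
    """Reorder an engine's result list by accumulated implicit feedback."""
    scores = implicit_scores(events)
    return sorted(results, key=lambda r: scores.get(r, 0.0), reverse=True)
```

Because copying or bookmarking a snippet signals usefulness without interrupting the developer, scores accumulate silently and can personalise later rankings, which is the core of the retention-action approach described above.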
Multimedia proceedings of the 10th Office Information Technology Conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, B.
1993-09-10
The CD contains the handouts for all the speakers, demo software from Apple, Adobe, Microsoft, and Zylabs, and video movies of the keynote speakers. Adobe Acrobat is used to provide full-fidelity retrieval of the speakers' slides, and Apple's QuickTime for Macintosh and Windows is used for video playback. ZyIndex is included for Windows users to provide a full-text search engine for selected documents. There are separately labelled installation and operating instructions for Macintosh and Windows users, and some general materials common to both sets of users.
IntegromeDB: an integrated system and biological search engine.
Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia
2012-01-19
With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach at developing a biological web search engine based on the Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.
A medical ontology for intelligent web-based skin lesions image retrieval.
Maragoudakis, Manolis; Maglogiannis, Ilias
2011-06-01
Researchers have applied increasing efforts towards providing formal computational frameworks to consolidate the plethora of concepts and relations used in the medical domain. In the domain of skin related diseases, the variability of semantic features contained within digital skin images is a major barrier to the medical understanding of the symptoms and development of early skin cancers. The desideratum of making these standards machine-readable has led to their formalization in ontologies. In this work, in an attempt to enhance an existing Core Ontology for skin lesion images, hand-coded from image features, high quality images were analyzed by an autonomous ontology creation engine. We show that by exploiting agglomerative clustering methods with distance criteria upon the existing ontological structure, the original domain model could be enhanced with new instances, attributes and even relations, thus allowing for better classification and retrieval of skin lesion categories from the web.
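Single-link agglomerative clustering with a distance cut-off, of the general sort used to propose new ontology instances from image features, can be sketched as below. This is a generic illustration, not the authors' specific distance criteria:

```python
import math

def agglomerative(points, threshold):
    """Single-link agglomerative clustering with a distance cut-off.

    Starts with one cluster per feature vector and repeatedly merges
    the closest pair of clusters until no pair is closer than
    `threshold`. Cluster distance is the minimum pairwise distance.
    """
    clusters = [[p] for p in points]
    def dist(a, b):
        return min(math.dist(p, q) for p in a for q in b)
    while len(clusters) > 1:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        if dist(clusters[i], clusters[j]) >= threshold:
            break
        clusters[i] += clusters.pop(j)
    return clusters
```

Each resulting cluster of image feature vectors can then be reviewed as a candidate new instance, attribute, or relation to graft onto the existing Core Ontology.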
Combinatorial Fusion Analysis for Meta Search Information Retrieval
NASA Astrophysics Data System (ADS)
Hsu, D. Frank; Taksa, Isak
Leading commercial search engines are built as single event systems. In response to a particular search query, the search engine returns a single list of ranked search results. To find more relevant results the user must frequently try several other search engines. A meta search engine was developed to enhance the process of multi-engine querying. The meta search engine queries several engines at the same time and fuses individual engine results into a single search results list. The fusion of multiple search results has been shown (mostly experimentally) to be highly effective. However, the question of why and how the fusion should be done still remains largely unanswered. In this chapter, we utilize the combinatorial fusion analysis proposed by Hsu et al. to analyze combination and fusion of multiple sources of information. A rank/score function is used in the design and analysis of our framework. The framework provides a better understanding of the fusion phenomenon in information retrieval. For example, to improve the performance of the combined multiple scoring systems, it is necessary that each of the individual scoring systems has relatively high performance and the individual scoring systems are diverse. Additionally, we illustrate various applications of the framework using two examples from the information retrieval domain.
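A minimal instance of fusing ranked lists from several engines is combination by average rank, with items missing from a list penalised; this is a simplification of the rank/score-function framework, which also studies score-based combination and the diversity of the component systems:

```python
def fuse_by_rank(result_lists):
    """Fuse ranked result lists from several engines by average rank.

    An item absent from a list is assigned rank len(list) + 1 there.
    Ties are broken alphabetically to keep the output deterministic.
    """
    items = {d for lst in result_lists for d in lst}
    def avg_rank(d):
        return sum(lst.index(d) + 1 if d in lst else len(lst) + 1
                   for lst in result_lists) / len(result_lists)
    return sorted(items, key=lambda d: (avg_rank(d), d))
```

An item ranked highly by every engine floats to the top of the fused list, which is why fusion tends to help when the component engines perform well individually yet disagree on the tail.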
Information Retrieval for Education: Making Search Engines Language Aware
ERIC Educational Resources Information Center
Ott, Niels; Meurers, Detmar
2010-01-01
Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…
Web information retrieval based on ontology
NASA Astrophysics Data System (ADS)
Zhang, Jian
2013-03-01
The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional IR model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest, so many irrelevant results are returned and users are burdened with picking useful answers out of them. To tackle this issue, many Semantic Web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms through the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on ontology; key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
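A path-based semantic-similarity measure over an is-a hierarchy is one common way to realise the ontology-driven matching described above; a minimal sketch with a hypothetical toy hierarchy, not the paper's actual ontology or measure:

```python
def path_similarity(a, b, parents):
    """Path-based concept similarity over an ontology's is-a hierarchy.

    `parents` maps each concept to its parent. Similarity is
    1 / (1 + number of edges on the path through the closest common
    ancestor), so identical concepts score 1.0 and unrelated ones 0.0.
    """
    def ancestors(c):
        chain = [c]
        while c in parents:
            c = parents[c]
            chain.append(c)
        return chain
    pa, pb = ancestors(a), ancestors(b)
    for d_a, node in enumerate(pa):
        if node in pb:
            return 1.0 / (1 + d_a + pb.index(node))
    return 0.0
```

Scoring query concepts against document annotations this way lets the engine prefer documents about closely related concepts instead of relying on exact keyword overlap.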
A multilingual assessment of melanoma information quality on the Internet.
Bari, Lilla; Kemeny, Lajos; Bari, Ferenc
2014-06-01
This study aims to assess and compare melanoma information quality in Hungarian, Czech, and German languages on the Internet. We used country-specific Google search engines to retrieve the first 25 uniform resource locators (URLs) by searching the word "melanoma" in the given language. Using the automated toolbar of Health On the Net Foundation (HON), we assessed each Web site for HON certification based on the Health On the Net Foundation Code of Conduct (HONcode). Information quality was determined using a 35-point checklist created by Bichakjian et al. (J Clin Oncol 20:134-141, 2002), with the NCCN melanoma guideline as control. After excluding duplicate and link-only pages, a total of 24 Hungarian, 18 Czech, and 21 German melanoma Web sites were evaluated and rated. The amount of HON certified Web sites was the highest among the German Web pages (19%). One of the retrieved Hungarian and none of the Czech Web sites were HON certified. We found the highest number of Web sites containing comprehensive, correct melanoma information in German language, followed by Czech and Hungarian pages. Although the majority of the Web sites lacked data about incidence, risk factors, prevention, treatment, work-up, and follow-up, at least one comprehensive, high-quality Web site was found in each language. Several Web sites contained incorrect information in each language. While a small amount of comprehensive, quality melanoma-related Web sites was found, most of the retrieved Web content lacked basic disease information, such as risk factors, prevention, and treatment. A significant number of Web sites contained malinformation. In case of melanoma, primary and secondary preventions are of especially high importance; therefore, the improvement of disease information quality available on the Internet is necessary.
An Intelligent System for Document Retrieval in Distributed Office Environments.
ERIC Educational Resources Information Center
Mukhopadhyay, Uttam; And Others
1986-01-01
MINDS (Multiple Intelligent Node Document Servers) is a distributed system of knowledge-based query engines for efficiently retrieving multimedia documents in an office environment of distributed workstations. By learning document distribution patterns and user interests and preferences during system usage, it customizes document retrievals for…
Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools
NASA Astrophysics Data System (ADS)
McCutchan, Elizabeth
2017-01-01
The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
A tissue engineering strategy for the treatment of avascular necrosis of the femoral head.
Aarvold, A; Smith, J O; Tayton, E R; Jones, A M H; Dawson, J I; Lanham, S; Briscoe, A; Dunlop, D G; Oreffo, R O C
2013-12-01
Skeletal stem cells (SSCs) and impaction bone grafting (IBG) can be combined to produce a mechanically stable living bone composite. This novel strategy has been translated to the treatment of avascular necrosis of the femoral head. Surgical technique, clinical follow-up and retrieval analysis data of this translational case series are presented. SSCs and milled allograft were impacted into necrotic bone in five femoral heads of four patients. Cell viability was confirmed by parallel in vitro culture of the cell-graft constructs. Patient follow-up was by serial clinical and radiological examination. Tissue engineered bone was retrieved from two femoral heads and was analysed by histology, microcomputed tomography (μCT) and mechanical testing. Three patients remain asymptomatic at 22- to 44-month follow-up. One patient (both hips) required total hip replacement due to widespread residual necrosis. Retrieved tissue engineered bone demonstrated a mature trabecular micro-architecture histologically and on μCT. Bone density and axial compression strength were comparable to trabecular bone. Clinical follow-up shows this to be an effective new treatment for focal early stage avascular necrosis of the femoral head. Unique retrieval analysis of clinically translated tissue engineered bone has demonstrated regeneration of tissue that is both structurally and functionally analogous to normal trabecular bone. Copyright © 2013 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
A Unified Mathematical Definition of Classical Information Retrieval.
ERIC Educational Resources Information Center
Dominich, Sandor
2000-01-01
Presents a unified mathematical definition for the classical models of information retrieval and identifies a mathematical structure behind relevance feedback. Highlights include vector information retrieval; probabilistic information retrieval; and similarity information retrieval. (Contains 118 references.) (Author/LRW)
Engineering a Multi-Purpose Test Collection for Web Retrieval Experiments.
ERIC Educational Resources Information Center
Bailey, Peter; Craswell, Nick; Hawking, David
2003-01-01
Describes a test collection that was developed as a multi-purpose testbed for experiments on the Web in distributed information retrieval, hyperlink algorithms, and conventional ad hoc retrieval. Discusses inter-server connectivity, integrity of server holdings, inclusion of documents related to a wide spread of likely queries, and distribution of…
EARS: An Online Bibliographic Search and Retrieval System Based on Ordered Explosion.
ERIC Educational Resources Information Center
Ramesh, R.; Drury, Colin G.
1987-01-01
Provides overview of Ergonomics Abstracts Retrieval System (EARS), an online bibliographic search and retrieval system in the area of human factors engineering. Other online systems are described, the design of EARS based on inverted file organization is explained, and system expansions including a thesaurus are discussed. (Author/LRW)
On search guide phrase compilation for recommending home medical products.
Luo, Gang
2010-01-01
To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
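The semi-automatic compilation step described above can be sketched in code: for each new search guide phrase, a small set of candidate classes is retrieved by lexical overlap and surfaced for a nurse to confirm. This is a minimal illustration, not the paper's implementation; the class names, keywords, and scoring are invented assumptions.

```python
# Hypothetical sketch of the candidate-class retrieval step: score each class
# by token overlap with the new search guide phrase, then surface the top
# candidates for a human (nurse) to confirm. All names are illustrative.

def candidate_classes(phrase, class_keywords, top_k=3):
    tokens = set(phrase.lower().split())
    scored = []
    for cls, keywords in class_keywords.items():
        overlap = len(tokens & {k.lower() for k in keywords})
        if overlap:
            scored.append((overlap, cls))
    scored.sort(reverse=True)           # highest-overlap classes first
    return [cls for _, cls in scored[:top_k]]

classes = {
    "mobility aids": ["walker", "cane", "wheelchair", "walking"],
    "bathing safety": ["bath", "shower", "grab", "slip"],
    "dressing aids": ["button", "zipper", "dressing", "reach"],
}
print(candidate_classes("difficulty walking without a cane", classes))
```

In the full workflow these candidates would be shown with hints, and only the nurse's final assignment would enter the compiled phrase set.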
eTACTS: a method for dynamically filtering clinical trial search results.
Miotto, Riccardo; Jiang, Silis; Weng, Chunhua
2013-12-01
Information overload is a significant problem facing online clinical trial searchers. We present eTACTS, a novel interactive retrieval framework using common eligibility tags to dynamically filter clinical trial search results. eTACTS mines frequent eligibility tags from free-text clinical trial eligibility criteria and uses these tags for trial indexing. After an initial search, eTACTS presents to the user a tag cloud representing the current results. When the user selects a tag, eTACTS retains only those trials containing that tag in their eligibility criteria and generates a new cloud based on tag frequency and co-occurrences in the remaining trials. The user can then select a new tag or unselect a previous tag. The process iterates until a manageable number of trials is returned. We evaluated eTACTS in terms of filtering efficiency, diversity of the search results, and user eligibility to the filtered trials using both qualitative and quantitative methods. eTACTS (1) rapidly reduced search results from over a thousand trials to ten; (2) highlighted trials that are generally not top-ranked by conventional search engines; and (3) retrieved a greater number of suitable trials than existing search engines. eTACTS enables intuitive clinical trial searches by indexing eligibility criteria with effective tags. User evaluation was limited to one case study and a small group of evaluators due to the long duration of the experiment. Although a larger-scale evaluation could be conducted, this feasibility study demonstrated significant advantages of eTACTS over existing clinical trial search engines. A dynamic eligibility tag cloud can potentially enhance state-of-the-art clinical trial search engines by allowing intuitive and efficient filtering of the search result space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
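The iterative tag-filtering loop described for eTACTS can be sketched as follows. The trials and eligibility tags here are invented for illustration; the real system mines its tags from free-text eligibility criteria.

```python
# Minimal sketch of tag-based filtering: retain only trials containing the
# selected tag, then recompute tag frequencies for the next cloud.
from collections import Counter

def filter_by_tag(trials, tag):
    """Retain only trials whose eligibility tags contain the selected tag."""
    return [t for t in trials if tag in t["tags"]]

def tag_cloud(trials):
    """Tag frequencies over the current result set (drives the cloud display)."""
    return Counter(tag for t in trials for tag in t["tags"])

trials = [
    {"id": "NCT001", "tags": {"diabetes", "adult", "insulin"}},
    {"id": "NCT002", "tags": {"diabetes", "pediatric"}},
    {"id": "NCT003", "tags": {"hypertension", "adult"}},
]
remaining = filter_by_tag(trials, "diabetes")   # user selects "diabetes"
print([t["id"] for t in remaining])
print(tag_cloud(remaining))
```

Each user selection repeats the two calls on the shrinking result set until a manageable number of trials remains.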
A Boon for the Architect Engineer
NASA Technical Reports Server (NTRS)
1992-01-01
Langley Research Center's need for an improved construction specification system led to an automated system called SPECSINTACT. A catalog of specifications, the system enables designers to retrieve relevant sections from computer storage and modify them as needed. SPECSINTACT has also been adopted by government agencies. The system is an integral part of the Construction Criteria Base (CCB), a single disc containing design and construction information for 10 government agencies including the American Institute of Architects' MASTERSPEC. CCB employs CD-ROM technologies and is available from the National Institute of Building Sciences. Users report substantial savings in time and productivity.
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer to peer file sharing protocols. Image Engine supports both free text and controlled vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A
2012-09-01
Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
Zhou, Lianjie; Chen, Nengcheng; Yuan, Sai; Chen, Zeqiang
2016-10-29
The efficient sharing of spatio-temporal trajectory data is important to understand traffic congestion in mass data. However, the data volumes of bus networks in urban cities are growing rapidly, reaching daily volumes of one hundred million datapoints. Accessing and retrieving mass spatio-temporal trajectory data in any field is hard and inefficient due to limited computational capabilities and incomplete data organization mechanisms. Therefore, we propose an optimized and efficient spatio-temporal trajectory data retrieval method based on the Cloudera Impala query engine, called ESTRI, to enhance the efficiency of mass data sharing. As an excellent query tool for mass data, Impala can be applied for mass spatio-temporal trajectory data sharing. In ESTRI we extend the spatio-temporal trajectory data retrieval function of Impala and design a suitable data partitioning method. In our experiments, the Taiyuan BeiDou (BD) bus network is selected, containing 2300 buses with BD positioning sensors, producing 20 million records every day, resulting in two difficulties as described in the Introduction section. In addition, ESTRI and MongoDB are applied in experiments. The experiments show that ESTRI achieves the most efficient data retrieval compared to retrieval using MongoDB for data volumes of fifty million, one hundred million, one hundred and fifty million, and two hundred million. The performance of ESTRI is approximately seven times higher than that of MongoDB. The experiments show that ESTRI is an effective method for retrieving mass spatio-temporal trajectory data. Finally, bus distribution mapping in Taiyuan city is achieved, describing the buses density in different regions at different times throughout the day, which can be applied in future studies of transport, such as traffic scheduling, traffic planning and traffic behavior management in intelligent public transportation systems.
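A data partitioning scheme of the kind ESTRI applies before querying can be sketched simply: records are bucketed by a time-derived key so a time-window query scans only the matching partitions. The day/hour scheme and record layout below are assumptions for illustration, not the paper's actual design.

```python
# Hedged sketch of time-based partitioning for trajectory records: derive a
# (day, hour) partition key, and let window queries touch only the partitions
# they need instead of the full dataset.
from datetime import datetime

def partition_key(record):
    ts = datetime.fromisoformat(record["time"])
    return (ts.strftime("%Y-%m-%d"), ts.hour)

def query_window(records, day, hours):
    """Return only records falling in the requested day/hour partitions."""
    wanted = {(day, h) for h in hours}
    return [r for r in records if partition_key(r) in wanted]

records = [
    {"bus": "A1", "time": "2016-10-29T08:15:00", "lon": 112.55, "lat": 37.87},
    {"bus": "A1", "time": "2016-10-29T09:05:00", "lon": 112.56, "lat": 37.88},
    {"bus": "B2", "time": "2016-10-30T08:40:00", "lon": 112.54, "lat": 37.86},
]
print(len(query_window(records, "2016-10-29", {8})))
```

In a real Impala deployment the key would map to physical partition directories so the query engine can prune them at scan time.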
Case-based fracture image retrieval.
Zhou, Xin; Stern, Richard; Müller, Henning
2012-05-01
Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered as relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated. One used a dense 40 × 40 pixel grid sampling, and the second one used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered as important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors on the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between a SIFT-based system and GIFT is performed. 
A first comparison on the selection of sampling strategies using SIFT features shows that dense sampling using a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, three unsupervised feature selection strategies were evaluated. A grid parameter search is applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) obtains the best performance for all three strategies. Increasing the number of clusters in clustering can also improve the retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
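The per-case variance strategy above, which gave the best MAP, can be illustrated in a few lines: divide an image into a grid of blocks, compute a variance per block, and keep the most variable (salient) half. Plain pixel variance stands in here for the SIFT descriptor variance used in the study; the grid size and toy image are assumptions.

```python
# Illustrative sketch of variance-based region selection on a pixel grid.
from statistics import pvariance

def salient_blocks(image, grid=4, keep_fraction=0.5):
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid
    variances = {}
    for i in range(grid):
        for j in range(grid):
            pixels = [image[y][x]
                      for y in range(i * bh, (i + 1) * bh)
                      for x in range(j * bw, (j + 1) * bw)]
            variances[(i, j)] = pvariance(pixels)
    # Keep the most variable fraction of blocks; flat regions are dropped.
    ranked = sorted(variances, key=variances.get, reverse=True)
    return ranked[: int(len(ranked) * keep_fraction)]

# Toy 8x8 image: only the top-left quadrant has texture (alternating values).
img = [[(x + y) % 2 if x < 4 and y < 4 else 0 for x in range(8)]
       for y in range(8)]
kept = salient_blocks(img, grid=4)
print(sorted(kept[:4]))  # the four textured blocks rank first
```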
Ilic, D; Bessell, T L; Silagy, C A; Green, S
2003-03-01
The Internet provides consumers with access to online health information; however, identifying relevant and valid information can be problematic. Our objectives were firstly to investigate the efficiency of search-engines, and then to assess the quality of online information pertaining to androgen deficiency in the ageing male (ADAM). Keyword searches were performed on nine search-engines (four general and five medical) to identify website information regarding ADAM. Search-engine efficiency was compared by percentage of relevant websites obtained via each search-engine. The quality of information published on each website was assessed using the DISCERN rating tool. Of 4927 websites searched, 47 (1.44%) and 10 (0.60%) relevant websites were identified by general and medical search-engines respectively. The overall quality of online information on ADAM was poor. The quality of websites retrieved using medical search-engines did not differ significantly from those retrieved by general search-engines. Despite the poor quality of online information relating to ADAM, it is evident that medical search-engines are no better than general search-engines in sourcing consumer information relevant to ADAM.
Relevance similarity: an alternative means to monitor information retrieval systems
Dong, Peng; Loh, Marie; Mondry, Adrian
2005-01-01
Background Relevance assessment is a major problem in the evaluation of information retrieval systems. The work presented here introduces a new parameter, "Relevance Similarity", for the measurement of the variation of relevance assessment. In a situation where individual assessment can be compared with a gold standard, this parameter is used to study the effect of such variation on the performance of a medical information retrieval system. In such a setting, Relevance Similarity is the ratio of assessors who rank a given document the same as the gold standard over the total number of assessors in the group. Methods The study was carried out on a collection of Critically Appraised Topics (CATs). Twelve volunteers were divided into two groups of people according to their domain knowledge. They assessed the relevance of retrieved topics obtained by querying a meta-search engine with ten keywords related to medical science. Their assessments were compared to the gold standard assessment, and Relevance Similarities were calculated as the ratio of positive concordance with the gold standard for each topic. Results The similarity comparison among groups showed that a higher degree of agreement exists among evaluators with more subject knowledge. The performance of the retrieval system was not significantly different as a result of the variations in relevance assessment in this particular query set. Conclusion In assessment situations where evaluators can be compared to a gold standard, Relevance Similarity provides an alternative evaluation technique to the commonly used kappa scores, which may give paradoxically low scores in highly biased situations such as document repositories containing large quantities of relevant data. PMID:16029513
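The Relevance Similarity parameter defined above reduces to a simple ratio, which can be computed directly. The assessor judgements in this example are invented.

```python
# Minimal sketch of Relevance Similarity: the fraction of assessors whose
# judgement of a document matches the gold-standard judgement.

def relevance_similarity(judgements, gold):
    """Ratio of assessors agreeing with the gold standard."""
    agree = sum(1 for j in judgements if j == gold)
    return agree / len(judgements)

# Six assessors judge one retrieved topic as relevant (True) or not (False);
# the gold standard says it is relevant. Four of six agree.
print(relevance_similarity([True, True, False, True, True, False], True))
```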
Where to search top-K biomedical ontologies?
Oliveira, Daniela; Butt, Anila Sahar; Haller, Armin; Rebholz-Schuhmann, Dietrich; Sahay, Ratnesh
2018-03-20
Searching for precise terms and terminological definitions in the biomedical data space is problematic, as researchers find overlapping, closely related and even equivalent concepts in a single or multiple ontologies. Search engines that retrieve ontological resources often suggest an extensive list of search results for a given input term, which leads to the tedious task of selecting the best-fit ontological resource (class or property) for the input term and reduces user confidence in the retrieval engines. A systematic evaluation of these search engines is necessary to understand their strengths and weaknesses in different search requirements. We have implemented seven comparable Information Retrieval ranking algorithms to search through ontologies and compared them against four search engines for ontologies. Free-text queries have been performed, the outcomes have been judged by experts and the ranking algorithms and search engines have been evaluated against the expert-based ground truth (GT). In addition, we propose a probabilistic GT that is developed automatically to provide deeper insights and confidence to the expert-based GT as well as evaluating a broader range of search queries. The main outcome of this work is the identification of key search factors for biomedical ontologies together with search requirements and a set of recommendations that will help biomedical experts and ontology engineers to select the best-suited retrieval mechanism in their search scenarios. We expect that this evaluation will allow researchers and practitioners to apply the current search techniques more reliably and that it will help them to select the right solution for their daily work. The source code (of seven ranking algorithms), ground truths and experimental results are available at https://github.com/danielapoliveira/bioont-search-benchmark.
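Evaluating ranking algorithms against an expert ground truth, as in the benchmark above, typically comes down to metrics like precision at k over each engine's ranked list. The sketch below uses invented ontology-class identifiers; the benchmark's actual metrics and data live in the linked repository.

```python
# Minimal sketch of evaluating a ranked list against a ground-truth set of
# relevant classes: precision@k = relevant hits in the top k, divided by k.

def precision_at_k(ranked, relevant, k):
    top = ranked[:k]
    return sum(1 for r in top if r in relevant) / k

ranked = ["GO:1", "GO:2", "GO:3", "GO:4"]   # one engine's ranked results
relevant = {"GO:1", "GO:3"}                  # expert-judged relevant classes
print(precision_at_k(ranked, relevant, 2))
```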
ADP SYSTEMS ANALYSIS - COMMITTED VS. AVAILABLE MILITARY TRANSPORTATION (LMI T1).
(*LOGISTICS, MANAGEMENT ENGINEERING), (*DATA PROCESSING, LOGISTICS), INFORMATION RETRIEVAL, SYSTEMS ENGINEERING, MILITARY TRANSPORTATION, CARGO VEHICLES, SCHEDULING, COMPUTER PROGRAMMING, MANAGEMENT PLANNING AND CONTROL
Intelligent web image retrieval system
NASA Astrophysics Data System (ADS)
Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook
2001-07-01
Recently, web sites such as e-business sites and shopping mall sites deal with lots of image information. To find a specific image from these image sources, we usually use web search engines or image database engines which rely on keyword-only retrievals or color-based retrievals with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, the texture- and color-based image classification and indexing techniques, and representation schemes of user usage patterns. The query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of the user's preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, some experimental results showing recall and precision are also explained.
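Combining the query factors listed above (keywords, a sample texture pattern, positional colour values) into one relevance score can be sketched as a weighted sum. The weights, feature representation, and exact matching below are illustrative assumptions, not the system's actual design.

```python
# Hypothetical combined scoring: weighted sum of keyword overlap, texture
# match, and colour match. Real systems would use richer feature distances.

def combined_score(image, query, weights=(0.5, 0.3, 0.2)):
    kw = (len(set(query["keywords"]) & set(image["keywords"]))
          / max(len(query["keywords"]), 1))
    tex = 1.0 if query.get("texture") == image.get("texture") else 0.0
    col = 1.0 if query.get("color") == image.get("color") else 0.0
    wk, wt, wc = weights
    return wk * kw + wt * tex + wc * col

img = {"keywords": ["shirt", "cotton"], "texture": "striped", "color": "blue"}
q = {"keywords": ["shirt"], "texture": "striped", "color": "red"}
print(combined_score(img, q))  # 0.5*1.0 + 0.3*1.0 + 0.2*0.0 = 0.8
```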
Fringe pattern information retrieval using wavelets
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Patimo, Caterina; Manicone, Pasquale D.; Lamberti, Luciano
2005-08-01
Two-dimensional phase modulation is currently the basic model used in the interpretation of fringe patterns that contain displacement information (moiré, holographic interferometry, speckle techniques). Another way to look at these two-dimensional signals is to consider them as frequency-modulated signals. This alternative interpretation has practical implications similar to those that exist in radio engineering for handling frequency-modulated signals. Utilizing this model it is possible to obtain frequency information by using the energy approach introduced by Ville in 1944. A natural complementary tool of this process is the wavelet methodology. The use of wavelets makes it possible to obtain the local values of the frequency in a one- or two-dimensional domain without the need for previous phase retrieval and differentiation. Furthermore, from the properties of wavelets it is also possible to obtain at the same time the phase of the signal, with the advantage of better noise removal capability and the possibility of developing simpler algorithms for phase unwrapping due to the availability of the derivative of the phase.
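The frequency-modulation model described above can be summarized in a short derivation; the symbols are illustrative and not taken from the paper.

```latex
% Sketch of the frequency-modulation model of a fringe pattern: the signal
% g(x) carries its information in the local frequency, the derivative of
% the phase.
g(x) = a(x)\cos\varphi(x), \qquad
f(x) = \frac{1}{2\pi}\,\frac{d\varphi(x)}{dx}
% The continuous wavelet transform of g,
W(b,s) = \frac{1}{\sqrt{s}} \int_{-\infty}^{\infty}
         g(x)\,\psi^{*}\!\left(\frac{x-b}{s}\right)\mathrm{d}x,
% attains its maximum modulus along the ridge s = s_r(b); the ridge scale
% yields the local frequency f(b) directly, and \arg W(b, s_r(b)) recovers
% the phase \varphi(b) without separate phase retrieval and differentiation.
```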
ERIC Educational Resources Information Center
Air Force Systems Command, Wright-Patterson AFB, OH. Foreign Technology Div.
The role and place of the machine in scientific and technical information is explored including: basic trends in the development of information retrieval systems; preparation of engineering and scientific cadres with respect to mechanization and automation of information works; the logic of descriptor retrieval systems; the 'SETKA-3' automated…
Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis
NASA Technical Reports Server (NTRS)
Kopp, H.; Trettau, R.; Zolotar, B.
1984-01-01
The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.
NASA Technical Reports Server (NTRS)
Narayanan, R.; Zimmerman, W. F.; Poon, P. T. Y.
1981-01-01
Test results on a modular simulation of the thermal transport and heat storage characteristics of a heat pipe solar receiver (HPSR) with thermal energy storage (TES) are presented. The HPSR features a 15-25 kWe Stirling engine power conversion system at the focal point of a parabolic dish concentrator operating at 827 C. The system collects and retrieves solar heat with sodium pipes and stores the heat in NaF-MgF2 latent heat storage material. The trials were run with a single full scale heat pipe, three full scale TES containers, and an air-cooled heat extraction coil to replace the Stirling engine heat exchanger. Charging and discharging, constant temperature operation, mixed mode operation, thermal inertia, etc., were studied. The heat pipe performance was verified, as were the thermal energy storage and discharge rates and isothermal discharges.
System to control contamination during retrieval of buried TRU waste
Menkhaus, Daniel E.; Loomis, Guy G.; Mullen, Carlan K.; Scott, Donald W.; Feldman, Edgar M.; Meyer, Leroy C.
1993-01-01
A system to control contamination during the retrieval of hazardous waste comprising an outer containment building, an inner containment building, within the outer containment building, an electrostatic radioactive particle recovery unit connected to and in communication with the inner and outer containment buildings, and a contaminate suppression system including a moisture control subsystem, and a rapid monitoring system having the ability to monitor conditions in the inner and outer containment buildings.
Kushniruk, Andre W; Kan, Min-Yen; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L
2002-01-01
This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser and three conventional search engines - Google, Yahoo and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It then produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio- and video recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems.
Internet Search Engines - Fluctuations in Document Accessibility.
ERIC Educational Resources Information Center
Mettrop, Wouter; Nieuwenhuysen, Paul
2001-01-01
Reports an empirical investigation of the consistency of retrieval through Internet search engines. Evaluates 13 engines: AltaVista, EuroFerret, Excite, HotBot, InfoSeek, Lycos, MSN, NorthernLight, Snap, WebCrawler, and three national Dutch engines: Ilse, Search.nl and Vindex. The focus is on a characteristic related to size: the degree of…
An architecture for diversity-aware search for medical web content.
Denecke, K
2012-01-01
The Web provides a huge source of information, also on medical and health-related issues. In particular, the content of medical social media data can be diverse due to the background of an author, the source or the topic. Diversity in this context means that a document covers different aspects of a topic or a topic is described in different ways. In this paper, we introduce an approach that makes it possible to consider the diverse aspects of a search query when providing retrieval results to a user. We introduce a system architecture for a diversity-aware search engine that allows retrieving medical information from the web. The diversity of retrieval results is assessed by calculating diversity measures that rely upon semantic information derived from a mapping to concepts of a medical terminology. Considering these measures, the result set is diversified by ranking more diverse texts higher. The methods and system architecture are implemented in a retrieval engine for medical web content. The diversity measures reflect the diversity of aspects considered in a text and its type of information content. They are used for result presentation, filtering and ranking. In a user evaluation we assess the user satisfaction with an ordering of retrieval results that considers the diversity measures. It is shown through the evaluation that diversity-aware retrieval considering diversity measures in ranking could increase the user satisfaction with retrieval results.
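A concept-based diversity measure of the kind described above can be sketched as follows: each document is mapped to terminology concepts, and documents covering more distinct aspects are ranked higher. The specific measure and the concept annotations are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch of diversity-aware ranking over concept-annotated documents.

def diversity(doc_concepts):
    """Distinct concepts covered, normalised by total concept mentions."""
    all_mentions = [c for concepts in doc_concepts for c in concepts]
    return len(set(all_mentions)) / len(all_mentions)

def rank_by_diversity(results):
    """Re-rank documents so that more diverse texts come first."""
    return sorted(results, key=lambda d: len(set(d["concepts"])), reverse=True)

docs = [
    {"id": "d1", "concepts": ["diabetes", "diet", "exercise"]},
    {"id": "d2", "concepts": ["diabetes"]},
    {"id": "d3", "concepts": ["diabetes", "insulin"]},
]
print([d["id"] for d in rank_by_diversity(docs)])  # broadest coverage first
```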
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
Green, Jason M.; Harnsomburana, Jaturon; Schaeffer, Mary L.; Lawrence, Carolyn J.; Shyu, Chi-Ren
2011-01-01
Model Organism Databases, including the various plant genome databases, collect and enable access to massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While a variety of basic browsing and search capabilities are available to allow researchers to query and peruse the names and attributes of phenotypic data, next-generation search mechanisms that allow querying and ranking of text descriptions are much less common. In addition, the plant community needs an innovative way to leverage the existing links in these databases to search groups of text descriptions simultaneously. Furthermore, though much time and effort have been afforded to the development of plant-related ontologies, the knowledge embedded in these ontologies remains largely unused in available plant search mechanisms. Addressing these issues, we have developed a unique search engine for mutant phenotypes from MaizeGDB. This advanced search mechanism integrates various text description sources in MaizeGDB to aid a user in retrieving desired mutant phenotype information. Currently, descriptions of mutant phenotypes, loci and gene products are utilized collectively for each search, though expansion of the search mechanism to include other sources is straightforward. The retrieval engine, to our knowledge, is the first engine to exploit the content and structure of available domain ontologies, currently the Plant and Gene Ontologies, to expand and enrich retrieval results in major plant genomic databases. Database URL: http://www.PhenomicsWorld.org/QBTA.php PMID:21558151
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...
45 CFR 205.35 - Mechanized claims processing and information retrieval systems; definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... claims processing and information retrieval systems; definitions. Section 205.35 through 205.38 contain...: (a) A mechanized claims processing and information retrieval system, hereafter referred to as an automated application processing and information retrieval system (APIRS), or the system, means a system of...
Data Compression in Full-Text Retrieval Systems.
ERIC Educational Resources Information Center
Bell, Timothy C.; And Others
1993-01-01
Describes compression methods for components of full-text systems such as text databases on CD-ROM. Topics discussed include storage media; structures for full-text retrieval, including indexes, inverted files, and bitmaps; compression tools; memory requirements during retrieval; and ranking and information retrieval. (Contains 53 references.)…
Hanauer, David A; Mei, Qiaozhu; Law, James; Khanna, Ritu; Zheng, Kai
2015-06-01
This paper describes the University of Michigan's nine-year experience in developing and using a full-text search engine designed to facilitate information retrieval (IR) from narrative documents stored in electronic health records (EHRs). The system, called the Electronic Medical Record Search Engine (EMERSE), functions similar to Google but is equipped with special functionalities for handling challenges unique to retrieving information from medical text. Key features that distinguish EMERSE from general-purpose search engines are discussed, with an emphasis on functions crucial to (1) improving medical IR performance and (2) assuring search quality and results consistency regardless of users' medical background, stage of training, or level of technical expertise. Since its initial deployment, EMERSE has been enthusiastically embraced by clinicians, administrators, and clinical and translational researchers. To date, the system has been used in supporting more than 750 research projects yielding 80 peer-reviewed publications. In several evaluation studies, EMERSE demonstrated very high levels of sensitivity and specificity in addition to greatly improved chart review efficiency. Increased availability of electronic data in healthcare does not automatically warrant increased availability of information. The success of EMERSE at our institution illustrates that free-text EHR search engines can be a valuable tool to help practitioners and researchers retrieve information from EHRs more effectively and efficiently, enabling critical tasks such as patient case synthesis and research data abstraction. EMERSE, available free of charge for academic use, represents a state-of-the-art medical IR tool with proven effectiveness and user acceptance. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
A tutorial on information retrieval: basic terms and concepts
Zhou, Wei; Smalheiser, Neil R; Yu, Clement
2006-01-01
This informal tutorial is intended for investigators and students who would like to understand the workings of information retrieval systems, including the most frequently used search engines: PubMed and Google. Having a basic knowledge of the terms and concepts of information retrieval should improve the efficiency and productivity of searches. As well, this knowledge is needed in order to follow current research efforts in biomedical information retrieval and text mining that are developing new systems not only for finding documents on a given topic, but extracting and integrating knowledge across documents. PMID:16722601
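One of the basic concepts such tutorials cover is term weighting, most commonly TF-IDF. The sketch below shows a minimal TF-IDF scoring pass over tokenized documents; it is a standard textbook scheme, not taken from this tutorial, and the function name and add-one smoothing are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf_scores(query_terms, docs):
    """Score each document by a simple summed TF-IDF weight.

    `docs` is a list of tokenized documents (lists of terms). Real
    engines add length normalization and more refined ranking models.
    """
    n = len(docs)
    # document frequency of each query term
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for doc in docs:
        tf = Counter(doc)  # term frequency within this document
        s = sum(tf[t] * math.log((n + 1) / (df[t] + 1)) for t in query_terms)
        scores.append(s)
    return scores
```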
Beyond Information Retrieval: Ways To Provide Content in Context.
ERIC Educational Resources Information Center
Wiley, Deborah Lynne
1998-01-01
Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…
World Wide Web Based Image Search Engine Using Text and Image Content Features
NASA Astrophysics Data System (ADS)
Luo, Bo; Wang, Xiaogang; Tang, Xiaoou
2003-01-01
Using both text and image content features, a hybrid image retrieval system for the World Wide Web is developed in this paper. We first use a text-based image meta-search engine to retrieve images from the Web based on the text information on the image host pages to provide an initial image set. Because of the high-speed and low cost nature of the text-based approach, we can easily retrieve a broad coverage of images with a high recall rate and a relatively low precision. An image content based ordering is then performed on the initial image set. All the images are clustered into different folders based on the image content features. In addition, the images can be re-ranked by the content features according to the user feedback. Such a design makes it truly practical to use both text and image content for image retrieval over the Internet. Experimental results confirm the efficiency of the system.
Searches Conducted for Engineers.
ERIC Educational Resources Information Center
Lorenz, Patricia
This paper reports an industrial information specialist's experience in performing online searches for engineers and surveys the databases used. Engineers seeking assistance fall into three categories: (1) those who recognize the value of online retrieval; (2) referrals by colleagues; and (3) those who do not seek help. As more successful searches…
Strategies for Information Retrieval and Virtual Teaming to Mitigate Risk on NASA's Missions
NASA Technical Reports Server (NTRS)
Topousis, Daria; Williams, Gregory; Murphy, Keri
2007-01-01
Following the loss of NASA's Space Shuttle Columbia in 2003, it was determined that problems in the agency's organization created an environment that led to the accident. One component of the proposed solution resulted in the formation of the NASA Engineering Network (NEN), a suite of information retrieval and knowledge sharing tools. This paper describes the implementation of this set of search, portal, content management, and semantic technologies, including a unique meta search capability for data from distributed engineering resources. NEN's communities of practice are formed along engineering disciplines where users leverage their knowledge and best practices to collaborate and take informal learning back to their personal jobs and embed it into the procedures of the agency. These results offer insight into using traditional engineering disciplines for virtual teaming and problem solving.
Highly retrievable spin-wave-photon entanglement source.
Yang, Sheng-Jun; Wang, Xu-Jie; Li, Jun; Rui, Jun; Bao, Xiao-Hui; Pan, Jian-Wei
2015-05-29
Entanglement between a single photon and a quantum memory forms the building blocks for a quantum repeater and quantum network. Previous entanglement sources are typically with low retrieval efficiency, which limits future larger-scale applications. Here, we report a source of highly retrievable spin-wave-photon entanglement. Polarization entanglement is created through interaction of a single photon with an ensemble of atoms inside a low-finesse ring cavity. The cavity is engineered to be resonant for dual spin-wave modes, which thus enables efficient retrieval of the spin-wave qubit. An intrinsic retrieval efficiency up to 76(4)% has been observed. Such a highly retrievable atom-photon entanglement source will be very useful in future larger-scale quantum repeater and quantum network applications.
Cluster-Based Query Expansion Using Language Modeling for Biomedical Literature Retrieval
ERIC Educational Resources Information Center
Xu, Xuheng
2011-01-01
The tremendously huge volume of biomedical literature, scientists' specific information needs, long terms of multiples words, and fundamental problems of synonym and polysemy have been challenging issues facing the biomedical information retrieval community researchers. Search engines have significantly improved the efficiency and effectiveness of…
ERIC Educational Resources Information Center
Fox, Edward A.
1987-01-01
Discusses the CODER system, which was developed to investigate the application of artificial intelligence methods to increase the effectiveness of information retrieval systems, particularly those involving heterogeneous documents. Highlights include the use of PROLOG programing, blackboard-based designs, knowledge engineering, lexicological…
Zhou, Lianjie; Chen, Nengcheng; Yuan, Sai; Chen, Zeqiang
2016-01-01
The efficient sharing of spatio-temporal trajectory data is important for understanding traffic congestion in mass data. However, the data volumes of bus networks in urban cities are growing rapidly, reaching daily volumes of one hundred million datapoints. Accessing and retrieving mass spatio-temporal trajectory data in any field is hard and inefficient due to limited computational capabilities and incomplete data organization mechanisms. Therefore, we propose an optimized and efficient spatio-temporal trajectory data retrieval method based on the Cloudera Impala query engine, called ESTRI, to enhance the efficiency of mass data sharing. As an excellent query tool for mass data, Impala can be applied for mass spatio-temporal trajectory data sharing. In ESTRI we extend the spatio-temporal trajectory data retrieval function of Impala and design a suitable data partitioning method. In our experiments, the Taiyuan BeiDou (BD) bus network is selected, containing 2300 buses with BD positioning sensors, producing 20 million records every day, resulting in two difficulties as described in the Introduction section. In addition, ESTRI and MongoDB are applied in experiments. The experiments show that ESTRI achieves the most efficient data retrieval compared to retrieval using MongoDB for data volumes of fifty million, one hundred million, one hundred and fifty million, and two hundred million. The performance of ESTRI is approximately seven times higher than that of MongoDB. The experiments show that ESTRI is an effective method for retrieving mass spatio-temporal trajectory data. Finally, bus distribution mapping in Taiyuan city is achieved, describing the bus density in different regions at different times throughout the day, which can be applied in future studies of transport, such as traffic scheduling, traffic planning and traffic behavior management in intelligent public transportation systems. PMID:27801869
Putting Google Scholar to the Test: A Preliminary Study
ERIC Educational Resources Information Center
Robinson, Mary L.; Wusteman, Judith
2007-01-01
Purpose: To describe a small-scale quantitative evaluation of the scholarly information search engine, Google Scholar. Design/methodology/approach: Google Scholar's ability to retrieve scholarly information was compared to that of three popular search engines: Ask.com, Google and Yahoo! Test queries were presented to all four search engines and…
Information Discovery and Retrieval Tools
2004-12-01
information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Information Discovery and Retrieval Tools
2003-04-01
information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of...such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
Development and tuning of an original search engine for patent libraries in medicinal chemistry.
Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick
2014-01-01
The large increase in the size of patent collections has led to the need for efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora.
We have shown that a proper tuning of the system to adapt to the various search tasks clearly increases the effectiveness of the system. We conclude that different search tasks demand different information retrieval engines' settings in order to yield optimal end-user retrieval.
Occam's razor: supporting visual query expression for content-based image queries
NASA Astrophysics Data System (ADS)
Venters, Colin C.; Hartley, Richard J.; Hewitt, William T.
2005-01-01
This paper reports the results of a usability experiment that investigated visual query formulation on three dimensions: effectiveness, efficiency, and user satisfaction. Twenty-eight evaluation sessions were conducted in order to assess the extent to which query by visual example supports visual query formulation in a content-based image retrieval environment. In order to provide a context and focus for the investigation, the study was segmented by image type, user group, and use function. The image type consisted of a set of abstract geometric device marks supplied by the UK Trademark Registry. Users were selected from the 14 UK Patent Information Network offices. The use function was limited to the retrieval of images by shape similarity. Two client interfaces were developed for comparison purposes: Trademark Image Browser Engine (TRIBE) and Shape Query Image Retrieval Systems Engine (SQUIRE).
Essie: A Concept-based Search Engine for Structured Biomedical Text
Ide, Nicholas C.; Loane, Russell F.; Demner-Fushman, Dina
2007-01-01
This article describes the algorithms implemented in the Essie search engine that is currently serving several Web sites at the National Library of Medicine. Essie is a phrase-based search engine with term and concept query expansion and probabilistic relevancy ranking. Essie’s design is motivated by an observation that query terms are often conceptually related to terms in a document, without actually occurring in the document text. Essie’s performance was evaluated using data and standard evaluation methods from the 2003 and 2006 Text REtrieval Conference (TREC) Genomics track. Essie was the best-performing search engine in the 2003 TREC Genomics track and achieved results comparable to those of the highest-ranking systems on the 2006 TREC Genomics track task. Essie shows that a judicious combination of exploiting document structure, phrase searching, and concept based query expansion is a useful approach for information retrieval in the biomedical domain. PMID:17329729
Use of controlled vocabularies to improve biomedical information retrieval tasks.
Pasche, Emilie; Gobeill, Julien; Vishnyakova, Dina; Ruch, Patrick; Lovis, Christian
2013-01-01
The high heterogeneity of biomedical vocabulary is a major obstacle for information retrieval in large biomedical collections. Therefore, using biomedical controlled vocabularies is crucial for managing these contents. We investigate the impact of query expansion based on controlled vocabularies to improve the effectiveness of two search engines. Our strategy relies on the enrichment of users' queries with additional terms, directly derived from such vocabularies applied to infectious diseases and chemical patents. We observed that query expansion based on pathogen names resulted in improvements of the top-precision of our first search engine, while the normalization of diseases degraded the top-precision. The expansion of chemical entities, which was performed on the second search engine, positively affected the mean average precision. We have shown that query expansion of some types of biomedical entities has a great potential to improve search effectiveness; therefore a fine-tuning of query expansion strategies could help improve the performance of search engines.
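The expansion strategy described, enriching a user's query with additional terms derived from a controlled vocabulary, can be sketched as a dictionary lookup. The `expand_query` function and the vocabulary format below are hypothetical illustrations, not the authors' system.

```python
def expand_query(query_terms, vocabulary):
    """Enrich a query with synonyms from a controlled vocabulary.

    `vocabulary` maps a normalized preferred term to its synonyms,
    e.g. {"mrsa": ["methicillin-resistant staphylococcus aureus"]}.
    Returns the original terms followed by any matched synonyms.
    """
    expanded = list(query_terms)
    for term in query_terms:
        for synonym in vocabulary.get(term.lower(), []):
            if synonym not in expanded:  # avoid duplicate terms
                expanded.append(synonym)
    return expanded
```

The expanded term list would then be submitted to the search engine in place of the raw query, which is how expansion can lift top-precision when the vocabulary matches the collection's terminology.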
Semantic-Based Information Retrieval of Biomedical Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiao, Yu; Potok, Thomas E; Hurson, Ali R.
In this paper, we propose to improve the effectiveness of biomedical information retrieval via a medical thesaurus. We analyzed the deficiencies of the existing medical thesauri and reconstructed a new thesaurus, called MEDTHES, which follows the ANSI/NISO Z39.19-2003 standard. MEDTHES also endows the users with fine-grained control of information retrieval by providing functions to calculate the semantic similarity between words. We demonstrate the usage of MEDTHES through an existing data search engine.
Improving data retrieval quality: Evidence based medicine perspective.
Kamalov, M; Dobrynin, V; Balykina, J; Kolbin, A; Verbitskaya, E; Kasimova, M
2015-01-01
The actively developing approach in modern medicine is one focused on the principles of evidence-based medicine, which requires assessment of the quality and reliability of studies. However, in some cases studies corresponding to the first level of evidence may contain errors in randomized controlled trials (RCTs). A solution to this problem is the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. Studies in both medicine and information retrieval are being conducted to develop search engines for the MEDLINE database [1]; combined techniques for summarization and information retrieval, targeted at finding the best medication based on levels of evidence, are also being developed [2]. Given the relevance of and demand for studies in both fields, development of a search engine for MEDLINE database search was started at Saint-Petersburg State University with the support of Pavlov First Saint-Petersburg State Medical University and the Tashkent Institute of Postgraduate Medical Education. The novelty and value of the proposed system lie in its method of ranking relevant abstracts: it is suggested that the system will be able to rank studies by level of evidence and to apply GRADE criteria for system evaluation. The task falls within the domains of information retrieval and machine learning. Based on the results of previous work [3], in which the main goal was to cluster abstracts from the MEDLINE database by subtype of medical intervention, a set of clustering algorithms was selected: K-means, K-means++, and EM from the sklearn (http://scikit-learn.org) and WEKA (http://www.cs.waikato.ac.nz/~ml/weka/) libraries, together with Latent Semantic Analysis (LSA) [4], retaining the first 210 factors, and the "bag of words" model [5] to represent the clustered documents.
For abstract classification, several algorithms were tested, including Complement Naive Bayes [6], Sequential Minimal Optimization (SMO) [7], and non-linear SVM from the WEKA library. The first step of this study was to mark up abstracts of articles from MEDLINE as containing or not containing a medical intervention. For this purpose, a web-crawler from our previous work [8] was modified to perform the necessary markup. The next step was to evaluate the clustering algorithms on the marked-up abstracts. Clustering the abstracts into two groups with LSA, retaining the first 210 factors, produced the following results:
1) K-means: Purity = 0.5598, Normalized Entropy = 0.5994;
2) K-means++: Purity = 0.6743, Normalized Entropy = 0.4996;
3) EM: Purity = 0.5443, Normalized Entropy = 0.6344.
With the "bag of words" model:
1) K-means: Purity = 0.5134, Normalized Entropy = 0.6254;
2) K-means++: Purity = 0.5645, Normalized Entropy = 0.5299;
3) EM: Purity = 0.5247, Normalized Entropy = 0.6345.
Studies containing a medical intervention were then classified by subtype of medical intervention, with abstracts represented as a "bag of words" model after removal of stop words:
1) Complement Naive Bayes: macro F-measure = 0.6934, micro F-measure = 0.7234;
2) Sequential Minimal Optimization: macro F-measure = 0.6543, micro F-measure = 0.7042;
3) Non-linear SVM: macro F-measure = 0.6835, micro F-measure = 0.7642.
Based on these computational experiments, the best clustering of abstracts by presence or absence of a medical intervention was obtained using the K-means++ algorithm together with LSA, retaining the first 210 factors.
The quality of classifying abstracts by subtype of medical intervention was improved over the existing results [8] using the non-linear SVM algorithm with the "bag of words" model and stop-word removal. The clustering results obtained in this study will help in grouping abstracts by level of evidence, using the classification by subtype of medical intervention, and will make it possible to extract information from abstracts on specific types of interventions.
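The Purity and Normalized Entropy figures reported in this record are standard external clustering measures. A minimal sketch of one common way to compute them follows; the exact normalization the authors used is not specified, so the entropy normalization here (by log2 of the number of distinct labels) is an assumption.

```python
import math
from collections import Counter

def purity(clusters):
    """Fraction of items assigned to their cluster's majority label.

    `clusters` is a list of clusters, each a list of true-class labels.
    """
    total = sum(len(c) for c in clusters)
    return sum(max(Counter(c).values()) for c in clusters if c) / total

def normalized_entropy(clusters):
    """Size-weighted average label entropy per cluster, normalized
    by log2 of the number of distinct labels (0 = pure clusters)."""
    total = sum(len(c) for c in clusters)
    labels = {label for c in clusters for label in c}
    if len(labels) < 2:
        return 0.0
    h = 0.0
    for c in clusters:
        if not c:
            continue
        counts = Counter(c)
        hc = -sum((n / len(c)) * math.log2(n / len(c)) for n in counts.values())
        h += (len(c) / total) * hc
    return h / math.log2(len(labels))
```

Higher purity and lower normalized entropy both indicate clusters that agree better with the reference labels, matching how the K-means++ results above dominate.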
A two-level cache for distributed information retrieval in search engines.
Zhang, Weizhe; He, Hui; Ye, Jianwei
2013-01-01
To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries in users' logs. We extract the highest-ranked queries of users into the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data, and we propose a distribution strategy for the cache data. The experiments show that the hit rate, efficiency, and time consumption of the two-level cache have advantages compared with other cache structures.
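The two-level structure described, a static cache of the most popular logged queries plus a dynamic auxiliary cache, can be sketched as follows. The class name, the LRU eviction policy for the dynamic level, and the `results_for` backend callback are illustrative assumptions rather than the paper's design.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Static cache of popular queries plus a dynamic LRU auxiliary."""

    def __init__(self, popular_queries, results_for, dynamic_size=2):
        # static level: precomputed results for the top log queries,
        # never evicted at query time
        self.static = {q: results_for(q) for q in popular_queries}
        self.dynamic = OrderedDict()  # dynamic level, LRU order
        self.dynamic_size = dynamic_size
        self.results_for = results_for

    def get(self, query):
        if query in self.static:
            return self.static[query]
        if query in self.dynamic:
            self.dynamic.move_to_end(query)  # mark as recently used
            return self.dynamic[query]
        result = self.results_for(query)  # cache miss: query the backend
        self.dynamic[query] = result
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)  # evict least recently used
        return result
```

Popular queries are answered from the static level without touching the backend; the dynamic level absorbs shifts in the query distribution between rebuilds of the static level.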
The Evolution of Web Searching.
ERIC Educational Resources Information Center
Green, David
2000-01-01
Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…
Enhanced Information Retrieval Using AJAX
NASA Astrophysics Data System (ADS)
Kachhwaha, Rajendra; Rajvanshi, Nitin
2010-11-01
Information retrieval deals with the representation, storage, organization of, and access to information items; this representation and organization should give users easy access to the information they need. With the rapid development of the Internet, large amounts of digitally stored information are readily available on the World Wide Web. This information is so vast that it becomes increasingly difficult and time-consuming for users to find what is relevant to their needs. The explosive growth of information on the Internet has greatly increased the need for information retrieval systems, yet most search engines still use conventional retrieval techniques. An information system needs sophisticated pattern-matching tools to determine contents at a faster rate. AJAX has recently emerged as a tool with which the retrieval process can become faster, so that information reaches the user more quickly than with conventional retrieval systems.
Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems
ERIC Educational Resources Information Center
Porter, Brandi
2011-01-01
This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…
Sitzia, Clementina; Farini, Andrea; Jardim, Luciana; Razini, Paola; Belicchi, Marzia; Cassinelli, Letizia; Villa, Chiara; Erratico, Silvia; Parolini, Daniele; Bella, Pamela; da Silva Bizario, Joao Carlos; Garcia, Luis; Dias-Baruffi, Marcelo; Meregalli, Mirella; Torrente, Yvan
2016-01-01
Duchenne muscular dystrophy is the most common genetic muscular dystrophy. It is caused by mutations in the dystrophin gene, leading to absence of muscular dystrophin and to progressive degeneration of skeletal muscle. We have demonstrated that the exon skipping method safely and efficiently leads to the expression of a functional dystrophin in dystrophic CD133+ cells injected into scid/mdx mice. Golden Retriever muscular dystrophy (GRMD) dogs represent the best preclinical model of Duchenne muscular dystrophy, mimicking the human pathology in genotypic and phenotypic aspects. Here, we assess the capacity of intra-arterially delivered autologous engineered canine CD133+ cells to restore dystrophin expression in GRMD. This is the first five-year follow-up study, showing initial clinical amelioration followed by stabilization in mildly and severely affected GRMD dogs. The occurrence of a T-cell response in three GRMD dogs, consistent with a memory response boosted by the exon-skipped dystrophin protein, suggests an adaptive immune response against dystrophin. PMID:27506452
Global polar geospatial information service retrieval based on search engine and ontology reasoning
Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang
2007-01-01
In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse services on the web, and ontology reasoning then refines these coarse results. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
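The coarse-then-refined pattern this record describes can be illustrated with a toy subsumption check; the ontology entries and service types below are invented for illustration, not actual OWS metadata:

```python
# Toy is-a ontology (child -> parent); a real system would load OWL.
ONTOLOGY = {
    "SeaIceConcentrationMap": "PolarGeospatialService",
    "AntarcticDEMService": "PolarGeospatialService",
    "CityTrafficService": "UrbanService",
}

def is_a(concept, ancestor):
    """Walk the parent chain to test subsumption."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept)
    return False

def refine(coarse_hits, wanted_type):
    """Keep only services whose declared type is subsumed by wanted_type,
    mimicking the ontology-reasoning refinement of coarse search results."""
    return [s for s in coarse_hits if is_a(s["type"], wanted_type)]
```
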
Advanced Feedback Methods in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1985-01-01
In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…
Publication Index and Retrieval System.
1980-04-01
Herner and Co., Washington, D.C. Report WES-OS-78-2 (unclassified). ...Engineers. The objective was to develop an information index and retrieval system for the publications of the DMRP. The report was prepared by Herner... Development of the system and preparation and review of the report were under the supervision of Dr. R. T. Saucier, Special Assistant for Dredged
Atomic oxygen erosion considerations for spacecraft materials selection
NASA Technical Reports Server (NTRS)
Whitaker, Ann F.; Kamenetzky, Rachel R.
1993-01-01
The Long Duration Exposure Facility (LDEF) satellite carried 57 experiments that were designed to define the low-Earth orbit (LEO) space environment and to evaluate the impact of this environment on potential engineering materials and material processes. Deployed by the Shuttle Challenger in April of 1984, LDEF made over 32,000 orbits before being retrieved nearly 6 years later by the Shuttle Columbia in January of 1990. The Solar Array Passive LDEF Experiment (SAMPLE) AO171 contained approximately 300 specimens, representing numerous material classes and material processes. AO171 was located on LDEF in position A8 at a yaw of 38.1 degrees from the ram direction and was subjected to an atomic oxygen (AO) fluence of 6.93 x 10(exp 21) atoms/sq cm. LDEF AO171 data, as well as short-term shuttle data, will be discussed in this paper as it applies to engineering design applications of composites, bulk and thin film polymers, glassy ceramics, thermal control paints, and metals subjected to AO erosion.
Deep Question Answering for protein annotation
Gobeill, Julien; Gaudinat, Arnaud; Pasche, Emilie; Vishnyakova, Dina; Gaudet, Pascale; Bairoch, Amos; Ruch, Patrick
2015-01-01
Biomedical professionals have access to a huge amount of literature, but when they use a search engine, they often have to deal with too many documents to efficiently find the appropriate information in a reasonable time. In this perspective, question-answering (QA) engines are designed to display answers, which were automatically extracted from the retrieved documents. Standard QA engines in literature process a user question, then retrieve relevant documents and finally extract some possible answers out of these documents using various named-entity recognition processes. In our study, we try to answer complex genomics questions, which can be adequately answered only using Gene Ontology (GO) concepts. Such complex answers cannot be found using state-of-the-art dictionary- and redundancy-based QA engines. We compare the effectiveness of two dictionary-based classifiers for extracting correct GO answers from a large set of 100 retrieved abstracts per question. In the same way, we also investigate the power of GOCat, a GO supervised classifier. GOCat exploits the GOA database to propose GO concepts that were annotated by curators for similar abstracts. This approach is called deep QA, as it adds an original classification step, and exploits curated biological data to infer answers, which are not explicitly mentioned in the retrieved documents. We show that for complex answers such as protein functional descriptions, the redundancy phenomenon has a limited effect. Similarly usual dictionary-based approaches are relatively ineffective. In contrast, we demonstrate how existing curated data, beyond information extraction, can be exploited by a supervised classifier, such as GOCat, to massively improve both the quantity and the quality of the answers with a +100% improvement for both recall and precision. Database URL: http://eagl.unige.ch/DeepQA4PA/ PMID:26384372
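The idea behind GOCat, proposing GO concepts that curators assigned to similar abstracts, can be sketched as a similarity-weighted nearest-neighbor vote; the curated records and GO labels below are toy data, not GOA entries:

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector for a short text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def propose_concepts(query_abstract, curated, k=2):
    """Rank GO concepts by the summed similarity of the k most
    similar curated abstracts annotated with them."""
    q = bow(query_abstract)
    ranked = sorted(curated, key=lambda r: cosine(q, bow(r["text"])), reverse=True)
    votes = Counter()
    for rec in ranked[:k]:
        weight = cosine(q, bow(rec["text"]))
        for concept in rec["go"]:
            votes[concept] += weight
    return [c for c, _ in votes.most_common()]
```
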
The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval
ERIC Educational Resources Information Center
Schymik, Gregory
2012-01-01
Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the…
Retrieving high-resolution images over the Internet from an anatomical image database
NASA Astrophysics Data System (ADS)
Strupp-Adams, Annette; Henderson, Earl
1999-12-01
The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100 KB for browser-viewable rendered images to 1 GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.
13. SIDE VIEW OF THE STACKER-RETRIEVER CRANE FROM THE TRANSFER BAY. THE STACKER-RETRIEVER IS A REMOTELY-OPERATED, MECHANIZED TRANSPORT SYSTEM FOR RETRIEVING PLUTONIUM CONTAINERS FROM THE STORAGE VAULT. (1/80) - Rocky Flats Plant, Plutonium Recovery Facility, Northwest portion of Rocky Flats Plant, Golden, Jefferson County, CO
Query Enhancement with Topic Detection and Disambiguation for Robust Retrieval
ERIC Educational Resources Information Center
Zhang, Hui
2013-01-01
With the rapid increase in the amount of available information, people nowadays rely heavily on information retrieval (IR) systems such as web search engine to fulfill their information needs. However, due to the lack of domain knowledge and the limitation of natural language such as synonyms and polysemes, many system users cannot formulate their…
Augmenting Oracle Text with the UMLS for enhanced searching of free-text medical reports.
Ding, Jing; Erdal, Selnur; Dhaval, Rakesh; Kamal, Jyoti
2007-10-11
The intrinsic complexity of free-text medical reports imposes great challenges for information retrieval systems. We have developed a prototype search engine for retrieving clinical reports that leverages the powerful indexing and querying capabilities of Oracle Text, and the rich biomedical domain knowledge and semantic structures that are captured in the UMLS Metathesaurus.
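A rough sketch of what such concept-aware augmentation does before the text index is queried; the thesaurus entries here are a stand-in for UMLS Metathesaurus synonym sets, not real Oracle Text or UMLS API calls:

```python
# Toy thesaurus standing in for UMLS synonym sets (hypothetical data).
THESAURUS = {
    "mi": {"myocardial infarction", "heart attack", "mi"},
    "myocardial infarction": {"myocardial infarction", "heart attack", "mi"},
    "heart attack": {"myocardial infarction", "heart attack", "mi"},
}

def expand_query(term):
    """Rewrite a query term into an OR-group of its synonyms,
    the way a concept layer can augment a text index's query."""
    synonyms = THESAURUS.get(term.lower(), {term.lower()})
    return " OR ".join(sorted(synonyms))

def search(reports, term):
    """Match free-text reports against any synonym of the term."""
    needles = THESAURUS.get(term.lower(), {term.lower()})
    return [r for r in reports if any(n in r.lower() for n in needles)]
```
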
Designing a Syntax-Based Retrieval System for Supporting Language Learning
ERIC Educational Resources Information Center
Tsao, Nai-Lung; Kuo, Chin-Hwa; Wible, David; Hung, Tsung-Fu
2009-01-01
In this paper, we propose a syntax-based text retrieval system for on-line language learning and use a fast regular expression search engine as its main component. Regular expression searches provide more scalable querying and search results than keyword-based searches. However, without a well-designed index scheme, the execution time of regular…
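A minimal illustration of regex-based retrieval over a small corpus, using Python's `re` in place of the paper's fast regular expression engine; the sentences and pattern are invented examples:

```python
import re

CORPUS = [
    "She is interested in learning languages.",
    "He insisted on leaving early.",
    "They talked about the weather.",
]

def regex_search(pattern, corpus):
    """Return sentences matching a syntax-oriented regular expression.
    A real system would pair this with an index scheme to stay scalable."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [s for s in corpus if rx.search(s)]

# Find "past-participle + in/on + gerund" patterns like "interested in learning".
hits = regex_search(r"\b\w+ed (in|on) \w+ing\b", CORPUS)
```
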
What Friends Are For: Collaborative Intelligence Analysis and Search
2014-06-01
14. SUBJECT TERMS: Intelligence Community, information retrieval, recommender systems, search engines, social networks, user profiling, Lucene. ...improvements over existing search systems. The improvements are shown to be robust to high levels of human error and low similarity between users. ...NOLH, nearly orthogonal Latin hypercubes; P@, precision at documents; RS, recommender systems; TREC, Text REtrieval Conference; USM, user
Crawler Acquisition and Testing Demonstration Project Management Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
DEFIGH-PRICE, C.
2000-10-23
If the crawler-based retrieval system is selected, this project management plan identifies the path forward for acquiring a crawler/track pump waste retrieval system and completing sufficient testing to support deploying the crawler as part of a retrieval technology demonstration for Tank 241-C-104. In the balance of the document, these activities will be referred to as the Crawler Acquisition and Testing Demonstration. During recent Tri-Party Agreement negotiations, TPA milestones were proposed for a sludge/hard heel waste retrieval demonstration in tank C-104. Specifically, one of the proposed milestones requires completion of a cold demonstration of sufficient scale to support final design and testing of the equipment (M-45-03G) by 6/30/2004. A crawler-based retrieval system was one of the two options evaluated during the pre-conceptual engineering for C-104 retrieval (RPP-6843 Rev. 0). The alternative technology procurement initiated by the Hanford Tanks Initiative (HTI) project, combined with the pre-conceptual engineering for C-104 retrieval, provides an opportunity to achieve compliance with the proposed TPA milestone M-45-03H. This Crawler Acquisition and Testing Demonstration project management plan identifies the plans, organizational interfaces and responsibilities, management control systems, reporting systems, timeline, and requirements for the acquisition and testing of the crawler-based retrieval system. This project management plan is complementary to and supportive of the Project Management Plan for Retrieval of C-104 (RPP-6557). It focuses on utilizing and completing the efforts initiated under HTI to acquire and cold test a commercial crawler-based retrieval system. The crawler-based retrieval system will be purchased on a schedule to support design of the waste retrieval from tank C-104 (project W-523) and to meet the requirement of proposed TPA milestone M-45-03H.
This Crawler Acquisition and Testing Demonstration project management plan includes the following: (1) identification of the acquisition strategy and plan to obtain a crawler-based retrieval system; (2) a plan for sufficient cold testing to make a decision for W-523 and to comply with TPA Milestone M-45-03H; (3) cost and schedule for the path forward; (4) responsibilities of the participants; and (5) supporting material, including updated Level 1 logics, a Relative Order of Magnitude cost estimate, and a preliminary project schedule.
Application of Ensemble Detection and Analysis to Modeling Uncertainty in Non-Stationary Processes
NASA Technical Reports Server (NTRS)
Racette, Paul
2010-01-01
Characterization of non-stationary and nonlinear processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieving information from Doppler measurements of hydrometeors, and modeling calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the modeling and analysis of non-stationary processes. Analyses of measured signals have traditionally been limited to a single measurement series. Ensemble Detection is a technique whereby mixing calibrated noise produces an ensemble measurement set. The collection of ensemble data sets enables new methods for analyzing random signals and offers powerful new approaches to studying and analyzing non-stationary processes. Derived information contained in the dynamic stochastic moments of a process will enable many novel applications.
Query Transformations for Result Merging
2014-11-01
tors, term dependence, query expansion. 1. INTRODUCTION. Federated search deals with the problem of aggregating results from multiple search engines. The individual search engines (i) are typically focused on a particular domain or a particular corpus, (ii) employ diverse retrieval models, and (iii) ...determine which search engines are appropriate for addressing the information need (resource selection), and (ii) merging the results returned by
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Gallagher, Mary C.
1985-01-01
This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-12. The project objectives are to develop a set of transportable, hands-on, data base management courses for science and engineering students to facilitate their utilization of information storage and retrieval programs.
NASA Astrophysics Data System (ADS)
Marke, Tobias; Ebell, Kerstin; Löhnert, Ulrich; Turner, David D.
2016-12-01
In this article, liquid water cloud microphysical properties are retrieved by a combination of microwave and infrared ground-based observations. Clouds containing liquid water occur frequently in most climate regimes and play a significant role in terms of interaction with radiation. Small perturbations in the amount of liquid water contained in the cloud can cause large variations in the radiative fluxes. This effect is enhanced for thin clouds (liquid water path, LWP < 100 g/m²), which makes accurate retrieval of the cloud properties crucial. Due to large relative errors in retrieving low LWP values from observations in the microwave domain, and a high sensitivity of infrared methods when the LWP is low, a synergistic retrieval based on a neural network approach is built to estimate both LWP and cloud effective radius (reff). These statistical retrievals can be applied without high computational demand but require constraints such as prior information on cloud phase and cloud layering. The neural network retrievals are able to retrieve LWP and reff for thin clouds with mean relative errors of 9% and 17%, respectively. This is demonstrated using synthetic observations of a microwave radiometer (MWR) and a spectrally highly resolved infrared interferometer. The accuracy and robustness of the synergistic retrievals are confirmed by a low bias in a radiative closure study for the downwelling shortwave flux, even for marginally invalid scenes. Also, broadband infrared radiance observations, in combination with the MWR, have the potential to retrieve LWP with a higher accuracy than a MWR-only retrieval.
INFORMATION STORAGE AND RETRIEVAL, REPORTS ON EVALUATION, CLUSTERING, AND FEEDBACK.
ERIC Educational Resources Information Center
Salton, Gerald
The twelfth in a series covering research in automatic storage and retrieval, this report is divided into three parts, titled Evaluation, Cluster Searching, and User Feedback Methods, respectively. The first part, Evaluation, contains a complete summary of the retrieval results derived from some sixty different text analysis experiments. In each…
32 CFR 701.116 - PA systems of records notices overview.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... (b) Retrieval practices. How a record is retrieved determines whether or not it qualifies to be a... birth, etc.) to qualify as a system of records. Accordingly, a record that contains information about an... system of records. The requirement is retrieval by a name or personal identifier.) Should a business...
32 CFR 701.116 - PA systems of records notices overview.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... (b) Retrieval practices. How a record is retrieved determines whether or not it qualifies to be a... birth, etc.) to qualify as a system of records. Accordingly, a record that contains information about an... system of records. The requirement is retrieval by a name or personal identifier.) Should a business...
32 CFR 701.116 - PA systems of records notices overview.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... (b) Retrieval practices. How a record is retrieved determines whether or not it qualifies to be a... birth, etc.) to qualify as a system of records. Accordingly, a record that contains information about an... system of records. The requirement is retrieval by a name or personal identifier.) Should a business...
32 CFR 701.116 - PA systems of records notices overview.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (b) Retrieval practices. How a record is retrieved determines whether or not it qualifies to be a... birth, etc.) to qualify as a system of records. Accordingly, a record that contains information about an... system of records. The requirement is retrieval by a name or personal identifier.) Should a business...
32 CFR 701.116 - PA systems of records notices overview.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (b) Retrieval practices. How a record is retrieved determines whether or not it qualifies to be a... birth, etc.) to qualify as a system of records. Accordingly, a record that contains information about an... system of records. The requirement is retrieval by a name or personal identifier.) Should a business...
WWW Entrez: A Hypertext Retrieval Tool for Molecular Biology.
ERIC Educational Resources Information Center
Epstein, Jonathan A.; Kans, Jonathan A.; Schuler, Gregory D.
This article describes the World Wide Web (WWW) Entrez server which is based upon the National Center for Biotechnology Information's (NCBI) Entrez retrieval database and software. Entrez is a molecular sequence retrieval system that contains an integrated view of portions of Medline and all publicly available nucleotide and protein databases,…
Improving Concept-Based Web Image Retrieval by Mixing Semantically Similar Greek Queries
ERIC Educational Resources Information Center
Lazarinis, Fotis
2008-01-01
Purpose: Image searching is a common activity for web users. Search engines offer image retrieval services based on textual queries. Previous studies have shown that web searching is more demanding when the search is not in English and does not use a Latin-based language. The aim of this paper is to explore the behaviour of the major search…
Buckets: Smart Objects for Digital Libraries
NASA Technical Reports Server (NTRS)
Nelson, Michael L.
2001-01-01
Current discussion of digital libraries (DLs) is often dominated by the merits of the respective storage, search, and retrieval functionality of archives, repositories, search engines, search interfaces, and database systems. While these technologies are necessary for information management, the information content is more important than the systems used for its storage and retrieval. Digital information should have the same long-term survivability prospects as traditional hardcopy information and should be protected to the extent possible from evolving search engine technologies and vendor vagaries in database management systems. Information content and information retrieval systems should progress on independent paths and make limited assumptions about the status or capabilities of the other. Digital information can achieve independence from archives and DL systems through the use of buckets: an aggregative, intelligent construct for publishing in DLs. Buckets allow the decoupling of information content from information storage and retrieval. Buckets exist within the Smart Objects and Dumb Archives model for DLs, in which many of the functionalities and responsibilities traditionally associated with archives are pushed down (making the archives dumber) into the buckets (making them smarter). Among the responsibilities assigned to buckets are the enforcement of their terms and conditions, and the maintenance and display of their contents.
Descriptive Question Answering with Answer Type Independent Features
NASA Astrophysics Data System (ADS)
Yoon, Yeo-Chan; Lee, Chang-Ki; Kim, Hyun-Ki; Jang, Myung-Gil; Ryu, Pum Mo; Park, So-Young
In this paper, we present a supervised learning method to seek out answers to the most frequently asked descriptive questions: reason, method, and definition questions. Most of the previous systems for question answering focus on factoids, lists or definitional questions. However, descriptive questions such as reason questions and method questions are also frequently asked by users. We propose a system for these types of questions. The system conducts an answer search as follows. First, we analyze the user's question and extract search keywords and the expected answer type. Second, information retrieval results are obtained from an existing search engine such as Yahoo or Google. Finally, we rank the results to find snippets containing answers to the questions based on a ranking SVM algorithm. We also propose features to identify snippets containing answers for descriptive questions. The features are adaptable and thus are not dependent on answer type. Experimental results show that the proposed method and features are clearly effective for the task.
ERIC Educational Resources Information Center
Bremmer, Dale; Childs, Bart
This document discusses the importance of computing knowledge and experience with techniques of fast data retrieval for today's engineer. It describes a course designed to teach the engineer the COBOL language structure. One of the projects of the course is a report generator (REGE), written in COBOL, which is used to alter, sort and print selected…
Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L
2010-11-01
Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
Rule-based deduplication of article records from bibliographic databases.
Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M; Smalheiser, Neil R
2014-01-01
We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments. PMID:24434031
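The sequential, identifier-first comparison described above might be sketched as follows; the field names and similarity threshold are assumptions for illustration, not Metta's actual rules:

```python
import difflib

def same_article(a, b, title_threshold=0.9):
    """Compare two records sharing a publication year: trusted
    identifiers decide first, then approximate title matching.
    Errs on the side of NOT merging, since systematic-review
    users need maximal recall."""
    if a.get("year") != b.get("year"):
        return False
    for key in ("pmid", "doi"):
        if a.get(key) and b.get(key):
            return a[key] == b[key]  # identifier present in both: it decides
    # Fall back to fuzzy title comparison (text approximation).
    ratio = difflib.SequenceMatcher(
        None, a.get("title", "").lower(), b.get("title", "").lower()
    ).ratio()
    return ratio >= title_threshold
```
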
Rule-based deduplication of article records from bibliographic databases
Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M.; Smalheiser, Neil R.
2014-01-01
We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments. PMID:24434031
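The comparison cascade described above (strong identifiers first, fuzzy text matching as a fallback, within a shared publication year) can be sketched as follows. Field names, the 0.9 threshold, and the use of difflib for text approximation are illustrative assumptions, not the Metta module's actual implementation:

```python
from difflib import SequenceMatcher

def same_article(a: dict, b: dict, threshold: float = 0.9) -> bool:
    """Rule-based duplicate test sketched after the cascade described above.
    Exact-identifier rules fire first; fuzzy title matching is the fallback.
    Thresholds and field names are illustrative assumptions."""
    if a.get("year") != b.get("year"):
        return False                      # records are only compared within a year
    for key in ("pmid", "doi"):           # strong identifiers decide immediately
        if a.get(key) and b.get(key):
            return a[key] == b[key]
    title_sim = SequenceMatcher(None, a["title"].lower(), b["title"].lower()).ratio()
    return title_sim >= threshold and a.get("journal", "").lower() == b.get("journal", "").lower()

r1 = {"year": 2014, "doi": "10.1/x", "title": "Rule-based deduplication", "journal": "Database"}
r2 = {"year": 2014, "doi": "10.1/x", "title": "Rule based deduplication.", "journal": "Database"}
print(same_article(r1, r2))  # -> True (DOIs match)
```

Erring on the side of not merging, as the authors recommend for systematic-review recall, corresponds to keeping the fuzzy threshold high.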
Assessing the impact of different satellite retrieval methods on forecast available potential energy
NASA Technical Reports Server (NTRS)
Whittaker, Linda M.; Horn, Lyle H.
1990-01-01
The effects of the inclusion of satellite temperature retrieval data, and of different satellite retrieval methods, on forecasts made with the NASA Goddard Laboratory for Atmospheres (GLA) fourth-order model were investigated using, as the parameter, the available potential energy (APE) in its isentropic form. Calculations of the APE were used to study the differences in the forecast sets both globally and in the Northern Hemisphere during the 72-h forecast period. The analysis data sets used for the forecasts included one containing the NESDIS TIROS-N retrievals, one containing the GLA retrievals produced with the physical inversion method, and a third, without satellite data, used as a control; two data sets, with and without satellite data, were used for verification. For all three data sets, the Northern Hemisphere values of total APE increased throughout the forecast period, mostly due to an increase in the zonal component, in contrast to the verification sets, which showed a steady level of total APE.
Predicting double negativity using transmitted phase in space coiling metamaterials.
Maurya, Santosh K; Pandey, Abhishek; Shukla, Shobha; Saxena, Sumit
2018-05-01
Metamaterials are engineered materials that offer the flexibility to manipulate the incident waves leading to exotic applications such as cloaking, extraordinary transmission, sub-wavelength imaging and negative refraction. These concepts have largely been explored in the context of electromagnetic waves. Acoustic metamaterials, similar to their optical counterparts, demonstrate anomalous effective elastic properties. Recent developments have shown that coiling up the propagation path of acoustic wave results in effective elastic response of the metamaterial beyond the natural response of its constituent materials. The effective response of metamaterials is generally evaluated using the 'S' parameter retrieval method based on amplitude of the waves. The phase of acoustic waves contains information of wave pressure and particle velocity. Here, we show using finite-element methods that phase reversal of transmitted waves may be used to predict extreme acoustic properties in space coiling metamaterials. This change is the difference in the phase of the transmitted wave with respect to the incident wave. This method is simpler when compared with the more rigorous 'S' parameter retrieval method. The inferences drawn using this method have been verified experimentally for labyrinthine metamaterials by showing negative refraction for the predicted band of frequencies.
Building the interspace: Digital library infrastructure for a University Engineering Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schatz, B.
A large-scale digital library is being constructed and evaluated at the University of Illinois, with the goal of bringing professional search and display to Internet information services. A testbed planned to grow to 10K documents and 100K users is being constructed in the Grainger Engineering Library Information Center, as a joint effort of the University Library and the National Center for Supercomputing Applications (NCSA), with evaluation and research by the Graduate School of Library and Information Science and the Department of Computer Science. The electronic collection will be articles from engineering and science journals and magazines, obtained directly from publishers in SGML format and displayed containing all text, figures, tables, and equations. The publisher partners include IEEE Computer Society, AIAA (Aerospace Engineering), American Physical Society, and Wiley & Sons. The software will be based upon NCSA Mosaic as a network engine connected to commercial SGML displayers and full-text searchers. The users will include faculty/students across the midwestern universities in the Big Ten, with evaluations via interviews, surveys, and transaction logs. Concurrently, research into scaling the testbed is being conducted. This includes efforts in computer science, information science, library science, and information systems. These efforts will evaluate different semantic retrieval technologies, including automatic thesaurus and subject classification graphs. New architectures will be designed and implemented for a next generation digital library infrastructure, the Interspace, which supports interaction with information spread across information spaces within the Net.
Query Log Analysis of an Electronic Health Record Search Engine
Yang, Lei; Mei, Qiaozhu; Zheng, Kai; Hanauer, David A.
2011-01-01
We analyzed a longitudinal collection of query logs from a full-text search engine designed to facilitate information retrieval in electronic health records (EHR). The collection, 202,905 queries and 35,928 user sessions recorded over a course of 4 years, represents the information-seeking behavior of 533 medical professionals, including frontline practitioners, coding personnel, patient safety officers, and biomedical researchers, for patient data stored in EHR systems. In this paper, we present descriptive statistics of the queries, a categorization of the information needs manifested through the queries, and temporal patterns of the users' information-seeking behavior. The results suggest that information needs in the medical domain are substantially more sophisticated than those that general-purpose web search engines need to accommodate. We therefore envision a significant challenge, along with significant opportunities, in providing intelligent query recommendations to facilitate information retrieval in EHR. PMID:22195150
NASA Astrophysics Data System (ADS)
Huang, H.; Vong, C. M.; Wong, P. K.
2010-05-01
With the development of modern technology, vehicles adopt electronic control systems for injection and ignition. Traditionally, whenever there is a malfunction in an automotive engine, a mechanic performs a diagnosis of the engine's ignition system to check for exceptional symptoms. In this paper, we present a case-based reasoning (CBR) approach to support this human diagnosis task. One drawback of a CBR system, however, is that the case library expands gradually as the system runs repeatedly, which may cause inaccuracy and longer CBR retrieval times. To tackle this problem, a case-based maintenance (CBM) framework is employed so that the case library of the CBR system is compressed by clustering into a set of representative cases. As a result, the performance (retrieval accuracy and time) of the whole CBR system can be improved.
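The case-library compression idea above can be sketched with a simple one-pass leader clustering: each stored case joins the first representative it is close to, otherwise it becomes a new representative. The distance metric, radius, and clustering scheme here are assumptions for illustration; the paper's actual CBM method may differ.

```python
import math

def compress_case_library(cases, radius=1.0):
    """Leader-clustering sketch of case-base maintenance: each case joins the
    first representative within `radius`, otherwise it becomes a new
    representative. Metric and radius are illustrative assumptions."""
    representatives = []
    for case in cases:
        if not any(math.dist(case, rep) <= radius for rep in representatives):
            representatives.append(case)
    return representatives

# five stored cases (as 2-D feature vectors) collapse to two representatives
library = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9), (0.05, 0.1)]
print(compress_case_library(library))  # -> [(0.0, 0.0), (5.0, 5.0)]
```

Retrieval then runs against the compressed set of representatives, which is why both accuracy and retrieval time can improve as the raw library grows.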
A Management Information System for Bare Base Civil Engineering Commanders
1988-09-01
initial beddown stage. The purpose of this research was to determine the feasibility of developing a microcomputer based management information system (MIS)...the software best suited to synthesize four of the categories into a prototype field MIS. Keywords: management information system, bare bases, civil engineering, data bases, information retrieval.
Research Trends with Cross Tabulation Search Engine
ERIC Educational Resources Information Center
Yin, Chengjiu; Hirokawa, Sachio; Yau, Jane Yin-Kim; Hashimoto, Kiyota; Tabata, Yoshiyuki; Nakatoh, Tetsuya
2013-01-01
To help researchers in building a knowledge foundation of their research fields which could be a time-consuming process, the authors have developed a Cross Tabulation Search Engine (CTSE). Its purpose is to assist researchers in 1) conducting research surveys, 2) efficiently and effectively retrieving information (such as important researchers,…
BROWSER: An Automatic Indexing On-Line Text Retrieval System. Annual Progress Report.
ERIC Educational Resources Information Center
Williams, J. H., Jr.
The development and testing of the Browsing On-line With Selective Retrieval (BROWSER) text retrieval system allowing a natural language query statement and providing on-line browsing capabilities through an IBM 2260 display terminal is described. The prototype system contains data bases of 25,000 German language patent abstracts, 9,000 English…
Swanson, James R.
1977-01-01
GEOTHERM is a computerized geothermal resources file developed by the U.S. Geological Survey. The file contains data on geothermal fields, wells, and chemical analyses from the United States and international sources. The General Information Processing System (GIPSY) on the IBM 370/155 computer is used to store and retrieve data. The GIPSY retrieval program provides simple commands which can be used to search the file, select a narrowly defined subset, sort the records, and output the data in a variety of forms. Eight commands are listed and explained so that the GEOTHERM file can be accessed directly by geologists. No programming experience is necessary to retrieve data from the file.
Improve Biomedical Information Retrieval using Modified Learning to Rank Methods.
Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao
2016-06-14
In recent years, the number of biomedical articles has increased exponentially, making it difficult for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can address this problem automatically, providing users with the needed information. However, applying these technologies directly to biomedical retrieval is a great challenge because of the abundance of domain-specific terminologies. To enhance biomedical retrieval, we propose a novel framework based on learning to rank, a family of state-of-the-art information retrieval techniques that has proved effective in many retrieval tasks. In the proposed framework, we tackle the abundance of terminologies by constructing ranking models that focus not only on retrieving the most relevant documents but also on diversifying the search results to increase the completeness of the result list for a given query. For model training, we propose two novel document labeling strategies and combine several traditional retrieval models as learning features. We also investigate the usefulness of different learning-to-rank approaches within our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.
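The idea of combining several traditional retrieval models as learning features can be sketched as a pointwise linear ranker. The weights and feature values below are made up for illustration; in practice the weights come from training on labeled data, and real learning-to-rank models are usually richer than a linear combination.

```python
def rank(documents, query_scores, weights):
    """Pointwise learning-to-rank sketch: each document has one score per
    traditional retrieval model (the 'learning features' above), and a
    learned weight vector combines them. Weights here are illustrative
    assumptions, not trained values."""
    def combined(doc):
        return sum(w * s for w, s in zip(weights, query_scores[doc]))
    return sorted(documents, key=combined, reverse=True)

docs = ["d1", "d2", "d3"]
# features per doc: (BM25-like score, language-model score, terminology-match score)
scores = {"d1": (1.2, 0.4, 0.0), "d2": (0.9, 0.8, 1.0), "d3": (0.2, 0.1, 0.3)}
print(rank(docs, scores, weights=(0.5, 0.3, 0.2)))  # -> ['d2', 'd1', 'd3']
```

A diversification objective, as described above, would additionally penalize documents too similar to those already ranked.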
Retrieval of constituent mixing ratios from limb thermal emission spectra
NASA Technical Reports Server (NTRS)
Shaffer, William A.; Kunde, Virgil G.; Conrath, Barney J.
1988-01-01
An onion-peeling iterative, least-squares relaxation method to retrieve mixing ratio profiles from limb thermal emission spectra is presented. The method has been tested on synthetic data, containing various amounts of added random noise for O3, HNO3, and N2O. The retrieval method is used to obtain O3 and HNO3 mixing ratio profiles from high-resolution thermal emission spectra. Results of the retrievals compare favorably with those obtained previously.
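The onion-peeling geometry can be illustrated with a toy linear forward model: the topmost limb view senses only the top layer, each lower view adds one new layer, so solving from the top down isolates one unknown per step. The linear single-gas model below is an assumption for the sketch; the actual algorithm iterates a least-squares relaxation on a radiative-transfer model.

```python
def onion_peel(observations, path_weights):
    """Onion-peeling sketch: limb view i sees layers 0..i (0 = topmost), so
    solving from the top tangent height downward isolates one new layer per
    step. The linear forward model is a toy assumption, not the paper's
    radiative-transfer relaxation."""
    n = len(observations)
    x = [0.0] * n                       # retrieved mixing ratios, top layer first
    for i in range(n):
        seen = sum(path_weights[i][j] * x[j] for j in range(i))  # already-solved outer layers
        x[i] = (observations[i] - seen) / path_weights[i][i]
    return x

# toy triangular geometry: view i accumulates unit weight from each layer j <= i
w = [[1.0 if j <= i else 0.0 for j in range(3)] for i in range(3)]
true_x = [2.0, 3.0, 1.0]
obs = [sum(w[i][j] * true_x[j] for j in range(3)) for i in range(3)]
print(onion_peel(obs, w))  # -> [2.0, 3.0, 1.0]
```

Adding random noise to `obs`, as in the synthetic tests described above, would propagate errors downward through the peeled layers.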
A Multimodal Search Engine for Medical Imaging Studies.
Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos
2017-02-01
The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
2016-01-01
The aim of this study was to determine how representative wear scars on simulator-tested polyethylene (PE) inserts are of retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five of the six simulator-tested components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols are not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR, containing activities other than walking, may be warranted. PMID:27597955
Voss retrieves a small tool from a tool kit in ISS Node 1/Unity
2001-08-13
STS105-E-5175 (13 August 2001) --- Astronaut James S. Voss retrieves a small tool from a tool case in the U.S.-built Unity node aboard the International Space Station (ISS). The Expedition Two flight engineer is only days away from returning to Earth following five months aboard the orbital outpost. The image was recorded with a digital still camera.
A novel architecture for information retrieval system based on semantic web
NASA Astrophysics Data System (ADS)
Zhang, Hui
2011-12-01
Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats; they are suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, providing new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when an information retrieval system lacks sufficient knowledge, it returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky
2009-01-01
This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS-only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for a single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.
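Threshold-based quality control on per-case error estimates, as described above, can be sketched in miniature. The threshold values and flag names below are illustrative assumptions only, not the actual Version 5 flag definitions:

```python
def quality_flag(error_estimate, tight=1.0, loose=2.0):
    """Sketch of threshold-based quality control on a per-case error
    estimate, in the spirit of the Version 5 approach described above.
    Thresholds and flag names are illustrative assumptions."""
    if error_estimate <= tight:
        return "best"       # highest-quality retrievals
    if error_estimate <= loose:
        return "good"       # still usable for research
    return "rejected"

print([quality_flag(e) for e in (0.5, 1.5, 3.0)])  # -> ['best', 'good', 'rejected']
```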
Analyzing Document Retrievability in Patent Retrieval Settings
NASA Astrophysics Data System (ADS)
Bashir, Shariq; Rauber, Andreas
Most information retrieval settings, such as web search, are typically precision-oriented, i.e. they focus on retrieving a small number of highly relevant documents. However, in specific domains, such as patent retrieval or law, recall becomes more relevant than precision: in these cases the goal is to find all relevant documents, requiring algorithms to be tuned more towards recall at the cost of precision. This raises important questions with respect to retrievability and search engine bias: depending on how the similarity between a query and documents is measured, certain documents may be more or less retrievable in certain systems, up to some documents not being retrievable at all within common threshold settings. Biases may favour popular documents (increasing the weight of references) or longer documents, favour the use of rare or common words, or rely on structural information such as metadata or headings. Existing accessibility measurement techniques are limited in that they measure retrievability with respect to all possible queries. In this paper, we improve accessibility measurement by considering sets of relevant and irrelevant queries for each document. This simulates how recall-oriented users create their queries when searching for relevant information. We evaluate retrievability scores using a corpus of patents from the US Patent and Trademark Office.
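A retrievability score is commonly computed by counting, over a query set, how often a document appears within a rank cutoff. The sketch below follows that general scheme; the toy search function and the flat (0/1) rank weighting are assumptions, and restricting the query set per document is the refinement the paper proposes.

```python
def retrievability(corpus, queries, search, cutoff=10):
    """Retrievability sketch: r(d) counts, over a query set, how often
    document d appears within the top-`cutoff` results. The search
    function here is a toy stand-in for a real engine."""
    scores = {d: 0 for d in corpus}
    for q in queries:
        for rank, doc in enumerate(search(q), start=1):
            if rank <= cutoff and doc in scores:
                scores[doc] += 1
    return scores

corpus = ["p1", "p2", "p3"]
index = {"engine": ["p1", "p2"], "valve": ["p2"], "seal": ["p2", "p3"]}
search = lambda q: index.get(q, [])
print(retrievability(corpus, ["engine", "valve", "seal"], search, cutoff=1))
# -> {'p1': 1, 'p2': 2, 'p3': 0}
```

A document like `p3` with score 0 is exactly the kind of low-retrievability document that indicates engine bias under a strict cutoff.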
Retrieval of all effective susceptibilities in nonlinear metamaterials
NASA Astrophysics Data System (ADS)
Larouche, Stéphane; Radisic, Vesna
2018-04-01
Electromagnetic metamaterials offer a great avenue to engineer and amplify the nonlinear response of materials. Their electric, magnetic, and magnetoelectric linear and nonlinear response are related to their structure, providing unprecedented liberty to control those properties. Both the linear and the nonlinear properties of metamaterials are typically anisotropic. While the methods to retrieve the effective linear properties are well established, existing nonlinear retrieval methods have serious limitations. In this work, we generalize a nonlinear transfer matrix approach to account for all nonlinear susceptibility terms and show how to use this approach to retrieve all effective nonlinear susceptibilities of metamaterial elements. The approach is demonstrated using sum frequency generation, but can be applied to other second-order or higher-order processes.
2011-09-01
search engines to find information. Most commercial search engines (Google, Yahoo, Bing, etc.) provide their indexing and search services...at no cost. The DoD can achieve large gains at a small cost by making public documents available to search engines. This can be achieved through the...were organized on the website dodreports.com. The results of this research revealed improvement gains of 8-20% for finding reports through commercial search engines during the first six months of
Elsevier’s approach to the bioCADDIE 2016 Dataset Retrieval Challenge
Scerri, Antony; Kuriakose, John; Deshmane, Amit Ajit; Stanger, Mark; Moore, Rebekah; Naik, Raj; de Waard, Anita
2017-01-01
We developed a two-stream, Apache Solr-based information retrieval system in response to the bioCADDIE 2016 Dataset Retrieval Challenge. One stream was based on the principle of word embeddings, the other was rooted in ontology-based indexing. Despite encountering several issues in the data, the evaluation procedure and the technologies used, the system performed quite well. We provide some pointers towards future work: in particular, we suggest that more work on query expansion could benefit future biomedical search engines. Database URL: https://data.mendeley.com/datasets/zd9dxpyybg/1 PMID:29220454
Research on keyword retrieval method of HBase database based on index structure
NASA Astrophysics Data System (ADS)
Gong, Pijin; Lv, Congmin; Gong, Yongsheng; Ma, Haozhi; Sun, Yang; Wang, Lu
2017-10-01
With the rapid development of manned spaceflight engineering, the volume of scientific experimental data in the space application system is growing rapidly, and efficiently querying specific data within this mass of data has become a problem. In this paper, a method of retrieving object data using object attributes as keywords is proposed. The HBase database is used to store the object data and object attributes, and a secondary index is constructed over the attributes. The research shows that this method is an effective way to retrieve specified data based on object attributes.
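The secondary-index pattern described above can be emulated with plain dictionaries: a data "table" keyed by row key, plus an index "table" mapping attribute values back to row keys. The column layout and key format below are assumptions for the sketch, not the paper's actual HBase schema.

```python
# Sketch of attribute-keyword lookup through a secondary index, emulating
# the HBase design above with plain dicts. Key and column layouts are
# illustrative assumptions.
data_table = {
    "row1": {"object": "spectrometer", "experiment": "combustion"},
    "row2": {"object": "camera", "experiment": "combustion"},
    "row3": {"object": "spectrometer", "experiment": "fluid"},
}

# secondary index "table": "<attribute>=<value>" -> list of row keys
index_table = {}
for row_key, attrs in data_table.items():
    for attr, value in attrs.items():
        index_table.setdefault(f"{attr}={value}", []).append(row_key)

def query_by_attribute(attr, value):
    """Resolve the keyword through the index, then fetch the full rows."""
    return [data_table[k] for k in index_table.get(f"{attr}={value}", [])]

print([r["experiment"] for r in query_by_attribute("object", "spectrometer")])
# -> ['combustion', 'fluid']
```

The point of the index table is that a keyword query touches only the matching row keys instead of scanning the whole data table, which is what makes the approach scale to large volumes.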
Current Searching Methodology and Retrieval Issues: An Assessment
2008-03-01
searching that are used by search engines are discussed. They are: full text searching, i.e., the searching of unstructured data, and metadata searching...also found among search engines; however, it is the popularity of full text searching that has changed the road map to information access. The...other hand, information seekers' willingness, or lack thereof, to learn the multiple search engines' capabilities may diminish their search results
Effects of retrieval practice on consumer memory for brand attributes.
Parker, Andrew; Dagnall, Neil
2007-08-01
The effect of retrieval practice on memory for brand attributes was examined. Participants were presented with advertisements for fictional products, each containing a number of brand attributes relating to the nature of the product and its qualities. Following this, participants practiced recalling a subset of those attributes either 3 or 6 times. The act of retrieving some brand information inhibited the recall of other brand information that was not practiced, but only when retrieval practice took place 6 times. This is the first demonstration of inhibitory effects in consumers' memory using the retrieval practice paradigm.
ERIC Educational Resources Information Center
Hyman, Harvey
2012-01-01
This dissertation examines the impact of exploration and learning upon eDiscovery information retrieval; it is written in three parts. Part I contains foundational concepts and background on the topics of information retrieval and eDiscovery. This part informs the reader about the research frameworks, methodologies, data collection, and…
Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T
2017-09-01
Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.
Markó, K; Schulz, S; Hahn, U
2005-01-01
We propose an interlingua-based indexing approach to account for the particular challenges that arise in the design and implementation of cross-language document retrieval systems for the medical domain. Documents, as well as queries, are mapped to a language-independent conceptual layer on which retrieval operations are performed. We contrast this approach with the direct translation of German queries to English ones which, subsequently, are matched against English documents. We evaluate both approaches, interlingua-based and direct translation, on a large medical document collection, the OHSUMED corpus. A substantial benefit for interlingua-based document retrieval using German queries on English texts is found, which amounts to 93% of the (monolingual) English baseline. Most state-of-the-art cross-language information retrieval systems translate user queries to the language(s) of the target documents. In contradistinction, translating both documents and user queries into a language-independent, concept-like representation format is more beneficial for enhancing cross-language retrieval performance.
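The interlingua idea, mapping both the (German) query and the (English) document to language-independent concept identifiers and matching on that layer, can be sketched as follows. The tiny lexicon and concept IDs are made-up illustrations of the principle, not the actual interlingua resource used in the paper.

```python
# Interlingua-style matching sketch: query and document are both mapped to
# language-independent concept IDs and compared on that conceptual layer.
# The lexicon below is a made-up illustration, not the paper's resource.
LEXICON = {
    "herz": "C_HEART", "heart": "C_HEART",
    "infarkt": "C_INFARCTION", "infarction": "C_INFARCTION",
    "lunge": "C_LUNG", "lung": "C_LUNG",
}

def to_concepts(text):
    """Map recognized tokens to concept IDs; unknown tokens are ignored."""
    return {LEXICON[t] for t in text.lower().split() if t in LEXICON}

def matches(query, document):
    """A document matches if it covers every concept in the query."""
    return to_concepts(query) <= to_concepts(document)

# German query against an English document, matched on the concept layer
print(matches("Herz Infarkt", "acute myocardial infarction of the heart"))  # -> True
```

Because matching happens on concepts rather than surface strings, neither the query nor the document needs to be translated into the other's language.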
Accessibility, nature and quality of health information on the Internet: a survey on osteoarthritis.
Maloney, S; Ilic, D; Green, S
2005-03-01
This study aims to determine the quality and validity of information available on the Internet about osteoarthritis and to investigate the best way of sourcing this information. Keywords relevant to osteoarthritis were searched across 15 search engines representing medical, general and meta-search engines. Search engine efficiency was defined as the percentage of unique and relevant websites from all websites returned by each search engine. The quality of relevant information was appraised using the DISCERN tool and the concordance of the information offered by the website with the available evidence about osteoarthritis determined. A total of 3443 websites were retrieved, of which 344 were identified as unique and providing information relevant to osteoarthritis. The overall quality of website information was poor. There was no significant difference between types of search engine in sourcing relevant information; however, the information retrieved from medical search engines was of a higher quality. Fewer than a third of the websites identified as offering relevant information cited evidence to support their recommendations. Although the overall quality of website information about osteoarthritis was poor, medical search engines may provide consumers with the opportunity to source high-quality health information on the Internet. In the era of evidence-based medicine, one of the main obstacles to the Internet reaching its potential as a medical resource is the failure of websites to incorporate and attribute evidence-based information.
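The efficiency metric defined above (unique, relevant websites as a percentage of all websites returned) reduces to a one-line computation; the site names below are placeholders.

```python
def engine_efficiency(returned, unique_relevant):
    """Search engine efficiency as defined above: unique, relevant sites
    as a percentage of all sites the engine returned."""
    return 100.0 * len(unique_relevant) / len(returned) if returned else 0.0

returned = ["a.com", "b.com", "c.com", "a.com"]         # 4 results returned
print(engine_efficiency(returned, {"a.com", "b.com"}))  # -> 50.0
```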
Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine
ERIC Educational Resources Information Center
Alghoson, Abdullah
2017-01-01
Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…
Developing a distributed HTML5-based search engine for geospatial resource discovery
NASA Astrophysics Data System (ADS)
ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.
2013-12-01
With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components have been developed to manage geospatial resources, supporting tasks such as data discovery and data publishing. However, efficient discovery of geospatial resources remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users with different browsers and devices may have very different experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) an asynchronous record retrieval mode enhances search performance and user interactivity; (3) being based on HTML5, the search engine provides unified access for users with different devices (e.g. tablet and smartphone).
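The brokering approach with asynchronous retrieval can be sketched with asyncio: one query fans out to several catalog services concurrently and the records are merged. The catalog names are placeholders and the fetchers are stubs standing in for real network calls; this is not the paper's implementation.

```python
import asyncio

# Brokering sketch: fan one query out to several (stubbed) catalog services
# concurrently and merge the records. Catalog names are placeholders.
async def fetch(catalog, query):
    await asyncio.sleep(0)               # stands in for a network round trip
    return [f"{catalog}:{query}-record"]

async def broker(query, catalogs):
    results = await asyncio.gather(*(fetch(c, query) for c in catalogs))
    return [record for batch in results for record in batch]

records = asyncio.run(broker("land cover", ["CatalogA", "CatalogB"]))
print(records)  # -> ['CatalogA:land cover-record', 'CatalogB:land cover-record']
```

In a real broker, each fetcher would translate the query into the target catalog's protocol (e.g. CSW or OpenSearch) and normalize the returned metadata; gathering the calls concurrently is what gives the asynchronous mode its responsiveness.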
Using Dedal to share and reuse distributed engineering design information
NASA Technical Reports Server (NTRS)
Baya, Vinod; Baudin, Catherine; Mabogunje, Ade; Das, Aseem; Cannon, David M.; Leifer, Larry J.
1994-01-01
The overall goal of the project is to facilitate the reuse of previous design experience for the maintenance, repair and redesign of artifacts in the electromechanical engineering domain. An engineering team creates information in the form of meeting summaries, project memos, progress reports, engineering notes, spreadsheet calculations and CAD drawings. Design information captured in these media is difficult to reuse because the way design concepts are referred to evolves over the life of a project, and because decisions, requirements and structure are interrelated but rarely explicitly linked. Based on a protocol analysis of the information-seeking behavior of designers, we defined a language to describe the content and the form of design records and implemented this language in Dedal, a tool for indexing, modeling and retrieving design information. We first describe the approach to indexing and retrieval in Dedal. Next we describe ongoing work in extending Dedal's capabilities to a distributed environment by integrating it with the World Wide Web. This will enable members of a design team who are not co-located to share and reuse information.
77 FR 61401 - Notice of Availability of Government-Owned Inventions; Available for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-09
... FOR UNMANNED UNDERSEA VEHICLES//Patent No. 7,721,666: HULL-MOUNTED LINE RETRIEVAL AND RELEASE SYSTEM... EXTERNALLY MOUNTED SLEWING CRANE FOR SHIPPING CONTAINERS// Patent No. 7,730,843: HULL-MOUNTED LINE RETRIEVAL...
Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce
2014-03-27
...results for one million images running on 20 virtual machines are shown... SUBJECT TERMS: Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop
Deep Sludge Gas Release Event Analytical Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sams, Terry L.
2013-08-15
The purpose of the Deep Sludge Gas Release Event Analytical Evaluation (DSGRE-AE) is to evaluate the postulated hypothesis that a hydrogen GRE may occur in Hanford tanks containing waste sludges at levels greater than previously experienced. There is a need to understand gas retention and release hazards in sludge beds that are 200-300 inches deep. These sludge beds are deeper than historical Hanford sludge waste beds, and are created when waste is retrieved from older single-shell tanks (SSTs) and transferred to newer double-shell tanks (DSTs). Retrieval of waste from SSTs reduces the risk to the environment from leakage or potential leakage of waste into the ground from these tanks. However, an energetic event (flammable gas accident) in the retrieval receiver DST would be worse than slow leakage. Lines of inquiry, therefore, are: (1) can sludge waste be stored safely in deep beds; (2) can gas release events (GREs) be prevented by periodically degassing the sludge (e.g., with a mixer pump); or (3) does the retrieval strategy need to be altered to limit sludge bed height by retrieving into additional DSTs? The scope of this effort is to provide expert advice on whether or not to move forward with the generation of deep beds of sludge through retrieval of C-Farm tanks. Possible mitigation methods (e.g., using mixer pumps to release gas, retrieving into an additional DST) are being evaluated by a second team and are not discussed in this report. While available data and engineering judgment indicate that increased gas retention (retained gas fraction) in DST sludge at depths resulting from the completion of SST 241-C Tank Farm retrievals is not expected and, even if gas releases were to occur, they would be small and local, a positive USQ was declared (Occurrence Report EM-RP--WRPS-TANKFARM-2012-0014, "Potential Exists for a Large Spontaneous Gas Release Event in Deep Settled Waste Sludge").
The purpose of this technical report is to (1) present and discuss current understandings of gas retention and release mechanisms for deep sludge in U.S. Department of Energy (DOE) complex waste storage tanks; and (2) identify viable methods/criteria for demonstrating safety relative to deep sludge gas release events (DSGREs) in the near term to support the Hanford C-Farm retrieval mission. A secondary purpose is to identify viable methods/criteria for demonstrating safety relative to DSGREs in the longer term to support the mission to retrieve waste from the Hanford Tank Farms and deliver it to the Waste Treatment and Immobilization Plant (WTP). The potential DSGRE issue resulted in the declaration of a positive Unreviewed Safety Question (USQ). C-Farm retrievals are currently proceeding under a Justification for Continued Operation (JCO) that limits sludge levels in tanks 241-AN-101 and 241-AN-106 to 192 inches and 195 inches, respectively. C-Farm retrievals need deeper sludge levels (approximately 310 inches in 241-AN-101 and approximately 250 inches in 241-AN-106). This effort provides analytical data and justification to continue retrievals in a safe and efficient manner.
Searching the Internet for information on prostate cancer screening: an assessment of quality.
Ilic, Dragan; Risbridger, Gail; Green, Sally
2004-07-01
To identify how on-line information relating to prostate cancer screening (PCS) is best sourced, whether through general, medical, or meta-search engines, and to assess the quality of that information. Websites providing information about PCS were searched across 15 search engines representing three distinct types: general, medical, and meta-search engines. The quality of on-line information was assessed using the DISCERN quality assessment tool. Quality performance characteristics were analyzed by performing Mann-Whitney U tests. Search engine efficiency was measured for each search query as the percentage of relevant websites included for analysis out of the total returned, and analyzed by performing Kruskal-Wallis analysis of variance. Of 6690 websites reviewed, 84 unique websites were identified as providing information relevant to PCS. General and meta-search engines were significantly more efficient at retrieving relevant information on PCS compared with medical search engines. The quality of information was variable, with most of a poor standard. Websites that provided referral links to other resources and a citation of evidence provided significantly better quality information. In contrast, websites offering a direct service were more likely to provide significantly poorer quality information. The current lack of a clear consensus on guidelines and recommendations in the published data is also reflected by the variable quality of information found on-line. Specialized medical search engines were no more likely to retrieve relevant, high-quality information than general or meta-search engines.
Strength Measurements of Archive K Basin Sludge Using a Soil Penetrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delegard, Calvin H.; Schmidt, Andrew J.; Chenault, Jeffrey W.
2011-12-06
Spent fuel radioactive sludge present in the K East and K West spent nuclear fuel storage basins now resides in the KW Basin in six large underwater engineered containers. The sludge will be dispositioned in two phases under the Sludge Treatment Project: (1) hydraulic retrieval into sludge transport and storage containers (STSCs) and transport to interim storage in the Central Plateau, and (2) retrieval from the STSCs, treatment, and packaging for shipment to the Waste Isolation Pilot Plant. In the years the STSCs are stored, sludge strength is expected to increase through chemical reaction, intergrowth of sludge crystals, and compaction and dewatering by settling. Increased sludge strength can impact the type and operation of the retrieval equipment needed prior to final sludge treatment and packaging. It is important to determine whether water jetting, planned for sludge retrieval from STSCs, will be effective. Shear strength is a property known to correlate with the effectiveness of water jetting. Accordingly, the unconfined compressive strengths (UCS) of archive K Basin sludge samples and sludge blends were measured using a pocket penetrometer modified for hot cell use. Based on known correlations, UCS values can be converted to shear strengths. Twenty-six sludge samples, stored in hot cells for a number of years since last being disturbed, were identified as potential candidates for UCS measurement, and valid UCS measurements were made for twelve, each of which was found as moist or water-immersed solids at least 1/2-inch deep. Ten of the twelve samples were relatively weak, having consistencies described as 'very soft' to 'soft'. Two of the twelve samples, KE Pit and KC-4 P250, were strong, with consistencies described as 'very stiff' ('can be indented by a thumbnail') and 'stiff' ('can be indented by a thumb'), respectively. Both of these sludge samples are composites collected from KE Basin floor and Weasel Pit locations.
Despite both strong sludges having relatively high iron concentrations, attribution of their high strengths to this factor could not be made with confidence, as other measured sludge samples, also from the KE Basin floor and of high iron concentration, were relatively weak. The observed UCS and shear strengths for the two strong sludges were greater than observed in any prior testing of K Basin sludge, except for sludge processed at 185 °C under hydrothermal conditions.
Computer retrieval of bibliographies using an editing program
Brethauer, G.E.; Brokaw, V.L.
1979-01-01
A simple program permits use of the text editor 'qedx,' part of many computer systems, to input bibliographic entries and to retrieve specific entries that contain keywords of interest. Multiple keywords may be used sequentially to find specific entries.
Automated semantic indexing of figure captions to improve radiology image retrieval.
Kahn, Charles E; Rubin, Daniel L
2009-01-01
We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
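The advantage of concept-based over keyword-based indexing can be shown with a toy example; the term-to-concept table below is a stand-in for MetaMap/UMLS output, and the naive substring matching is for illustration only:

```python
# Surface terms that map to the same controlled-vocabulary concept.
# (Invented stand-in for UMLS Metathesaurus concept IDs.)
CONCEPT_OF = {
    "heart attack": "C_CARDIAC_EVENT",
    "myocardial infarction": "C_CARDIAC_EVENT",
}

def index_captions(captions):
    """Build a concept -> {caption_id} inverted index using naive
    substring matching (a real system would use MetaMap)."""
    index = {}
    for doc_id, text in captions.items():
        for term, concept in CONCEPT_OF.items():
            if term in text.lower():
                index.setdefault(concept, set()).add(doc_id)
    return index

def concept_search(index, query):
    """Retrieve captions that share the query's concept, even when
    the literal query string never appears in the caption text."""
    concept = CONCEPT_OF.get(query.lower())
    return index.get(concept, set())
```

A query for "heart attack" retrieves a caption mentioning only "myocardial infarction", which simple keyword matching would miss; this is the effect behind the 55% of queries where concept-based retrieval found additional results.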
Hanauer, David A; Wu, Danny T Y; Yang, Lei; Mei, Qiaozhu; Murkowski-Steffy, Katherine B; Vydiswaran, V G Vinod; Zheng, Kai
2017-03-01
The utility of biomedical information retrieval environments can be severely limited when users lack expertise in constructing effective search queries. To address this issue, we developed a computer-based query recommendation algorithm that suggests semantically interchangeable terms based on an initial user-entered query. In this study, we assessed the value of this approach, which has broad applicability in biomedical information retrieval, by demonstrating its application as part of a search engine that facilitates retrieval of information from electronic health records (EHRs). The query recommendation algorithm utilizes MetaMap to identify medical concepts from search queries and indexed EHR documents. Synonym variants from UMLS are used to expand the concepts along with a synonym set curated from historical EHR search logs. The empirical study involved 33 clinicians and staff who evaluated the system through a set of simulated EHR search tasks. User acceptance was assessed using the widely used technology acceptance model. The search engine's performance was rated consistently higher with the query recommendation feature turned on vs. off. The relevance of computer-recommended search terms was also rated high, and in most cases the participants had not thought of these terms on their own. The questions on perceived usefulness and perceived ease of use received overwhelmingly positive responses. A vast majority of the participants wanted the query recommendation feature to be available to assist in their day-to-day EHR search tasks. Challenges persist for users to construct effective search queries when retrieving information from biomedical documents including those from EHRs. This study demonstrates that semantically-based query recommendation is a viable solution to addressing this challenge. Published by Elsevier Inc.
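A minimal sketch of the synonym-based query recommendation idea follows; the synonym table is an invented stand-in for the UMLS variants plus the log-curated synonym set described above:

```python
# Invented synonym table; a real system would populate this from UMLS
# synonym variants and from historical EHR search logs.
SYNONYMS = {
    "htn": {"hypertension", "high blood pressure"},
    "hypertension": {"htn", "high blood pressure"},
    "mi": {"myocardial infarction", "heart attack"},
}

def recommend_terms(query):
    """For each recognized token in the query, return semantically
    interchangeable alternatives the user may not have thought of."""
    suggestions = {}
    for token in query.lower().split():
        if token in SYNONYMS:
            suggestions[token] = sorted(SYNONYMS[token])
    return suggestions
```

The recommendations can then be offered to the user or used to expand the query before it hits the indexed EHR documents.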
Intelligent Melting Probes - How to Make the Most out of our Data
NASA Astrophysics Data System (ADS)
Kowalski, J.; Clemens, J.; Chen, S.; Schüller, K.
2016-12-01
Direct exploration of glaciers, ice sheets, or subglacial environments poses a big challenge. Different technological solutions have been proposed and deployed in recent decades, examples being hot-water drills and different melting probe designs. Most recent engineering concepts integrate a variety of on-board sensors, e.g. temperature sensors, pressure sensors, or an inertial measurement unit (IMU). Not only do individual sensors provide valuable insight into the current state of the probe; analyzed collectively, they often contain a wealth of additional information. This quite naturally raises the question: how can we make the most out of our data? We find that it is necessary to implement intelligent data integration and sensor fusion strategies to retrieve a maximum amount of information from the observations. In this contribution, we are inspired by the engineering design of the IceMole, a minimally invasive, steerable melting probe. We discuss two sensor integration strategies relevant to IceMole melting scenarios. First, we present a multi-sensor fusion approach to accurately retrieve subsurface position and attitude information. It uses an extended Kalman filter to integrate data from an on-board IMU, a differential magnetometer system, the screw feed, and the travel time of acoustic signals originating from emitters at the ice surface. Furthermore, an evidential mapping algorithm estimates a map of the environment from data of ultrasound phased arrays in the probe's head. Various results from tests in a swimming pool and in glacier ice will be shown during the presentation. A second block considers the fluid-dynamical state in the melting channel, as well as the ambient cryo-environment. It is devoted to retrieving information from on-board temperature and pressure sensors. Here, we report on preliminary results from re-analysing past field test data.
Knowledge from integrated sensor data likewise provides valuable input for the parameter identification and verification of data-based models. Because such models do not depend on specific physical laws, the approach remains applicable with only minor modifications; it is highly transferable and has not yet been exploited rigorously. This could be a potential future direction.
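The fusion principle behind the extended Kalman filter mentioned above can be illustrated with a deliberately minimal scalar update step; the full EKF additionally propagates multi-dimensional state and linearizes nonlinear motion and measurement models:

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman correction step.
    x_pred, p_pred: predicted state and its variance (from the model);
    z, r: a new measurement and its variance (from a sensor).
    Returns the fused estimate and its reduced variance."""
    k = p_pred / (p_pred + r)       # Kalman gain: trust measurement vs. model
    x = x_pred + k * (z - x_pred)   # corrected state estimate
    p = (1.0 - k) * p_pred          # corrected (smaller) variance
    return x, p
```

When prediction and measurement are equally uncertain, the fused estimate lands halfway between them and its variance is halved, which is exactly the "more information from sensors analyzed collectively" effect the abstract describes.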
NASA Astrophysics Data System (ADS)
Ho, Chris M. W.; Marshall, Garland R.
1993-12-01
SPLICE is a program that processes partial query solutions retrieved from 3D, structural databases to generate novel, aggregate ligands. It is designed to interface with the database searching program FOUNDATION, which retrieves fragments containing any combination of a user-specified minimum number of matching query elements. SPLICE eliminates aspects of structures that are physically incapable of binding within the active site. Then, a systematic rule-based procedure is performed upon the remaining fragments to ensure receptor complementarity. All modifications are automated and remain transparent to the user. Ligands are then assembled by linking components into composite structures through overlapping bonds. As a control experiment, FOUNDATION and SPLICE were used to reconstruct a known HIV-1 protease inhibitor after it had been fragmented, reoriented, and added to a sham database of fifty different small molecules. To illustrate the capabilities of this program, a 3D search query containing the pharmacophoric elements of an aspartic proteinase-inhibitor crystal complex was searched using FOUNDATION against a subset of the Cambridge Structural Database. One hundred thirty-one compounds were retrieved, each containing any combination of at least four query elements. Compounds were automatically screened and edited for receptor complementarity. Numerous combinations of fragments were discovered that could be linked to form novel structures, containing a greater number of pharmacophoric elements than any single retrieved fragment.
WaterlooClarke: TREC 2015 Clinical Decision Support Track
2015-11-20
questions (diagnosis, test and treatment articles). The two different full-text search engines we adopted in order to search over the collection of articles...two different search engines using reciprocal rank fusion. The evaluation of the submitted runs using partially marked results of Text Retrieval Conference (TREC) from the previous year shows that the methodologies are promising.
E-Referencer: Transforming Boolean OPACs to Web Search Engines.
ERIC Educational Resources Information Center
Khoo, Christopher S. G.; Poo, Danny C. C.; Toh, Teck-Kang; Hong, Glenn
E-Referencer is an expert intermediary system for searching library online public access catalogs (OPACs) on the World Wide Web. It is implemented as a proxy server that mediates the interaction between the user and Boolean OPACs. It transforms a Boolean OPAC into a retrieval system with many of the search capabilities of Web search engines.…
ERIC Educational Resources Information Center
Tunender, Heather; Ervin, Jane
1998-01-01
Character strings were planted in a World Wide Web site (Project Whistlestop) to test the indexing and retrieval rates of five Web search tools (Lycos, Infoseek, AltaVista, Yahoo, Excite). It was found that the search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…
Pit 9 Category of Transuranic Waste Stored Below Ground within Area G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hargis, Kenneth M.
2014-01-08
A large wildfire called the Las Conchas Fire burned large areas near Los Alamos National Laboratory (LANL) in 2011 and heightened public concern and news media attention over transuranic (TRU) waste stored at LANL's Technical Area 54 (TA-54) Area G waste management facility. The removal of TRU waste from Area G had been placed at a lower priority in budget decisions for environmental cleanup at LANL because TRU waste removal is not included in the March 2005 Compliance Order on Consent (Reference 1), which is the primary regulatory driver for environmental cleanup at LANL. The Consent Order is an agreement between LANL and the New Mexico Environment Department (NMED) that contains specific requirements and schedules for cleaning up historical contamination at the LANL site. After the Las Conchas Fire, the U.S. Department of Energy (DOE) held discussions with the NMED on accelerating TRU waste removal from LANL and disposing of it at the Waste Isolation Pilot Plant (WIPP). This report summarizes available information on the origin, configuration, and composition of the waste containers within Pit 9, their physical and radiological characteristics, and issues that may be encountered in their retrieval and processing. Review of the available information indicates that Pit 9 should present no major issues in retrieval and processing, and most drums contain TRU waste that can be shipped to WIPP. The primary concern in retrieval is the integrity of containers that have been stored below ground for 35 to 40 years. The most likely issue that will be encountered in processing containers retrieved from Pit 9 is the potential for items that are prohibited at WIPP, such as sealed containers greater than four liters in size and free liquids that exceed WIPP limits.
Gulati, Karan; Kogawa, Masakazu; Prideaux, Matthew; Findlay, David M; Atkins, Gerald J; Losic, Dusan
2016-12-01
There is an ongoing demand for new approaches for treating localized bone pathologies. Here we propose a new strategy for treatment of such conditions, via local delivery of hormones/drugs to the trauma site using drug-releasing nano-engineered implants. The proposed implants were prepared in the form of small Ti wires/needles with a nano-engineered oxide layer composed of an array of titania nanotubes (TNTs). TNT implants were inserted into a 3D collagen gel matrix containing human osteoblast-like cells, and the results confirmed cell migration onto the implants and their attachment and spreading. To investigate therapeutic efficacy, TNTs/Ti wires loaded with parathyroid hormone (PTH), an approved anabolic therapeutic for the treatment of severe bone fractures, were inserted into 3D gels containing osteoblast-like cells. Gene expression studies revealed a suppression of SOST (sclerostin) and an increase in RANKL (receptor activator of nuclear factor kappa-B ligand) mRNA expression, confirming the release of PTH from TNTs at concentrations sufficient to alter cell function. The performance of the TNT wire implants using an example of a drug needed at relatively higher concentrations, the anti-inflammatory drug indomethacin, is also demonstrated. Finally, the mechanical stability of the prepared implants was tested by their insertion into bovine trabecular bone cores ex vivo followed by retrieval, which confirmed the robustness of the TNT structures. This study provides proof of principle for the suitability of the TNT/Ti wire implants for localized bone therapy, which can be customized to cater for specific therapeutic requirements. Copyright © 2016 Elsevier B.V. All rights reserved.
Development and evaluation of a biomedical search engine using a predicate-based vector space model.
Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey
2013-10-01
Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf weighting and a boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and a keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p<.001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank-order adjustment (p<.001) and 1.34 versus 0.98 with rank-order adjustment (p<.001) for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
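The core ranking idea, a vector space model over predicate triples, can be sketched as follows; the paper's adjusted tf-idf and boost function are not reproduced here, so this is plain tf-idf cosine similarity with triples in place of keywords:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of documents, each a list of (subject, relation, object)
    triples. Returns one sparse tf-idf vector (dict) per document, with
    whole triples as the vector dimensions instead of keywords."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Because a dimension is an entire triple, two documents only score as similar when they assert the same structured relationship, not merely when they mention the same words.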
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, H.; Vong, C. M.; Wong, P. K.
2010-05-21
With the development of modern technology, modern vehicles adopt electronic control systems for injection and ignition. Traditionally, whenever there is any malfunctioning in an automotive engine, an automotive mechanic performs a diagnosis of the engine's ignition system to check for exceptional symptoms. In this paper, we present a case-based reasoning (CBR) approach to help solve this human diagnosis problem. Nevertheless, one drawback of a CBR system is that the case library expands gradually as the system is run repeatedly, which may cause inaccuracy and longer CBR retrieval times. To tackle this problem, a case-based maintenance (CBM) framework is employed so that the case library of the CBR system is compressed by clustering to produce a set of representative cases. As a result, the performance (in retrieval accuracy and time) of the whole CBR system can be improved.
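One simple way to realize the case-base compression described above is to cluster the case feature vectors and keep only the stored case nearest each cluster centre; the hand-rolled k-means below is a generic stand-in for whatever clustering the CBM framework actually uses:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    """Component-wise mean of a non-empty list of feature tuples."""
    return tuple(sum(xs) / len(cluster) for xs in zip(*cluster))

def compress_cases(cases, k, iters=20, seed=0):
    """Cluster the case library (list of numeric feature tuples) and
    return k representative cases, shrinking the library the CBR
    retrieval step must scan."""
    rng = random.Random(seed)
    centers = rng.sample(cases, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for c in cases:
            j = min(range(k), key=lambda i: dist2(c, centers[i]))
            clusters[j].append(c)
        centers = [mean(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    # Representative = an actual stored case closest to each centre,
    # so retrieval still returns real, reusable diagnosis cases.
    return [min(cases, key=lambda c: dist2(c, ctr)) for ctr in centers]
```

Retrieval then searches only the k representatives instead of the whole ever-growing library, trading a little recall for speed and consistency.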
Semantics-Based Intelligent Indexing and Retrieval of Digital Images - A Case Study
NASA Astrophysics Data System (ADS)
Osman, Taha; Thakker, Dhavalkumar; Schaefer, Gerald
The proliferation of digital media has led to a huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they typically rely on statistical analysis of query keyword recurrence in the image annotation or surrounding text. In this chapter we present a semantically enabled image annotation and retrieval engine that is designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, thus allowing for more intelligent reasoning about the image content and subsequently obtaining a more accurate set of results and a richer set of alternatives matching the original query. We also show how our well-analysed and designed domain ontology contributes to the implicit expansion of user queries, as well as presenting our initial thoughts on exploiting lexical databases for explicit semantic-based query expansion.
[Tissue engineering of urinary bladder using acellular matrix].
Glybochko, P V; Olefir, Yu V; Alyaev, Yu G; Butnaru, D V; Bezrukov, E A; Chaplenko, A A; Zharikova, T M
2017-04-01
Tissue engineering has become a new promising strategy for repairing damaged organs of the urinary system, including the bladder. The basic idea of tissue engineering is to integrate cellular technology and advanced biocompatible materials to replace or repair tissues and organs. The aim of the study is to objectively reflect the current trends and advances in tissue engineering of the bladder using acellular matrix through a systematic search of preclinical and clinical studies of interest. Relevant studies, including those on methods of tissue engineering of the urinary bladder, were retrieved from multiple databases, including Scopus, Web of Science, PubMed, and Embase. The reference lists of the retrieved review articles were analyzed for missing relevant publications. In addition, a manual search for registered clinical trials was conducted in clinicaltrials.gov. Following the above search strategy, a total of 77 eligible studies were selected for further analysis. Studies differed in the types of animal models, supporting structures, cells, and growth factors. Among those, studies using cell-free matrix were selected for a more detailed analysis. Partial restoration of the urothelium layer was observed in most studies where acellular grafts were used for cystoplasty, but no growth of the muscle layer was observed. This is the main reason why cellular structures are more commonly used in clinical practice.
Retrieval System for Calcined Waste for the Idaho Cleanup Project - 12104
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastman, Randy L.; Johnston, Beau A.; Lower, Danielle E.
This paper describes the conceptual approach to retrieve radioactive calcine waste, hereafter called calcine, from stainless steel storage bins contained within concrete vaults. The retrieval system will allow evacuation of the granular solids (calcine) from the storage bins through the use of stationary vacuum nozzles. The nozzles will use air jets for calcine fluidization and will be able to rotate and direct the fluidization or displacement of the calcine within the bin. Each bin will have a single retrieval system installed prior to operation to prevent worker exposure to the high radiation fields. The addition of an articulated camera arm will allow for operations monitoring, and the arm will be equipped with contingency tools to aid in calcine removal. Possible challenges (calcine bridging and rat-holing) associated with calcine retrieval and transport, including potential solutions for bin pressurization, calcine fluidization, and waste confinement, are also addressed. The Calcine Disposition Project has the responsibility to retrieve, treat, and package HLW calcine. The calcine retrieval system has been designed to incorporate the functions and technical characteristics established by the retrieval system functional analysis. By adequately implementing the highest-ranking technical characteristics into the design of the retrieval system, the system will be able to satisfy the functional requirements. The retrieval system conceptual design provides the means for removing bulk calcine from the bins of the CSSF vaults. Top-down vacuum retrieval coupled with an articulating camera arm will allow for a robust, contained process capable of evacuating bulk calcine from bins and transporting it to the processing facility. The system is designed to fluidize, vacuum, transport, and direct the calcine from its current location to the CSSF roof-top transport lines.
An articulating camera arm, deployed through an adjacent access riser, will work in conjunction with the retrieval nozzle to aid in calcine fluidization, remote viewing, clumped calcine breaking, and recovery from off-normal conditions. As the design of the retrieval system progresses from conceptual to preliminary, increasing attention will be directed toward detailed design and proof-of-concept testing. (authors)
2014-09-08
...Figure 1.4: Number of publications containing the term “metal-organic frameworks” (Source: ISI Web of Science, retrieved April 14th, 2014)... IR spectra were recorded with a PerkinElmer Spectrum One in the range 400-4000 cm^-1. To record the IR spectrum, an IR beam is passed through the sample (in...
Lin, Jimmy
2008-01-01
Background Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. Results We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. Conclusion The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain. PMID:18538027
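The reranking experiment can be sketched as PageRank over the content-similarity network, combined linearly with the retrieval engine's score; the 0.7/0.3 mix below is an assumed illustration, not the paper's tuned weighting:

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank. links: {node: [neighbor, ...]} over the
    content-similarity network of citations. Returns {node: score}."""
    nodes = list(links)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = d * pr[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:  # dangling node: spread its mass evenly
                for v in nodes:
                    nxt[v] += d * pr[u] / n
        pr = nxt
    return pr

def rerank(retrieval_scores, links, alpha=0.7):
    """Combine the retrieval engine's score with the link-analysis score
    and return documents in the reranked order."""
    pr = pagerank(links)
    return sorted(
        retrieval_scores,
        key=lambda doc: alpha * retrieval_scores[doc] + (1 - alpha) * pr[doc],
        reverse=True,
    )
```

A citation that many related citations point to rises in the ranking even when its raw retrieval score ties with its neighbours, which is the effect the reranking experiments measure.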
LMSC PUBLISHED CONTRIBUTIONS, 1966 IMPRINTS: A CITATION BIBLIOGRAPHY,
PHYSICS, BIBLIOGRAPHIES), (*AERONAUTICS, BIBLIOGRAPHIES), (*ASTRONAUTICS, BIBLIOGRAPHIES), (*MATERIALS, BIBLIOGRAPHIES), (*ELECTRONICS, BIBLIOGRAPHIES), (*ENGINEERING, BIBLIOGRAPHIES), ASTROPHYSICS, NUCLEAR PHYSICS, MECHANICS, METALLURGY, CERAMIC MATERIALS, SOLID STATE PHYSICS, INFORMATION RETRIEVAL, PROPULSION SYSTEMS, BIONICS, REPORTS
Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation.
Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu
2017-07-01
In the field of pathology, whole slide image (WSI) has become the major carrier of visual and diagnostic information. Content-based image retrieval among WSIs can aid the diagnosis of an unknown pathological image by finding its similar regions in WSIs with diagnostic information. However, the huge size and complex content of WSI pose several challenges for retrieval. In this paper, we propose an unsupervised, accurate, and fast retrieval method for a breast histopathological image. Specifically, the method presents a local statistical feature of nuclei for morphology and distribution of nuclei, and employs the Gabor feature to describe the texture information. The latent Dirichlet allocation model is utilized for high-level semantic mining. Locality-sensitive hashing is used to speed up the search. Experiments on a WSI database with more than 8000 images from 15 types of breast histopathology demonstrate that our method achieves about 0.9 retrieval precision as well as promising efficiency. Based on the proposed framework, we are developing a search engine for an online digital slide browsing and retrieval platform, which can be applied in computer-aided diagnosis, pathology education, and WSI archiving and management.
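The locality-sensitive-hashing step used above to speed up the search can be illustrated with a random-hyperplane sketch: vectors on the same side of a set of random hyperplanes share a hash key, so a query only scans its own bucket instead of the whole database. The feature vectors, bit count, and class names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative random-hyperplane LSH: near-duplicate feature vectors land in
# the same bucket, turning a linear scan into a bucket lookup. Feature
# extraction (nuclei statistics, Gabor, LDA topics) is out of scope here.
import random
from collections import defaultdict

class HyperplaneLSH:
    def __init__(self, dim, n_bits=16, seed=0):
        rnd = random.Random(seed)
        self.planes = [[rnd.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_bits)]
        self.buckets = defaultdict(list)

    def _key(self, vec):
        # One sign bit per hyperplane: which side of the plane the vector lies on.
        return tuple(sum(p * x for p, x in zip(plane, vec)) >= 0
                     for plane in self.planes)

    def add(self, label, vec):
        self.buckets[self._key(vec)].append((label, vec))

    def query(self, vec):
        # Only candidates in the matching bucket are scanned.
        return [label for label, _ in self.buckets[self._key(vec)]]
```

More hash bits make buckets smaller (faster, but more likely to miss near neighbors); real systems usually combine several such tables.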
14. END VIEW OF THE PLUTONIUM STORAGE VAULT FROM THE ...
14. END VIEW OF THE PLUTONIUM STORAGE VAULT FROM THE REMOTE CONTROL STATION. THE STACKER-RETRIEVER, A REMOTELY-OPERATED, MECHANIZED TRANSPORT SYSTEM, RETRIEVES CONTAINERS OF PLUTONIUM FROM SAFE GEOMETRY PALLETS STORED ALONG THE LENGTH OF THE VAULT. THE STACKER-RETRIEVER RUNS ALONG THE AISLE BETWEEN THE PALLETS OF THE STORAGE CHAMBER. (3/2/86) - Rocky Flats Plant, Plutonium Recovery Facility, Northwest portion of Rocky Flats Plant, Golden, Jefferson County, CO
2007-03-01
software level retrieve state information that can inherently contain more contextual information. As a result, such mechanisms can be applied in more...ease by which state information can be gathered for monitoring purposes. For example, we consider soft security to allow for easier state retrieval...files are to be checked and what parameters are to be verified. The independent auditor periodically retrieves information pertaining to the files in
Mineral Resources Data System (MRDS)
Mason, G.T.; Arndt, R.E.
1996-01-01
The U.S. Geological Survey (USGS) operates the Mineral Resources Data System (MRDS), a digital system that contained 111,955 records on Sept. 1, 1995. Records describe metallic and industrial commodity deposits, mines, prospects, and occurrences in the United States and selected other countries. These records have been created over the years by USGS commodity specialists and through cooperative agreements with geological surveys of U.S. States and other countries. This CD-ROM contains the complete MRDS data base, several subsets of it, and software to allow data retrieval and display. Data retrievals are made by using GSSEARCH, a program that is included on this CD-ROM. Retrievals are made by specifying fields or any combination of the fields that provide information on deposit name, location, commodity, deposit model type, geology, mineral production, reserves, and references. A tutorial is included. Retrieved records may be printed or written to a hard disk file in four different formats: ascii, fixed, comma delimited, and DBASE compatible.
Presentation video retrieval using automatically recovered slide and spoken text
NASA Astrophysics Data System (ADS)
Cooper, Matthew
2013-03-01
Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
Challenges and methodology for indexing the computerized patient record.
Ehrler, Frédéric; Ruch, Patrick; Geissbuhler, Antoine; Lovis, Christian
2007-01-01
Patient records contain the most crucial documents for managing the treatment and healthcare of patients in the hospital. Retrieving information from these records in an easy, quick and safe way helps care providers save time and find important facts about their patients' health. This paper presents the scalability issues induced by the indexing and retrieval of the information contained in patient records. For this study, EasyIR, an information retrieval tool that performs full-text queries and retrieves the related documents, was used. An evaluation of the performance reveals that the indexing process suffers from overhead as a consequence of the particular structure of patient records. Most IR tools are designed to manage very large numbers of documents in a single index, whereas our hypothesis imposed one index per record, which usually implies few documents. As the number of modifications and creations of patient records in a day is significant, a specialized and efficient indexing tool is required.
Automated Semantic Indexing of Figure Captions to Improve Radiology Image Retrieval
Kahn, Charles E.; Rubin, Daniel L.
2009-01-01
Objective We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. Design The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Measurements Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Results Estimated precision was 0.897 (95% confidence interval, 0.857–0.937). Estimated recall was 0.930 (95% confidence interval, 0.838–1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Conclusion Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval. PMID:19261938
Probabilistic and machine learning-based retrieval approaches for biomedical dataset retrieval
Karisani, Payam; Qin, Zhaohui S; Agichtein, Eugene
2018-01-01
Abstract The bioCADDIE dataset retrieval challenge brought together different approaches to retrieval of biomedical datasets relevant to a user’s query, expressed as a text description of a needed dataset. We describe experiments in applying a data-driven, machine learning-based approach to biomedical dataset retrieval as part of this challenge. We report on a series of experiments carried out to evaluate the performance of both probabilistic and machine learning-driven techniques from information retrieval, as applied to this challenge. Our experiments with probabilistic information retrieval methods, such as query term weight optimization, automatic query expansion and simulated user relevance feedback, demonstrate that automatically boosting the weights of important keywords in a verbose query is more effective than other methods. We also show that although there is a rich space of potential representations and features available in this domain, machine learning-based re-ranking models are not able to improve on probabilistic information retrieval techniques with the currently available training data. The models and algorithms presented in this paper can serve as a viable implementation of a search engine to provide access to biomedical datasets. The retrieval performance is expected to be further improved by using additional training data that is created by expert annotation, or gathered through usage logs, clicks and other processes during natural operation of the system. Database URL: https://github.com/emory-irlab/biocaddie PMID:29688379
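A minimal sketch of the general idea of boosting important keywords in a verbose query, assuming inverse document frequency as the importance signal; the function names, `boost` factor, and `top_k` cutoff are hypothetical stand-ins, not the submission's actual weighting scheme.

```python
# Hedged sketch of query-term weight boosting: terms are weighted by IDF so
# rare, contentful words dominate, and the highest-IDF terms get an extra boost.
import math
from collections import Counter

def idf_weights(query_terms, docs):
    """IDF per query term over tokenized documents (lists of words)."""
    n = len(docs)
    return {t: math.log((n + 1) / (1 + sum(t in d for d in docs)))
            for t in query_terms}

def score(doc, query_terms, weights, boost=2.0, top_k=2):
    """Weighted term-frequency score; the top_k highest-IDF terms are boosted."""
    boosted = set(sorted(weights, key=weights.get, reverse=True)[:top_k])
    tf = Counter(doc)
    return sum(tf[t] * weights[t] * (boost if t in boosted else 1.0)
               for t in query_terms)
```

A verbose query then favors documents matching its rare keywords rather than its common filler words, which is the behavior the experiments above found most effective.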
NASA Technical Reports Server (NTRS)
1970-01-01
Results are presented of engineering tests of the Surveyor III television camera, which resided on the moon for 2 and 1/2 years before being brought back to earth by the Apollo XII astronauts. Electric circuits; electrical, mechanical, and optical components and subsystems; the vidicon tube; and a variety of internal materials and surface coatings were examined to determine the effects of lunar exposure. Anomalies and failures uncovered were analyzed. For the most part, the camera parts withstood the extreme environment exceedingly well except where degradation of obsolete parts or suspect components had been anticipated. No significant evidence of cold welding was observed, and the anomalies were largely attributable to causes other than lunar exposure. Very little evidence of micrometeoroid impact was noted. Discoloration of material surfaces, one of the major effects noted, was found to be due to lunar dust contamination and radiation damage. The extensive test data contained in this report are supplemented by results of tests of other Surveyor parts retrieved by the Apollo XII astronauts, which are contained in a companion report.
Concept Based Tie-breaking and Maximal Marginal Relevance Retrieval in Microblog Retrieval
2014-11-01
the same score, another signal will be used to rank these documents to break the ties, but the relative orders of other documents against these...documents remain the same. The tie-breaking step above is repeatedly applied to further break ties until all candidate signals are applied and the ranking...searched it on the Yahoo! search engine, which returned some query suggestions for the query. The original queries as well as their query suggestions
Deep Borehole Field Test Requirements and Controlled Assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest
2015-07-01
This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion. ACKNOWLEDGEMENTS: This set of requirements and assumptions has benefited greatly from reviews by Gordon Appel, Geoff Freeze, Kris Kuhlman, Bob MacKinnon, Steve Pye, David Sassani, Dave Sevougian, and Jiann Su.
Scalable ranked retrieval using document images
NASA Astrophysics Data System (ADS)
Jain, Rajiv; Oard, Douglas W.; Doermann, David
2013-12-01
Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content based image retrieval finds a substantial number of documents that text retrieval misses, and that when used as a basis for relevance feedback can yield improvements in retrieval effectiveness.
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
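The gain-curve idea can be sketched as follows: smooth each retrieved spectrum, average the smooth-to-raw ratio over many spectra to obtain one common gain curve, then multiply every spectrum in the scene by that gain. A simple moving average stands in for the cubic-spline smoothing filter, and the spectra, window width, and function names are illustrative assumptions.

```python
# Hedged sketch of the common-gain-curve smoothing technique. A moving average
# stands in for the cubic-spline filter used in the actual method.

def moving_average(spec, w=3):
    """Boxcar-smooth a spectrum (list of band values)."""
    half = w // 2
    out = []
    for i in range(len(spec)):
        window = spec[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def common_gain(spectra):
    """Average the smooth/raw ratio per band over all spectra in a scene."""
    n_band = len(spectra[0])
    gains = [[moving_average(s)[i] / s[i] for i in range(n_band)]
             for s in spectra]
    return [sum(g[i] for g in gains) / len(gains) for i in range(n_band)]

def apply_gain(spec, gain):
    """Apply the shared gain curve to one reflectance spectrum."""
    return [x * g for x, g in zip(spec, gain)]
```

Because the gain is shared across the scene, artifacts common to all spectra (the residual absorption features) are suppressed while per-pixel spectral differences are preserved.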
Enhancement of utilization of encryption engine
Robertson, Robert J.; Witzke, Edward L.
2008-04-22
A method of enhancing the throughput of a pipelined encryption/decryption engine, where the encryption/decryption process has a predetermined number of stages and feedback is provided around the stages (and around the engine as a whole), by: receiving a source datablock and an encryption/decryption context identifier for a given stage; indexing by the encryption/decryption context identifier into a bank of initial variables to retrieve the initial variable for the source datablock; and generating an output datablock from the source datablock and its corresponding initial variable.
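A toy sketch of the mechanism described above: datablocks from multiple independent contexts are interleaved through one engine, and a per-context bank of initial variables supplies the chaining feedback for each block. A plain XOR stands in for the real block cipher, and all names are illustrative, not the patent's terminology.

```python
# Illustrative sketch: per-context initial-variable bank for an interleaved
# engine. XOR is a stand-in for the actual cipher stages.

class PipelinedEngine:
    def __init__(self):
        self.iv_bank = {}  # context identifier -> current initial variable

    def open_context(self, ctx_id, iv):
        self.iv_bank[ctx_id] = iv

    def process(self, ctx_id, block):
        # Index into the IV bank by context identifier, combine with the
        # source datablock, then store the output as the next IV so chaining
        # continues independently per context.
        iv = self.iv_bank[ctx_id]
        out = bytes(b ^ v for b, v in zip(block, iv))
        self.iv_bank[ctx_id] = out  # feedback for this context's next block
        return out
```

Because each context carries its own chaining state, the pipeline never has to stall waiting for one stream's feedback; blocks from other contexts can fill the idle stages.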
Galbusera, Fabio; Brayda-Bruno, Marco; Freutel, Maren; Seitz, Andreas; Steiner, Malte; Wehrle, Esther; Wilke, Hans-Joachim
2012-01-01
Previous surveys showed a poor quality of the web sites providing health information about low back pain. However, the rapid and continuous evolution of the Internet content may question the current validity of those investigations. The present study aims to quantitatively assess the quality of the Internet information about low back pain retrieved with the most commonly employed search engines. An Internet search with the keywords "low back pain" was performed with Google, Yahoo!® and Bing™ in the English language. The top 30 hits obtained with each search engine were evaluated by five independent raters, and the ratings were averaged, following criteria derived from previous works. All search results were categorized as declaring compliance with a quality standard for health information (e.g. HONCode) or not, and based on the web site type (Institutional, Free informative, Commercial, News, Social Network, Unknown). The quality of the hits retrieved by the three search engines was extremely similar. The web sites had a clear purpose and were easy to navigate, but mostly lacked validity and quality in the provided links. Conformity to a quality standard was correlated with a markedly greater quality of the web sites in all respects. Institutional web sites had the best validity and ease of use. Free informative web sites had good quality but markedly lower validity compared to Institutional web sites. Commercial web sites provided more biased information. News web sites were well designed and easy to use, but lacked validity. The average quality of the hits retrieved by the most commonly employed search engines could be defined as satisfactory and compares favorably with previous investigations. User awareness of the need to check the quality of the information remains a concern.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong; Zhou, Nanrun
2018-06-01
The diffractive-imaging-based encryption (DIBE) scheme has aroused wide interest due to its compact architecture and low requirements on conditions. Nevertheless, the primary information can hardly be recovered exactly in real applications when considering the speckle noise and potential occlusion imposed on the ciphertext. To deal with this issue, a customized data container (CDC) is introduced into DIBE and a new phase retrieval algorithm (PRA) for plaintext retrieval is proposed. The PRA, designed according to the peculiarities of the CDC, combines two key techniques from previous approaches, i.e., input-support-constraint and median filtering. The proposed scheme can guarantee total reconstruction of the primary information despite heavy noise or occlusion, and its effectiveness and feasibility have been demonstrated with simulation results.
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
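The Bayesian retrieval step above can be illustrated in one dimension: a lognormal prior on a hydrometeor quantity is combined with a Gaussian observation likelihood around a forward model, and the posterior density is maximized over a grid. The linear forward model and all numeric values below are made-up stand-ins for the radiative transfer modeling in the paper.

```python
# Toy 1-D illustration of maximum-a-posteriori retrieval with a lognormal
# prior and Gaussian observation error. All numbers are illustrative.
import math

def log_posterior(x, obs, forward, mu=0.0, sigma=1.0, obs_err=2.0):
    if x <= 0:
        return float("-inf")
    # Lognormal prior on the hydrometeor quantity x ...
    lp_prior = -math.log(x) - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
    # ... plus a Gaussian likelihood of the observed brightness temperature.
    lp_like = -(obs - forward(x)) ** 2 / (2 * obs_err ** 2)
    return lp_prior + lp_like

def map_retrieve(obs, forward, grid):
    """Maximize the posterior density over a grid of candidate values."""
    return max(grid, key=lambda x: log_posterior(x, obs, forward))
```

The prior is what resolves the non-uniqueness the abstract mentions: when the observation constrains x only weakly, the retrieval falls back toward the climatological distribution encoded in the prior.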
Clinician search behaviors may be influenced by search engine design.
Lau, Annie Y S; Coiera, Enrico; Zrimec, Tatjana; Compton, Paul
2010-06-30
Searching the Web for documents using information retrieval systems plays an important part in clinicians' practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors. Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences. In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians' interactions with the systems were coded and analyzed for clinicians' search actions and query reformulation strategies. The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query, that is, these clinicians exhibited a "breadth-first" search behaviour. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a "depth-first" search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way. 
This study provides evidence that different search engine designs are associated with different user search behaviors.
Sampson, Margaret; Barrowman, Nicholas J; Moher, David; Clifford, Tammy J; Platt, Robert W; Morrison, Andra; Klassen, Terry P; Zhang, Li
2006-02-24
Most electronic search efforts directed at identifying primary studies for inclusion in systematic reviews rely on the optimal Boolean search features of search interfaces such as DIALOG and Ovid. Our objective is to test the ability of an Ultraseek search engine to rank MEDLINE records of the included studies of Cochrane reviews within the top half of all the records retrieved by the Boolean MEDLINE search used by the reviewers. Collections were created using the MEDLINE bibliographic records of included and excluded studies listed in the review and all records retrieved by the MEDLINE search. Records were converted to individual HTML files. Collections of records were indexed and searched through a statistical search engine, Ultraseek, using review-specific search terms. Our data sources, systematic reviews published in the Cochrane library, were included if they reported using at least one phase of the Cochrane Highly Sensitive Search Strategy (HSSS), provided citations for both included and excluded studies and conducted a meta-analysis using a binary outcome measure. Reviews were selected if they yielded between 1000-6000 records when the MEDLINE search strategy was replicated. Nine Cochrane reviews were included. Included studies within the Cochrane reviews were found within the first 500 retrieved studies more often than would be expected by chance. Across all reviews, recall of included studies into the top 500 was 0.70. There was no statistically significant difference in ranking when comparing included studies with just the subset of excluded studies listed as excluded in the published review. The relevance ranking provided by the search engine was better than expected by chance and shows promise for the preliminary evaluation of large results from Boolean searches. A statistical search engine does not appear to be able to make fine discriminations concerning the relevance of bibliographic records that have been pre-screened by systematic reviewers.
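The recall-into-the-top-500 measurement reported above reduces to a small helper: the fraction of the reviewers' included studies that appear in the top k of the engine's ranking. The record identifiers in the test are hypothetical.

```python
# Sketch of the evaluation metric: recall of included studies within the
# top k records of a ranked retrieval list.

def recall_at_k(ranked_ids, relevant_ids, k=500):
    """Fraction of relevant records found in the top k of the ranking."""
    top = set(ranked_ids[:k])
    return len(top & set(relevant_ids)) / len(relevant_ids)
```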
Nuclear science abstracts (NSA) database 1948--1974 (on the Internet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Nuclear Science Abstracts (NSA) is a comprehensive abstract and index collection of the International Nuclear Science and Technology literature for the period 1948 through 1976. Included are scientific and technical reports of the US Atomic Energy Commission, US Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Coverage of the literature since 1976 is provided by the Energy Science and Technology Database. Approximately 25% of the records in the file contain abstracts. These are from the following volumes of the print Nuclear Science Abstracts: Volumes 12--18, Volume 29, and Volume 33. The database contains over 900,000 bibliographic records. All aspects of nuclear science and technology are covered, including: Biomedical Sciences; Metals, Ceramics, and Other Materials; Chemistry; Nuclear Materials and Waste Management; Environmental and Earth Sciences; Particle Accelerators; Engineering; Physics; Fusion Energy; Radiation Effects; Instrumentation; Reactor Technology; Isotope and Radiation Source Technology. The database includes all records contained in Volume 1 (1948) through Volume 33 (1976) of the printed version of Nuclear Science Abstracts (NSA). This worldwide coverage includes books, conference proceedings, papers, patents, dissertations, engineering drawings, and journal literature. This database is now available for searching through the GOV. Research Center (GRC) service. GRC is a single online web-based search service to well known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are well suited to processing images. Using a deep convolutional neural network is better than direct extraction of image visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, so it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
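The two ingredients named in the abstract can be written down directly: the PReLU activation, whose negative-side slope is learned in the real network, and an L1 penalty on the weights that is added to the training loss to discourage over-fitting. This is a standalone numerical sketch with illustrative values, not the paper's network.

```python
# Toy illustration of PReLU and an L1 weight penalty. In the actual network
# the slope `a` is a learned per-channel parameter and the penalty is added
# to the training loss.

def prelu(x, a=0.25):
    """Parametric ReLU: identity for x >= 0, slope `a` for x < 0."""
    return x if x >= 0 else a * x

def l1_penalty(weights, lam=0.01):
    """L1 regularization term: lam times the sum of absolute weights."""
    return lam * sum(abs(w) for w in weights)
```

Unlike a plain ReLU, PReLU keeps a small gradient for negative inputs, and the L1 term drives unneeded weights toward zero, both of which help the deep network generalize.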
Relocatable explosives storage magazine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liptak, R.E.; Keenan, W.A.
A relocatable storage magazine apparatus for storing and retrieving explosives and ordnance and for partially containing and attenuating the blast, conflagration and flying debris from an accidental explosion is described comprising: (a) a container having an access hole; (b) a debris trap attached to the container, the debris trap communicating with said container via the access hole, said debris trap having vent holes for venting the pressure of an explosion from said debris trap to the atmosphere; (c) means for covering said access hole; (d) means for suspending explosives and ordnance from the covering means; (e) means for entering the storage magazine to store and retrieve explosives and ordnance; (f) means for retaining said covering means in a position above the access hole wherein said explosives and ordnance are accessible from the entering means.
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, O.; Kummerow, C.; Huffman, G.; Olson, W.; Kwiatkowski, J.
2015-01-01
In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and a new Ka frequency. The precipitation processing system (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, both radars including a dual-frequency product, and a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, these algorithms are producing very creditable retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear to be very intimidating and they certainly do contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated for a 0.25 degree x 0.25 degree hourly grid, which are packaged into daily ASCII (American Standard Code for Information Interchange) files that can be downloaded from the PPS FTP (File Transfer Protocol) site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometer are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for the radiometer.
GPM Mission Gridded Text Products Providing Surface Precipitation Retrievals
NASA Astrophysics Data System (ADS)
Stocker, Erich Franz; Kelley, Owen; Huffman, George; Kummerow, Christian
2015-04-01
In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and a new Ka frequency. The precipitation processing system (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, both radars including a dual-frequency product, and a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, these algorithms are producing very creditable retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear to be very intimidating and they certainly do contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments - GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a .25 degree x .25 degree hourly grid and are packaged into daily ASCII files that can be downloaded from the PPS FTP site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for each radiometer.
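As an illustration of how such a product might be consumed, the sketch below parses a gzip-compressed daily ASCII grid file into a dictionary keyed by grid box. The column layout shown is an assumption made for illustration only; the actual PPS file format is documented separately.

```python
import gzip
import io

# Hypothetical line layout for one grid box (the real PPS format may differ):
# lat_index lon_index surface_rain_mm_hr fraction_liquid fraction_convective n_obs
sample = "\n".join([
    "100 200 1.25 0.90 0.40 3",
    "101 200 0.00 1.00 0.00 1",
])
compressed = gzip.compress(sample.encode("ascii"))

def read_gridded_product(raw_bytes):
    """Parse a gzip-compressed daily ASCII grid into {(lat_idx, lon_idx): record}."""
    grid = {}
    with gzip.open(io.BytesIO(raw_bytes), "rt") as fh:
        for line in fh:
            i, j, rain, f_liq, f_conv, n = line.split()
            grid[(int(i), int(j))] = {
                "rain_mm_hr": float(rain),
                "fraction_liquid": float(f_liq),
                "fraction_convective": float(f_conv),
                "n_obs": int(n),
            }
    return grid

grid = read_gridded_product(compressed)
# Each 0.25-degree box with one or more observations appears once,
# e.g. grid[(100, 200)]["rain_mm_hr"]
```

Only boxes containing observations are written, which keeps the daily files sparse and small after gzip compression.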
Code of Federal Regulations, 2014 CFR
2014-07-01
..., which contain the individual's name; rank/pay grade; Social Security Number; military branch or..., retiring, accessing, retaining, and disposing of records. Storage: Electronic storage media. Retrievability: Retrieved by individual's surname, Social Security Number and/or passport number. Safeguards: Electronic...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., which contain the individual's name; rank/pay grade; Social Security Number; military branch or..., retiring, accessing, retaining, and disposing of records. Storage: Electronic storage media. Retrievability: Retrieved by individual's surname, Social Security Number and/or passport number. Safeguards: Electronic...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., which contain the individual's name; rank/pay grade; Social Security Number; military branch or..., retiring, accessing, retaining, and disposing of records. Storage: Electronic storage media. Retrievability: Retrieved by individual's surname, Social Security Number and/or passport number. Safeguards: Electronic...
Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.
ERIC Educational Resources Information Center
Conlon, Sumali Pin-Ngern; And Others
1993-01-01
Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)
Information Retrieval Using ADABAS-NATURAL (with Applications for Television and Radio).
ERIC Educational Resources Information Center
Silbergeld, I.; Kutok, P.
1984-01-01
Describes use of the software ADABAS (general purpose database management system) and NATURAL (interactive programing language) in development and implementation of an information retrieval system for the National Television and Radio Network of Israel. General design considerations, files contained in each archive, search strategies, and keywords…
75 FR 10474 - Privacy Act of 1974; Systems of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... storage media. RETRIEVABILITY: Retrieved by last name and Social Security Number (SSN). SAFEGUARDS... proposed action will be effective without further notice on April 7, 2010 unless comments are received... RECORDS IN THE SYSTEM: Delete entry and replace with ``The files contain full name, grade, Social Security...
ERIC Educational Resources Information Center
Smith, Fred W.
2010-01-01
An automated retrieval system, adapted from a commercial warehouse application, has been installed at Georgia Southern University and has been well accepted by patrons and library personnel due to its reliability, efficiency, cost-effectiveness, and responsiveness. (Contains 1 figure.)
On Inference Rules of Logic-Based Information Retrieval Systems.
ERIC Educational Resources Information Center
Chen, Patrick Shicheng
1994-01-01
Discussion of relevance and the needs of the users in information retrieval focuses on a deductive object-oriented approach and suggests eight inference rules for the deduction. Highlights include characteristics of a deductive object-oriented system, database and data modeling language, implementation, and user interface. (Contains 24…
Parallel interactive retrieval of item and associative information from event memory.
Cox, Gregory E; Criss, Amy H
2017-09-01
Memory contains information about individual events (items) and combinations of events (associations). Despite the fundamental importance of this distinction, it remains unclear exactly how these two kinds of information are stored and whether different processes are used to retrieve them. We use both model-independent qualitative properties of response dynamics and quantitative modeling of individuals to address these issues. Item and associative information are not independent and they are retrieved concurrently via interacting processes. During retrieval, matching item and associative information mutually facilitate one another to yield an amplified holistic signal. Modeling of individuals suggests that this kind of facilitation between item and associative retrieval is a ubiquitous feature of human memory. Copyright © 2017 Elsevier Inc. All rights reserved.
Content-Based Medical Image Retrieval
NASA Astrophysics Data System (ADS)
Müller, Henning; Deserno, Thomas M.
This chapter details the necessity for alternative access concepts to the currently mainly text-based methods in medical information retrieval. This need is partly due to the large amount of visual data produced, the increasing variety of medical imaging data, and changing user patterns. The stored visual data contain large amounts of unused information that, if well exploited, can help diagnosis, teaching and research. The chapter briefly reviews the history of image retrieval and its general methods before focusing on technologies that have been developed in the medical domain. We also discuss the evaluation of medical content-based image retrieval (CBIR) systems and conclude by pointing out their strengths, gaps, and further developments. As examples, the MedGIFT project and the Image Retrieval in Medical Applications (IRMA) framework are presented.
High resolution satellite image indexing and retrieval using SURF features and bag of visual words
NASA Astrophysics Data System (ADS)
Bouteldja, Samia; Kourgli, Assia
2017-03-01
In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
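A minimal sketch of the bag-of-visual-words (BoVW) representation described above, assuming a codebook has already been learned (e.g. by k-means over training descriptors). Random arrays stand in for local descriptors; SURF descriptors are 64-dimensional, which is why that width is used here.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 64))      # 8 visual words, 64-D like SURF descriptors
descriptors = rng.normal(size=(50, 64))  # local descriptors extracted from one image

def bovw_histogram(desc, words):
    # Assign each local descriptor to its nearest visual word, then count
    # occurrences: the image becomes a fixed-length histogram of word frequencies.
    d2 = ((desc[:, None, :] - words[None, :, :]) ** 2).sum(axis=-1)
    assignments = d2.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(words)).astype(float)
    return hist / hist.sum()             # L1-normalise so descriptor count doesn't matter

h = bovw_histogram(descriptors, codebook)
```

Retrieval then reduces to comparing these fixed-length histograms (optionally concatenated with color features, as the paper proposes) instead of variable-size sets of local descriptors.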
Comparing the performance of two CBIRS indexing schemes
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Robbert, Guenter; Henrich, Andreas
2003-01-01
Content based image retrieval (CBIR) as it is known today has to deal with a number of challenges. Quickly summarized, the main challenges are, firstly, to bridge the semantic gap between high-level concepts and low-level features using feedback and, secondly, to provide performance under adverse conditions. High-dimensional spaces, as well as a demanding machine learning task, make the right way of indexing an important issue. When indexing multimedia data, most groups opt for extraction of high-dimensional feature vectors from the data, followed by dimensionality reduction like PCA (Principal Components Analysis) or LSI (Latent Semantic Indexing). The resulting vectors are indexed using spatial indexing structures such as kd-trees or R-trees. Other projects, such as MARS and Viper, propose the adaptation of text indexing techniques, notably the inverted file. Here, the Viper system is the most direct adaptation of text retrieval techniques to quantized vectors. However, while the Viper query engine provides decent performance together with impressive user-feedback behavior, as well as the possibility for easy integration of long-term learning algorithms and support for potentially infinite feature vectors, there has been no comparison of vector-based methods and inverted-file-based methods under similar conditions. In this publication, we compare a CBIR query engine that uses inverted files (Bothrops, a rewrite of the Viper query engine based on a relational database), and a CBIR query engine based on LSD (Local Split Decision) trees for spatial indexing, using the same feature sets. The Benchathlon initiative works on providing a set of images and ground truth for simulating image queries by example and corresponding user feedback.
When performing the Benchathlon benchmark on a CBIR system (the System Under Test, SUT), a benchmarking harness connects over the Internet to the SUT, performing a number of queries using an agreed-upon protocol, the Multimedia Retrieval Markup Language (MRML). Using this benchmark one can measure the quality of retrieval, as well as the overall (speed) performance of the benchmarked system. Our benchmarks will draw on the Benchathlon's work for documenting the retrieval performance of both inverted-file-based and LSD-tree-based techniques. In addition to these results, however, we will present statistics that can be obtained only inside the system under test. These statistics will include the number of complex mathematical operations, as well as the amount of data that has to be read from disk during the processing of a query.
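The inverted-file idea being compared can be sketched as follows, with toy quantised features standing in for the Viper-style sparse image representation. Only the postings lists touched by the query's features are read, which is the source of the technique's efficiency.

```python
from collections import defaultdict

# Toy quantised features: each image is a sparse map of feature_id -> weight,
# exactly the representation an inverted file indexes well.
images = {
    "img1": {3: 0.5, 7: 0.2},
    "img2": {3: 0.1, 9: 0.9},
    "img3": {7: 0.6},
}

index = defaultdict(list)                 # feature_id -> [(image_id, weight)]
for img, feats in images.items():
    for fid, w in feats.items():
        index[fid].append((img, w))

def query(feats):
    # Accumulate a dot-product score per image; only postings lists for
    # features present in the query are ever visited.
    scores = defaultdict(float)
    for fid, qw in feats.items():
        for img, w in index[fid]:
            scores[img] += qw * w
    return sorted(scores, key=scores.get, reverse=True)

ranking = query({3: 1.0, 7: 0.5})         # -> ["img1", "img3", "img2"]
```

A spatial structure such as an LSD or kd-tree instead stores the dense vectors and prunes the search geometrically; the paper's point is that the two approaches had not been compared on the same feature sets.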
Classroom Laboratory Report: Using an Image Database System in Engineering Education.
ERIC Educational Resources Information Center
Alam, Javed; And Others
1991-01-01
Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)
Automation of Design Engineering Processes
NASA Technical Reports Server (NTRS)
Torrey, Glenn; Sawasky, Gerald; Courey, Karim
2004-01-01
A method, and a computer program that helps to implement the method, have been developed to automate and systematize the retention and retrieval of all the written records generated during the process of designing a complex engineering system. It cannot be emphasized strongly enough that all the written records as used here is meant to be taken literally: it signifies not only final drawings and final engineering calculations but also such ancillary documents as minutes of meetings, memoranda, requests for design changes, approval and review documents, and reports of tests. One important purpose served by the method is to make the records readily available to all involved users via their computer workstations from one computer archive while eliminating the need for voluminous paper files stored in different places. Another important purpose served by the method is to facilitate the work of engineers who are charged with sustaining the system and were not involved in the original design decisions. The method helps the sustaining engineers to retrieve information that enables them to retrace the reasoning that led to the original design decisions, thereby helping them to understand the system better and to make informed engineering choices pertaining to maintenance and/or modifications of the system. The software used to implement the method is written in Microsoft Access. All of the documents pertaining to the design of a given system are stored in one relational database in such a manner that they can be related to each other via a single tracking number.
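The single-tracking-number design can be sketched in SQLite (the original system used Microsoft Access; the table and column names here are illustrative, not those of the actual implementation).

```python
import sqlite3

# One relational table holds every kind of design record, related to a
# design decision by a single tracking number.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE document (
    tracking_no INTEGER,
    doc_type    TEXT,   -- drawing, minutes, memo, change request, test report...
    title       TEXT
)""")
records = [
    (42, "drawing", "Final assembly drawing"),
    (42, "minutes", "Design review meeting minutes"),
    (42, "change_request", "Request to widen mounting bracket"),
]
db.executemany("INSERT INTO document VALUES (?, ?, ?)", records)

# A sustaining engineer retrieves every record behind one design decision:
rows = db.execute(
    "SELECT doc_type, title FROM document WHERE tracking_no = ? ORDER BY doc_type",
    (42,),
).fetchall()
```

Because all document types share the tracking number, a single query retraces the reasoning behind a decision without searching paper files in different places.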
Automatic generation of Web mining environments
NASA Astrophysics Data System (ADS)
Cibelli, Maurizio; Costagliola, Gennaro
1999-02-01
The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
PIRIA: a general tool for indexing, search, and retrieval of multimedia content
NASA Astrophysics Data System (ADS)
Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.
2004-05-01
The Internet is a continuously expanding source of multimedia content and information. There are many products in development to search, retrieve, and understand multimedia content. But most of the current image search/retrieval engines rely on an image database manually pre-indexed with keywords. Computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. These indexed images are compared according to several different classifiers, not only keywords but also form, color, and texture, taking into account geometric transformations and variations such as rotation, symmetry, mirroring, etc. Form - edges extracted by an efficient segmentation algorithm. Color - histogram, semantic color segmentation, and spatial color relationships. Texture - texture wavelets and local edge patterns. If required, Piria is also able to fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC) or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications will be explored and discussed as well as current and future development.
[Biomedical information on the internet using search engines. A one-year trial].
Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina
2004-01-01
The internet is a communication medium and content distributor that provides information in a general sense, but it can be of great utility for the search and retrieval of biomedical information. Search engines are a key means of rapidly finding information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a search by keywords (pediatric or evidence) using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations returned by each search. We evaluated the formal quality of Web sites and their domain extensions. Moreover, we compared the output of each search at the start of this study and after a one-year period, taking the number of Web sites cited again as a criterion of reliability. We found some interesting results that are reported throughout the text. Our findings point to the extreme dynamism of information on the Web and, for this reason, we advise great caution when using search and meta-search engines as tools for finding and retrieving reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step for better defining a search and, moreover, for finding institutional Web sites. This paper supports a more informed approach to the universe of biomedical information on the internet.
DRUMS: a human disease related unique gene mutation search engine.
Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan
2011-10-01
With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data through a search engine approach. In this work, we have developed the disease related unique gene mutation search engine (DRUMS), a search engine for human disease related unique gene mutations, as a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information were stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases were indexed by the uniform resource identifier from the LSDB or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.
A similarity learning approach to content-based image retrieval: application to digital mammography.
El-Naqa, Issam; Yang, Yongyi; Galatsanos, Nikolas P; Nishikawa, Robert M; Wernick, Miles N
2004-10-01
In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchical learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure.
Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the images retrieved by the network can have predictive value for the disease condition of the query.
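The precision-recall evaluation mentioned above can be computed as in this sketch; the scores and relevance labels are toy values, not the mammogram data.

```python
import numpy as np

# Similarity scores assigned by a retrieval model to database items for one
# query, plus ground-truth relevance judgments from human observers.
scores   = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
relevant = np.array([1,   0,   1,   1,   0,   0  ])

order = np.argsort(-scores)                    # rank items by decreasing similarity
rel_sorted = relevant[order]
tp = np.cumsum(rel_sorted)                     # true positives within the top k
precision = tp / np.arange(1, len(scores) + 1) # fraction of retrieved items relevant
recall    = tp / relevant.sum()                # fraction of relevant items retrieved
```

Plotting precision against recall over all cutoffs k gives the precision-recall curve; averaging such curves over held-out queries is the cross-validation procedure the abstract refers to.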
Passage-Based Bibliographic Coupling: An Inter-Article Similarity Measure for Biomedical Articles
Liu, Rey-Long
2015-01-01
Biomedical literature is an essential source of biomedical evidence. To translate the evidence for biomedicine study, researchers often need to carefully read multiple articles about specific biomedical issues. These articles thus need to be highly related to each other. They should share similar core contents, including research goals, methods, and findings. However, given an article r, it is challenging for search engines to retrieve highly related articles for r. In this paper, we present a technique PBC (Passage-based Bibliographic Coupling) that estimates inter-article similarity by seamlessly integrating bibliographic coupling with the information collected from context passages around important out-link citations (references) in each article. Empirical evaluation shows that PBC can significantly improve the retrieval of those articles that biomedical experts believe to be highly related to specific articles about gene-disease associations. PBC can thus be used to improve search engines in retrieving the highly related articles for any given article r, even when r is cited by very few (or even no) articles. The contribution is essential for those researchers and text mining systems that aim at cross-validating the evidence about specific gene-disease associations. PMID:26440794
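As a hedged sketch of plain bibliographic coupling (the baseline that PBC extends with citation-context passages), inter-article similarity can be computed from shared references; the normalisation shown is one common choice, not necessarily the one used in the paper.

```python
def coupling_strength(refs_a, refs_b):
    """Bibliographic coupling: two articles are considered similar when they
    cite the same references. Returns shared references normalised by the
    total number of distinct references (Jaccard-style)."""
    if not refs_a or not refs_b:
        return 0.0
    return len(refs_a & refs_b) / len(refs_a | refs_b)

a = {"r1", "r2", "r3"}      # out-link citations of article a
b = {"r2", "r3", "r4"}      # out-link citations of article b
s = coupling_strength(a, b)  # 2 shared of 4 distinct -> 0.5
```

PBC goes further by weighting each shared citation with text from the passages surrounding it, so that articles citing the same reference for the same reason count more than incidental overlaps.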
ERIC Educational Resources Information Center
Raeder, Aggi
1997-01-01
Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)
Task context and organization in free recall
Polyn, Sean M.; Norman, Kenneth A.; Kahana, Michael J.
2009-01-01
Prior work on organization in free recall has focused on the ways in which semantic and temporal information determine the order in which material is retrieved from memory. Tulving’s theory of ecphory suggests that these organizational effects arise from the interaction of a retrieval cue with the contents of memory. Using the continual-distraction free-recall paradigm (Bjork & Whitten, 1974) to minimize retrieval during the study period, we show that encoding task context can organize recall, suggesting that task-related information is part of the retrieval cue. We interpret these results in terms of the Context Maintenance and Retrieval model (CMR; Polyn, Norman, & Kahana, in press), in which an internal contextual representation, containing semantic, temporal, and source-related information, serves as the retrieval cue and organizes the retrieval of information from memory. We discuss these results in terms of the guided activation theory (Miller & Cohen, 2001) of the role of prefrontal cortex in task performance, as well as the rich neuropsychological literature implicating prefrontal cortex in memory search (e.g., Schacter, 1987). PMID:19524086
Leveraging Terminologies for Retrieval of Radiology Reports with Critical Imaging Findings
Warden, Graham I.; Lacson, Ronilda; Khorasani, Ramin
2011-01-01
Introduction: Communication of critical imaging findings is an important component of medical quality and safety. A fundamental challenge includes retrieval of radiology reports that contain these findings. This study describes the expressiveness and coverage of existing medical terminologies for critical imaging findings and evaluates radiology report retrieval using each terminology. Methods: Four terminologies were evaluated: National Cancer Institute Thesaurus (NCIT), Radiology Lexicon (RadLex), Systematized Nomenclature of Medicine (SNOMED-CT), and International Classification of Diseases (ICD-9-CM). Concepts in each terminology were identified for 10 critical imaging findings. Three findings were subsequently selected to evaluate document retrieval. Results: SNOMED-CT consistently demonstrated the highest number of overall terms (mean=22) for each of ten critical findings. However, retrieval rate and precision varied between terminologies for the three findings evaluated. Conclusion: No single terminology is optimal for retrieving radiology reports with critical findings. The expressiveness of a terminology does not consistently correlate with radiology report retrieval. PMID:22195212
MWR3C physical retrievals of precipitable water vapor and cloud liquid water path
Cadeddu, Maria
2016-10-12
The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosondes and ceilometer. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] from the neural network retrieval fields in the netcdf file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is expressed in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations, and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
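The iterative optimal-estimation scheme described above can be sketched for a toy linear forward model; all matrices and values here are illustrative stand-ins, not MWR3C parameters, and a real retrieval would recompute the Jacobian of a nonlinear radiative-transfer model at each iteration.

```python
import numpy as np

K  = np.array([[1.0, 0.5],
               [0.2, 1.5]])       # Jacobian: state [PWV, LWP] -> brightness temps
Sa = np.diag([4.0, 1.0])          # prior (first-guess) covariance
Se = np.diag([0.25, 0.25])        # measurement-noise covariance
xa = np.array([2.0, 0.1])         # first guess, e.g. from a neural-network retrieval
y  = K @ np.array([2.5, 0.3])     # synthetic "observed" brightness temperatures

x = xa.copy()
for _ in range(20):
    # Gauss-Newton update; for a linear model it converges in one step.
    S_post = np.linalg.inv(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa))
    x_new = xa + S_post @ K.T @ np.linalg.inv(Se) @ (y - K @ xa)
    if np.allclose(x_new, x, atol=1e-8):
        break                     # predefined convergence criterion satisfied
    x = x_new

sigma = np.sqrt(np.diag(S_post))  # 1-sigma retrieval uncertainty, as in the file
rms = np.sqrt(np.mean((y - K @ x) ** 2))  # brightness-temperature consistency check
```

The posterior covariance square root plays the role of the 1-sigma uncertainty written to the output file, and the final brightness-temperature RMS is the consistency check mentioned in the abstract.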
Feasibility study of tank leakage mitigation using subsurface barriers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treat, R.L.; Peters, B.B.; Cameron, R.J.
1994-09-21
The US Department of Energy (DOE) has established the Tank Waste Remediation System (TWRS) to safely manage and dispose of the waste currently stored in the underground storage tanks. The retrieval element of TWRS includes a work scope to develop subsurface impermeable barriers beneath single-shell tanks (SSTs). The barriers could serve as a means to contain leakage that may result from waste retrieval operations and could also support site closure activities by facilitating cleanup. Three types of subsurface barrier systems have emerged for further consideration: (1) chemical grout, (2) freeze walls, and (3) desiccant, represented in this feasibility study as a circulating air barrier. This report contains analyses of the costs and relative risks associated with combinations of retrieval technologies and barrier technologies that form 14 alternatives. Eight of the alternatives include the use of subsurface barriers; the remaining six nonbarrier alternatives are included in order to compare the costs, relative risks, and other values of retrieval with subsurface barriers. Each alternative includes various combinations of technologies that can impact the risks associated with future contamination of the groundwater beneath the Hanford Site to varying degrees. Other potential risks associated with these alternatives, such as those related to accidents and airborne contamination resulting from retrieval and barrier emplacement operations, are not quantitatively evaluated in this report.
D'Aniello, Biagio; Scandurra, Anna
2016-05-01
Life experiences and living conditions can influence the problem-solving strategies and the communicative abilities of dogs with humans. The goals of this study were to determine any behavioural differences between Labrador Retrievers living in a kennel and those living in a house as pets and to assess whether kennel dogs show preferences in social behaviours for their caretaker relative to a stranger when they are faced with an unsolvable task. Nine Labrador Retrievers living in a kennel from birth and ten Labrador Retrievers living in a family as pets were tested. The experimental procedure consisted of three "solvable" tasks in which the dogs could easily retrieve food from a container, followed by an "unsolvable" task in which the container was hermetically locked. Dogs of both groups spent the same amount of time interacting with the experimental apparatus. Kennel dogs gazed towards people for less time and with higher latency than pet dogs; however, there were no significant preferences in gazing towards the stranger versus the caretaker in either group. These findings demonstrate that kennel dogs are less prone to use human-directed gazing behaviour when faced with an unsolvable problem, relying on humans to solve a task less than pet dogs do.
Consistency and accuracy of indexing systematic review articles and meta-analyses in medline.
Wilczynski, Nancy L; Haynes, R Brian
2009-09-01
Systematic review articles support the advance of science and translation of research evidence into healthcare practice. Inaccurate retrieval from medline could limit access to reviews. To determine the quality of indexing systematic reviews and meta-analyses in medline. The Clinical Hedges Database, containing the results of a hand search of 161 journals, was used to test medline indexing terms for their ability to retrieve systematic reviews that met predefined methodologic criteria (labelled as 'pass' review articles) and reviews that reported a meta-analysis. The Clinical Hedges Database contained 49 028 articles; 753 were 'pass' review articles (552 with a meta-analysis). In total 758 review articles (independent of whether they passed) reported a meta-analysis. The search strategy that retrieved the highest number of 'pass' systematic reviews achieved a sensitivity of 97.1%. The publication type 'meta analysis' had a false positive rate of 5.6% (95% CI 3.9 to 7.6), and false negative rate of 0.31% (95% CI 0.26 to 0.36) for retrieving systematic reviews that reported a meta-analysis. Inaccuracies in indexing systematic reviews and meta-analyses in medline can be partly overcome by a 5-term search strategy. Introducing a publication type for systematic reviews of the literature could improve retrieval performance.
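The retrieval diagnostics used in the study (sensitivity and false-positive rate of an indexing-term search strategy against a hand-search gold standard) can be computed as in this sketch; the numbers are toy values, not the Clinical Hedges data.

```python
def diagnostic_rates(retrieved, gold, corpus_size):
    """Sensitivity and false-positive rate of a search filter, judged
    against a hand-search gold standard of relevant articles."""
    tp = len(retrieved & gold)          # relevant articles the strategy found
    fp = len(retrieved - gold)          # irrelevant articles it returned
    fn = len(gold - retrieved)          # relevant articles it missed
    tn = corpus_size - tp - fp - fn     # irrelevant articles correctly excluded
    sensitivity = tp / (tp + fn)
    fp_rate = fp / (fp + tn)
    return sensitivity, fp_rate

gold = set(range(100))                    # articles a hand search judged relevant
retrieved = set(range(97)) | {100, 101}   # what the index-term strategy returned
sens, fpr = diagnostic_rates(retrieved, gold, corpus_size=1000)
```

In this toy corpus the strategy finds 97 of 100 relevant articles (sensitivity 0.97) while returning 2 false positives, mirroring the kind of trade-off the abstract reports for the medline publication types.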
Textile Retrieval Based on Image Content from CDC and Webcam Cameras in Indoor Environments.
García-Olalla, Oscar; Alegre, Enrique; Fernández-Robles, Laura; Fidalgo, Eduardo; Saikia, Surajit
2018-04-25
Textile based image retrieval for indoor environments can be used to retrieve images that contain the same textile, which may indicate that scenes are related. This makes up a useful approach for law enforcement agencies who want to find evidence based on matching between textiles. In this paper, we propose a novel pipeline that allows searching and retrieving textiles that appear in pictures of real scenes. Our approach is based on first obtaining regions containing textiles by using MSER on high pass filtered images of the RGB, HSV and Hue channels of the original photo. To describe the textile regions, we demonstrated that the combination of HOG and HCLOSIB is the best option for our proposal when using the correlation distance to match the query textile patch with the candidate regions. Furthermore, we introduce a new dataset, TextilTube, which comprises a total of 1913 textile regions labelled within 67 classes. We yielded 84.94% of success in the 40 nearest coincidences and 37.44% of precision taking into account just the first coincidence, which outperforms the current deep learning methods evaluated. Experimental results show that this pipeline can be used to set up an effective textile based image retrieval system in indoor environments.
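A minimal sketch of the matching step described above, assuming descriptors (e.g., HOG + HCLOSIB) have already been extracted as fixed-length vectors; random vectors stand in for real descriptors, and the correlation distance follows its standard definition (1 minus Pearson correlation):

```python
import numpy as np

def correlation_distance(u, v):
    """1 - Pearson correlation between two descriptor vectors."""
    u = u - u.mean()
    v = v - v.mean()
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def rank_candidates(query_desc, candidate_descs):
    """Return candidate indices sorted by correlation distance to the query."""
    d = [correlation_distance(query_desc, c) for c in candidate_descs]
    return np.argsort(d)

# Toy example with random 36-dimensional stand-in descriptors
rng = np.random.default_rng(0)
query = rng.random(36)
cands = [rng.random(36) for _ in range(5)] + [query + 0.01 * rng.random(36)]
order = rank_candidates(query, cands)
# The near-duplicate of the query (index 5) ranks first
```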
36 CFR 1238.12 - What documentation is required for microfilmed records?
Code of Federal Regulations, 2011 CFR
2011-07-01
... microforms capture all information contained on the source documents and that they can be used for the... retrieval and use. Agencies must: (a) Arrange, describe, and index the filmed records to permit retrieval of... titling target or header. For fiche, place the titling information in the first frame if the information...
36 CFR § 1238.12 - What documentation is required for microfilmed records?
Code of Federal Regulations, 2013 CFR
2013-07-01
... microforms capture all information contained on the source documents and that they can be used for the... retrieval and use. Agencies must: (a) Arrange, describe, and index the filmed records to permit retrieval of... titling target or header. For fiche, place the titling information in the first frame if the information...
36 CFR 1238.12 - What documentation is required for microfilmed records?
Code of Federal Regulations, 2014 CFR
2014-07-01
... microforms capture all information contained on the source documents and that they can be used for the... retrieval and use. Agencies must: (a) Arrange, describe, and index the filmed records to permit retrieval of... titling target or header. For fiche, place the titling information in the first frame if the information...
36 CFR 1238.12 - What documentation is required for microfilmed records?
Code of Federal Regulations, 2012 CFR
2012-07-01
... microforms capture all information contained on the source documents and that they can be used for the... retrieval and use. Agencies must: (a) Arrange, describe, and index the filmed records to permit retrieval of... titling target or header. For fiche, place the titling information in the first frame if the information...
36 CFR 1238.12 - What documentation is required for microfilmed records?
Code of Federal Regulations, 2010 CFR
2010-07-01
... microforms capture all information contained on the source documents and that they can be used for the... retrieval and use. Agencies must: (a) Arrange, describe, and index the filmed records to permit retrieval of... titling target or header. For fiche, place the titling information in the first frame if the information...
Comparing the Document Representations of Two IR-Systems: CLARIT and TOPIC.
ERIC Educational Resources Information Center
Paijmans, Hans
1993-01-01
Compares two information retrieval systems, CLARIT and TOPIC, in terms of assigned versus derived and precoordinate versus postcoordinate indexing. Models of information retrieval systems are discussed, and a test of the systems using a demonstration database of full-text articles from the "Wall Street Journal" is described. (Contains 21…
On the Delusiveness of Adopting a Common Space for Modeling IR Objects: Are Queries Documents?
ERIC Educational Resources Information Center
Bollmann-Sdorra, Peter; Raghavan, Vijay V.
1993-01-01
Proposes that document space and query space have different structures in information retrieval and discusses similarity measures, term independence, and linear structure. Examples are given using the retrieval functions of dot-product, the cosine measure, the coefficient of Jaccard, and the overlap function. (Contains 28 references.) (LRW)
Integration of Synaptic Vesicle Cargo Retrieval with Endocytosis at Central Nerve Terminals
Cousin, Michael A.
2017-01-01
Central nerve terminals contain a limited number of synaptic vesicles (SVs) which mediate the essential process of neurotransmitter release during their activity-dependent fusion. The rapid and accurate formation of new SVs with the appropriate cargo is essential to maintain neurotransmission in mammalian brain. Generating SVs containing the correct SV cargo with the appropriate stoichiometry is a significant challenge, especially when multiple modes of endocytosis exist in central nerve terminals, which occur at different locations within the nerve terminals. These endocytosis modes include ultrafast endocytosis, clathrin-mediated endocytosis (CME) and activity-dependent bulk endocytosis (ADBE), which are triggered by specific patterns of neuronal activity. This review article will assess the evidence for the role of classical adaptor protein complexes in SV retrieval, discuss the role of monomeric adaptors, and consider how interactions between specific SV cargoes can facilitate retrieval. In addition, it will consider the evidence for preassembled plasma membrane cargo complexes and their role in facilitating these endocytosis modes. Finally, it will present a unifying model for cargo retrieval at the presynapse, which integrates endocytosis modes in time and space. PMID:28824381
Harkins, Joe R.; Green, Mark E.
1981-01-01
Drainage areas for about 1,600 surface-water sites on streams and lakes in Florida are contained in this report. The sites are generally either U.S. Geological Survey gaging stations or the mouths of gaged streams. Each site is identified by latitude and longitude, by the general stream type, and by the U.S. Geological Survey 7.5-minute topographic map on which it can be located. The gaging stations are further identified by a downstream order number, a county code, and a nearby city or town. In addition to drainage areas, the surface areas of lakes are shown for the elevation given on the topographic map. These data were retrieved from the Surface Water Index developed and maintained by the Hydrologic Surveillance section of the Florida District Office, U.S. Geological Survey. (USGS)
Applying Use Cases to Describe the Role of Standards in e-Health Information Systems
NASA Astrophysics Data System (ADS)
Chávez, Emma; Finnie, Gavin; Krishnan, Padmanabhan
Individual health records (IHRs) contain a person's lifetime records of their key health history and care within a health system (National E-Health Transition Authority, Retrieved Jan 12, 2009 from http://www.nehta.gov.au/coordinated-care/whats-in-iehr, 2004). This information can be processed and stored in different ways. The record should be available electronically to authorized health care providers and the individual anywhere, anytime, to support high-quality care. Many organizations provide a diversity of solutions for e-health and its services. Standards play an important role to enable these organizations to support information interchange and improve efficiency of health care delivery. However, there are numerous standards to choose from and not all of them are accessible to the software developer. This chapter proposes a framework to describe the e-health standards that can be used by software engineers to implement e-health information systems.
Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking
NASA Astrophysics Data System (ADS)
Antonya, C.
2017-12-01
Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capturing devices and image processing algorithms. The returned data contain mainly point clouds, coordinates of markers, or coordinates of points of interest. These data can be used for retrieving information related to the geometry of the objects, but also to extract parameters for the analytical model of the system useful in a variety of computer aided engineering simulations. The parameter identification of joints deals with extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time-series of the marker's position. The least square method was used for fitting the data into different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joints.
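As an illustrative sketch (not the authors' implementation), a revolute joint's centre and radius can be recovered from marker positions by a linear least-squares circle fit (the Kåsa method), assuming the trajectory has already been projected onto the joint plane:

```python
import numpy as np

def fit_circle(xy):
    """Kasa least-squares circle fit: returns (cx, cy, r).

    Rewrites x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    as a linear system in the unknowns (cx, cy, c)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic marker trajectory on a circle of radius 0.25 about (1.0, 2.0)
t = np.linspace(0, np.pi, 50)
pts = np.column_stack([1.0 + 0.25 * np.cos(t), 2.0 + 0.25 * np.sin(t)])
cx, cy, r = fit_circle(pts)
```

With noisy marker data the same solve gives the least-squares estimate; the fitted circle's plane normal then gives the joint axis orientation.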
Development of consistent hazard controls for DOE transuranic waste operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woody, W.J.
2007-07-01
This paper describes the results of a re-engineering initiative undertaken with the Department of Energy's (DOE) Office of Environmental Management (EM) in order to standardize hazard analysis assumptions and methods and resulting safety controls applied to multiple transuranic (TRU) waste operations located across the United States. A wide range of safety controls are historically applied to transuranic waste operations, in spite of the fact that these operations have similar operational characteristics and hazard/accident potential. The re-engineering effort supported the development of a DOE technical standard with specific safety controls designated for accidents postulated during waste container retrieval, staging/storage, venting, onsite movements, and characterization activities. Controls cover preventive and mitigative measures; include both hardware and specific administrative controls; and provide protection to the facility worker, onsite co-located workers and the general public located outside of facility boundaries. The Standard development involved participation from all major DOE sites conducting TRU waste operations. Both safety analysts and operations personnel contributed to the re-engineering effort. Acknowledgment is given in particular to the following individuals who formed a core working group: Brenda Hawks (DOE Oak Ridge Office), Patrice McEahern (CWI-Idaho), Jofu Mishima (Consultant), Louis Restrepo (Omicron), Jay Mullis (DOE-ORO), Mike Hitchler (WSMS), John Menna (WSMS), Jackie East (WSMS), Terry Foppe (CTAC), Carla Mewhinney (WIPP-SNL), Stephie Jennings (WIPP-LANL), Michael Mikolanis (DOESRS), Kraig Wendt (BBWI-Idaho), Lee Roberts (Fluor Hanford), and Jim Blankenhorn (WSRC). Additional acknowledgment is given to Dae Chung (EM) and Ines Triay (EM) for leadership and management of the re-engineering effort. (authors)
An ontological case base engineering methodology for diabetes management.
El-Sappagh, Shaker H; El-Masri, Samir; Elmogy, Mohammed; Riad, A M; Saddik, Basema
2014-08-01
Ontology engineering covers issues related to ontology development and use. In Case Based Reasoning (CBR) systems, ontology plays two main roles: the first as case base and the second as domain ontology. However, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. This paper proposes an ontology engineering methodology to generate case bases in the medical domain. It focuses mainly on representing cases in the form of an ontology to support semantic case retrieval and enhance all knowledge-intensive CBR processes. A case study on a diabetes diagnosis case base is provided to evaluate the proposed methodology.
Caldas, José; Gehlenborg, Nils; Kettunen, Eeva; Faisal, Ali; Rönty, Mikko; Nicholson, Andrew G; Knuutila, Sakari; Brazma, Alvis; Kaski, Samuel
2012-01-15
Genome-wide measurement of transcript levels is a ubiquitous tool in biomedical research. As experimental data continues to be deposited in public databases, it is becoming important to develop search engines that enable the retrieval of relevant studies given a query study. While retrieval systems based on meta-data already exist, data-driven approaches that retrieve studies based on similarities in the expression data itself have a greater potential of uncovering novel biological insights. We propose an information retrieval method based on differential expression. Our method deals with arbitrary experimental designs and performs competitively with alternative approaches, while making the search results interpretable in terms of differential expression patterns. We show that our model yields meaningful connections between biological conditions from different studies. Finally, we validate a previously unknown connection between malignant pleural mesothelioma and SIM2s suggested by our method, via real-time polymerase chain reaction in an independent set of mesothelioma samples. Supplementary data and source code are available from http://www.ebi.ac.uk/fg/research/rex.
Bioenvironmental Engineering Guide for Composite Materials
2014-03-31
Russell J. Advanced composite cargo aircraft proves large structure practicality. High-Performance Composites 2010 Jan. Retrieved 3 January 2014 from...fuel or hydraulic fluid; location of radioactive components associated with the aircraft, such as depleted uranium counterweights, isotopes
Finding Specification Pages from the Web
NASA Astrophysics Data System (ADS)
Yoshinaga, Naoki; Torisawa, Kentaro
This paper presents a method of finding a specification page on the Web for a given object (e.g., "Ch. d'Yquem") and its class label (e.g., "wine"). A specification page for an object is a Web page which gives concise attribute-value information about the object (e.g., "county"-"Sauternes") in well-formatted structures. A simple unsupervised method using layout and symbolic decoration cues was applied to a large number of Web pages to acquire candidate attributes for each class (e.g., "county" for the class "wine"). We then filter out irrelevant words from the putative attributes through an author-aware scoring function that we call site frequency. We used the acquired attributes to select a representative specification page for a given object from the Web pages retrieved by a normal search engine. Experimental results revealed that our system greatly outperformed the normal search engine in terms of this specification retrieval.
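One plausible reading of the "site frequency" score above is the number of distinct web sites whose pages contain a candidate attribute, so that site-local layout noise scores low; this sketch is an assumption about the scoring, not the paper's actual code:

```python
from urllib.parse import urlparse

def site_frequency(attribute_pages):
    """Count distinct sites (hosts) whose pages contain each candidate
    attribute. attribute_pages maps attribute -> list of page URLs.
    An illustrative reading of the paper's author-aware score."""
    return {attr: len({urlparse(u).netloc for u in urls})
            for attr, urls in attribute_pages.items()}

pages = {
    "county": ["http://a.example/w1", "http://b.example/w2",
               "http://a.example/w3"],
    "font":   ["http://a.example/w1"],   # layout noise seen on one site only
}
scores = site_frequency(pages)
```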
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzler, C.L.; Poloski, J.P.; Bates, R.A.
1988-01-01
The Compliance Program Data Management System (DMS) developed at the Idaho National Engineering Laboratory (INEL) validates and maintains the integrity of data collected to support the Consent Order and Compliance Agreement (COCA) between the INEL and the Environmental Protection Agency (EPA). The system uses dBase III Plus programs and dBase III Plus in an interactive mode to enter, store, validate, manage, and retrieve analytical information provided on EPA Contract Laboratory Program (CLP) forms and CLP forms modified to accommodate 40 CFR 264 Appendix IX constituent analyses. Data analysis and presentation are performed utilizing SAS, a statistical analysis software program. Archiving of data and results is performed at appropriate stages of data management. The DMS is useful for sampling and analysis programs where adherence to EPA CLP protocol, along with maintenance and retrieval of waste site investigation sampling results, is desired or requested. 3 refs.
A model for enhancing Internet medical document retrieval with "medical core metadata".
Malet, G; Munoz, F; Appleyard, R; Hersh, W
1999-01-01
Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
A Model for Enhancing Internet Medical Document Retrieval with “Medical Core Metadata”
Malet, Gary; Munoz, Felix; Appleyard, Richard; Hersh, William
1999-01-01
Objective: Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. Design: The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and Medline-type content descriptions. Results: The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. Conclusions: The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines. PMID:10094069
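A sketch of how such metadata could be embedded, using the "DC." meta-tag convention for Dublin Core elements in HTML; the document title and values below are hypothetical, with a MeSH heading as the subject:

```python
def dublin_core_meta(fields):
    """Render Dublin Core elements as HTML <meta> tags using the
    'DC.' naming convention. Values here are hypothetical examples."""
    return "\n".join(
        f'<meta name="DC.{name}" content="{value}">'
        for name, value in fields.items()
    )

tags = dublin_core_meta({
    "title": "Management of Type 2 Diabetes",   # hypothetical document
    "subject": "Diabetes Mellitus, Type 2",     # MeSH heading
    "creator": "Example Clinic",                # hypothetical author
})
```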
New Quality Metrics for Web Search Results
NASA Astrophysics Data System (ADS)
Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni
Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.
NASA Technical Reports Server (NTRS)
Vachon, R. I.; Obrien, J. F., Jr.; Lueg, R. E.; Cox, J. E.
1972-01-01
The 1972 Systems Engineering program at Marshall Space Flight Center, where 15 participants representing 15 U.S. universities, 1 NASA/MSFC employee, and another specially assigned faculty member participated in an 11-week program, is discussed. The Fellows became acquainted with the philosophy of systems engineering, and as a training exercise, used this approach to produce a conceptual design for an Earth Resources Information Storage, Transformation, Analysis, and Retrieval System. The program was conducted in three phases; approximately 3 weeks were devoted to seminars, tours, and other presentations to subject the participants to technical and other aspects of the information management problem. The second phase, 5 weeks in length, consisted of evaluating alternative solutions to problems, effecting initial trade-offs and performing preliminary design studies and analyses. The last 3 weeks were occupied with final trade-off sessions, final design analyses and preparation of a final report and oral presentation.
EM-21 Retrieval Knowledge Center: Waste Retrieval Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellinger, Andrew P.; Rinker, Michael W.; Berglin, Eric J.
EM-21 is the Waste Processing Division of the Office of Engineering and Technology, within the U.S. Department of Energy's (DOE) Office of Environmental Management (EM). In August of 2008, EM-21 began an initiative to develop a Retrieval Knowledge Center (RKC) to provide the DOE, high level waste retrieval operators, and technology developers with a centralized and focused location to share knowledge and expertise that will be used to address retrieval challenges across the DOE complex. The RKC is also designed to facilitate information sharing across the DOE Waste Site Complex through workshops and a searchable database of waste retrieval technology information. The database may be used to research effective technology approaches for specific retrieval tasks and to take advantage of the lessons learned from previous operations. It is also expected to be effective for remaining current with the state of the art of retrieval technologies and ongoing development within the DOE Complex. To encourage collaboration of DOE sites with waste retrieval issues, the RKC team is co-led by the Savannah River National Laboratory (SRNL) and the Pacific Northwest National Laboratory (PNNL). Two RKC workshops were held in the Fall of 2008. The purpose of these workshops was to define top level waste retrieval functional areas, exchange lessons learned, and develop a path forward to support a strategic business plan focused on technology needs for retrieval. The primary participants involved in these workshops included retrieval personnel and laboratory staff associated with the Hanford and Savannah River Sites, since the majority of remaining DOE waste tanks are located at these sites. This report summarizes and documents the results of the initial RKC workshops. Technology challenges identified from these workshops and presented here are expected to be a key component to defining future RKC-directed tasks designed to facilitate tank waste retrieval solutions.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2011-01-01
The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm^-1 (15.38 micrometers) to 2665 cm^-1 (3.752 micrometers). These spectral regions contain significant absorption features from two CO2 absorption bands, the 15 micrometer (longwave) CO2 band, and the 4.3 micrometer (shortwave) CO2 absorption band. There are also two atmospheric window regions, the 12 micrometer - 8 micrometer (longwave) window, and the 4.17 micrometer - 3.75 micrometer (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed using primarily observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover conditions if one uses observations in longwave channels to determine coefficients which generate cloud cleared radiances R̂_i for all channels, and uses R̂_i only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 Retrieval Algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.
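The classic two-field-of-view form of cloud clearing extrapolates the clear-column radiance linearly from two adjacent observations with different cloud fractions. The sketch below illustrates that principle with synthetic numbers; it is not the Version-6 algorithm itself, and the toy radiances and cloud fractions are invented:

```python
def cloud_clear(r1, r2, eta):
    """Two-field-of-view cloud clearing (linear extrapolation):
    R_hat = R1 + eta * (R1 - R2), where R1, R2 are radiances observed
    in adjacent fields of view with different cloud fractions, and eta
    is a coefficient determined from channels with a known clear estimate.
    Illustrative sketch only."""
    return r1 + eta * (r1 - r2)

# Toy single-cloud-layer scene: clear radiance 100, cloudy radiance 40,
# cloud fractions 0.2 and 0.5 -> R1 = 88, R2 = 70, eta = 0.2 / (0.5 - 0.2)
r1, r2 = 88.0, 70.0
eta = 0.2 / (0.5 - 0.2)
r_clear = cloud_clear(r1, r2, eta)
```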
Evaluation of contents-based image retrieval methods for a database of logos on drug tablets
NASA Astrophysics Data System (ADS)
Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien
2001-02-01
In this research, an evaluation was made of different methods for contents-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we compared different retrieval methods. Two of these methods were available from commercial packages, QBIC and Imatch, whose implementations of contents-based image retrieval are not exactly known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding box and region-based shape methods. In addition, we tested the log-polar method that is available from our own research.
An evaluation of various forms of VAS retrievals in the analysis of a preconvective environment
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Keyser, D. A.
1987-01-01
VISSR Atmospheric Sounder (VAS) radiance data obtained over the continental United States on July 20, 1981 are used to evaluate a variety of VAS retrieval procedures and parameters in the qualitative analysis and forecasting of severe weather events. The particular case analyzed contains two significantly different mesoscale convective events in the central plains. Retrievals of temperature, dewpoint temperature, equivalent potential temperature, total column precipitable water, and lifted index are shown to be physically consistent in space and time and to compare well with available radiosonde data. The analysis of the VAS retrievals identified significant spatial gradients and temporal changes in the thermal and moisture fields, including times and locations between radiosonde observations.
User's operating procedures. Volume 1: Scout project information programs
NASA Technical Reports Server (NTRS)
Harris, C. G.; Harris, D. K.
1985-01-01
A review of the user's operating procedures for the Scout Project Automatic Data System, called SPADS, is given. SPADS is the result of the past seven years of software development on a Prime minicomputer located at the Scout Project Office. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. The instructions to operate the Scout Project Information programs in data retrieval and file maintenance via the user friendly menu drivers are presented.
User's operating procedures. Volume 3: Projects directorate information programs
NASA Technical Reports Server (NTRS)
Harris, C. G.; Harris, D. K.
1985-01-01
A review of the user's operating procedures for the Scout Project Automatic Data System, called SPADS, is presented. SPADS is the result of the past seven years of software development on a Prime minicomputer. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, three of three, provides the instructions to operate the Projects Directorate information programs in data retrieval and file maintenance via the user friendly menu drivers.
Interpolation of the Extended Boolean Retrieval Model.
ERIC Educational Resources Information Center
Zanger, Daniel Z.
2002-01-01
Presents an interpolation theorem for an extended Boolean information retrieval model. Results show that whenever two or more documents are similarly ranked at any two points for a query containing exactly two terms, then they are similarly ranked at all points in between; and that results can fail for queries with more than two terms. (Author/LRW)
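The standard extended Boolean (p-norm) similarities that such an interpolation result concerns can be sketched as follows; for term weights x1, x2 in [0,1], the OR and AND similarities are the usual p-norm forms, and the demo checks that two example documents keep the same relative order at several values of p (an illustration, not a proof of the theorem):

```python
def pnorm_or(x1, x2, p):
    """Extended Boolean similarity for the query (t1 OR t2)."""
    return ((x1**p + x2**p) / 2.0) ** (1.0 / p)

def pnorm_and(x1, x2, p):
    """Extended Boolean similarity for the query (t1 AND t2)."""
    return 1.0 - (((1 - x1)**p + (1 - x2)**p) / 2.0) ** (1.0 / p)

# Two documents with term-weight vectors d1 and d2 for a two-term OR query;
# their relative ranking agrees at p = 1, 1.5 and 2.
d1, d2 = (0.7, 0.2), (0.8, 0.4)
ranks_agree = all(pnorm_or(*d2, p) > pnorm_or(*d1, p) for p in (1.0, 1.5, 2.0))
```

At p = 1 the model reduces to a linear ranking, and as p grows it approaches strict Boolean logic.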
Support Vector Machines: Relevance Feedback and Information Retrieval.
ERIC Educational Resources Information Center
Drucker, Harris; Shahrary, Behzad; Gibbon, David C.
2002-01-01
Compares support vector machines (SVMs) to Rocchio, Ide regular and Ide dec-hi algorithms in information retrieval (IR) of text documents using relevancy feedback. If the preliminary search is so poor that one has to search through many documents to find at least one relevant document, then SVM is preferred. Includes nine tables. (Contains 24…
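For context, the Rocchio baseline named above updates the query vector from user feedback; a minimal sketch with conventional default parameters (alpha, beta, gamma values are not taken from the paper):

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update: move the query toward
    the centroid of relevant documents and away from the centroid of
    non-relevant ones. Parameter values are conventional defaults."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.maximum(q, 0.0)   # negative term weights are usually clipped

q0 = np.array([1.0, 0.0, 0.0])
rel = np.array([[0.0, 1.0, 0.0], [0.0, 0.8, 0.2]])
nonrel = np.array([[0.0, 0.0, 1.0]])
q1 = rocchio(q0, rel, nonrel)
```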
ERIC Educational Resources Information Center
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Part Three of this five part report on Salton's Magical Automatic Retriever of Texts (SMART) project contains four papers. The first: "Variations on the Query Splitting Technique with Relevance Feedback" by T. P. Baker discusses some experiments in relevance feedback performed with variations on the technique of query splitting. The…
Task-Oriented Access to Data Files: An Evaluation.
ERIC Educational Resources Information Center
Watters, Carolyn; And Others
1994-01-01
Discussion of information retrieval highlights DalText, a prototype information retrieval system that provides access to nonindexed textual data files where the mode of access is determined by the user based on the task at hand. A user study is described that was conducted at Dalhousie University (Nova Scotia) to test DalText. (Contains 23…
A STORAGE AND RETRIEVAL SYSTEM FOR DOCUMENTS IN INSTRUCTIONAL RESOURCES. REPORT NO. 13.
ERIC Educational Resources Information Center
DIAMOND, ROBERT M.; LEE, BERTA GRATTAN
IN ORDER TO IMPROVE INSTRUCTION WITHIN TWO-YEAR LOWER DIVISION COURSES, A COMPREHENSIVE RESOURCE LIBRARY WAS DEVELOPED AND A SIMPLIFIED CATALOGING AND INFORMATION RETRIEVAL SYSTEM WAS APPLIED TO IT. THE ROYAL MCBEE "KEYDEX" SYSTEM, CONTAINING THREE MAJOR COMPONENTS--A PUNCH MACHINE, FILE CARDS, AND A LIGHT BOX--WAS USED. CARDS WERE HEADED WITH KEY…
Abdulla, Ahmed AbdoAziz Ahmed; Lin, Hongfei; Xu, Bo; Banbhrani, Santosh Kumar
2016-07-25
Biomedical literature retrieval is becoming increasingly complex, and there is a fundamental need for advanced information retrieval systems. Information Retrieval (IR) programs scour unstructured materials such as text documents in large reserves of data that are usually stored on computers. IR is concerned with the representation, storage, and organization of information items, as well as with access to them. One of the main problems in IR is to determine which documents are relevant to the user's needs and which are not. Under the current regime, users cannot construct queries precisely enough to retrieve particular pieces of data from large reserves of data, and basic information retrieval systems produce low-quality search results. In this paper we present a new technique for refining information retrieval searches to better represent the user's information need, enhancing retrieval performance by applying different query expansion techniques and linearly combining their results, two expansion results at a time. Query expansions expand the search query, for example by finding synonyms and reweighting original terms. They provide significantly more focused, particularized search results than do basic search queries. Retrieval performance is measured by variants of MAP (Mean Average Precision). According to our experimental results, the combination of the best query expansion results improves the retrieved documents and outperforms our baseline by 21.06 %; it even outperforms a previous study by 7.12 %. We propose several query expansion techniques and their linear combinations to make user queries more cognizable to search engines and to produce higher-quality search results.
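The pairwise linear combination of two expansion results that the abstract describes amounts to a weighted score fusion. A minimal sketch, assuming each expansion run produces a document-to-score mapping; the function name and the λ weight are illustrative, not from the paper:

```python
def combine_linear(scores_a, scores_b, lam=0.5):
    """Linearly combine two query-expansion result lists, two at a time:
    fused(d) = lam * score_a(d) + (1 - lam) * score_b(d).
    Documents missing from one list contribute a score of 0 there."""
    docs = set(scores_a) | set(scores_b)
    fused = {d: lam * scores_a.get(d, 0.0) + (1 - lam) * scores_b.get(d, 0.0)
             for d in docs}
    # Return a ranking: highest fused score first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

A document scored moderately by both expansions can outrank one scored highly by only a single expansion, which is the usual motivation for fusing runs.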
Dynamic estimator for determining operating conditions in an internal combustion engine
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-01-05
Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
Moen, Hans; Ginter, Filip; Marsi, Erwin; Peltonen, Laura-Maria; Salakoski, Tapio; Salanterä, Sanna
2015-01-01
Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a (possibly unfinished) care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task. PMID:26099735
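The core retrieval step, ranking stored episodes by textual similarity in a word-vector space, could look roughly like the sketch below. Representing an episode as the centroid of its word vectors is one common choice and an assumption here; the paper evaluates several representations.

```python
import math

def centroid(vectors):
    """Represent a care episode as the mean of its word vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_episodes(query_vecs, episodes):
    """Rank stored episodes by cosine similarity to the (possibly
    unfinished) query episode; episodes maps name -> list of word vectors."""
    q = centroid(query_vecs)
    scored = [(name, cosine(q, centroid(vecs))) for name, vecs in episodes.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)
```

In practice the word vectors would come from word2vec or random indexing trained on clinical text, optionally constrained by ICD-10 codes as the abstract describes.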
NASA Astrophysics Data System (ADS)
Okamura, Rintaro; Iwabuchi, Hironobu; Schmidt, K. Sebastian
2017-12-01
Three-dimensional (3-D) radiative-transfer effects are a major source of retrieval errors in satellite-based optical remote sensing of clouds. The challenge is that 3-D effects manifest themselves across multiple satellite pixels, which traditional single-pixel approaches cannot capture. In this study, we present two multi-pixel retrieval approaches based on deep learning, a technique that is becoming increasingly successful for complex problems in engineering and other areas. Specifically, we use deep neural networks (DNNs) to obtain multi-pixel estimates of cloud optical thickness and column-mean cloud droplet effective radius from multispectral, multi-pixel radiances. The first DNN method corrects traditional bispectral retrievals based on the plane-parallel homogeneous cloud assumption using the reflectances at the same two wavelengths. The other DNN method uses so-called convolutional layers and retrieves cloud properties directly from the reflectances at four wavelengths. The DNN methods are trained and tested on cloud fields from large-eddy simulations used as input to a 3-D radiative-transfer model to simulate upward radiances. The second DNN-based retrieval, sidestepping the bispectral retrieval step through convolutional layers, is shown to be more accurate. It reduces 3-D radiative-transfer effects that would otherwise affect the radiance values and estimates cloud properties robustly even for optically thick clouds.
NASA Indexing Benchmarks: Evaluating Text Search Engines
NASA Technical Reports Server (NTRS)
Esler, Sandra L.; Nelson, Michael L.
1997-01-01
The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
Diversification of visual media retrieval results using saliency detection
NASA Astrophysics Data System (ADS)
Muratov, Oleg; Boato, Giulia; De Natale, Franesco G. B.
2013-03-01
Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, so the use of visual features is inevitable. Visual saliency is information about the main object of an image implicitly included by humans while creating visual content. For this reason it is natural to exploit this information for the task of diversification. In this work we study whether visual saliency can be used for the task of diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.
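Diversity-aware re-ranking is often implemented as a greedy maximal-marginal-relevance (MMR) loop: repeatedly pick the item that best trades off relevance against similarity to what has already been shown. The sketch below uses that generic formulation as a stand-in; the authors' actual method and the `sim` function (similarity of saliency descriptors) are not specified here.

```python
def rerank_for_diversity(items, sim, lam=0.7):
    """Greedy MMR-style re-ranking.
    items: list of (id, relevance) pairs from the initial retrieval.
    sim(a, b): similarity of two items' saliency-based descriptors in [0, 1].
    lam: trade-off between relevance (lam) and diversity (1 - lam)."""
    remaining = dict(items)
    order = []
    while remaining:
        def mmr(d):
            # Penalize items similar to anything already selected.
            penalty = max((sim(d, s) for s in order), default=0.0)
            return lam * remaining[d] - (1 - lam) * penalty
        best = max(remaining, key=mmr)
        order.append(best)
        del remaining[best]
    return order
```

With lam = 0.5, a slightly less relevant but visually distinct item can leapfrog a near-duplicate of the top result, which is exactly the diversification effect the abstract targets.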
Sutton, Victoria R; Hauser, Susan E
2005-01-01
MD on Tap, a PDA application that searches and retrieves biomedical literature, is specifically designed for use by mobile healthcare professionals. With the goal of improving the usability of the application, a preliminary comparison was made of two search engines (PubMed and Essie) to determine which provided the most efficient path to the desired clinically relevant information. PMID:16779415
Edinger, Tracy; Cohen, Aaron M.; Bedrick, Steven; Ambert, Kyle; Hersh, William
2012-01-01
Objective: Secondary use of electronic health record (EHR) data relies on the ability to retrieve accurate and complete information about desired patient populations. The Text Retrieval Conference (TREC) 2011 Medical Records Track was a challenge evaluation allowing comparison of systems and algorithms to retrieve patients eligible for clinical studies from a corpus of de-identified medical records, grouped by patient visit. Participants retrieved cohorts of patients relevant to 35 different clinical topics, and visits were judged for relevance to each topic. This study identified the most common barriers to identifying specific clinic populations in the test collection. Methods: Using the runs from track participants and judged visits, we analyzed the five non-relevant visits most often retrieved and the five relevant visits most often overlooked. Categories were developed iteratively to group the reasons for incorrect retrieval for each of the 35 topics. Results: Reasons fell into nine categories for non-relevant visits and five categories for relevant visits. Non-relevant visits were most often retrieved because they contained a non-relevant reference to the topic terms. Relevant visits were most often infrequently retrieved because they used a synonym for a topic term. Conclusions: This failure analysis provides insight into areas for future improvement in EHR-based retrieval with techniques such as more widespread and complete use of standardized terminology in retrieval and data entry systems. PMID:23304287
A PROPOSED CHEMICAL INFORMATION AND DATA SYSTEM. VOLUME I.
CHEMICAL COMPOUNDS, *DATA PROCESSING, *INFORMATION RETRIEVAL, *CHEMICAL ANALYSIS, INPUT OUTPUT DEVICES, COMPUTER PROGRAMMING, CLASSIFICATION...CONFIGURATIONS, DATA STORAGE SYSTEMS, ATOMS, MOLECULES, PERFORMANCE (ENGINEERING), MAINTENANCE, SUBJECT INDEXING, MAGNETIC TAPE, AUTOMATIC, MILITARY REQUIREMENTS, TYPEWRITERS, OPTICS, TOPOLOGY, STATISTICAL ANALYSIS, FLOW CHARTING.
Hopkins during ITCS PWR Retrieval
2014-01-31
ISS038-E-040140 (31 Jan. 2014) --- NASA astronaut Mike Hopkins, Expedition 38 flight engineer, uses the Fluid Servicing System (FSS) to refill Internal Thermal Control System (ITCS) loops with fresh coolant in the Destiny laboratory of the International Space Station.
Hopkins during ITCS PWR Retrieval
2014-01-31
ISS038-E-040139 (31 Jan. 2014) --- NASA astronaut Mike Hopkins, Expedition 38 flight engineer, uses the Fluid Servicing System (FSS) to refill Internal Thermal Control System (ITCS) loops with fresh coolant in the Destiny laboratory of the International Space Station.
The Stryker Mobile Gun System: A Case Study on Managing Complexity
2009-06-01
In his article Managing Innovation in Complex Product Systems, Howard Rush (1997) identified three “hotspot” categories: 1) requirements... Managing innovation in complex product systems. The Institution for Electrical Engineers. Retrieved February 2, 2009, from http
Explosive parcel containment and blast mitigation container
Sparks, Michael H.
2001-06-12
The present invention relates to a containment structure for containing and mitigating explosions. The containment structure is installed in the wall of the building and has interior and exterior doors for placing suspicious packages into the containment structure and retrieving them from the exterior of the building. The containment structure has a blast deflection chute and a blowout panel to direct over pressure from explosions away from the building, surrounding structures and people.
The pond is wider than you think! Problems encountered when searching family practice literature.
Rosser, W. W.; Starkey, C.; Shaughnessy, R.
2000-01-01
OBJECTIVE: To explain differences in the results of literature searches in British general practice and North American family practice or family medicine. DESIGN: Comparative literature search. SETTING: The Department of Family and Community Medicine at the University of Toronto in Ontario. METHOD: Literature searches on MEDLINE demonstrated that certain search strategies ignored certain key words, depending on the search engine and the search terms chosen. Literature searches using the key words "general practice," "family practice," and "family medicine" combined with the topics "depression" and then "otitis media" were conducted in MEDLINE using four different Web-based search engines: Ovid, HealthGate, PubMed, and Internet Grateful Med. MAIN OUTCOME MEASURES: The number of MEDLINE references retrieved for both topics when searched with each of the three key words, "general practice," "family practice," and "family medicine" using each of the four search engines. RESULTS: For each topic, each search yielded very different articles. Some search engines did a better job of matching the term "general practice" to the terms "family medicine" and "family practice," and thus improved retrieval. The problem of language use extends to the variable use of terminology and differences in spelling between British and American English. CONCLUSION: We need to heighten awareness of literature search problems and the potential for duplication of research effort when some of the literature is ignored, and to suggest ways to overcome the deficiencies of the various search engines. PMID:10660792
Measurement of tag confidence in user generated contents retrieval
NASA Astrophysics Data System (ADS)
Lee, Sihyoung; Min, Hyun-Seok; Lee, Young Bok; Ro, Yong Man
2009-01-01
As online image sharing services are becoming popular, the importance of correctly annotated tags is being emphasized for precise search and retrieval. Tags created by user along with user-generated contents (UGC) are often ambiguous due to the fact that some tags are highly subjective and visually unrelated to the image. They cause unwanted results to users when image search engines rely on tags. In this paper, we propose a method of measuring tag confidence so that one can differentiate confidence tags from noisy tags. The proposed tag confidence is measured from visual semantics of the image. To verify the usefulness of the proposed method, experiments were performed with UGC database from social network sites. Experimental results showed that the image retrieval performance with confidence tags was increased.
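One plausible reading of "tag confidence measured from visual semantics" is: a tag is confident to the degree that it relates to a concept actually detected in the image. The sketch below is illustrative only; the relatedness table, detector probabilities, and threshold are all invented stand-ins for whatever visual-semantic model the paper uses.

```python
def tag_confidence(tag, visual_concepts, relatedness):
    """Score a user tag by its strongest link to a concept detected
    in the image. visual_concepts maps concept -> detector probability;
    relatedness maps (tag, concept) -> semantic relatedness in [0, 1].
    Tags with no visual support score 0."""
    return max((relatedness.get((tag, c), 0.0) * p
                for c, p in visual_concepts.items()), default=0.0)

def split_tags(tags, visual_concepts, relatedness, threshold=0.5):
    """Separate confident tags from noisy ones by thresholding."""
    confident, noisy = [], []
    for t in tags:
        score = tag_confidence(t, visual_concepts, relatedness)
        (confident if score >= threshold else noisy).append(t)
    return confident, noisy
```

A subjective tag like "birthday" that matches no detected visual concept would fall into the noisy bucket, which is the failure mode the abstract describes for tag-based search engines.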
Darmoni, Stéfan J; Soualmia, Lina F; Letord, Catherine; Jaulent, Marie-Christine; Griffon, Nicolas; Thirion, Benoît; Névéol, Aurélie
2012-07-01
As more scientific work is published, it is important to improve access to the biomedical literature. Since 2000, when Medical Subject Headings (MeSH) Concepts were introduced, the MeSH Thesaurus has been concept based. Nevertheless, information retrieval is still performed at the MeSH Descriptor or Supplementary Concept level. The study assesses the benefit of using MeSH Concepts for indexing and information retrieval. Three sets of queries were built for thirty-two rare diseases and twenty-two chronic diseases: (1) using PubMed Automatic Term Mapping (ATM), (2) using Catalog and Index of French-language Health Internet (CISMeF) ATM, and (3) extrapolating the MEDLINE citations that should be indexed with a MeSH Concept. Type 3 queries retrieve significantly fewer results than type 1 or type 2 queries (about 18,000 citations versus 200,000 for rare diseases; about 300,000 citations versus 2,000,000 for chronic diseases). CISMeF ATM also provides better precision than PubMed ATM for both disease categories. Using MeSH Concept indexing instead of ATM is theoretically possible to improve retrieval performance with the current indexing policy. However, using MeSH Concept information retrieval and indexing rules would be a fundamentally better approach. These modifications have already been implemented in the CISMeF search engine.
Intelligent retrieval of medical images from the Internet
NASA Astrophysics Data System (ADS)
Tang, Yau-Kuo; Chiang, Ted T.
1996-05-01
The objective of this study is to use Internet resources to provide a cost-effective, user-friendly method of accessing the medical image archive system and an easy method for the user to identify the images required. This paper describes the prototype system architecture, the implementation, and results. In the study, we prototype the Intelligent Medical Image Retrieval (IMIR) system as a Hypertext Transfer Protocol (HTTP) server and provide Hypertext Markup Language forms for the user, as an Internet client, to enter image retrieval criteria for review through a browser. We are developing the intelligent retrieval engine, with the capability to map free text search criteria to the standard terminology used for medical image identification. We evaluate retrieved records based on the number of free text entries matched and their relevance level to the standard terminology. We are in the integration and testing phase. We have collected only a few different types of images for testing and have trained a few phrases to map the free text to the standard medical terminology. Nevertheless, we are able to demonstrate the IMIR's ability to search, retrieve, and review medical images from the archives using a general Internet browser. The prototype also uncovered potential problems in performance, security, and accuracy. Additional studies and enhancements will make the system clinically operational.
Code of Federal Regulations, 2010 CFR
2010-10-01
... vehicles, mechanical equipment containing internal combustion engines, and battery powered vehicles or... equipment containing internal combustion engines, and battery powered vehicles or equipment. (a... internal combustion engine, or a battery powered vehicle or equipment is subject to the requirements of...
SLUDGE PARTICLE SEPARATION EFFICIENCIES DURING SETTLER TANK RETRIEVAL INTO SCS-CON-230
DOE Office of Scientific and Technical Information (OSTI.GOV)
DEARING JI; EPSTEIN M; PLYS MG
2009-07-16
The purpose of this document is to release, into the Hanford Document Control System, FAI/09-91, Sludge Particle Separation Efficiencies for the Rectangular SCS-CON-230 Container, by M. Epstein and M. G. Plys, Fauske & Associates, LLC, June 2009. The Sludge Treatment Project (STP) will retrieve sludge from the 105-K West Integrated Water Treatment System (IWTS) Settler Tanks and transfer it to container SCS-CON-230 using the Settler Tank Retrieval System (STRS). The sludge will enter the container through two distributors. The container will have a filtration system that is designed to minimize the overflow of sludge fines from the container to the basin. FAI/09-91 was performed to quantify the effect of the STRS on sludge distribution inside of and overflow out of SCS-CON-230. Selected results of the analysis and a system description are discussed. The principal result of the analysis is that the STRS filtration system reduces the overflow of sludge from SCS-CON-230 to the basin by roughly a factor of 10. Some turbidity can be expected in the center bay where the container is located. The exact amount of overflow and subsequent turbidity is dependent on the density of the sludge (which will vary with location in the Settler Tanks) and the thermal gradient between SCS-CON-230 and the basin. Attachment A presents the full analytical results. These results are applicable specifically to SCS-CON-230 and the STRS filtration system's expected operating duty cycles.
Buried waste integrated demonstration human engineered control station. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-09-01
This document describes the Human Engineered Control Station (HECS) project activities including the conceptual designs. The purpose of the HECS is to enhance the effectiveness and efficiency of remote retrieval by providing an integrated remote control station. The HECS integrates human capabilities, limitations, and expectations into the design to reduce the potential for human error, provides an easy system to learn and operate, provides an increased productivity, and reduces the ultimate investment in training. The overall HECS consists of the technology interface stations, supporting engineering aids, platform (trailer), communications network (broadband system), and collision avoidance system.
Katzman, Scott A; Vaughan, Betsy; Nieto, Jorge E; Galuppo, Larry D
2016-08-01
OBJECTIVE To evaluate the use of a laparoscopic specimen retrieval pouch for removal of intact or fragmented cystic calculi from standing horses. DESIGN Retrospective case series. ANIMALS 8 horses (5 geldings and 3 mares) with cystic calculi. PROCEDURES Physical examination and cystoscopic, ultrasonographic, and hematologic evaluations of urinary tract function were performed for each horse. A diagnosis of cystic calculus was made on the basis of results of cystoscopy and ultrasonography. Concurrent urolithiasis or other urinary tract abnormalities identified during preoperative evaluation were recorded. Horses were sedated and placed in standing stocks, and the perineum was aseptically prepared. Direct access to the urinary bladder was gained in geldings via perineal urethrotomy or in mares by a transurethral approach. Calculi were visualized endoscopically, manipulated into the retrieval pouch, and removed intact or fragmented (for larger calculi). RESULTS For 4 geldings and 1 mare, fragmentation was necessary to facilitate calculus removal. Mean duration of surgery was 125 minutes, and trauma to the urinary bladder and urethra was limited to areas of hyperemia and submucosal petechiation. No postoperative complications were encountered for any horse. When lithotripsy was required, the retrieval pouch provided an effective means of stabilizing calculi and containing the fragments for removal. CONCLUSIONS AND CLINICAL RELEVANCE Use of the laparoscopic specimen retrieval pouch was an effective, minimally traumatic method for retrieving cystic calculi from standing horses. The pouch protected the urinary bladder and urethra from trauma during calculus removal and allowed for stabilization, containment, and fragmentation of calculi when necessary.
NASA Astrophysics Data System (ADS)
Chang, Kai-Wei; L'Ecuyer, Tristan S.; Kahn, Brian H.; Natraj, Vijay
2017-05-01
Hyperspectral instruments such as Atmospheric Infrared Sounder (AIRS) have spectrally dense observations effective for ice cloud retrievals. However, due to the large number of channels, only a small subset is typically used. It is crucial that this subset of channels be chosen to contain the maximum possible information about the retrieved variables. This study describes an information content analysis designed to select optimal channels for ice cloud retrievals. To account for variations in ice cloud properties, we perform channel selection over an ensemble of cloud regimes, extracted with a clustering algorithm, from a multiyear database at a tropical Atmospheric Radiation Measurement site. Multiple satellite viewing angles over land and ocean surfaces are considered to simulate the variations in observation scenarios. The results suggest that AIRS channels near wavelengths of 14, 10.4, 4.2, and 3.8 μm contain the most information. With an eye toward developing a joint AIRS-MODIS (Moderate Resolution Imaging Spectroradiometer) retrieval, the analysis is also applied to combined measurements from both instruments. While application of this method to MODIS yields results consistent with previous channel sensitivity studies, the analysis shows that this combination may yield substantial improvement in cloud retrievals. MODIS provides most information on optical thickness and particle size, aided by a better constraint on cloud vertical placement from AIRS. An alternate scenario where cloud top boundaries are supplied by the active sensors in the A-train is also explored. The more robust cloud placement afforded by active sensors shifts the optimal channels toward the window region and shortwave infrared, further constraining optical thickness and particle size.
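Channel selection by information content is usually done greedily: pick the channel with the largest Shannon information content, fold it into the posterior, and repeat. The sketch below is a drastic single-variable Gaussian simplification of the study's multivariate analysis; the scalar formula H = 0.5 ln(1 + k² s_a / s_e) and all inputs are illustrative.

```python
import math

def select_channels(jacobians, noise_vars, prior_var, n_select):
    """Greedy information-content channel selection for one retrieved
    variable. jacobians[i] is the sensitivity k of channel i to the
    variable; noise_vars[i] its measurement-error variance; prior_var
    the a priori variance of the variable."""
    remaining = set(range(len(jacobians)))
    s_a = prior_var
    chosen = []
    for _ in range(min(n_select, len(jacobians))):
        def info(i):
            # Shannon information content of channel i given current prior.
            return 0.5 * math.log(1.0 + jacobians[i] ** 2 * s_a / noise_vars[i])
        best = max(remaining, key=info)
        chosen.append(best)
        remaining.remove(best)
        # Bayesian update: shrink the variance using the selected channel.
        s_a = 1.0 / (1.0 / s_a + jacobians[best] ** 2 / noise_vars[best])
    return chosen
```

Because the prior variance shrinks after each selection, channels that duplicate information already captured contribute little at later steps, which is why the selected subset spreads across spectrally distinct regions (e.g., the 14, 10.4, 4.2, and 3.8 μm bands reported above).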
Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert
2014-06-01
The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.
Applying Hypertext Structures to Software Documentation.
ERIC Educational Resources Information Center
French, James C.; And Others
1997-01-01
Describes a prototype system for software documentation management called SLEUTH (Software Literacy Enhancing Usefulness to Humans) being developed at the University of Virginia. Highlights include information retrieval techniques, hypertext links that are installed automatically, a WAIS (Wide Area Information Server) search engine, user…
Nonstationary signal analysis in episodic memory retrieval
NASA Astrophysics Data System (ADS)
Ku, Y. G.; Kawasumi, Masashi; Saito, Masao
2004-04-01
The problem of blind source separation from a mixture that exhibits nonstationarity arises in signal processing, speech processing, spectral analysis, and related fields. This study analyzed the EEG signal during episodic memory retrieval using ICA and TVAR, and proposes a method that combines the two. The signal from the brain not only exhibits nonstationary behavior but also contains artifacts. EEG data at the frontal lobe (F3) are collected from the scalp during the episodic memory retrieval task, and the method is applied to these data for analysis. The artifact (eye movement) is removed by ICA, and a single burst (around 6 Hz) is obtained by TVAR, suggesting that the single burst is related to brain activity during episodic memory retrieval.
Comparison of k-means related clustering methods for nuclear medicine images segmentation
NASA Astrophysics Data System (ADS)
Borys, Damian; Bzowski, Pawel; Danch-Wierzchowska, Marta; Psiuk-Maksymowicz, Krzysztof
2017-03-01
In this paper, we evaluate the performance of SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, providing, therefore, a better discriminative power and retrieval efficiency than global features, especially for HRSI which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category which results in a more discriminative image representation and boosts the image retrieval performance.
Programmable stream prefetch with resource optimization
Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan
2013-01-08
A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor and evaluates whether the first memory address requested in the load request is present and valid in a table. If the address is present and valid in the table, the engine checks whether valid data corresponding to that address exist in an array. If the array does not yet hold valid data for the address, the engine increments the prefetching depth of the stream to which the address belongs and fetches the cache line associated with the address from the at least one cache memory device. The engine then determines whether additional data need to be prefetched for the stream within its prefetching depth and, if so, prefetches the additional data.
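Read as pseudocode, the decision sequence above maps onto a toy software model. The sketch below is an illustrative reading, not the patented hardware: `Stream`, `PrefetchEngine`, and the dict standing in for memory are all invented names.

```python
class Stream:
    def __init__(self, base, depth=1):
        self.base = base          # first address of the stream
        self.depth = depth        # current prefetching depth

class PrefetchEngine:
    def __init__(self):
        self.table = {}           # address -> Stream (valid table entries)
        self.array = {}           # address -> data (valid prefetched lines)

    def load(self, addr, memory):
        stream = self.table.get(addr)
        if stream is None:
            return None           # address not present/valid in the table
        if addr not in self.array:
            # no valid data yet: deepen the stream, fetch the line
            stream.depth += 1
            self.array[addr] = memory[addr]
        # prefetch additional lines within the stream's prefetching depth
        for offset in range(1, stream.depth + 1):
            nxt = addr + offset
            if nxt in memory and nxt not in self.array:
                self.array[nxt] = memory[nxt]
        return self.array[addr]
```

Priming the table with a stream at some address and issuing a load for it returns the line and pulls the next lines in, within the deepened prefetch depth.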
2011-02-16
ISS026-E-027391 (16 Feb. 2011) --- Russian cosmonaut Dmitry Kondratyev, Expedition 26 flight engineer, wearing a Russian Orlan-MK spacesuit, participates in a session of extravehicular activity (EVA) focused on the installation of two scientific experiments outside the Zvezda Service Module of the International Space Station. During the four-hour, 51-minute spacewalk, Kondratyev and Russian cosmonaut Oleg Skripochka (out of frame), flight engineer, installed a pair of earthquake and lightning sensing experiments and retrieved a pair of spacecraft material evaluation panels.
Sutton, Victoria R.; Hauser, Susan E.
2005-01-01
MD on Tap, a PDA application that searches and retrieves biomedical literature, is specifically designed for mobile healthcare professionals. With the goal of improving the usability of the application, a preliminary comparison was made of two search engines (PubMed and Essie) to determine which provided the most efficient path to the desired clinically relevant information. PMID:16779415
Dennis B. Propst; Robert V. Abbey
1980-01-01
In 1979, over 450 million recreation days of use were reported at 419 Corps of Engineers lakes and other project areas. This figure represents a 2.7 percent increase in use over 1977 (424 million recreation days). The Corps and other agencies (quasi-public, state, local and other federal agencies) manage 3,175 recreation areas on a total of 11.2 million acres of land...
2011-02-16
ISS026-E-027361 (16 Feb. 2011) --- Russian cosmonaut Dmitry Kondratyev, Expedition 26 flight engineer, wearing a Russian Orlan-MK spacesuit, participates in a session of extravehicular activity (EVA) focused on the installation of two scientific experiments outside the Zvezda Service Module of the International Space Station. During the four-hour, 51-minute spacewalk, Kondratyev and Russian cosmonaut Oleg Skripochka (out of frame), flight engineer, installed a pair of earthquake and lightning sensing experiments and retrieved a pair of spacecraft material evaluation panels.
2011-02-16
ISS026-E-027368 (16 Feb. 2011) --- Russian cosmonaut Dmitry Kondratyev, Expedition 26 flight engineer, wearing a Russian Orlan-MK spacesuit, participates in a session of extravehicular activity (EVA) focused on the installation of two scientific experiments outside the Zvezda Service Module of the International Space Station. During the four-hour, 51-minute spacewalk, Kondratyev and Russian cosmonaut Oleg Skripochka (out of frame), flight engineer, installed a pair of earthquake and lightning sensing experiments and retrieved a pair of spacecraft material evaluation panels.
Brain CT image similarity retrieval method based on uncertain location graph.
Pan, Haiwei; Li, Pengyuan; Li, Qing; Han, Qilong; Feng, Xiaoning; Gao, Linlin
2014-03-01
Many brain computed tomography (CT) images stored in hospitals contain valuable information that should be shared to support computer-aided diagnosis systems. Finding similar brain CT images in a database can effectively help doctors diagnose based on earlier cases. However, similarity retrieval for brain CT images requires much higher accuracy than for general images. In this paper, a new model of uncertain location graph (ULG) is presented for brain CT image modeling and similarity retrieval. Based on the characteristics of brain CT images, we propose a novel method to model a brain CT image as a ULG derived from its texture. Then, a scheme for ULG similarity retrieval is introduced. Furthermore, an effective index structure is applied to reduce the search time. Experimental results reveal that our method performs well on brain CT image similarity retrieval, with higher accuracy and efficiency.
Major Upgrades to the AIRS Version-6 Ozone Profile Methodology
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2015-01-01
This research continues part of what was shown at the last AIRS Science Team Meeting in the talk "Improved Water Vapor and Ozone Profiles in SRT AIRS Version-6.X" and in the AIRS February 11, 2015 NetMeeting "Further Improvements in Water Vapor and Ozone Profiles Compared to Version-6". AIRS Version-6 was finalized in late 2012 and is now operational. Version-6 contained many significant improvements in retrieval methodology compared to Version-5. However, the Version-6 retrieval methodology used for the water vapor profile q(p) and ozone profile O3(p) retrievals is basically unchanged from Version-5, or even from Version-4. Subsequent research has made significant improvements in both water vapor and O3 profiles compared to Version-6. This talk concentrates on O3 profile retrievals; improvements in water vapor profile retrievals are given in a separate presentation.
2. EXTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR ...
2. EXTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR 40" BLOOMING MILL - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
1. EXTERIOR OF ENGINE ROOM, CONTAINING UNITED-TOD TWIN-TANDEM ENGINE, FOR ...
1. EXTERIOR OF ENGINE ROOM, CONTAINING UNITED-TOD TWIN-TANDEM ENGINE, FOR 40" BLOOMING MILL - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
Code of Federal Regulations, 2011 CFR
2011-01-01
... pod attaching structures containing flammable fluid lines. 25.1182 Section 25.1182 Aeronautics and..., and engine pod attaching structures containing flammable fluid lines. (a) Each nacelle area immediately behind the firewall, and each portion of any engine pod attaching structure containing flammable...
Code of Federal Regulations, 2010 CFR
2010-01-01
... pod attaching structures containing flammable fluid lines. 25.1182 Section 25.1182 Aeronautics and..., and engine pod attaching structures containing flammable fluid lines. (a) Each nacelle area immediately behind the firewall, and each portion of any engine pod attaching structure containing flammable...
46 CFR 182.465 - Ventilation of spaces containing diesel machinery.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false Ventilation of spaces containing diesel machinery. 182... Ventilation of spaces containing diesel machinery. (a) A space containing diesel machinery must be fitted with... operation of main engines and auxiliary engines. (b) Air-cooled propulsion and auxiliary diesel engines...
46 CFR 182.465 - Ventilation of spaces containing diesel machinery.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false Ventilation of spaces containing diesel machinery. 182... Ventilation of spaces containing diesel machinery. (a) A space containing diesel machinery must be fitted with... operation of main engines and auxiliary engines. (b) Air-cooled propulsion and auxiliary diesel engines...
Combination of image descriptors for the exploration of cultural photographic collections
NASA Astrophysics Data System (ADS)
Bhowmik, Neelanjan; Gouet-Brunet, Valérie; Bloch, Gabriel; Besson, Sylvain
2017-01-01
The rapid growth of image digitization and of collections in recent years makes it challenging and burdensome to organize, categorize, and retrieve similar images from voluminous collections. Content-based image retrieval (CBIR) is immensely convenient in this context, and a considerable number of local feature detectors and descriptors are present in the CBIR literature. We propose a model to anticipate the best feature combinations for image retrieval applications. Several spatial complementarity criteria of local feature detectors are analyzed and then engaged in a regression framework to find the combination of detectors that is optimal for a given dataset and better adapted to each given image; the proposed model is also useful for optimally fixing other parameters, such as the k in k-nearest-neighbor retrieval. Three public datasets of various contents and sizes are employed to evaluate the proposal, which is validated by notable improvements in retrieval quality over classical approaches. Finally, the proposed image search engine is applied to the cultural photographic collections of a French museum, where it demonstrates its added value for the exploration and promotion of these contents at different levels, from their archiving up to their exhibition in or ex situ.
Massively parallel support for a case-based planning system
NASA Technical Reports Server (NTRS)
Kettler, Brian P.; Hendler, James A.; Anderson, William A.
1993-01-01
Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
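The contrast drawn above, fast retrieval from a large unindexed memory rather than an index-guided lookup, amounts to scoring every stored case against the target problem. A minimal sequential sketch follows (CaPER does this massively in parallel over a frame-based memory; the feature encoding and the Jaccard similarity here are invented for illustration):

```python
def similarity(case_feats, target_feats):
    """Jaccard overlap between two feature sets (invented measure)."""
    union = case_feats | target_feats
    return len(case_feats & target_feats) / len(union) if union else 0.0

# a tiny case base of stored plans, each described by invented features
case_base = {
    "deliver-by-truck": {"goal:deliver", "vehicle:truck", "terrain:road"},
    "deliver-by-air":   {"goal:deliver", "vehicle:plane", "terrain:any"},
    "survey-site":      {"goal:survey", "vehicle:drone", "terrain:any"},
}
target = {"goal:deliver", "vehicle:truck", "terrain:road", "urgent"}

# no index: probe the whole memory and rank every case against the target
ranked = sorted(case_base, key=lambda c: similarity(case_base[c], target),
                reverse=True)
```

Because every case is scored on every query, memory can be probed in arbitrary ways; the cost is the full scan, which is exactly what massive parallelism absorbs.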
Technical Review of Retrieval and Closure Plans for the INEEL INTEC Tank Farm Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bamberger, Judith A; Burks, Barry L; Quigley, Keith D
2001-09-28
The purpose of this report is to document the conclusions of a technical review of retrieval and closure plans for the Idaho National Engineering and Environmental Laboratory (INEEL) Idaho Nuclear Technology and Engineering Center (INTEC) Tank Farm Facility. In addition to reviewing retrieval and closure plans for these tanks, the review process served as an information exchange mechanism so that staff in the INEEL High Level Waste (HLW) Program could become more familiar with retrieval and closure approaches that have been completed or are planned for underground storage tanks at the Oak Ridge National Laboratory (ORNL) and Hanford sites. This review focused not only on evaluation of the technical feasibility and appropriateness of the approach selected by INEEL but also on technology gaps that could be addressed through utilization of technologies or performance data available at other DOE sites and in the private sector. The reviewers, Judith Bamberger of Pacific Northwest National Laboratory (PNNL) and Dr. Barry Burks of The Providence Group Applied Technology, have extensive experience in the development and application of tank waste retrieval technologies for nuclear waste remediation.
Major Upgrades to the AIRS Version-6 Water Vapor Profile Methodology
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2015-01-01
This research is a continuation of part of what was shown at the last AIRS Science Team Meeting and the AIRS 2015 NetMeeting. AIRS Version 6 was finalized in late 2012 and is now operational. Version 6 contained many significant improvements in retrieval methodology compared to Version 5. Version 6 retrieval methodology used for the water vapor profile q(p) and ozone profile O3(p) retrievals is basically unchanged from Version 5, or even from Version 4. Subsequent research has made significant improvements in both water vapor and O3 profiles compared to Version 6.
Chen, Chi-Hsin; Yu, Chen
2017-06-01
Natural language environments usually provide structured contexts for learning. This study examined the effects of semantically themed contexts, in both learning and retrieval phases, on statistical word learning. Results from 2 experiments consistently showed that participants had higher performance in semantically themed learning contexts. In contrast, themed retrieval contexts did not affect performance. Our work suggests that word learners are sensitive to statistical regularities not just at the level of individual word-object co-occurrences but also at another level containing a whole network of associations among objects and their properties.
Campagne, Fabien
2008-02-29
The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text Retrieval Evaluation Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track has recently been introduced to measure the performance of information retrieval for biomedical applications. We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have potentially many relevant documents per query (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and TREC. Spearman's correlation coefficients in the range of 0.79-0.92 are observed comparing bpref measured with NT Evaluation or with TREC evaluations. For comparison, coefficients in the range 0.86-0.94 can be observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and the data fusion evaluation protocols introduced recently. Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. 
Further research is needed to determine if NT Evaluation or variants of these protocols can fully substitute for human evaluations.
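The correlation figures quoted above compare two rankings of the same retrieval methods. A small self-contained sketch of that comparison follows; the bpref scores are invented, and the `spearman_rho` helper assumes no tied scores:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation for equal-length lists without ties."""
    n = len(xs)
    rx, ry = [0] * n, [0] * n
    for r, i in enumerate(sorted(range(n), key=lambda i: xs[i])):
        rx[i] = r
    for r, i in enumerate(sorted(range(n), key=lambda i: ys[i])):
        ry[i] = r
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# invented bpref scores for five methods under the two evaluations
nt_scores   = [0.42, 0.31, 0.55, 0.28, 0.47]   # NT Evaluation
trec_scores = [0.40, 0.33, 0.52, 0.35, 0.49]   # TREC relevance judgments
print(spearman_rho(nt_scores, trec_scores))    # → 0.9
```

A coefficient near 1 means the judgment-free protocol ranks the methods in nearly the same order as the human-judged evaluation, which is the paper's validation criterion.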
Kundu, Joydip; Shim, Jin-Hyung; Jang, Jinah; Kim, Sung-Won; Cho, Dong-Woo
2015-11-01
Regenerative medicine aims to improve, restore or replace damaged tissues or organs using a combination of cells, materials and growth factors. Both tissue engineering and developmental biology currently deal with the process of tissue self-assembly and extracellular matrix (ECM) deposition. In this investigation, additive manufacturing (AM) with a multihead deposition system (MHDS) was used to fabricate three-dimensional (3D) cell-printed scaffolds using layer-by-layer (LBL) deposition of polycaprolactone (PCL) and chondrocyte cell-encapsulated alginate hydrogel. Appropriate cell dispensing conditions and optimum alginate concentrations for maintaining cell viability were determined. In vitro cell-based biochemical assays were performed to determine glycosaminoglycans (GAGs), DNA and total collagen contents from different PCL-alginate gel constructs. PCL-alginate gels containing transforming growth factor-β (TGFβ) showed higher ECM formation. The 3D cell-printed scaffolds of PCL-alginate gel were implanted in the dorsal subcutaneous spaces of female nude mice. Histochemical [Alcian blue and haematoxylin and eosin (H&E) staining] and immunohistochemical (type II collagen) analyses of the retrieved implants after 4 weeks revealed enhanced cartilage tissue and type II collagen fibril formation in the PCL-alginate gel (+TGFβ) hybrid scaffold. In conclusion, we present an innovative cell-printed scaffold for cartilage regeneration fabricated by an advanced bioprinting technology. Copyright © 2013 John Wiley & Sons, Ltd.
Accident analysis and control options in support of the sludge water system safety analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
HEY, B.E.
A hazards analysis was initiated for the SWS in July 2001 (SNF-8626, K Basin Sludge and Water System Preliminary Hazard Analysis) and updated in December 2001 (SNF-10020 Rev. 0, Hazard Evaluation for KE Sludge and Water System - Project A16) based on conceptual design information for the Sludge Retrieval System (SRS) and 60% design information for the cask and container. SNF-10020 was again revised in September 2002 to incorporate new hazards identified from final design information and from a What-if/Checklist evaluation of operational steps. The process hazards, controls, and qualitative consequence and frequency estimates taken from these efforts have been incorporated into Revision 5 of HNF-3960, K Basins Hazards Analysis. The hazards identification process documented in the above referenced reports utilized standard industrial safety techniques (AIChE 1992, Guidelines for Hazard Evaluation Procedures) to systematically guide several interdisciplinary teams through the system using a pre-established set of process parameters (e.g., flow, temperature, pressure) and guide words (e.g., high, low, more, less). The teams generally included representation from the U.S. Department of Energy (DOE), K Basins Nuclear Safety, T Plant Nuclear Safety, K Basin Industrial Safety, fire protection, project engineering, operations, and facility engineering.
Information Clustering Based on Fuzzy Multisets.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki
2003-01-01
Proposes a fuzzy multiset model for information clustering with application to information retrieval on the World Wide Web. Highlights include search engines; term clustering; document clustering; algorithms for calculating cluster centers; theoretical properties concerning clustering algorithms; and examples to show how the algorithms work.…
Krikalev in Service module with tools
2001-03-30
ISS01-E-5150 (December 2000) --- Cosmonaut Sergei K. Krikalev, Expedition One flight engineer, retrieves a tool during an installation and set-up session in the Zvezda service module aboard the International Space Station (ISS). The picture was recorded with a digital still camera.
Multitasking Information Seeking and Searching Processes.
ERIC Educational Resources Information Center
Spink, Amanda; Ozmutlu, H. Cenk; Ozmutlu, Seda
2002-01-01
Presents findings from four studies of the prevalence of multitasking information seeking and searching by Web (via the Excite search engine), information retrieval system (mediated online database searching), and academic library users. Highlights include human information coordinating behavior (HICB); and implications for models of information…
Update of correlations between cone penetration and boring log data.
DOT National Transportation Integrated Search
2008-03-01
The cone penetration test (CPT) has been widely used in Louisiana in the last two decades as an in situ tool to characterize engineering properties of soils. In addition, conventional drilling and sample retrieval using Shelby tube followed by labo...
Retrieving and Indexing Spatial Data in the Cloud Computing Environment
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Wang, Sheng; Zhou, Daliang
In order to overcome the drawbacks of spatial data storage on common Cloud Computing platforms, we design and present a framework for retrieving, indexing, accessing and managing spatial data in the Cloud environment. An interoperable spatial data object model is provided based on the Simple Feature coding rules from the OGC, such as Well Known Binary (WKB) and Well Known Text (WKT), and classic spatial indexing algorithms like the Quad-Tree and R-Tree are re-designed for the Cloud Computing environment. Finally, we develop a prototype based on Google App Engine to implement the proposed model.
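The two ingredients named above, WKT geometry encoding and quad-tree indexing, can be sketched in a few dozen lines. This is a toy in-memory version for points only (the class names and capacity parameter are invented); the paper's contribution is re-designing such a tree so its nodes can be sharded across cloud storage.

```python
def parse_wkt_point(wkt):
    """Parse a WKT string like 'POINT (x y)' into an (x, y) tuple."""
    assert wkt.upper().startswith("POINT")
    x, y = wkt[wkt.index("(") + 1 : wkt.index(")")].split()
    return float(x), float(y)

class QuadTree:
    def __init__(self, x0, y0, x1, y1, cap=4):
        self.bounds = (x0, y0, x1, y1)
        self.cap, self.points, self.kids = cap, [], None

    def insert(self, p):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= p[0] <= x1 and y0 <= p[1] <= y1):
            return False                      # point outside this node
        if self.kids is None and len(self.points) < self.cap:
            self.points.append(p)
            return True
        if self.kids is None:                 # split into four quadrants
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            self.kids = [QuadTree(x0, y0, mx, my, self.cap),
                         QuadTree(mx, y0, x1, my, self.cap),
                         QuadTree(x0, my, mx, y1, self.cap),
                         QuadTree(mx, my, x1, y1, self.cap)]
            for q in self.points:
                any(k.insert(q) for k in self.kids)
            self.points = []
        return any(k.insert(p) for k in self.kids)

    def query(self, qx0, qy0, qx1, qy1):
        """Return all points inside the query rectangle."""
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []                         # prune disjoint subtrees
        hits = [p for p in self.points
                if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1]
        if self.kids:
            for k in self.kids:
                hits += k.query(qx0, qy0, qx1, qy1)
        return hits
```

A range query then only descends into quadrants overlapping the query window, which is the pruning behavior a distributed redesign must preserve.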
Ontology-Based Retrieval of Spatially Related Objects for Location Based Services
NASA Astrophysics Data System (ADS)
Haav, Hele-Mai; Kaljuvee, Aivi; Luts, Martin; Vajakas, Toivo
Advanced Location Based Service (LBS) applications have to integrate information stored in GIS, information about users' preferences (profile) as well as contextual information and information about application itself. Ontology engineering provides methods to semantically integrate several data sources. We propose an ontology-driven LBS development framework: the paper describes the architecture of ontologies and their usage for retrieval of spatially related objects relevant to the user. Our main contribution is to enable personalised ontology driven LBS by providing a novel approach for defining personalised semantic spatial relationships by means of ontologies. The approach is illustrated by an industrial case study.
EM-31 RETRIEVAL KNOWLEDGE CENTER MEETING REPORT: MOBILIZE AND DISLODGE TANK WASTE HEELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellinger, A.
2010-02-16
The Retrieval Knowledge Center sponsored a meeting in June 2009 to review challenges and gaps to retrieval of tank waste heels. The facilitated meeting was held at the Savannah River Research Campus with personnel broadly representing tank waste retrieval knowledge at Hanford, Savannah River, Idaho, and Oak Ridge. This document captures the results of this meeting. In summary, it was agreed that the challenges to retrieval of tank waste heels fell into two broad categories: (1) mechanical heel waste retrieval methodologies and equipment and (2) understanding and manipulating the heel waste (physical, radiological, and chemical characteristics) to support retrieval options and subsequent processing. Recent successes and lessons from deployments of the Sand and Salt Mantis vehicles as well as retrieval of C-Area tanks at Hanford were reviewed. Suggestions to address existing retrieval approaches that utilize a limited set of tools and techniques are included in this report. The meeting found that there had been very little effort to improve or integrate the multiple proven or new techniques and tools available into a menu of available methods for rapid insertion into baselines. It is recommended that focused developmental efforts continue in the two areas underway (low-level mixing evaluation and pumping slurries with large solid materials) and that projects to demonstrate new/improved tools be launched to outfit tank farm operators with the needed tools to complete tank heel retrievals effectively and efficiently. This document describes the results of a meeting held on June 3, 2009 at the Savannah River Site in South Carolina to identify technology gaps and potential technology solutions to retrieving high-level waste (HLW) heels from waste tanks within the complex of sites run by the U.S. Department of Energy (DOE).
The meeting brought together personnel with extensive tank waste retrieval knowledge from DOE's four major waste sites - Hanford, Savannah River, Idaho, and Oak Ridge. The meeting was arranged by the Retrieval Knowledge Center (RKC), which is a technology development project sponsored by the Office of Technology Innovation & Development - formerly the Office of Engineering and Technology - within the DOE Office of Environmental Management (EM).
ERIC Educational Resources Information Center
Mathis, B. Claude
The College Suggestor is an optical coincidence system of information retrieval consisting of a series of plastic cards. Each card represents a characteristic of an institution and contains grid positions for 1,931 colleges and universities. The system of 217 cards is designed so that the location of each college on the grid position is coincident…
Ranking the whole MEDLINE database according to a large training set using text indexing.
Suomela, Brian P; Andrade, Miguel A
2005-03-24
The MEDLINE database contains over 12 million references to scientific literature, with about 3/4 of recent articles including an abstract of the publication. Retrieval of entries using queries with keywords is useful for human users that need to obtain small selections. However, particular analyses of the literature or database developments may need the complete ranking of all the references in the MEDLINE database as to their relevance to a topic of interest. This report describes a method that does this ranking using the differences in word content between MEDLINE entries related to a topic and the whole of MEDLINE, in a computational time appropriate for an article search query engine. We tested the capabilities of our system to retrieve MEDLINE references which are relevant to the subject of stem cells. We took advantage of the existing annotation of references with terms from the MeSH hierarchical vocabulary (Medical Subject Headings, developed at the National Library of Medicine). A training set of 81,416 references was constructed by selecting entries annotated with the MeSH term stem cells or some child in its subtree. Frequencies of all nouns, verbs, and adjectives in the training set were computed, and the ratios of word frequencies in the training set to those in the entire MEDLINE were used to score references. Self-consistency of the algorithm, benchmarked with a test set containing the training set and an equal number of references randomly selected from MEDLINE, was better using nouns (79%) than adjectives (73%) or verbs (70%). The evaluation of the system with 6,923 references not used for training, containing 204 articles relevant to stem cells according to a human expert, indicated a recall of 65% for a precision of 65%. This strategy appears to be useful for predicting the relevance of MEDLINE references to a given concept. The method is simple and can be used with any user-defined training set.
Choice of the part of speech of the words used for classification has important effects on performance. Lists of words, scripts, and additional information are available from the web address http://www.ogic.ca/projects/ks2004/.
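The frequency-ratio scoring described above reduces to a few lines on toy data. The sketch below is illustrative only: the corpus and "topic" documents are invented, and the real system restricts counting to a chosen part of speech and operates over all of MEDLINE.

```python
from collections import Counter

def word_freqs(docs):
    """Relative frequency of each whitespace-delimited word in a corpus."""
    counts = Counter(w for d in docs for w in d.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def score(doc, train_freq, corpus_freq):
    """Mean ratio of topic frequency to corpus frequency over the doc."""
    ratios = [train_freq.get(w, 0.0) / corpus_freq[w]
              for w in doc.lower().split() if w in corpus_freq]
    return sum(ratios) / len(ratios) if ratios else 0.0

corpus = ["stem cells divide", "engines burn fuel", "cells renew tissue",
          "fuel injection engines"]
training = ["stem cells divide", "cells renew tissue"]  # topic: stem cells
train_freq = word_freqs(training)
corpus_freq = word_freqs(corpus)
ranked = sorted(corpus, key=lambda d: score(d, train_freq, corpus_freq),
                reverse=True)
```

Documents whose vocabulary is over-represented in the training set relative to the whole corpus float to the top of `ranked`, which is exactly the complete-ranking behavior the abstract describes.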
ZifBASE: a database of zinc finger proteins and associated resources.
Jayakanthan, Mannu; Muthukumaran, Jayaraman; Chandrasekar, Sanniyasi; Chawla, Konika; Punetha, Ankita; Sundar, Durai
2009-09-09
Information on the occurrence of zinc finger protein motifs in genomes is crucial to the developing field of molecular genome engineering. Knowledge of their target DNA-binding sequences is vital to develop chimeric proteins for targeted genome engineering and site-specific gene correction. There is a need for a computational resource of zinc finger proteins (ZFPs) that identifies potential binding sites and their locations, which reduces the time required for in vivo work and overcomes the difficulties of selecting the specific type of zinc finger protein and the target site in the DNA sequence. ZifBASE provides an extensive collection of natural and engineered ZFPs. It uses standard names and a genetic and structural classification scheme to present data retrieved from UniProtKB, GenBank, Protein Data Bank, ModBase, Protein Model Portal and the literature. It also incorporates specialized features of ZFPs, including finger sequences and positions, number of fingers, physicochemical properties, classes, framework, PubMed citations with links to experimental structures (PDB, if available) and modeled structures of natural zinc finger proteins. ZifBASE provides information on zinc finger proteins (both natural and engineered), the number of finger units in each zinc finger protein (with multiple fingers), and the synergy between adjacent fingers and their positions. Additionally, it gives each individual finger sequence and the target DNA site to which it binds, for a clearer understanding of the interactions of adjacent fingers. The current version of ZifBASE contains 139 entries, of which 89 are engineered ZFPs containing 3 to 7 fingers, totaling 296 fingers. There are 50 natural zinc finger protein entries ranging from 2 to 13 fingers, totaling 307 fingers. It has sequences and structures from the literature, Protein Data Bank, ModBase and Protein Model Portal.
The interface is cross-linked to other public databases such as UniProtKB, PDB, ModBase, Protein Model Portal and PubMed, making it more informative. A database is established to maintain information on sequence features, including the class, framework, number of fingers, residues, position, recognition site and physicochemical properties (molecular weight, isoelectric point) of both natural and engineered zinc finger proteins, and the dissociation constants of a few. ZifBASE can provide a more effective and efficient way of accessing zinc finger protein sequences and their target binding sites, with links to their three-dimensional structures. All the data and functions are available at the advanced web-based search interface http://web.iitd.ac.in/~sundar/zifbase.
CropEx Web-Based Agricultural Monitoring and Decision Support
NASA Technical Reports Server (NTRS)
Harvey, Craig; Lawhead, Joel
2011-01-01
CropEx is a Web-based agricultural Decision Support System (DSS) that monitors changes in crop health over time. It is designed to be used by a wide range of both public and private organizations, including individual producers and regional government offices with a vested interest in tracking vegetation health. The database and data management system automatically retrieves and ingests data for the area of interest; another database stores the results of processing and supports the DSS. The processing engine allows server-side analysis of imagery, with support for image sub-setting and a set of core raster operations for image classification, creation of vegetation indices, and change detection. The system includes the Web-based (CropEx) interface, data ingestion system, server-side processing engine, and a database processing engine. It contains a Web-based interface with multi-tiered security profiles for multiple users. The interface provides the ability to identify areas of interest to specific users, user profiles, and methods of processing and data types for selected or created areas of interest. A compilation of programs is used to ingest available data into the system, classify that data, profile that data for quality, and make data available to the processing engine immediately upon the data's arrival in the system (near real time). The processing engine consists of methods and algorithms used to process the data in a real-time fashion without copying, storing, or moving the raw data. The engine makes results available to the database processing engine for storage and further manipulation. The database processing engine ingests data from the image processing engine, distills those results into numerical indices, and stores each index for an area of interest.
This process happens each time new data is ingested and processed for the area of interest, and upon subsequent database entries, the database processing engine qualifies each value for each area of interest and conducts a logical processing of results indicating when and where thresholds are exceeded. Reports are provided at regular, operator-determined intervals that include variances from thresholds and links to view raw data for verification, if necessary. The technology and method of development allow the code base to easily be modified for varied use in the real-time and near-real-time processing environments. In addition, the final product will be demonstrated as a means for rapid draft assessment of imagery.
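The threshold step described above — qualifying each new index value for an area of interest and reporting variances where operator-set thresholds are exceeded — can be sketched as follows. The function name and the NDVI-like series are illustrative assumptions, not CropEx's actual API or data.

```python
def check_thresholds(index_values, low, high):
    """Return (timestep, value, variance) for each value outside [low, high]."""
    exceedances = []
    for t, v in enumerate(index_values):
        if v < low:
            exceedances.append((t, v, v - low))   # negative variance below the floor
        elif v > high:
            exceedances.append((t, v, v - high))  # positive variance above the ceiling
    return exceedances

# NDVI-like index series for one area of interest (made-up values)
series = [0.62, 0.58, 0.31, 0.65, 0.82]
print(check_thresholds(series, low=0.40, high=0.75))
```

A reporting layer would run a check like this each time a new value is stored, attaching links to the raw imagery for verification as the abstract describes.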
Retrieval of reflections from ambient noise using illumination diagnosis
NASA Astrophysics Data System (ADS)
Vidal, C. Almagro; Draganov, D.; van der Neut, J.; Drijkoningen, G.; Wapenaar, K.
2014-09-01
Seismic interferometry (SI) enables the retrieval of virtual sources at the locations of receivers. In passive SI, no active sources are used to retrieve the reflection response of the subsurface; only ambient-noise recordings are used. The retrieved response is determined by the illumination characteristics of the recorded ambient noise. Characteristics such as the geometrical distribution and signature of the noise sources, together with the complexity of the medium and the length of the noise records, determine the quality of the retrieved virtual-shot events. To retrieve body-wave reflections, one needs to correlate body-wave noise; a source of such noise might be regional seismicity. In regions with notable human presence, however, the dominant noise sources are generally located at or close to the surface. In that case, the noise is dominated by surface waves, and consequently the retrieved virtual common-source panels will also contain dominant retrieved surface waves, drowning out possible retrieved reflections. To retrieve reflection events, suppression of the surface waves therefore becomes the most important pre-processing goal. For these reasons, we propose a fast method to evaluate the illumination characteristics of ambient noise using the correlation results from ambient-noise records. The method is based on analysis of the so-called source function of the retrieved virtual-shot panel, and evaluates the apparent slowness of arrivals in the correlation results that pass through the position of the virtual source at zero time. The results of the diagnosis are used to suppress the retrieval of surface waves and thereby improve the quality of the retrieved reflection response. We explain the approach using modelled data from transient and continuous noise sources and an example from a passive field data set recorded at Annerveen, Northern Netherlands.
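The correlation principle underlying passive SI can be illustrated with a minimal synthetic example: the same noise is observed at one receiver and, delayed by a traveltime, at a second receiver; cross-correlating the two records retrieves that traveltime, as if a virtual source had fired at the first receiver. This is a generic sketch of the retrieval step, not the authors' illumination-diagnosis algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)        # ambient-noise record at receiver A
delay = 25                               # traveltime A -> B, in samples
rec_a = noise
rec_b = np.concatenate([np.zeros(delay), noise[:-delay]])  # delayed copy at B

# The cross-correlation of the two noise records peaks at the inter-receiver
# traveltime: the virtual-source arrival retrieved by passive SI.
corr = np.correlate(rec_b, rec_a, mode="full")
lags = np.arange(-len(rec_a) + 1, len(rec_a))
print(lags[np.argmax(corr)])
```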
Database Deposit Service through JOIS : JAFIC File on Food Industry and Osaka Urban Engineering File
NASA Astrophysics Data System (ADS)
Kataoka, Akihiro
JICST has launched a database deposit service for high-quality small- and medium-sized databases that lack dissemination networks of their own. The JAFIC File on the food industry, produced by the Japan Food Industry Center, and the Osaka Urban Engineering File, produced by Osaka City, have been available through JOIS since March 2, 1987. This paper outlines these databases, focusing on the items they cover and how they are retrieved through JOIS.
Yu, Sarah S; Johnson, Jeffrey D; Rugg, Michael D
2012-06-01
It has been proposed that the hippocampus selectively supports retrieval of contextual associations, but an alternative view holds that the hippocampus supports strong memories regardless of whether they contain contextual information. We employed a memory test that combined the 'Remember/Know' and source memory procedures, which allowed test items to be segregated both by memory strength (recognition accuracy) and, separately, by the quality of the contextual information that could be retrieved (indexed by the accuracy/confidence of a source memory judgment). As measured by fMRI, retrieval-related hippocampal activity tracked the quality of retrieved contextual information and not memory strength. These findings are consistent with the proposal that the hippocampus supports contextual recollection rather than recognition memory more generally. Copyright © 2011 Wiley Periodicals, Inc.
A data storage and retrieval model for Louisiana traffic operations data : technical summary.
DOT National Transportation Integrated Search
1996-08-01
The overall goal of this research study was to develop a prototype computer-based indexing model for traffic operation data in DOTD. The methodology included: 1) extraction of state road network, 2) development of geographic reference model, 3) engin...
A data storage and retrieval model for Louisiana traffic operations data : final report.
DOT National Transportation Integrated Search
1995-09-01
The type and amount of data managed by the Louisiana Department of Transportation and Development (DOTD) are huge. In many cases, these data are used to perform traffic engineering studies and highway safety analyses, among others. At the present tim...
Peeling the Onion: Okapi System Architecture and Software Design Issues.
ERIC Educational Resources Information Center
Jones, S.; And Others
1997-01-01
Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)
Kodak Optical Disk and Microfilm Technologies Carve Niches in Specific Applications.
ERIC Educational Resources Information Center
Gallenberger, John; Batterton, John
1989-01-01
Describes the Eastman Kodak Company's microfilm and optical disk technologies and their applications. Topics discussed include WORM technology; retrieval needs and cost effective archival storage needs; engineering applications; jukeboxes; optical storage options; systems for use with mainframes and microcomputers; and possible future…
Calculation of the Actual Cost of Engine Maintenance
2003-03-01
Cost Estimating Integrated Tools ( ACEIT ) helps analysts store, retrieve, and analyze data; build cost models; analyze risk; time phase budgets; and...Tools ( ACEIT ).” n. pag. http://www.aceit.com/ 21 February 2003. • USAMC Logistics Support Activity (LOGSA). “Cost Analysis Strategy Assessment
Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Gur, Tamer; Cowley, Andrew; Li, Weizhong; Uludag, Mahmut; Pundir, Sangya; Cham, Jennifer A; McWilliam, Hamish; Lopez, Rodrigo
2015-07-01
The European Bioinformatics Institute (EMBL-EBI; https://www.ebi.ac.uk) provides free and unrestricted access to data across all major areas of biology and biomedicine. Searching and extracting knowledge across these domains requires a fast and scalable solution that addresses the requirements of domain experts as well as casual users. We present the EBI Search engine, referred to here as 'EBI Search', an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. API integration provides access to analytical tools, allowing users to further investigate the results of their search. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types including sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, together with relevant life science literature. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
NELS 2.0 - A general system for enterprise wide information management
NASA Technical Reports Server (NTRS)
Smith, Stephanie L.
1993-01-01
NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full-text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for development of NELS applications. The data manager is an interface between the session manager and the structured data system. The center of the structured data system is the Wide Area Information Server. This architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.
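Ranked natural-language retrieval of the kind NELS performs can be sketched with generic TF-IDF scoring; this is an assumption for illustration, since the abstract does not specify NELS's actual ranking formula, and the documents below are invented.

```python
import math

def rank(query, docs):
    """Rank document names by a simple TF-IDF score against the query."""
    n = len(docs)
    terms = query.lower().split()

    def idf(t):
        # smoothed inverse document frequency
        df = sum(1 for text in docs.values() if t in text.lower().split())
        return math.log((n + 1) / (df + 1)) + 1

    scores = {}
    for name, text in docs.items():
        words = text.lower().split()
        scores[name] = sum(words.count(t) / len(words) * idf(t) for t in terms)
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "spec.txt": "engine design spec for the retrieval engine core",
    "notes.txt": "meeting notes about lunch",
}
print(rank("retrieval engine", docs))
```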
ERISTAR: Earth Resources Information Storage, Transformation, Analysis, and Retrieval
NASA Technical Reports Server (NTRS)
1972-01-01
The National Aeronautics and Space Administration (NASA) and the American Society for Engineering Education (ASEE) have sponsored faculty fellowship programs in systems engineering design for the past several years. During the summer of 1972 four such programs were conducted by NASA, with Auburn University cooperating with Marshall Space Flight Center (MSFC). The subject for the Auburn-MSFC design group was ERISTAR, an acronym for Earth Resources Information Storage, Transformation, Analysis and Retrieval, which represents an earth resources information management network of state information centers administered by the respective states and linked to federally administered regional centers and a national center. The considerations for serving the users and the considerations that must be given to processing data from a variety of sources are described. The combination of these elements into a national network is discussed and an implementation plan is proposed for a prototype state information center. The compatibility of the proposed plan with the Department of Interior plan, RALI, is indicated.
TRENDS IN ENGINEERING GEOLOGIC AND RELATED MAPPING.
Varnes, David J.; Keaton, Jeffrey R.
1983-01-01
Progress made during the period 1972-1982 in producing medium- and small-scale engineering geologic maps with a variety of content is reviewed. Improved methods to obtain and present information are evolving. Standards concerning text and map content, soil and rock classification, and map symbols have been proposed. Application of geomorphological techniques in terrain evaluation has increased, as has the use of aerial photography and other remote sensing. Computers are being used to store, analyze, retrieve, and print both text and map information. Development of offshore resources, especially petroleum, has led to marked improvement and growth in marine engineering geology and geotechnology. Coordinated planning for societal needs has required broader scope and increased complexity of both engineering geologic and environmental geologic studies.
16. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR ...
16. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR 40" BLOOMING MILL. THIS VIEW IS TAKEN FROM THE HIGH-PRESSURE SIDE OF THE ENGINE SHOWING THE SERVICE PLATFORM - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
13. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR ...
13. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR 40" BLOOMING MILL. THIS VIEW HIGHLIGHTS THE CRANK AND 24' DIAMETER FLYWHEEL. - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among them: what is the relative visibility of medical Internet catalogs through search engines? This study addresses the issue by measuring and comparing the visibility of six major MeSH-indexed health catalogs through four search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; according to our measures at the time the queries were sent, for most of these search engines the most visible catalog for English MeSH terms was CliniWeb, and the most visible one for French MeSH terms was CISMeF.
Potential of Higher Moments of the Radar Doppler Spectrum for Studying Ice Clouds
NASA Astrophysics Data System (ADS)
Loehnert, U.; Maahn, M.
2015-12-01
More observations of ice clouds are required to fill gaps in the understanding of microphysical properties and processes. However, in situ observations by aircraft are costly and cannot provide the long-term observations required for a deeper understanding of the processes. Ground-based remote sensing has the potential to fill this gap, but its observations do not contain sufficient information to unambiguously constrain ice cloud properties, which leads to high uncertainties. For vertically pointing cloud radars, usually only reflectivity and mean Doppler velocity are used for retrievals; some studies have also proposed using the Doppler spectrum width. In this study, it is investigated whether additional information can be obtained by also exploiting higher moments of the Doppler spectrum, such as skewness and kurtosis, together with the slope of the Doppler peak. For this, observations of pure ice clouds from the Indirect and Semi-Direct Aerosol Campaign (ISDAC) in Alaska in 2008 are analyzed. Using the ISDAC data set, an Optimal Estimation based retrieval is set up based on synthetic and real radar observations. The passive and active microwave radiative transfer model (PAMTRA) is used as a forward model, together with the Self-Similar Rayleigh-Gans approximation for estimating the scattering properties. The state vector of the retrieval consists of the parameters required to simulate the radar Doppler spectrum and describes particle mass, cross-sectional area, particle size distribution, and kinematic conditions such as turbulence and vertical air motion. Using the retrieval, the information content (degrees of freedom for signal) that higher moments and slopes can contribute to an ice cloud retrieval is quantified. The impact of multiple frequencies, radar sensitivity and radar calibration is studied.
For example, it is found that a single-frequency measurement using all moments and slopes already contains more information than a dual-frequency measurement using only reflectivity and mean Doppler velocity. Finally, the errors and uncertainties of the retrieved ice cloud parameters are investigated for the various retrieval configurations.
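The spectral moments in question can be computed directly from a discretized Doppler spectrum. The Gaussian test spectrum below is synthetic, not ISDAC data; for a Gaussian, the higher moments take their textbook values (skewness near 0, kurtosis near 3), which makes the computation easy to check.

```python
import numpy as np

def doppler_moments(v, s):
    """Moments of a Doppler spectrum s sampled on a uniform velocity grid v."""
    dv = v[1] - v[0]
    z = s.sum() * dv                                  # 0th moment (power)
    mean = (v * s).sum() * dv / z                     # mean Doppler velocity
    var = ((v - mean) ** 2 * s).sum() * dv / z
    width = np.sqrt(var)                              # spectrum width
    skew = ((v - mean) ** 3 * s).sum() * dv / (z * width ** 3)
    kurt = ((v - mean) ** 4 * s).sum() * dv / (z * width ** 4)
    return z, mean, width, skew, kurt

v = np.linspace(-5.0, 5.0, 501)                       # velocity bins (m/s)
s = np.exp(-0.5 * ((v + 1.0) / 0.8) ** 2)             # Gaussian peak at -1 m/s
z, mean, width, skew, kurt = doppler_moments(v, s)
print(round(mean, 3), round(width, 3), round(skew, 3), round(kurt, 3))
```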
Lunt, M. F.; Rigby, M. L.; Ganesan, A.; Manning, A.; O'Doherty, S.; Prinn, R. G.; Saito, T.; Harth, C. M.; Muhle, J.; Weiss, R. F.; Salameh, P.; Arnold, T.; Yokouchi, Y.; Krummel, P. B.; Steele, P.; Fraser, P. J.; Li, S.; Park, S.; Kim, J.; Reimann, S.; Vollmer, M. K.; Lunder, C. R.; Hermansen, O.; Schmidbauer, N.; Young, D.; Simmonds, P. G.
2014-12-01
PROGRESS WITH K BASINS SLUDGE RETRIEVAL STABILIZATION & PACKAGING AT THE HANFORD NUCLEAR SITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
KNOLLMEYER, P.M.; PHILLIPS, C; TOWNSON, P.S.
This paper shows how Fluor Hanford and BNG America have combined nuclear plant skills from the U.S. and the U.K. to devise methods to retrieve and treat the sludge that has accumulated in the K Basins at the Hanford Site over many years. Retrieving the sludge is the final stage in removing fuel and sludge from the basins so that they can be decontaminated and decommissioned, removing the threat of contamination of the Columbia River. A description is given of sludge retrieval using vacuum lances and specially developed nozzles and pumps into Consolidation Containers within the basins. The special attention that had to be paid to heat generation and potential criticality issues with the irradiated uranium-containing sludge is described. The processes developed to re-mobilize the sludge from the Consolidation Containers and pump it through flexible and transportable hose-in-hose piping to the treatment facility are explained, with particular note made of dealing with the abrasive nature of the sludge. The treatment facility, housed in an existing Hanford building, is described, and the uranium-corrosion and grout packaging processes explained. The uranium corrosion process is a robust, tempered process well suited to a range of differing sludge compositions. Optimization and simplification of the original sludge corrosion process design is described, and the use of transportable and reusable equipment is indicated. The processes and techniques described in the paper are shown to have wide applicability to nuclear cleanup.
NASA Technical Reports Server (NTRS)
Kumar, S. V.; Peters-Lidard, C. D.; Santanello, J. A.; Reichle, R. H.; Draper, C. S.; Koster, R. D.; Nearing, G.; Jasinski, M. F.
2015-01-01
Earth's land surface is characterized by tremendous natural heterogeneity and human-engineered modifications, both of which are challenging to represent in land surface models. Satellite remote sensing is often the most practical and effective method to observe the land surface over large geographical areas. Agricultural irrigation is an important human-induced modification to natural land surface processes, as it is pervasive across the world and because of its significant influence on the regional and global water budgets. In this article, irrigation is used as an example of a human-engineered, often unmodeled land surface process, and the utility of satellite soil moisture retrievals over irrigated areas in the continental US is examined. Such retrievals are based on passive or active microwave observations from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Soil Moisture Ocean Salinity (SMOS) mission, WindSat and the Advanced Scatterometer (ASCAT). The analysis suggests that the skill of these retrievals for representing irrigation effects is mixed, with ASCAT-based products somewhat more skillful than SMOS and AMSR2 products. The article then examines the suitability of typical bias correction strategies in current land data assimilation systems when unmodeled processes dominate the bias between the model and the observations. Using a suite of synthetic experiments that includes bias correction strategies such as quantile mapping and trained forward modeling, it is demonstrated that the bias correction practices lead to the exclusion of the signals from unmodeled processes, if these processes are the major source of the biases. It is further shown that new methods are needed to preserve the observational information about unmodeled processes during data assimilation.
Tozzi, Alberto Eugenio; Buonuomo, Paola Sabrina; Ciofi degli Atti, Marta Luisa; Carloni, Emanuela; Meloni, Marco; Gamba, Fiorenza
2010-01-01
Information available on the Internet about immunizations may influence parents' perception of human papillomavirus (HPV) immunization and their attitude toward vaccinating their daughters. We hypothesized that the quality of information on HPV available on the Internet may vary with language and with parents' level of knowledge. To this end we compared the quality of a sample of Web pages in Italian with a sample of Web pages in English. Five reviewers assessed the quality of Web pages retrieved with popular search engines, using criteria adapted from the Good Information Practice Essential Criteria for Vaccine Safety Web Sites recommended by the World Health Organization. Quality was assessed in the domains of accessibility, credibility, content, and design, and scores in these domains were compared through nonparametric statistical tests. We retrieved and reviewed 74 Web sites in Italian and 117 in English. The largest share of retrieved Web pages (33.5%) came from private agencies. Median scores were higher for Web pages in English than for those in Italian in the domains of accessibility (p < .01), credibility (p < .01), and content (p < .01). The highest credibility and content scores were those of Web pages from governmental agencies or universities. Accessibility scores were positively associated with content scores (p < .01) and with credibility scores (p < .01). A total of 16.2% of Web pages in Italian opposed HPV immunization, compared with 6.0% of those in English (p < .05). The quality of information and the number of Web pages opposing HPV immunization may vary with the Web site language. High-quality Web pages on HPV, especially from public health agencies and universities, should be easily accessible and retrievable with common Web search engines. Copyright 2010 Society for Adolescent Medicine. Published by Elsevier Inc. All rights reserved.
Retrieval and classification of food images.
Farinella, Giovanni Maria; Allegra, Dario; Moltisanti, Marco; Stanco, Filippo; Battiato, Sebastiano
2016-10-01
Automatic food understanding from images is an interesting challenge with applications in different domains. In particular, food intake monitoring is becoming more and more important because of the key role it plays in health and market economies. In this paper, we address the study of food image processing from the perspective of Computer Vision. As a first contribution we present a survey of studies in the context of food image processing, from the early attempts to the current state-of-the-art methods. Since retrieval and classification engines able to work on food images are required to build automatic systems for diet monitoring (e.g., to be embedded in wearable cameras), we focus our attention on the representation of food images, because it plays a fundamental role in the understanding engines. Food retrieval and classification is a challenging task since food presents high variability and intrinsic deformability. To properly study the peculiarities of different image representations we propose the UNICT-FD1200 dataset. It is composed of 4754 food images of 1200 distinct dishes acquired during real meals. Each food plate is acquired multiple times and the overall dataset presents both geometric and photometric variability. The images of the dataset have been manually labeled into 8 categories: Appetizer, Main Course, Second Course, Single Course, Side Dish, Dessert, Breakfast, Fruit. We have performed tests employing different state-of-the-art representations to assess their performance on the UNICT-FD1200 dataset. Finally, we propose a new representation based on the perceptual concept of Anti-Textons, which encodes spatial information between Textons and outperforms other representations in the context of food retrieval and classification. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage
NASA Astrophysics Data System (ADS)
Cepowski, Tomasz
2017-06-01
The paper presents mathematical relationships for forecasting the main engine power of new container ships, based on data for vessels built in 2005-2015. The presented approximations allow the engine power to be estimated from the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application in estimating the container ship engine power needed in preliminary parametric design of the ship. The results show that multiple linear regression predicts the main engine power of a container ship more accurately than simple linear regression.
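The comparison between the two regression variants can be reproduced in miniature. The ship figures below are invented for the sketch, not the 2005-2015 fleet data used in the paper; the point is only that adding length between perpendiculars (Lpp) as a second regressor cannot worsen, and typically improves, the least-squares fit over capacity alone.

```python
import numpy as np

# columns: TEU capacity, length between perpendiculars Lpp (m), engine power (kW)
# (illustrative values, not the paper's data set)
ships = np.array([
    [1200.0, 150.0, 12000.0],
    [2500.0, 195.0, 21500.0],
    [4800.0, 260.0, 38500.0],
    [8000.0, 305.0, 57000.0],
    [13000.0, 350.0, 72000.0],
])
teu, lpp, power = ships[:, 0], ships[:, 1], ships[:, 2]

def fit(X, y):
    """Least-squares fit with an intercept; returns coefficients and residual RMS."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rms = np.sqrt(np.mean((A @ coef - y) ** 2))
    return coef, rms

_, rms_simple = fit(teu.reshape(-1, 1), power)             # power ~ TEU
_, rms_multi = fit(np.column_stack([teu, lpp]), power)     # power ~ TEU + Lpp
print(rms_multi <= rms_simple)
```

Because the multivariate model nests the simple one, its least-squares residual is never larger; how much it helps in practice depends on the data, which is what the paper quantifies.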
Bat-Inspired Algorithm Based Query Expansion for Medical Web Information Retrieval.
Khennak, Ilyes; Drias, Habiba
2017-02-01
With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people from many backgrounds now use Web search engines to acquire medical information, including information about a specific disease, a medical treatment or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, leaving their search queries imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs has resulted in the failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the online medical information database, show that the proposed approach is more effective and efficient than the baseline.
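The selection step — choosing the best expanded query among candidates — can be sketched with a plain random search standing in for the Bat Algorithm (the paper's metaheuristic is more elaborate, with bat positions, velocities and loudness). The effectiveness score here is a hypothetical precision-times-recall proxy against a known set of relevant terms, not a MEDLINE evaluation; all the terms below are made up.

```python
import random

def score(query, relevant):
    """Hypothetical effectiveness proxy: precision x recall vs. known relevant terms."""
    inter = len(set(query) & relevant)
    return (inter / len(query)) * (inter / len(relevant))

def expand(base_query, expansion_terms, relevant, trials=200, seed=1):
    """Keep the best-scoring expanded query found by random search."""
    rng = random.Random(seed)
    best = list(base_query)
    best_score = score(best, relevant)
    for _ in range(trials):
        # append a random subset of expansion terms to the base query
        extra = rng.sample(expansion_terms, rng.randint(1, len(expansion_terms)))
        candidate = list(base_query) + extra
        s = score(candidate, relevant)
        if s > best_score:
            best, best_score = candidate, s
    return best

base = ["heart", "pain"]
terms = ["myocardial", "infarction", "angina", "football"]
relevant = {"heart", "pain", "myocardial", "infarction", "angina"}
best = expand(base, terms, relevant)
print(best)
```

The same objective could be handed to any metaheuristic; the Bat Algorithm's contribution, per the abstract, is finding good candidates at low computational cost.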
Indexing and retrieving DICOM data in disperse and unstructured archives.
Costa, Carlos; Freitas, Filipe; Pereira, Marco; Silva, Augusto; Oliveira, José L
2009-01-01
This paper proposes an indexing and retrieval solution that gathers information from distributed DICOM documents, allowing searches and access to the virtual data repository through a Google-like process. Medical imaging modalities are becoming more powerful and less expensive, resulting in a proliferation of equipment acquisition by imaging centers, including small ones. With this dispersion of data, it is not easy to take advantage of all the information that can be retrieved from these studies. Furthermore, many of these small centers do not have requirements large enough to justify the acquisition of a traditional PACS. We propose a peer-to-peer PACS platform to index and query DICOM files over a set of distributed repositories that are logically viewed as a single federated unit. The solution is based on a public-domain document-indexing engine and extends traditional PACS query and retrieval mechanisms. This proposal deals well with complex searching requirements, from a single desktop environment to distributed scenarios. The solution's performance and robustness were demonstrated in trials. These characteristics make the presented PACS platform particularly valuable for small institutions, including educational and research groups.
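The document-indexing idea behind such a platform can be sketched as an inverted index: DICOM metadata fields are flattened to text terms, so a Google-like keyword search can locate studies across repositories. The attribute names and study UIDs below are illustrative, not the platform's actual schema.

```python
from collections import defaultdict

index = defaultdict(set)          # term -> set of study identifiers

def index_study(study_id, metadata):
    """Flatten DICOM-style metadata values into lowercase terms."""
    for value in metadata.values():
        for term in str(value).lower().split():
            index[term].add(study_id)

def search(*terms):
    """Conjunctive (AND) keyword search over the indexed studies."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

index_study("1.2.840.113619.2.1", {"Modality": "CT", "BodyPartExamined": "CHEST"})
index_study("1.2.840.113619.2.2", {"Modality": "MR", "BodyPartExamined": "CHEST"})
print(sorted(search("ct", "chest")))
```

In a peer-to-peer deployment, each repository would maintain such an index locally and the federation layer would merge results from the peers.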
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
USDA-ARS?s Scientific Manuscript database
In the midst of this genomics era, major plant genome databases are collecting massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While basic browsing and sear...
An Analysis of Web Image Queries for Search.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2003-01-01
Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)
Phillips with stowage bags in MPLM
2005-07-30
ISS011-E-11331 (30 July 2005) --- Astronaut John L. Phillips, Expedition 11 NASA space station science officer and flight engineer, retrieves supplies from the Raffaello Multi-Purpose Logistics Module (MPLM), which was brought to Earth-orbit by the seven-member STS-114 crew of the space shuttle Discovery.
Video-assisted segmentation of speech and audio track
NASA Astrophysics Data System (ADS)
Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.
1999-08-01
Video database research is commonly concerned with the storage and retrieval of visual information, involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to partition the multimedia material into semantically significant segments.
Nadkarni, P M
1997-08-01
Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
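The query-to-SQL translation CL performs can be illustrated with a minimal sketch. This is not CL's actual code: SQLite stands in for the Sybase server, and the `concepts` table and `name` column are hypothetical. A conjunction of wildcard terms becomes a single parameterized SQL statement, so the pattern matching is pushed down to the server's optimizer.

```python
import sqlite3

def term_to_sql(term: str) -> str:
    """Translate a query term with '*' wildcards into a SQL LIKE pattern."""
    return term.replace("*", "%")

def boolean_query_to_sql(terms_and: list) -> tuple:
    """Translate a conjunction of wildcard terms into one SQL statement.

    Every term must match the concept string; the Boolean AND is expressed
    directly in the WHERE clause so the server evaluates it.
    """
    where = " AND ".join("name LIKE ?" for _ in terms_and)
    params = [term_to_sql(t) for t in terms_and]
    return f"SELECT name FROM concepts WHERE {where}", params

# Demo against an in-memory table standing in for a Metathesaurus subset.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE concepts (name TEXT)")
con.executemany("INSERT INTO concepts VALUES (?)",
                [("myocardial infarction",), ("cerebral infarction",), ("migraine",)])
sql, params = boolean_query_to_sql(["*infarction*", "myocard*"])
print([row[0] for row in con.execute(sql, params)])  # → ['myocardial infarction']
```

A full grammar would also handle OR and parenthesized subexpressions, recursively emitting nested WHERE clauses in the same style.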
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, W.; von Laven, G.M.; Parker, T.
1993-09-01
The Bibliographic Retrieval System (BARS) is a database management system specially designed to retrieve bibliographic references. Two databases are available: (i) the Sandia Shock Compression (SSC) database, which contains over 5700 references to the literature on stress waves in solids and their applications, and (ii) the Shock Physics Index (SPHINX), which includes over 8000 further references on stress waves in solids, material properties at intermediate and low rates, ballistic and hypervelocity impact, and explosive or shock fabrication methods. There is some overlap in the information in the two databases.
The Comprehensive Microbial Resource.
Peterson, J D; Umayam, L A; Dickinson, T; Hickey, E K; White, O
2001-01-01
One challenge presented by large-scale genome sequencing efforts is effective display of uniform information to the scientific community. The Comprehensive Microbial Resource (CMR) contains robust annotation of all complete microbial genomes and allows for a wide variety of data retrievals. The bacterial information has been placed on the Web at http://www.tigr.org/CMR for retrieval using standard web browsing technology. Retrievals can be based on protein properties such as molecular weight or hydrophobicity, GC-content, functional role assignments and taxonomy. The CMR also has special web-based tools to allow data mining using pre-run homology searches, whole genome dot-plots, batch downloading and traversal across genomes using a variety of datatypes.
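Property-based retrieval of the kind CMR offers can be illustrated with a small sketch. The genome strings and the 0.5 threshold below are invented for the example; this is not CMR code.

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Hypothetical genomes; filter by GC-content, analogous to a
# property-based retrieval in the CMR.
genomes = {"org_a": "ATGCGC", "org_b": "ATATAT"}
high_gc = [name for name, seq in genomes.items() if gc_content(seq) > 0.5]
print(high_gc)  # → ['org_a']
```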
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
Cohoon, Kevin P; McBride, Joseph; Friese, Jeremy L; McPhail, Ian R
2015-10-01
To evaluate the success rate of retrievable inferior vena cava (IVC) filter removal in a tertiary care practice. Retrievable IVC filters became readily available in the United States following Food and Drug Administration approval in 2003, and their use has increased dramatically. They represent an attractive option for patients with contraindications to anticoagulation who may only need short-term protection against pulmonary embolism. All patients who had undergone placement of a retrievable IVC filter at Mayo Clinic between 2003 and 2005 were retrospectively reviewed to evaluate our initial experience with retrievable IVC filters at a large tertiary care center. During a three-year period, Mayo Clinic, Rochester, MN placed 892 IVC filters, of which 460 were retrievable. Of the 460 retrievable filters placed (249 Günther Tulip®, 207 Recovery®, and 4 OptEase®), retrieval was attempted in 223 (48.5%). Of 223 initial attempts, 196 (87.9%) were initially successful and 27 (12.1%) were unsuccessful. Of the 27 unsuccessful initial retrieval attempts, 23 (85.2%) were because of the presence of significant thrombus within the filter and 4 (14.8%) were because of tilting and strut perforation. Of the 23 filters containing significant thrombus, 9 (39.1%) were later retrieved after a period of anticoagulation and resolution of the thrombus. Retrievable IVC filters can be removed with a high degree of success. Approximately one in ten retrievable IVC filter removal attempts may fail initially, usually because of significant thrombus within the filter. This does not preclude possible removal at a later date. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Callahan, P. X.; Schatte, C.; Grindeland, R. E.; Lencki, W. A.; Funk, G. A.
1986-01-01
A hardware description and experimental results are reported from the initial STS flight carrying two Research Animal Holding Facility (RAHF) units. The flight was mainly intended for engineering check-out of the RAHF design. The system development and prelaunch preparations are briefly summarized, including the provision of retrieval teams at alternate landing sites and extensive rehearsals to ensure timely data analysis. The flight revealed a problem with the containment of particulates from the RAHFs and the provision of adequate water for the monkeys. On-board films showed that one of the monkeys experienced motion sickness, from which he recovered after 5 days in space. Necropsy of the subject rats documented suppressed interferon production, loss of muscle mass, an up to 13 percent loss in bone mass (after a one-week flight), and a 20 percent decrease in growth-inducing hormone. The volume of data collected is thought to exceed the combined data gathered on all previous U.S. space missions.
NASA Astrophysics Data System (ADS)
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts is proposed to retrieve phase maps (PMs) of smooth and low-roughness engineering surfaces. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, including calculation of the unknown phase shifts and retrieval of a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed with a linear low-pass filter. Computer simulation, together with experiments retrieving a gauge block surface area and its areal surface roughness and waviness, confirmed the reliability of the proposed three-step method.
A user-friendly tool for medical-related patent retrieval.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Health-related information retrieval is complicated by the variety of nomenclatures available for naming entities, since different communities of users will name the same entity in different ways. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules such as chemical query, normalization, and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced fairly contrasted results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts, although this result should be balanced against the limited evaluation sample. We can also assume that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.
Machine Translation-Supported Cross-Language Information Retrieval for a Consumer Health Resource
Rosemblat, Graciela; Gemoets, Darren; Browne, Allen C.; Tse, Tony
2003-01-01
The U.S. National Institutes of Health, through its National Library of Medicine, developed ClinicalTrials.gov to provide the public with easy access to information on clinical trials on a wide range of conditions or diseases. Only English language information retrieval is currently supported. Given the growing number of Spanish speakers in the U.S. and their increasing use of the Web, we anticipate a significant increase in Spanish-speaking users. This study compares the effectiveness of two common cross-language information retrieval methods using machine translation, query translation versus document translation, using a subset of genuine user queries from ClinicalTrials.gov. Preliminary results obtained with the ClinicalTrials.gov search engine show that in our environment, query translation is statistically significantly better than document translation. We discuss possible reasons for this result and we conclude with suggestions for future work. PMID:14728236
Andrenucci, Andrea
2016-01-01
Few studies have been performed within cross-language information retrieval (CLIR) in the field of psychology and psychotherapy. The aim of this paper is to analyze and assess the quality of available query translation methods for CLIR on a health portal for psychology. A test base of 100 user queries, 50 Multi Word Units (WUs) and 50 Single WUs, was used. Swedish was the source language and English the target language. Query translation methods based on machine translation (MT) and dictionary look-up were utilized in order to submit query translations to two search engines: Google Site Search and Quick Ask. Standard IR evaluation measures and a qualitative analysis were utilized to assess the results. The lexicon extracted with word alignment of the portal's parallel corpus provided better statistical results among dictionary look-ups. Google Translate provided more linguistically correct translations overall and also delivered better retrieval results in MT.
Chen, Xi; Chen, Huajun; Bi, Xuan; Gu, Peiqin; Chen, Jiaoyan; Wu, Zhaohui
2014-01-01
Understanding the functional mechanisms of the complex biological system as a whole is drawing more and more attention in global health care management. Traditional Chinese Medicine (TCM), essentially different from Western Medicine (WM), is gaining increasing attention due to its emphasis on individual wellness and natural herbal medicine, which satisfies the goal of integrative medicine. However, with the explosive growth of biomedical data on the Web, biomedical researchers are now confronted with the problem of large-scale data analysis and data query. Besides that, biomedical data also has a wide coverage which usually comes from multiple heterogeneous data sources and has different taxonomies, making it hard to integrate and query the big biomedical data. Embedded with domain knowledge from different disciplines all regarding human biological systems, the heterogeneous data repositories are implicitly connected by human expert knowledge. Traditional search engines cannot provide accurate and comprehensive search results for the semantically associated knowledge since they only support keywords-based searches. In this paper, we present BioTCM-SE, a semantic search engine for the information retrieval of modern biology and TCM, which provides biologists with a comprehensive and accurate associated knowledge query platform to greatly facilitate the implicit knowledge discovery between WM and TCM. PMID:24772189
Using Engineering Cameras on Mars Landers and Rovers to Retrieve Atmospheric Dust Loading
NASA Astrophysics Data System (ADS)
Wolfe, C. A.; Lemmon, M. T.
2014-12-01
Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera that included a neutral density filter. Direct images of the Sun provide the ability to measure extinction by dust and ice in the atmosphere. These observations have been used to characterize dust storms, to provide ground truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability of determining atmospheric opacity from sky images taken with cameras that have not been designed for solar imaging, such as lander and rover engineering cameras. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validating orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on archival engineering camera data.
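Extinction retrieval from a direct-Sun measurement typically rests on the Beer-Lambert law. The sketch below shows that relation under a simple plane-parallel airmass assumption; the function name and the airmass model are illustrative, not the authors' retrieval pipeline.

```python
import math

def optical_depth(measured_flux: float, top_of_atmosphere_flux: float,
                  solar_elevation_deg: float) -> float:
    """Column optical depth from a direct-Sun measurement via Beer-Lambert:
    I = I0 * exp(-tau * m), with airmass m ≈ 1/sin(elevation) for a
    plane-parallel atmosphere (a simplification real retrievals refine)."""
    airmass = 1.0 / math.sin(math.radians(solar_elevation_deg))
    return -math.log(measured_flux / top_of_atmosphere_flux) / airmass

# Sun at 30 deg elevation (airmass 2), half the light extinguished in transit:
tau = optical_depth(0.5, 1.0, 30.0)
print(round(tau, 3))  # → 0.347
```

Retrieving opacity from *sky* images, as proposed here, requires modeling scattered rather than direct light, so the relation above is only the direct-imaging baseline the engineering-camera method must replace.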
Implementation of a thesaurus in an electronic photograph imaging system
NASA Astrophysics Data System (ADS)
Partlow, Denise
1995-11-01
A photograph imaging system presents a unique set of requirements for indexing and retrieving images, unlike a standard imaging system for written documents. This paper presents the requirements, technical design, and development results for a hierarchical ANSI standard thesaurus embedded into a photograph archival system. The thesaurus design incorporates storage reduction techniques, permits fast searches, and contains flexible indexing methods. It can be extended to many applications other than the retrieval of photographs. When photographic images are indexed into an electronic system, they are subject to a variety of indexing problems based on what the indexer `sees.' For instance, the indexer may categorize an image as a boat when others might refer to it as a ship, sailboat, or raft. The thesaurus will allow a user to locate images containing any synonym for boat, regardless of how the image was actually indexed. In addition to indexing problems, photos may need to be retrieved based on a broad category, for instance, flowers. The thesaurus allows a search for `flowers' to locate all images containing a rose, hibiscus, or daisy, yet still allow a specific search for an image containing only a rose. The technical design and method of implementation for such a thesaurus is presented. The thesaurus is implemented using an SQL relational data base management system that supports blobs, binary large objects. The design incorporates unique compression methods for storing the thesaurus words. Words are indexed to photographs using the compressed word and allow for very rapid searches, eliminating lengthy string matches.
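The synonym and broader/narrower-term expansion described above can be sketched in memory. This toy uses Python dicts rather than the paper's SQL/blob implementation, and the vocabulary entries are the examples from the abstract.

```python
# Hypothetical miniature thesaurus: each term maps to its narrower terms,
# and synonym rings group interchangeable words.
narrower = {
    "watercraft": ["boat"],
    "boat": ["sailboat", "raft"],
    "flowers": ["rose", "hibiscus", "daisy"],
}
synonyms = {"boat": {"boat", "ship"}, "ship": {"boat", "ship"}}

def expand(term: str) -> set:
    """All index terms a search for `term` should match: the term itself,
    its synonyms, and (recursively) every narrower term."""
    found = set(synonyms.get(term, {term}))
    frontier = list(found)
    while frontier:
        t = frontier.pop()
        for n in narrower.get(t, []):
            if n not in found:
                found.add(n)
                frontier.append(n)
    return found

print(sorted(expand("flowers")))  # → ['daisy', 'flowers', 'hibiscus', 'rose']
```

A specific search for "rose" still expands only to {"rose"}, matching the requirement that broad and narrow queries coexist.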
Data collection and preparation of authoritative reviews on space food and nutrition research
NASA Technical Reports Server (NTRS)
1972-01-01
The collection and classification of information for a manually operated information retrieval system on the subject of space food and nutrition research are described. The system as it currently exists is designed for retrieval of documents, either in hard copy or on microfiche, from the technical files of the MSC Food and Nutrition Section by accession number, author, and/or subject. The system could readily be extended to include retrieval by affiliation, report and contract number, and sponsoring agency should the need arise. It can also be easily converted to computerized retrieval. At present the information retrieval system contains nearly 3000 documents which consist of technical papers, contractors' reports, and reprints obtained from the food and nutrition files at MSC, Technical Library, the library at the Texas Medical Center in Houston, the BMI Technical Libraries, Dr. E. B. Truitt at MBI, and the OSU Medical Libraries. Additional work was done to compile 18 selected bibliographies on subjects of immediate interest to the MSC Food and Nutrition Section.
Users guide for the Water Resources Division bibliographic retrieval and report generation system
Tamberg, Nora
1983-01-01
The WRDBIB Retrieval and Report-generation system has been developed by applying Multitrieve (CSD 1980, Reston) software to bibliographic data files. The WRDBIB data base includes some 9,000 records containing bibliographic citations and descriptors of WRD reports released for publication during 1968-1982. The data base is resident in the Reston Multics computer and may be accessed by registered Multics users in the field. The WRDBIB Users Guide provides detailed procedures on how to run retrieval programs using WRDBIB library files, and how to prepare custom bibliographic reports and author indexes. Users may search the WRDBIB data base on the following variable fields as described in the Data Dictionary: Authors, organizational source, title, citation, publication year, descriptors, and the WRSIC (accession) number. The Users Guide provides ample examples of program runs illustrating various retrieval and report generation aspects. Appendices include Multics access and file manipulation procedures; a 'Glossary of Selected Terms'; and a complete 'Retrieval Session' with step-by-step outlines. (USGS)
LandEx - Fast, FOSS-Based Application for Query and Retrieval of Land Cover Patterns
NASA Astrophysics Data System (ADS)
Netzel, P.; Stepinski, T.
2012-12-01
The amount of satellite-based spatial data is continuously increasing, making the development of efficient data search tools a priority. The bulk of existing research on searching satellite-gathered data concentrates on images and is based on the concept of Content-Based Image Retrieval (CBIR); however, available solutions are not efficient and robust enough to be put to use as deployable web-based search tools. Here we report on development of a practical, deployable tool that searches classified data, rather than raw images. LandEx (Landscape Explorer) is a GeoWeb-based tool for Content-Based Pattern Retrieval (CBPR) within the National Land Cover Dataset 2006 (NLCD2006). The USGS-developed NLCD2006 is derived from Landsat multispectral images; it covers the entire conterminous U.S. at a resolution of 30 meters/pixel and depicts 16 land cover classes. The size of NLCD2006 is about 10 Gpixels (161,000 x 100,000 pixels). LandEx is a multi-tier GeoWeb application based on Open Source Software. Its main components are GeoExt/OpenLayers (user interface), GeoServer (OGC WMS, WCS and WPS server), and GRASS (calculation engine). LandEx performs search using a query-by-example approach: the user selects a reference scene (exhibiting a chosen pattern of land cover classes) and the tool produces, in real time, a map indicating the degree of similarity between the reference pattern and all local patterns across the U.S. A scene pattern is encapsulated by a 2D histogram of classes and sizes of single-class clumps. Pattern similarity is based on the notion of mutual information. The resultant similarity map can be viewed and navigated in a web browser, or it can be downloaded as a GeoTIFF file for more in-depth analysis. LandEx is available at http://sil.uc.edu
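The histogram comparison can be illustrated with a small sketch. LandEx's exact mutual-information formula is not given in the abstract; as one concrete reading (an assumption, not the tool's implementation), the Jensen-Shannon divergence between the reference and local histograms equals the mutual information between a sample and the source it was drawn from, so 1 - JSD serves as a stand-in similarity.

```python
import math

def mi_similarity(p, q):
    """Similarity between two normalized pattern histograms as 1 - JSD(p, q).

    JSD is the mutual information between a draw and an indicator of which
    histogram (reference vs. local) it came from, giving a value in [0, 1]
    with base-2 logs: 1.0 for identical patterns, 0.0 for disjoint ones.
    """
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 1.0 - (kl(p, m) + kl(q, m)) / 2

# Flattened 2D histograms over (land cover class x clump-size bin):
reference = [0.5, 0.3, 0.2, 0.0]
candidate = [0.5, 0.3, 0.2, 0.0]
print(mi_similarity(reference, candidate))  # → 1.0
```

Evaluating this similarity at every location against the reference histogram yields exactly the kind of U.S.-wide similarity map the abstract describes.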
Wiley, Laura K.; Sivley, R. Michael; Bush, William S.
2013-01-01
Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist PMID:23894185
User's operating procedures. Volume 2: Scout project financial analysis program
NASA Technical Reports Server (NTRS)
Harris, C. G.; Harris, D. K.
1985-01-01
A review is presented of the user's operating procedures for the Scout Project Automatic Data System, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, the second of three, provides the instructions for operating the Scout Project Financial Analysis program for data retrieval and file maintenance via user-friendly menu drivers.
Use of incidentally encoded memory from a single experience in cats.
Takagi, Saho; Tsuzuki, Mana; Chijiiwa, Hitomi; Arahori, Minori; Watanabe, Arii; Saito, Atsuko; Fujita, Kazuo
2017-08-01
We examined whether cats could retrieve and utilize incidentally encoded information from a single past event in a simple food-exploration task previously used for dogs (Fujita et al., 2012). In Experiment 1, cats were led to four open, baited containers and allowed to eat from two of them (Exposure phase). After a 15-min delay during which the cats were absent and all containers were replaced with empty ones, the cats were unexpectedly returned to the room and allowed to explore the containers (Test phase). Although the cats' first choice of container to visit was random, they explored containers from which they had not previously eaten for longer than those from which they did previously eat. In the Exposure phase of Experiment 2, two containers held food, one held a nonedible object, and the fourth was empty. Cats were allowed to eat from one of them. In the post-delay Test phase, the cats first visited the remaining baited-uneaten container significantly more often than chance and they spent more time exploring this container. Because the cats' behavior in the Test phase cannot be explained by association of the container with a pleasant experience (eating), the results suggest that cats retrieved and utilized "what" and "where" information from an incidentally encoded memory from a single experience. Copyright © 2017 Elsevier B.V. All rights reserved.
A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text
ERIC Educational Resources Information Center
Nguyen, Bao-An; Yang, Don-Lin
2012-01-01
An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
The Shared Bibliographic Input Network (SBIN): A Summary of the Experiment.
ERIC Educational Resources Information Center
Cotter, Gladys A.
As part of its mission to provide centralized services for the acquisition, storage, retrieval, and dissemination of scientific and technical information (STI) to support Department of Defense (DoD) research, development, and engineering studies programs, the Defense Technical Information Center (DTIC) sponsors the Shared Bibliographic Input…
2018-04-30
iss055e043245 (April 30, 2018) --- NASA astronaut Ricky Arnold transfers frozen biological samples from science freezers aboard the International Space Station to science freezers inside the SpaceX Dragon resupply ship. The research samples were returned to Earth aboard Dragon for retrieval by SpaceX engineers and analysis by NASA scientists.
Result Merging Strategies for a Current News Metasearcher.
ERIC Educational Resources Information Center
Rasolofo, Yves; Hawking, David; Savoy, Jacques
2003-01-01
Metasearching of online current news services is a potentially useful Web application of distributed information retrieval techniques. Reports experiences in building a metasearcher designed to provide up-to-date searching over a significant number of rapidly changing current news sites, focusing on how to merge results from the search engines at…
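One common baseline for merging results from several engines is per-engine score normalization followed by a global sort. The sketch below uses invented data and is only one of the strategies a metasearcher might compare, not necessarily the scheme this report settles on.

```python
def merge_results(result_lists):
    """Merge ranked (url, score) lists from several engines.

    Min-max normalize each engine's scores to [0, 1] so engines with
    different score scales are comparable, deduplicate by URL keeping the
    best normalized score, then sort the union descending.
    """
    merged = {}
    for results in result_lists:
        scores = [s for _, s in results]
        lo, hi = min(scores), max(scores)
        for url, s in results:
            norm = (s - lo) / (hi - lo) if hi > lo else 1.0
            merged[url] = max(merged.get(url, 0.0), norm)
    return sorted(merged, key=merged.get, reverse=True)

engine_a = [("u1", 12.0), ("u2", 6.0), ("u3", 3.0)]   # raw scores, scale A
engine_b = [("u2", 0.9), ("u4", 0.3)]                 # raw scores, scale B
print(merge_results([engine_a, engine_b]))
```

For current-news metasearch, a real merger would also weigh document freshness, which pure score normalization ignores.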
A Nugget-Based Test Collection Construction Paradigm
ERIC Educational Resources Information Center
Rajput, Shahzad K.
2012-01-01
The problem of building test collections is central to the development of information retrieval systems such as search engines. The primary use of test collections is the evaluation of IR systems. The widely employed "Cranfield paradigm" dictates that the information relevant to a topic be encoded at the level of documents, therefore…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, BE
2003-10-07
The Gunite and Associated Tanks (GAAT) Remediation Project was the first of its kind performed in the United States. Robotics and remotely operated equipment were used to successfully transfer almost 94,000 gal of remote-handled transuranic sludge containing over 81,000 Ci of radioactive contamination from nine large underground storage tanks at the Oak Ridge National Laboratory (ORNL). The sludge was transferred with over 439,000 gal of radioactive waste supernatant and approximately 420,500 gal of fresh water that was used in sluicing operations. The GAATs are located in a high-traffic area of ORNL near a main thoroughfare. A phased and integrated approach to waste retrieval operations was used for the GAAT Remediation Project. The project promoted safety by obtaining experience from low-risk operations in the North Tank Farm before moving to higher-risk operations in the South Tank Farm. This approach allowed project personnel to become familiar with the tanks and waste, as well as the equipment, processes, procedures, and operations required to perform successful waste retrieval. By using an integrated approach to tank waste retrieval and tank waste management, the project was completed years ahead of the original baseline schedule, which resulted in avoiding millions of dollars in associated costs. This report is organized in two volumes. Volume 1 provides information on the various phases of the GAAT Remediation Project. It also describes the different types of equipment and how they were used. The emphasis of Volume 1 is on the description of the tank waste retrieval performance and the lessons learned during the GAAT Remediation Project.
Volume 2 provides the appendixes for the report, which include the following information: (A) Background Information for the Gunite and Associated Tanks Operable Unit; (B) Annotated Bibliography; (C) Comprehensive Listing of the Sample Analysis Data from the GAAT Remediation Project; (D) GAAT Equipment Matrix; and (E) Vendor List for the GAAT Remediation Project. The remediation of the GAATs was completed approximately 5.5 years ahead of schedule and approximately $120,435,000 below the cost estimated in the Remedial Investigation/Feasibility Study for the project. These schedule and cost savings were a direct result of the selection and use of state-of-the-art technologies and the dedication and drive of the engineers, technicians, managers, craft workers, and support personnel that made up the GAAT Remediation Project Team.
Project RAMA: Reconstructing Asteroids Into Mechanical Automata
NASA Technical Reports Server (NTRS)
Dunn, Jason; Fagin, Max; Snyder, Michael; Joyce, Eric
2017-01-01
Many interesting ideas have been conceived for building space-based infrastructure in cislunar space, from O'Neill's space colonies to solar power satellite farms to prospecting retrieved near-Earth asteroids. In all these scenarios, one thing remains fixed: the need for space resources at the outpost. To satisfy this need, O'Neill suggested an electromagnetic railgun to deliver resources from the lunar surface, while NASA's Asteroid Redirect Mission called for a solar electric tug to deliver asteroid materials from interplanetary space. At Made In Space, we propose an entirely new concept, one which is scalable, cost effective, and ensures that the abundant material wealth of the inner solar system becomes readily available to humankind in a nearly automated fashion. We propose the RAMA architecture, which turns asteroids into self-contained spacecraft capable of moving themselves back to cislunar space. The RAMA architecture is just as capable of transporting conventional-sized asteroids on the 10-meter length scale as transporting asteroids 100 meters or larger, making it the most versatile asteroid retrieval architecture in terms of retrieved-mass capability. This report describes the results of the Phase I study funded by the NASA NIAC program for Made In Space to establish the concept feasibility of using space manufacturing to convert asteroids into autonomous, mechanical spacecraft. Project RAMA, Reconstituting Asteroids into Mechanical Automata, is designed to leverage the future advances of additive manufacturing (AM), in-situ resource utilization (ISRU) and in-situ manufacturing (ISM) to realize enormous efficiencies in repeated asteroid redirect missions. A team of engineers at Made In Space performed the study work with consultation from the asteroid mining industry, academia, and NASA.
Previous studies for asteroid retrieval have been constrained to studying only asteroids that are both large enough to be discovered, and small enough to be captured and transported using Earth-launched propulsion technology. Project RAMA is not forced into this constraint. The mission concept studied involved transporting a much larger, approximately 50-meter asteroid to cislunar space. Demonstration of transport of a 50-meter-class asteroid has several ground-breaking advantages. First, the returned material is of an industrial, rather than just scientific, quantity (greater than 10,000 tonnes versus approximately 10s of tonnes). Second, the "useless" material in the asteroid is gathered and expended as part of the asteroid's propulsion system, allowing the returned asteroid to be considerably "purer" than a conventional asteroid retrieval mission. Third, the infrastructure used to convert and return the asteroid is reusable, and capable of continually returning asteroids to cislunar space.
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for and identifying chemical compounds are important processes in drug design and in chemistry research. An efficient search engine requires close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches for representing, storing, and retrieving structures in a database system. In this paper, a general database framework for a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation demonstrates the efficiency and simplicity of the framework.
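The close coupling the abstract describes is often realized as a two-stage search: a cheap, index-friendly prefilter followed by exact structure matching on the surviving candidates. A minimal Python sketch of that pattern follows; the data model and element-count fingerprint are invented for illustration and are not the paper's Oracle schema.

```python
# Two-stage compound search sketch: fingerprint prefilter, then (in a real
# system) exact substructure matching on the survivors. All names and data
# here are hypothetical illustrations.

from collections import Counter

def fingerprint(atoms):
    """Element-count fingerprint of a compound given as a list of element
    symbols, e.g. ['C', 'C', 'O', 'H', 'H']."""
    return Counter(atoms)

def may_contain(compound_fp, query_fp):
    """A compound can contain the query fragment only if it has at least as
    many atoms of every element (a necessary, not sufficient, condition)."""
    return all(compound_fp[e] >= n for e, n in query_fp.items())

# Toy stored 'table' of compounds (name -> atom list).
DB = {
    "ethanol": ['C', 'C', 'O', 'H', 'H', 'H', 'H', 'H', 'H'],
    "methane": ['C', 'H', 'H', 'H', 'H'],
    "water":   ['O', 'H', 'H'],
}

def search(query_atoms):
    """Return names of compounds passing the fingerprint prefilter."""
    qfp = fingerprint(query_atoms)
    return sorted(name for name, atoms in DB.items()
                  if may_contain(fingerprint(atoms), qfp))

print(search(['C', 'O']))  # only ethanol has both a C and an O
```

In a database-backed engine the fingerprint would live in an indexed column so the expensive exact match runs only on the prefiltered rows.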
Science Data System Contribution to Calibrating and Validating SMAP Data Products
NASA Astrophysics Data System (ADS)
Cuddy, D.
2015-12-01
NASA's Soil Moisture Active Passive (SMAP) mission retrieves global surface soil moisture and freeze/thaw state using measurements acquired by a radiometer and a synthetic aperture radar that fly on an Earth orbiting satellite. The SMAP observatory launched from Vandenberg Air Force Base on January 31, 2015 into a near-polar, sun-synchronous orbit. This paper describes the contribution of the SMAP Science Data System (SDS) to the calibration and on-going validation of the radar backscatter and radiometer brightness temperatures. The Science Data System designed, implemented and operated the software that generates data products that contain various geophysical parameters including soil moisture and freeze/thaw states, daily maps of these geophysical parameters, as well as modeled analyses of global soil moisture and carbon flux in Boreal regions. The SDS is a fully automated system that processes the incoming raw data from the instruments, incorporates spacecraft and instrument engineering data, and uses both dynamic and static ancillary products provided by the scientific community. The standard data products appear in Hierarchical Data Format-5 (HDF5) format. These products contain metadata that conform to the ISO 19115 standard. The Alaska Satellite Facility (ASF) hosts and distributes SMAP radar data products. The National Snow and Ice Data Center (NSIDC) hosts and distributes all of the other SMAP data products.
Readability of websites containing information on dental implants.
Jayaratne, Yasas S N; Anderson, Nina K; Zwahlen, Roger A
2014-12-01
It is recommended that health-related materials for patients be written at the sixth-grade level or below. Many websites oriented toward patient education about dental implants are available, but the readability of these sites has not been evaluated. To assess the readability of patient-oriented online information on dental implants. Websites containing patient-oriented information on dental implants were retrieved using the Google search engine. Individual and mean readability/grade levels were calculated using standardized formulas. The readability of each website was classified as easy (≤ 6th-grade level) or difficult (≥ 10th-grade level). Thirty-nine websites with patient-oriented information on dental implants were found. The average readability grade level of these websites was 11.65 ± 1.36. No website scored at or below the recommended 6th-grade level. Thirty-four of 39 websites (87.18%) were difficult to read. The number of characters, words, and sentences on these sites varied widely. All patient-oriented websites on dental implants scored above the recommended grade level, and the majority of these sites were "difficult" in their readability. There is a dire need to create patient information websites on implants that the majority of patients can read. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
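The "standardized formulas" referred to are readability indices such as the Flesch-Kincaid grade level. A minimal Python sketch of that computation follows; the vowel-group syllable counter is a common approximation, not necessarily the exact tooling used in the study.

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, treating a trailing 'e' as silent."""
    word = word.lower()
    if word.endswith('e') and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r'[aeiouy]+', word)))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

# Short, common words score low; long, polysyllabic prose scores high.
print(fk_grade("The cat sat on the mat."))
```

A score of 6 or below corresponds to the sixth-grade target mentioned in the abstract; the dental implant sites averaged 11.65 on this scale.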
Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension
Sekerina, Irina A.; Campanelli, Luca; Van Dyke, Julie A.
2016-01-01
The cue-based retrieval theory (Lewis et al., 2006) predicts that interference from similar distractors should create difficulty for argument integration; however, this hypothesis has only been examined in the written modality. The current study uses the Visual World Paradigm (VWP) to assess its feasibility for studying retrieval interference arising from distractors present in a visual display during spoken language comprehension. The study aims to extend findings from Van Dyke and McElree (2006), which utilized a dual-task paradigm with written sentences in which they manipulated the relationship between extra-sentential distractors and the semantic retrieval cues from a verb, to the spoken modality. Results indicate that retrieval interference effects do occur in the spoken modality, manifesting immediately upon encountering the verbal retrieval cue for inaccurate trials when the distractors are present in the visual field. We also observed indicators of repair processes in trials containing semantic distractors, which were ultimately answered correctly. We conclude that the VWP is a useful tool for investigating retrieval interference effects, including both the online effects of distractors and their after-effects when repair is initiated. This work paves the way for further studies of retrieval interference in the spoken modality, which is especially significant for examining the phenomenon in pre-reading children, non-reading adults (e.g., people with aphasia), and spoken-language bilinguals. PMID:27378974
15. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR ...
15. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR 40" BLOOMING MILL. THIS VIEW IS TAKEN FROM THE HIGH-PRESSURE SIDE OF THE ENGINE SHOWING THE HOUSING EXTENSION; TO THE RIGHT, IN THE BACKGROUND, IS THE 24' CAST-IRON FLYWHEEL. - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
Integrating Engineering Data Systems for NASA Spaceflight Projects
NASA Technical Reports Server (NTRS)
Carvalho, Robert E.; Tollinger, Irene; Bell, David G.; Berrios, Daniel C.
2012-01-01
NASA has a large range of custom-built and commercial data systems to support spaceflight programs. Some of the systems are re-used by many programs and projects over time. Management and systems engineering processes require integration of data across many of these systems, a difficult problem given the widely diverse nature of system interfaces and data models. This paper describes an ongoing project to use a central data model with a web services architecture to support the integration and access of linked data across engineering functions for multiple NASA programs. The work involves the implementation of a web service-based middleware system called Data Aggregator to bring together data from a variety of systems to support space exploration. Data Aggregator includes a central data model registry for storing and managing links between the data in disparate systems. Initially developed for NASA's Constellation Program needs, Data Aggregator is currently being repurposed to support the International Space Station Program and new NASA projects with processes that involve significant aggregating and linking of data. This change in user needs led to development of a more streamlined data model registry for Data Aggregator in order to simplify adding new project application data as well as standardization of the Data Aggregator query syntax to facilitate cross-application querying by client applications. This paper documents the approach from a set of stand-alone engineering systems from which data are manually retrieved and integrated, to a web of engineering data systems from which the latest data are automatically retrieved and more quickly and accurately integrated. This paper includes the lessons learned through these efforts, including the design and development of a service-oriented architecture and the evolution of the data model registry approaches as the effort continues to evolve and adapt to support multiple NASA programs and priorities.
JANE, A new information retrieval system for the Radiation Shielding Information Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trubey, D.K.
A new information storage and retrieval system has been developed for the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory to replace mainframe systems that have become obsolete. The database contains citations and abstracts of literature which were selected by RSIC analysts and indexed with terms from a controlled vocabulary. The database, begun in 1963, has been maintained continuously since that time. The new system, called JANE, incorporates automatic indexing techniques and on-line retrieval using the RSIC Data General Eclipse MV/4000 minicomputer. Automatic indexing and retrieval techniques based on fuzzy-set theory allow the presentation of results in order of Retrieval Status Value. The fuzzy-set membership function depends on term frequency in the titles and abstracts and on Term Discrimination Values which indicate the resolving power of the individual terms. These values are determined by the Cover Coefficient method. The use of a commercial database to store and retrieve the indexing information permits rapid retrieval of the stored documents. Comparisons of the new and presently-used systems for actual searches of the literature indicate that it is practical to replace the mainframe systems with a minicomputer system similar to the present version of JANE. 18 refs., 10 figs.
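The ranking idea described above can be sketched in a few lines of Python. The membership function below (term frequency scaled by a discrimination value, combined with a fuzzy OR) is an illustrative stand-in; the abstract does not specify JANE's actual membership function or the Cover Coefficient computation.

```python
# Hedged sketch of fuzzy-set ranking by Retrieval Status Value (RSV).
# The membership function and all data here are invented for illustration.

from collections import Counter

def rsv(doc_terms, query, tdv):
    """Combine per-term memberships with a fuzzy OR.
    Membership of a query term = min(1, term_frequency * discrimination)."""
    tf = Counter(doc_terms)
    score = 0.0
    for term in query:
        mu = min(1.0, tf[term] * tdv.get(term, 0.1))
        score = score + mu - score * mu  # fuzzy OR: a OR b = a + b - a*b
    return score

docs = {
    "d1": "neutron shielding concrete shielding".split(),
    "d2": "gamma ray dose".split(),
}
tdv = {"shielding": 0.5, "neutron": 0.8, "dose": 0.6}  # discrimination values
query = ["neutron", "shielding"]

# Present results in order of RSV, as JANE does.
ranked = sorted(docs, key=lambda d: rsv(docs[d], query, tdv), reverse=True)
print(ranked)  # d1 matches both query terms, d2 matches neither
```

Terms with high discrimination values dominate the ranking, mirroring the abstract's point that Term Discrimination Values capture the resolving power of individual terms.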
Managing geometric information with a data base management system
NASA Technical Reports Server (NTRS)
Dube, R. P.
1984-01-01
The strategies for managing computer-based geometry are described. The computer model of geometry is the basis for communication, manipulation, and analysis of shape information. The research on integrated programs for aerospace-vehicle design (IPAD) focuses on the use of data base management system (DBMS) technology to manage engineering/manufacturing data. The objective of IPAD is to develop a computer-based engineering complex which automates the storage, management, protection, and retrieval of engineering data. In particular, this facility must manage geometry information as well as associated data. The approach taken on the IPAD project to achieve this objective is discussed. Geometry management in current systems and the approach taken in the early IPAD prototypes are examined.
Automated inspection of turbine blades: Challenges and opportunities
NASA Technical Reports Server (NTRS)
Mehta, Manish; Marron, Joseph C.; Sampson, Robert E.; Peace, George M.
1994-01-01
Current inspection methods for complex shapes and contours exemplified by aircraft engine turbine blades are expensive, time-consuming and labor intensive. The logistics support of new manufacturing paradigms such as integrated product-process development (IPPD) for current and future engine technology development necessitates high speed, automated inspection of forged and cast jet engine blades, combined with a capability of retaining and retrieving metrology data for process improvements upstream (designer-level) and downstream (end-user facilities) at commercial and military installations. The paper presents the opportunities emerging from a feasibility study conducted using 3-D holographic laser radar in blade inspection. Requisite developments in computing technologies for systems integration of blade inspection in production are also discussed.
14. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR ...
14. INTERIOR OF ENGINE ROOM, CONTAINING MESTA-CORLISS CROSS-COMPOUND ENGINE, FOR 40" BLOOMING MILL. THIS VIEW HIGHLIGHTS THE CRANK AND 24' DIAMETER FLYWHEEL. THE ENGINE IS A 7,940 HP MESTA-CORLISS CROSS-COMPOUND STEAM ENGINE. ITS BORE AND STROKE ARE 32"X84"X60". NOTE FLY BALL GOVERNOR ON ENGINE. MILL DRIVE SHAFT ATTACHED TO PULLEY LOCATED ON CRANK. - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
Jupiter Europa Orbiter Architecture Definition Process
NASA Technical Reports Server (NTRS)
Rasmussen, Robert; Shishko, Robert
2011-01-01
The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.
STS-118 Astronaut Williams and Expedition 15 Engineer Anderson Perform EVA
NASA Technical Reports Server (NTRS)
2007-01-01
As the construction continued on the International Space Station (ISS), STS-118 Astronaut Dave Williams, representing the Canadian Space Agency, participated in the fourth and final session of Extra Vehicular Activity (EVA). During the 5 hour space walk, Williams and Expedition 15 engineer Clay Anderson (out of frame) installed the External Wireless Instrumentation System Antenna, attached a stand for the shuttle robotic arm extension boom, and retrieved the two Materials International Space Station Experiments (MISSE) for return to Earth. MISSE collects information on how different materials weather in the environment of space.
Develop advanced nonlinear signal analysis topographical mapping system
NASA Technical Reports Server (NTRS)
Jong, Jen-Yi
1993-01-01
This study will provide timely assessment of SSME component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. The final result of this program will yield an advanced nonlinear signal analysis topographical mapping system (ATMS) of nonlinear and nonstationary spectral analysis software package integrated with the Compressed SSME TOPO Data Base (CSTDB) on the same platform. This system will allow NASA engineers to retrieve any unique defect signatures and trends associated with different failure modes and anomalous phenomena over the entire SSME test history across turbopump families.
van Haagen, Herman H. H. B. M.; 't Hoen, Peter A. C.; Mons, Barend; Schultes, Erik A.
2013-01-01
Motivation: Weighted semantic networks built from text-mined literature can be used to retrieve known protein-protein or gene-disease associations, and have been shown to anticipate associations years before they are explicitly stated in the literature. Our text-mining system recognizes over 640,000 biomedical concepts: some are specific (i.e., names of genes or proteins), others generic (e.g., ‘Homo sapiens’). Generic concepts may play important roles in automated information retrieval, extraction, and inference but may also result in concept overload and confound retrieval and reasoning with low-relevance or even spurious links. Here, we attempted to optimize the retrieval performance for protein-protein interactions (PPI) by filtering generic concepts (node filtering) or links to generic concepts (edge filtering) from a weighted semantic network. First, we defined metrics based on network properties that quantify the specificity of concepts. Then, using these metrics, we systematically filtered generic information from the network while monitoring the retrieval performance of known protein-protein interactions. We also systematically filtered specific information from the network (inverse filtering), and assessed the retrieval performance of networks composed of generic information alone. Results: Filtering generic or specific information induced a two-phase response in retrieval performance: initially the effects of filtering were minimal, but beyond a critical threshold network performance dropped suddenly. Contrary to expectations, networks composed exclusively of generic information demonstrated retrieval performance comparable to unfiltered networks that also contain specific concepts. Furthermore, an analysis using individual generic concepts demonstrated that they can effectively support the retrieval of known protein-protein interactions.
For instance, the concept “binding” is indicative of PPI retrieval and the concept “mutation abnormality” is indicative of gene-disease associations. Conclusion: Generic concepts are important for information retrieval and cannot be removed from semantic networks without negative impact on retrieval performance. PMID:24260124
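The node-filtering procedure can be illustrated with a toy weighted network. Everything below (the inverse-degree specificity metric, the concepts, and the edge weights) is invented for the example; the paper defines its own network-property metrics.

```python
# Illustrative node filtering on a weighted semantic network:
# score each concept's specificity, drop the most generic nodes,
# and keep only edges between surviving nodes.

network = {  # concept -> {neighbor: weight}
    "TP53": {"binding": 1.0, "MDM2": 0.9},
    "MDM2": {"binding": 1.0, "TP53": 0.9},
    "Homo sapiens": {"TP53": 0.2, "MDM2": 0.2, "binding": 0.1},
    "binding": {"TP53": 1.0, "MDM2": 1.0, "Homo sapiens": 0.1},
}

def specificity(node):
    """Fewer neighbors -> more specific (a stand-in for the paper's metrics)."""
    return 1.0 / len(network[node])

def node_filter(threshold):
    """Remove nodes less specific than the threshold, plus their edges."""
    keep = {n for n in network if specificity(n) >= threshold}
    return {n: {m: w for m, w in nbrs.items() if m in keep}
            for n, nbrs in network.items() if n in keep}

filtered = node_filter(0.4)
print(sorted(filtered))  # generic hubs 'binding' and 'Homo sapiens' are gone
```

Edge filtering, by contrast, would drop only the links touching generic nodes while keeping the nodes themselves; sweeping the threshold and re-measuring PPI retrieval at each step reproduces the experiment's two-phase response.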
The Comprehensive Microbial Resource
Peterson, Jeremy D.; Umayam, Lowell A.; Dickinson, Tanja; Hickey, Erin K.; White, Owen
2001-01-01
One challenge presented by large-scale genome sequencing efforts is effective display of uniform information to the scientific community. The Comprehensive Microbial Resource (CMR) contains robust annotation of all complete microbial genomes and allows for a wide variety of data retrievals. The bacterial information has been placed on the Web at http://www.tigr.org/CMR for retrieval using standard web browsing technology. Retrievals can be based on protein properties such as molecular weight or hydrophobicity, GC-content, functional role assignments and taxonomy. The CMR also has special web-based tools to allow data mining using pre-run homology searches, whole genome dot-plots, batch downloading and traversal across genomes using a variety of datatypes. PMID:11125067
Burdo, Joseph; O'Dwyer, Laura
2015-12-01
Concept mapping and retrieval practice are both educational methods that have separately been reported to provide significant benefits for learning in diverse settings. Concept mapping involves diagramming a hierarchical representation of relationships between distinct pieces of information, whereas retrieval practice involves retrieving information that was previously coded into memory. The relative benefits of these two methods have never been tested against each other in a classroom setting. Our study was designed to investigate whether or not concept mapping or retrieval practice produced a significant learning benefit in an undergraduate physiology course as measured by exam performance and, if so, was the benefit of one method significantly greater than the other. We found that there was a trend toward increased exam scores for the retrieval practice group compared with both the control group and concept mapping group, and that trend achieved statistical significance for one of the four module exams in the course. We also found that women performed statistically better than men on the module exam that contained a substantial amount of material relating to female reproductive physiology. Copyright © 2015 The American Physiological Society.
Study to Improve Airframe Turbine Engine Rotor Blade Containment
1977-07-01
Report No. FAA-RD-77-44 (DOT-FA76WA-3843), June 1976. Study to Improve Airframe Turbine Engine Rotor Blade Containment, C. O. Gunderson. ... Both engines appeared to be able to marginally contain the 1- and 2-blade fragments in all compressor and turbine stages, but probably would not have contained ... adjacent blades, including serrations, from any stage. The investigation was made on high-bypass-ratio turbofan engines which power wide-body transports.
Pre-Engineering Program. Introduction to Engineering. Advanced Engineering.
ERIC Educational Resources Information Center
Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum and Resource Center.
This guide contains information and hands-on activities to guide students through the problem-solving process needed in engineering (problem solving, presentation, and impact analysis) and information to help the instructor manage the program or courses in Virginia. Following an introduction, the guide contains a program description that supplies…
NASA Technical Reports Server (NTRS)
Gasso, Santiago; O'Neill, Norm
2006-01-01
We present sunphotometer-retrieved and in situ fine mode fractions (FMF) measured onboard the same aircraft during the ACE-Asia experiment. Comparisons indicate that the latter can be used to identify whether the aerosol under observation is dominated by a mixture of modes or a single mode. Differences between retrieved and in situ FMF range from 5-20%. When profiles contained multiple layers of aerosols, the retrieved and measured FMF were segregated by layers. The comparison of layered and total FMF from the same profile indicates that columnar values are intermediate to those derived from layers. As a result, a remotely sensed FMF cannot be used to distinguish whether the aerosol under observation is composed of layers each with distinctive modal features or all layers with the same modal features. Thus, the use of FMF in multiple layer environments does not provide unique information on the aerosol under observation.
Room Temperature Memory for Few Photon Polarization Qubits
NASA Astrophysics Data System (ADS)
Kupchak, Connor; Mittiga, Thomas; Jordan, Bertus; Nazami, Mehdi; Nolleke, Christian; Figueroa, Eden
2014-05-01
We have developed a room temperature quantum memory device based on Electromagnetically Induced Transparency capable of reliably storing and retrieving polarization qubits on the few photon level. Our system is realized in a vapor of 87Rb atoms utilizing a Λ-type energy level scheme. We create a dual-rail storage scheme mediated by an intense control field to allow storage and retrieval of any arbitrary polarization state. Upon retrieval, we employ a filtering system to sufficiently remove the strong pump field, and subject retrieved light states to polarization tomography. To date, our system has produced signal-to-noise ratios near unity with a memory fidelity of >80 % using coherent state qubits containing four photons on average. Our results thus demonstrate the feasibility of room temperature systems for the storage of single-photon-level photonic qubits. Such room temperature systems will be attractive for future long distance quantum communication schemes.
MISR Near Real Time Products Available
Atmospheric Science Data Center
2014-09-04
... containing both Ellipsoid- and Terrain-projected radiance information, and the L2 Cloud Motion Vector (CMV) product containing ... The NRT versions of MISR data products employ the same retrieval algorithms as standard production, yielding equivalent science ... product is available in HDFEOS and BUFR format. For more information, please consult the MISR CMV DPS and Documentation for the ...
Romanova, G A; Mirzoev, T K; Barskov, I V; Victorov, I V; Gudasheva, T A; Ostrovskaya, R U
2000-09-01
Antiamnestic effect of acyl-prolyl-containing dipeptide GVS-111 was demonstrated in rats with bilateral compression-induced damage to the frontal cortex. Both intraperitoneal and oral administration of the dipeptide improved retrieval of passive avoidance responses in rats with compression-induced cerebral ischemia compared to untreated controls.
NASA Technical Reports Server (NTRS)
1975-01-01
An on-line data storage and retrieval system which allows the user to extract and process information from stored data bases is described. The capabilities of the system are provided by a general purpose computer program containing several functional modules. The modules contained in MIRADS are briefly described along with user terminal operation procedures and MIRADS commands.
Knowing what, where, and when: event comprehension in language processing.
Kukona, Anuenue; Altmann, Gerry T M; Kamide, Yuki
2014-10-01
We investigated the retrieval of location information, and the deployment of attention to these locations, following (described) event-related location changes. In two visual world experiments, listeners viewed arrays with containers like a bowl, jar, pan, and jug, while hearing sentences like "The boy will pour the sweetcorn from the bowl into the jar, and he will pour the gravy from the pan into the jug. And then, he will taste the sweetcorn". At the discourse-final "sweetcorn", listeners fixated context-relevant "Target" containers most (jar). Crucially, we also observed two forms of competition: listeners fixated containers that were not directly referred to but associated with "sweetcorn" (bowl), and containers that played the same role as Targets (goals of moving events; jug), more than distractors (pan). These results suggest that event-related location changes are encoded across representations that compete for comprehenders' attention, such that listeners retrieve, and fixate, locations that are not referred to in the unfolding language, but related to them via object or role information. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Kennedy, John M.; White, Terry F.
1992-01-01
A telephone survey of U.S. aerospace engineers and scientists who were on the Society of Automotive Engineers (SAE) mailing list was conducted between August 14-26, 1991. The survey was undertaken to obtain information on the daily work activities of aerospace engineers and scientists, to measure various practices used by aerospace engineers and scientists to obtain STI, and to ask aerospace engineers and scientists about their use of electronic networks. Co-workers were found to be important sources of information. Co-workers are used to obtain technical information because the information they have is relevant, not because co-workers are accessible. As technical uncertainty increases, so does the need for information internal and external to the organization. Electronic networks enjoy widespread use within the aerospace community. These networks are accessible, and they are used to contact people at remote sites. About 80 percent of the respondents used electronic mail, file transfer, and information or data retrieval from commercial or in-house data bases.
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among them the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965
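The visibility measure itself is straightforward to sketch. In the illustration below, the scoring rule (presence of a catalog's domain among a query's top-ranked results) and the sample data are assumptions for the example, not the study's exact protocol.

```python
# Hedged sketch: compute a catalog's visibility as the share of queries
# for which one of its pages appears in a search engine's top-N results.

from urllib.parse import urlparse

def visibility(result_lists, catalog_domain, top_n=10):
    """result_lists: one ranked list of result URLs per query sent."""
    hits = sum(
        any(urlparse(u).netloc.endswith(catalog_domain) for u in urls[:top_n])
        for urls in result_lists
    )
    return hits / len(result_lists)

# Two MeSH-term queries; the catalog's domain appears in the results of one.
results = [
    ["http://www.chu-rouen.fr/cismef/a", "http://example.org/x"],
    ["http://example.org/y"],
]
print(visibility(results, "chu-rouen.fr"))  # 1 of 2 queries -> 0.5
```

Repeating this over each engine, catalog, and language yields the comparative visibility figures the study reports.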
Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália
2016-07-01
Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative for creating personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. The construction of the Smart Drug Search is described as a proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and topics alike. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations.
The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e., concepts related by other intermediary concepts, can be useful to integrate information from different studies and look into non-trivial relations. The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are modularity and extensibility in terms of software design, the use of open-source consolidated Web technologies, and the ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search keeps over 1,186,000 documents, containing more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
LIVIVO - the Vertical Search Engine for Life Sciences.
Müller, Bernd; Poley, Christoph; Pössel, Jana; Hagelstein, Alexandra; Gübitz, Thomas
2017-01-01
The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, like the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, genomic, epigenomic, transcriptional and proteomic high-throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is developed by ZB MED to deal with the heterogeneity of data as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.
ERIC Educational Resources Information Center
Denning, Rebecca; Smith, Philip J.
1994-01-01
Describes issues and advances in the design of appropriate inference engines and knowledge structures needed by commercially feasible intelligent intermediary systems for information retrieval. Issues associated with the design of interfaces to such functions are discussed in detail. Design principles for guiding implementation of these interfaces…
ERIC Educational Resources Information Center
Hopkins, Robin F.; Lyle, Keith B.; Hieb, Jeff L.; Ralston, Patricia A. S.
2016-01-01
A major challenge college students face is retaining the knowledge they acquire in their classes, especially in cumulative disciplines such as engineering, where ultimate success depends on long-term retention of foundational content. Cognitive psychologists have recently recommended various techniques educators might use to increase retention.…
Online Research Behaviors of Engineering Graduate Students in Taiwan
ERIC Educational Resources Information Center
Cheng, Ying-Hsueh; Tsai, Chin-Chung
2017-01-01
Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students' searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel's (2008)…
Pictorial Visual Rotation Ability of Engineering Design Graphics Students
ERIC Educational Resources Information Center
Ernst, Jeremy Vaughn; Lane, Diarmaid; Clark, Aaron C.
2015-01-01
The ability to rotate visual mental images is a complex cognitive skill. It requires the building of graphical libraries of information through short or long term memory systems and the subsequent retrieval and manipulation of these towards a specified goal. The development of mental rotation skill is of critical importance within engineering…
The Heinz Electronic Library Interactive On-line System (HELIOS): An Update.
ERIC Educational Resources Information Center
Galloway, Edward A.; Michalek, Gabrielle V.
1998-01-01
Describes a project at Carnegie Mellon University libraries to convert the congressional papers of the late Senator John Heinz to digital format and to create an online system to search and retrieve these papers. Highlights include scanning, optical character recognition, and a search engine utilizing natural language processing. (Author/LRW)
Recommendations for a Habitability Data Base.
ERIC Educational Resources Information Center
Illinois Univ., Urbana. Library Research Center.
A prototype Habitability Data Base was developed for the United States Army Corps of Engineers. From a review of selected Army documents, standards in the form of goals or architectural criteria were identified as significant to man-environment relations (MER). A search of appropriate information systems was conducted to retrieve a minimum of 500…
40 CFR 1054.660 - What are the provisions for exempting emergency rescue equipment?
Code of Federal Regulations, 2010 CFR
2010-07-01
... certified to current emission standards under the following conditions if the equipment will be used solely in emergency rescue situations: (1) You must determine annually that no engines certified to current... situations” means firefighting or other situations in which a person is retrieved from imminent danger. (c...
Sagace: A web-based search engine for biomedical databases in Japan
2012-01-01
Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in databases and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large for researchers to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed from retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
Petaminer: Using ROOT for efficient data storage in MySQL database
NASA Astrophysics Data System (ADS)
Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.
2010-04-01
High Energy and Nuclear Physics (HENP) experiments store petabytes of event data and terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project addresses the problem of efficient navigation to petabytes of HENP experimental data described with event-level TAG metadata, which is required by data-intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. By leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
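The performance argument above (column-oriented TTrees beating row-oriented TAG tables for attribute-selective queries) can be illustrated with a toy contrast in plain Python; this is not the Petaminer storage-engine code, and the event attributes are invented.

```python
# Toy contrast between row-oriented and column-oriented TAG storage.
# Hypothetical event-level metadata; the real system reads ROOT TTrees
# through a custom MySQL storage engine.

N = 1000

rows = [  # row-oriented: one record per event, all attributes together
    {"run": 1, "event": i, "n_tracks": i % 7} for i in range(N)
]

columns = {  # column-oriented: one contiguous array per attribute
    "run": [1] * N,
    "event": list(range(N)),
    "n_tracks": [i % 7 for i in range(N)],
}

# A selection on one attribute must touch every field of every row here...
row_hits = [r["event"] for r in rows if r["n_tracks"] > 5]

# ...but only the two relevant arrays in the columnar layout, which is why
# column-oriented reads win for sparse, attribute-selective TAG queries.
col_hits = [e for e, n in zip(columns["event"], columns["n_tracks"]) if n > 5]
```

Both layouts return the same matching events; the difference is in how much data each must scan to answer the query.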
Mayer, Miguel A; Karampiperis, Pythagoras; Kukurikos, Antonis; Karkaletsis, Vangelis; Stamatakis, Kostas; Villarroel, Dagmar; Leis, Angela
2011-06-01
The number of health-related websites is increasing day-by-day; however, their quality is variable and difficult to assess. Various "trust marks" and filtering portals have been created in order to assist consumers in retrieving quality medical information. Consumers are using search engines as the main tool to get health information; however, the major problem is that the meaning of the web content is not machine-readable in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, thus limiting their usefulness in practice. During the last five years there have been different attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.
Popoola, Segun I; Atayero, Aderemi A; Badejo, Joke A; Odukoya, Jonathan A; Omole, David O; Ajayi, Priscilla
2018-06-01
In this data article, we present and analyze the demographic data of undergraduates admitted into engineering programs at Covenant University, Nigeria. The population distribution of 2649 candidates admitted into Chemical Engineering, Civil Engineering, Computer Engineering, Electrical and Electronics Engineering, Information and Communication Engineering, Mechanical Engineering, and Petroleum Engineering programs between 2002 and 2009 is analyzed by gender, age, and state of origin. The data provided in this data article were retrieved from the student bio-data submitted to the Department of Admissions and Student Records (DASR) and Center for Systems and Information Services (CSIS) by the candidates during the application process into the various engineering undergraduate programs. This vital information is made publicly available, after proper data anonymization, to facilitate empirical research in the emerging field of demographics analytics in higher education. A Microsoft Excel spreadsheet file is attached to this data article and the data are thoroughly described for easy reuse. Descriptive statistics and frequency distributions of the demographic data are presented in tables, plots, graphs, and charts. Unrestricted access to these demographic data will facilitate reliable and evidence-based research findings for sustainable education in developing countries.
Lightweight engine containment. [Kevlar shielding
NASA Technical Reports Server (NTRS)
Weaver, A. T.
1977-01-01
Kevlar fabric styles and weaves were studied, as well as methods of application for advanced gas turbine engines. The Kevlar material was subjected to high-speed impacts by simple projectiles fired from a rifle, as well as by more complex shapes such as fan blades released from gas turbine rotors in a spin pit. Just-contained data were developed for a variety of weave and/or application techniques, and a comparative containment weight efficiency was established for Kevlar containment applications. The data generated during these tests are being incorporated into an analytical design system so that blade containment trade-off studies between Kevlar and metal-case engine structures can be made. Laboratory tests and engine environment tests were performed to determine the survivability of Kevlar in a gas turbine environment.
NASA Astrophysics Data System (ADS)
Nelson, R. R.; O'Dell, C.
2017-12-01
The primary goal of OCO-2 is to use hyperspectral measurements of reflected near-infrared sunlight to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) with high accuracy. This is only possible for measurements of scenes nearly free of optically thick clouds and aerosols. As some cloud or aerosol contamination will always be present, the OCO-2 retrieval algorithm includes clouds and aerosols as retrieved properties in its state vector. Information content analyses demonstrate that there are only 2-6 pieces of information about aerosols in the OCO-2 radiances. However, the upcoming OCO-2 algorithm (B8) attempts to retrieve 9 aerosol parameters; this over-fitting can hinder convergence and produce multiple solutions. In this work, we develop a simplified cloud and aerosol parameterization that intelligently reduces the number of retrieved parameters to 5 by only retrieving information about two aerosol layers: a lower tropospheric layer and an upper tropospheric / stratospheric layer. We retrieve the optical depth of each layer and the height of the lower tropospheric layer. Each of these layers contains a mixture of fine and coarse mode aerosol. In comparisons between OCO-2 XCO2 estimates and validation sources including TCCON, this scheme performs about as well as the more complicated OCO-2 retrieval algorithm, but has the potential benefits of more interpretable aerosol results, faster convergence, less nonlinearity, and greater throughput. We also investigate the dependence of our results on the optical properties of the fine and coarse mode aerosol types, such as their effective radii and the environmental relative humidity.
Mehta, Rohini; Baranova, Ancha; Birerdinc, Aybike
2012-01-01
Liquid nitrogen is a colorless, odorless, extremely cold (-196 °C) liquid kept under pressure. It is commonly used as a cryogenic fluid for long-term storage of biological materials such as blood, cells and tissues 1,2. The cryogenic nature of liquid nitrogen, while ideal for sample preservation, can cause rapid freezing of live tissues on contact - known as 'cryogenic burn'2 - which may lead to severe frostbite in persons closely involved in the storage and retrieval of samples from Dewars. Additionally, as liquid nitrogen evaporates it reduces the oxygen concentration in the air and might cause asphyxia, especially in confined spaces2. In laboratories, biological samples are often stored in cryovials or cryoboxes stacked in stainless steel racks within the Dewar tanks1. These storage racks are provided with a long shaft to prevent boxes from slipping out of the racks and into the bottom of Dewars during routine handling. All too often, however, boxes or vials with precious samples slip out and sink to the bottom of the liquid-nitrogen-filled tank. In such cases, samples can be tediously retrieved after transferring the liquid nitrogen into a spare container or discarding it. The boxes and vials can then be relatively safely recovered from the emptied Dewar. However, the cryogenic nature of liquid nitrogen and its expansion rate make sunken sample retrieval hazardous. Safety Offices commonly recommend that sample retrieval never be carried out by a single person. Another alternative is to use commercially available cool grabbers or tongs to pull out the vials3. However, limited visibility within the dark liquid-filled Dewars poses a major limitation to their use. In this article, we describe the construction of a cryotolerant DIY retrieval device, which makes sample retrieval from Dewars containing cryogenic fluids both safe and easy. PMID:22617806
49 CFR 1007.11 - Public notice of records systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS RECORDS CONTAINING INFORMATION ABOUT... use; (5) The policies and practices of the Board regarding storage, retrieval, access controls...
49 CFR 1007.11 - Public notice of records systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS RECORDS CONTAINING INFORMATION ABOUT... use; (5) The policies and practices of the Board regarding storage, retrieval, access controls...
49 CFR 1007.11 - Public notice of records systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS RECORDS CONTAINING INFORMATION ABOUT... use; (5) The policies and practices of the Board regarding storage, retrieval, access controls...
49 CFR 1007.11 - Public notice of records systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS RECORDS CONTAINING INFORMATION ABOUT... use; (5) The policies and practices of the Board regarding storage, retrieval, access controls...
49 CFR 1007.11 - Public notice of records systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... BOARD, DEPARTMENT OF TRANSPORTATION GENERAL RULES AND REGULATIONS RECORDS CONTAINING INFORMATION ABOUT... use; (5) The policies and practices of the Board regarding storage, retrieval, access controls...
Standardized Curriculum for Small Engine Repair.
ERIC Educational Resources Information Center
Mississippi State Dept. of Education, Jackson. Office of Vocational, Technical and Adult Education.
This curriculum guide for small engine repair was developed by the state of Mississippi to standardize vocational education course titles and core contents. The objectives contained in this document are common to all small engine repair programs in the state. The guide contains objectives for small engine repair I and II courses. Units in course I…
Code of Federal Regulations, 2012 CFR
2012-07-01
... I am a manufacturer of stationary SI internal combustion engines or equipment containing stationary SI internal combustion engines or a manufacturer of equipment containing such engines? 60.4242... Ignition Internal Combustion Engines Compliance Requirements for Manufacturers § 60.4242 What other...
Code of Federal Regulations, 2010 CFR
2010-07-01
... I am a manufacturer of stationary SI internal combustion engines or equipment containing stationary SI internal combustion engines or a manufacturer of equipment containing such engines? 60.4242... Ignition Internal Combustion Engines Compliance Requirements for Manufacturers § 60.4242 What other...
Code of Federal Regulations, 2013 CFR
2013-07-01
... I am a manufacturer of stationary SI internal combustion engines or equipment containing stationary SI internal combustion engines or a manufacturer of equipment containing such engines? 60.4242... Ignition Internal Combustion Engines Compliance Requirements for Manufacturers § 60.4242 What other...
Code of Federal Regulations, 2014 CFR
2014-07-01
... I am a manufacturer of stationary SI internal combustion engines or equipment containing stationary SI internal combustion engines or a manufacturer of equipment containing such engines? 60.4242... Ignition Internal Combustion Engines Compliance Requirements for Manufacturers § 60.4242 What other...
Code of Federal Regulations, 2011 CFR
2011-07-01
... I am a manufacturer of stationary SI internal combustion engines or equipment containing stationary SI internal combustion engines or a manufacturer of equipment containing such engines? 60.4242... Ignition Internal Combustion Engines Compliance Requirements for Manufacturers § 60.4242 What other...
Auto Mechanics I. Learning Activity Packets (LAPs). Section C--Engine.
ERIC Educational Resources Information Center
Oklahoma State Board of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This document contains five learning activity packets (LAPs) that outline the study activities for the "engine" instructional area for an Auto Mechanics I course. The five LAPs cover the following topics: basic engine principles, cooling system, engine lubrication system, exhaust system, and fuel system. Each LAP contains a cover sheet…
The Tribology of Explanted Hip Resurfacings Following Early Fracture of the Femur.
Lord, James K; Langton, David J; Nargol, Antoni V F; Meek, R M Dominic; Joyce, Thomas J
2015-10-15
A recognized issue related to metal-on-metal hip resurfacings is early fracture of the femur. Most theories regarding the cause of fracture relate to clinical factors but an engineering analysis of failed hip resurfacings has not previously been reported. The objective of this work was to determine the wear volumes and surface roughness values of a cohort of retrieved hip resurfacings which were removed due to early femoral fracture, infection and avascular necrosis (AVN). Nine resurfacing femoral heads were obtained following early fracture of the femur, a further five were retrieved due to infection and AVN. All fourteen were measured for volumetric wear using a co-ordinate measuring machine. Wear rates were then calculated and regions of the articulating surface were divided into "worn" and "unworn". Roughness values in these regions were measured using a non-contacting profilometer. The mean time to fracture was 3.7 months compared with 44.4 months for retrieval due to infection and AVN. Average wear rates in the early fracture heads were 64 times greater than those in the infection and AVN retrievals. Given the high wear rates of the early fracture components, such wear may be linked to an increased risk of femoral neck fracture.
NASA Technical Reports Server (NTRS)
2002-01-01
A system that retrieves problem reports from a NASA database is described. The database is queried with natural language questions. Part-of-speech tags are first assigned to each word in the question using a rule-based tagger. A partial parse of the question is then produced with independent sets of deterministic finite-state automata. Using partial parse information, a look-up strategy searches the database for problem reports relevant to the question. A bigram stemmer and irregular verb conjugates have been incorporated into the system to improve accuracy. The system is evaluated on a set of fifty-five questions posed by NASA engineers. A discussion of future research is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitschkowetz, N.; Vickers, D.L.
This report provides a summary of the Computer-aided Acquisition and Logistic Support (CALS) Test Network (CTN) Laboratory Acceptance Test (LAT) and User Application Test (UAT) activities undertaken to evaluate the CALS capabilities being implemented as part of the Department of Defense (DOD) engineering repositories. Although the individual testing activities provided detailed reports for each repository, a synthesis of the results, conclusions, and recommendations is offered to provide a more concise presentation of the issues and the strategies, as viewed from the CTN perspective.
Expanding the PACS archive to support clinical review, research, and education missions
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice C.; Frost, Meryll M.; Drane, Walter E.
1999-07-01
Designing an image archive and retrieval system that supports multiple users with many different requirements and patterns of use, without compromising the performance and functionality required by diagnostic radiology, is an intellectual and technical challenge. A diagnostic archive, optimized for performance when retrieving diagnostic images for radiologists, needed to be expanded to support a growing clinical review network, the University of Florida Brain Institute's demands for neuro-imaging, Biomedical Engineering's imaging sciences, and an electronic teaching file. Each of the groups presented a different set of problems for the designers of the system. In addition, the radiologists did not want to see any loss of performance as new users were added.
An Improved Forensic Science Information Search.
Teitelbaum, J
2015-01-01
Although thousands of search engines and databases are available online, finding answers to specific forensic science questions can be a challenge even to experienced Internet users. Because there is no central repository for forensic science information, and because of the sheer number of disciplines under the forensic science umbrella, forensic scientists are often unable to locate material that is relevant to their needs. The author contends that using six publicly accessible search engines and databases can produce high-quality search results. The six resources are Google, PubMed, Google Scholar, Google Books, WorldCat, and the National Criminal Justice Reference Service. Carefully selected keywords and keyword combinations, designating a keyword phrase so that the search engine will search on the phrase and not individual keywords, and prompting search engines to retrieve PDF files are among the techniques discussed. Copyright © 2015 Central Police University.
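The query techniques mentioned above (quoting a multi-word phrase so the engine matches it as a unit, and prompting the engine to retrieve PDF files) can be sketched as simple query-string construction; the keywords are hypothetical examples, and `filetype:pdf` is an operator supported by Google-style engines.

```python
from urllib.parse import quote_plus

def build_query(phrase, extra_terms=(), pdf_only=False):
    """Build a web search query that treats `phrase` as a single unit."""
    parts = [f'"{phrase}"']            # quoted so the engine matches the phrase,
                                       # not the individual keywords
    parts.extend(extra_terms)
    if pdf_only:
        parts.append("filetype:pdf")   # restrict results to PDF documents
    return " ".join(parts)

# Hypothetical forensic-science query.
q = build_query("gunshot residue", ["analysis"], pdf_only=True)
url = "https://www.google.com/search?q=" + quote_plus(q)
```

The same query string, minus the `filetype:` operator, works in databases such as PubMed or WorldCat that support quoted phrases.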
A High-Resolution Aerosol Retrieval Method for Urban Areas Using MISR Data
NASA Astrophysics Data System (ADS)
Moon, T.; Wang, Y.; Liu, Y.; Yu, B.
2012-12-01
Satellite-retrieved Aerosol Optical Depth (AOD) can provide a cost-effective way to monitor particulate air pollution without using expensive ground measurement sensors. One of the current state-of-the-art AOD retrieval methods is NASA's Multi-angle Imaging SpectroRadiometer (MISR) operational algorithm, which has a spatial resolution of 17.6 km x 17.6 km. While the MISR baseline scheme already leads to exciting research opportunities to study particle compositions at regional scale, its spatial resolution is too coarse for analyzing urban areas, where the AOD level has stronger spatial variations. We develop a novel high-resolution AOD retrieval algorithm that still uses MISR's radiance observations but has a resolution of 4.4 km x 4.4 km. We achieve the high-resolution AOD retrieval by implementing a hierarchical Bayesian model and a Markov chain Monte Carlo (MCMC) inference method. Our algorithm not only improves the spatial resolution, but also extends the coverage of AOD retrieval and provides additional composition information on the aerosol components that contribute to the AOD. We validate our method using data from NASA's recent DISCOVER-AQ mission, which contains ground-measured AOD values for the Washington DC and Baltimore area. The validation shows that, compared to the operational MISR retrievals, our scheme has 41.1% more AOD retrieval coverage for the DISCOVER-AQ data points and a 24.2% improvement in mean-squared error (MSE) with respect to the AERONET ground measurements.
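The MCMC inference idea can be sketched in miniature: a random-walk Metropolis sampler drawing AOD values from a posterior. Everything below is an illustrative stand-in; the actual model is hierarchical and works on MISR radiances, whereas this toy uses a one-parameter Gaussian likelihood with a flat non-negative prior.

```python
import math
import random

random.seed(0)  # deterministic for reproducibility

def log_posterior(aod, obs, sigma=0.1):
    """Toy log-posterior: Gaussian likelihood around a single 'observation',
    flat prior on aod >= 0. A stand-in for the full hierarchical model."""
    if aod < 0:
        return float("-inf")
    return -0.5 * ((obs - aod) / sigma) ** 2

def mcmc(obs, n_steps=5000, step=0.05):
    """Random-walk Metropolis sampler for the toy posterior."""
    aod = 0.5
    lp = log_posterior(aod, obs)
    samples = []
    for _ in range(n_steps):
        prop = aod + random.gauss(0.0, step)       # propose a nearby AOD
        lp_prop = log_posterior(prop, obs)
        if math.log(random.random()) < lp_prop - lp:  # Metropolis acceptance
            aod, lp = prop, lp_prop
        samples.append(aod)
    return samples

samples = mcmc(obs=0.3)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The posterior mean after burn-in settles near the observed value, which is the retrieval estimate in this toy setting; the real scheme additionally exploits spatial structure across 4.4 km pixels.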
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Mueller, Henning; Marchand-Maillet, Stephane; Pun, Thierry; Squire, David M.; Pecenovic, Zoran; Giess, Christoph; de Vries, Arjen P.
2000-10-01
While in the area of relational databases interoperability is ensured by common communication protocols (e.g. ODBC/JDBC using SQL), Content-Based Image Retrieval Systems (CBIRSs) and other multimedia retrieval systems lack both a common query language and a common communication protocol. Besides its obvious short-term convenience, interoperability of systems is crucial for the exchange and analysis of user data. In this paper, we present and describe an extensible XML-based query markup language, called MRML (Multimedia Retrieval Markup Language). MRML is primarily designed to ensure interoperability between different content-based multimedia retrieval systems. Further, MRML allows researchers to preserve their freedom in extending their systems as needed. MRML encapsulates multimedia queries in a way that enables multimedia (MM) query languages, MM content descriptions, MM query engines, and MM user interfaces to grow independently from each other, reaching a maximum of interoperability while ensuring a maximum of freedom for the developer. To benefit from this, only a few simple design principles have to be respected when extending MRML for one's private needs. The design of extensions within the MRML framework is described in detail in the paper. MRML has been implemented and tested for the CBIRS Viper, using the user interface Snake Charmer. Both are part of the GNU project and can be downloaded at our site.
Natural language information retrieval in digital libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strzalkowski, T.; Perez-Carballo, J.; Marinescu, M.
In this paper we report on some recent developments in the joint NYU and GE natural language information retrieval system. The main characteristic of this system is the use of advanced natural language processing to enhance the effectiveness of term-based document retrieval. The system is designed around a traditional statistical backbone consisting of the indexer module, which builds inverted index files from pre-processed documents, and a retrieval engine, which searches and ranks the documents in response to user queries. Natural language processing is used to (1) preprocess the documents in order to extract content-carrying terms, (2) discover inter-term dependencies and build a conceptual hierarchy specific to the database domain, and (3) process users' natural language requests into effective search queries. This system has been used in NIST-sponsored Text Retrieval Conferences (TREC), where we worked with approximately 3.3 GBytes of text articles, including material from the Wall Street Journal, the Associated Press newswire, the Federal Register, Ziff Communications's Computer Library, Department of Energy abstracts, U.S. Patents and the San Jose Mercury News, totaling more than 500 million words of English. The system has been designed to facilitate its scalability to deal with ever-increasing amounts of data. In particular, a randomized index-splitting mechanism has been installed which allows the system to create a number of smaller indexes that can be independently and efficiently searched.
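The statistical backbone described above (an indexer building inverted files, plus a retrieval engine answering term queries against them) can be sketched minimally; the documents and terms below are invented examples, and the real system adds NLP-derived terms and ranking on top of this structure.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Indexer module: map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Retrieval engine: return ids of documents containing every query term
    (conjunctive semantics; a real engine would rank, not just filter)."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

# Hypothetical pre-processed document collection.
docs = {
    1: "natural language processing for retrieval",
    2: "statistical retrieval engine",
    3: "natural gas pipelines",
}
index = build_inverted_index(docs)
hits = search(index, "natural retrieval")
```

The index-splitting mechanism mentioned in the abstract would partition `docs` across several such indexes and merge their postings at query time.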
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period from September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper describes these advances found in the AIRS Version-6 retrieval algorithm and demonstrates the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Supervised graph hashing for histopathology image retrieval and classification.
Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin
2017-12-01
In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve computational efficiency and reduce memory requirements, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.
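A minimal sketch of the retrieval step: cells represented as binary codes compared with Hamming distance, and a symmetrized nearest-neighbor average standing in for the group-to-group matching. The code length and the distance aggregation are assumptions for illustration, not the paper's actual hashing model:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary code vectors."""
    return int(np.sum(a != b))

def group_to_group_distance(cells_a, cells_b):
    """Symmetrized average nearest-neighbor Hamming distance between
    the cell-code sets of two images (an assumed matching rule)."""
    def directed(xs, ys):
        return np.mean([min(hamming(x, y) for y in ys) for x in xs])
    return 0.5 * (directed(cells_a, cells_b) + directed(cells_b, cells_a))

# Two toy "images", each a set of 4-bit cell codes
img_a = [np.array([0, 1, 1, 0]), np.array([1, 1, 0, 0])]
img_b = [np.array([0, 1, 1, 0]), np.array([1, 0, 0, 0])]
print(group_to_group_distance(img_a, img_b))  # -> 0.5
```

Ranking database images by this distance against a query image gives a simple retrieval loop; binary codes keep both the comparison and the memory footprint cheap.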
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints.
The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
Controlled Retrieval of Specific Context Information in Children and Adults.
Lorsbach, Thomas C; Friehe, Mary J; Teten, Amy Fair; Reimer, Jason F; Armendarez, Joseph J
2015-01-01
This study adapted a procedure used by Luo and Craik (2009) to examine whether developmental differences exist in the ability to use controlled retrieval processes to access the contextual details of memory representations. Participants from 3 age groups (mean ages 9, 12, and 25 years) were presented with words in 3 study contexts: with a black-and-white picture, with a color picture, or alone without a picture. Six recognition tests were then presented that varied in the demands (high or low) placed on the retrieval of specific contextual information. Each test consisted of a mixture of words that were old targets from 1 study context, distractors (i.e., previously studied words from a different context), and completely new words. A high-specificity and a low-specificity test list was paired with each test question, with high and low specificity being determined by the nature of the distractors used in a test list. High-specificity tests contained words that were studied in similar contexts: old targets (e.g., words studied with black-and-white pictures) and distractors (e.g., words studied with color pictures). In contrast, low-specificity tests contained words that were studied in dissimilar contexts: old targets (e.g., words studied with black-and-white pictures) and distractors (e.g., words previously studied without a picture). Relative to low-specificity tests, the retrieval conditions of high-specificity tests were assumed to place greater demands on the controlled access of specific contextual information. Analysis of recollection scores revealed that age differences were present on high- but not low-specificity tests, with the performance of 9-year-olds disproportionately affected by the retrieval demands of high-specificity tests.
Lucey, K.J.
1990-01-01
The U.S. Geological Survey conducts an external blind sample quality assurance project for its National Water Quality Laboratory in Denver, Colorado, based on the analysis of reference water samples. Reference samples containing selected inorganic and nutrient constituents are disguised as environmental samples at the Survey's office in Ocala, Florida, and are sent periodically through other Survey offices to the laboratory. The results of this blind sample project indicate the quality of analytical data produced by the laboratory. This report provides instructions on the use of QADATA, an interactive, menu-driven program that allows users to retrieve the results of the blind sample quality-assurance project. The QADATA program, which is available on the U.S. Geological Survey's national computer network, accesses a blind sample data base that contains more than 50,000 determinations from the last five water years for approximately 40 constituents at various concentrations. The data can be retrieved from the database for any user-defined time period and for any or all available constituents. After the user defines the retrieval, the program prepares statistical tables, control charts, and precision plots and generates a report which can be transferred to the user's office through the computer network. A discussion of the interpretation of the program output is also included. This quality assurance information will permit users to document the quality of the analytical results received from the laboratory. The blind sample data are entered into the database within weeks after being produced by the laboratory and can be retrieved to meet the needs of specific projects or programs. (USGS)
NASA Technical Reports Server (NTRS)
Xie, Yu; Minnis, Patrick; Hu, Yong X.; Kattawar, George W.; Yang, Ping
2008-01-01
Spherical or spheroidal air bubbles are generally trapped in the formation of rapidly growing ice crystals. In this study the single-scattering properties of inhomogeneous ice crystals containing air bubbles are investigated. Specifically, a computational model based on an improved geometric-optics method (IGOM) has been developed to simulate the scattering of light by randomly oriented hexagonal ice crystals containing spherical or spheroidal air bubbles. A combination of the ray-tracing technique and the Monte Carlo method is used. The effect of the air bubbles within ice crystals is to smooth the phase functions, diminish the 22° and 46° halo peaks, and substantially reduce the backscatter relative to bubble-free particles. These features vary with the number, sizes, locations, and shapes of the air bubbles within ice crystals. Moreover, the asymmetry factors of inhomogeneous ice crystals decrease as the volume of air bubbles increases. Cloud reflectance lookup tables were generated at wavelengths of 0.65 µm and 2.13 µm with different air-bubble conditions to examine the impact of the bubbles on retrieving ice cloud optical thickness and effective particle size. The reflectances simulated for inhomogeneous ice crystals are slightly larger than those computed for homogeneous ice crystals at a wavelength of 0.65 µm. Thus, the retrieved cloud optical thicknesses are reduced by employing inhomogeneous ice cloud models. At a wavelength of 2.13 µm, including air bubbles in ice cloud models may also increase the reflectance. This effect implies that the retrieved effective particle sizes for inhomogeneous ice crystals are larger than those retrieved for homogeneous ice crystals, particularly in the case of large air bubbles.
The Controlled Retrieval of Specific Context Information in Children and Adults
Lorsbach, Thomas C.; Reimer, Jason F.; Friehe, Mary J.; Armendarez, Joseph J.; Teten, Amy Fair
2017-01-01
The present study adapted a procedure used recently by Luo and Craik (2009) in order to examine whether developmental differences exist in the ability to use controlled retrieval processes to access the specific contextual details of memory representations. Participants from three age groups (Mean ages: 9, 12, and 25 years) were presented with words in three study contexts: with a black-and-white picture, with a color picture, or alone without a picture. Six recognition tests were then presented that varied in the demands (high or low) placed on the retrieval of specific contextual information. Each test consisted of a mixture of words that were old targets from one study context, distractors (i.e., previously studied words from a different context), and completely new words. A “high specificity” and a “low specificity” test list was paired with each test question, with “high” and “low” specificity being determined by the nature of the distractors used in a test list. High specificity tests contained words that were studied in similar contexts: old targets (e.g., words studied with black-and-white pictures) and distractors (e.g., words studied with color pictures). In contrast, low specificity tests contained words that were studied in dissimilar contexts: old targets (e.g., words studied with black-and-white pictures) and distractors (e.g., words previously studied without a picture). Relative to low specificity tests, the retrieval conditions of high specificity tests were assumed to place greater demands on the controlled access of specific contextual information. Analysis of recollection scores revealed that age differences were present on high, but not low specificity tests, with the performance of 9-year-olds being disproportionately affected by the retrieval demands of high specificity tests. PMID:26219173
Collection of liquid from below-ground location
Phillips, Steven J.; Alexander, Robert G.
1995-01-01
A method of retrieving liquid from a below-ground collection area by permitting gravity flow of the liquid from the collection area to a first closed container; monitoring the level of the liquid in the closed container; and after the liquid reaches a given level in the first closed container, transferring the liquid to a second closed container disposed at a location above the first closed container, via a conduit, by introducing into the first closed container a gas which is substantially chemically inert with respect to the liquid, the gas being at a pressure sufficient to propel the liquid from the first closed container to the second closed container.
Information Content of Aerosol Retrievals in the Sunglint Region
NASA Technical Reports Server (NTRS)
Ottaviani, M.; Knobelspiesse, K.; Cairns, B.; Mishchenko, M.
2013-01-01
We exploit quantitative metrics to investigate the information content in retrievals of atmospheric aerosol parameters (with a focus on single-scattering albedo) contained in multi-angle and multi-spectral measurements with sufficient dynamic range in the sunglint region. The simulations are performed for two classes of maritime aerosols with optical and microphysical properties compiled from measurements of the Aerosol Robotic Network. The information content is assessed using the inverse formalism and is compared to that derived from observations not affected by sunglint. We find that there is indeed additional information in measurements containing sunglint, not just for single-scattering albedo but also for aerosol optical thickness and the complex refractive index of the fine aerosol size mode, although the amount of additional information varies with aerosol type.
NASA Astrophysics Data System (ADS)
Chou, Cheng-Ying; Anastasio, Mark A.
2016-04-01
In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption- and phase-contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase-retrieval can always improve image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase-retrieval is quantified by computing the efficiency of the HO as the ratio of the test statistic signal-to-noise ratio (SNR) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase-retrieval can impair signal detection performance in XPC imaging.
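The comparison described above can be illustrated numerically. For a known signal s in Gaussian noise with covariance K, the Hotelling observer test-statistic SNR satisfies SNR^2 = s^T K^{-1} s, and a lossy (non-invertible) retrieval step modeled as a linear operator A can only reduce it. The signal shape, covariance, and pixel-averaging operator below are toy assumptions, not the paper's actual imaging model:

```python
import numpy as np

def hotelling_snr(signal, K):
    """Hotelling observer SNR for a known signal in Gaussian noise
    with covariance K: SNR^2 = s^T K^{-1} s."""
    return float(np.sqrt(signal @ np.linalg.solve(K, signal)))

n = 8
signal = np.exp(-0.5 * ((np.arange(n) - n / 2) / 1.5) ** 2)  # toy known signal
K = 0.5 * np.eye(n)                  # noise covariance of the raw intensity data

# A lossy "retrieval" step modeled as a non-invertible smoothing operator A
A = np.zeros((n // 2, n))
for i in range(n // 2):
    A[i, 2 * i:2 * i + 2] = 0.5      # average adjacent pixel pairs

snr_raw = hotelling_snr(signal, K)                     # HO on measured data
snr_ret = hotelling_snr(A @ signal, A @ K @ A.T)       # HO on processed image
efficiency = (snr_ret / snr_raw) ** 2                  # ratio of SNR^2 values
print(efficiency <= 1.0)   # a non-invertible step cannot add information
```

If A were invertible, the efficiency would be exactly 1; the loss quantified in the paper arises precisely when the retrieval discards components of the data.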
Selectively Encrypted Pull-Up Based Watermarking of Biometric data
NASA Astrophysics Data System (ADS)
Shinde, S. A.; Patel, Kushal S.
2012-10-01
Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g., a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and can only be retrieved if the secret key is available. This container image is encrypted for greater security against attack. Because wireless devices run on batteries and have limited computational capabilities, we reduce energy consumption by selectively encrypting the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. The selective encryption mechanism is expected to make both encryption and decryption more time-efficient. A significant reduction in error rate is expected to be achieved by the bit pull-up method.
User centered and ontology based information retrieval system for life sciences.
Sy, Mohameth-François; Ranwez, Sylvie; Montmain, Jacky; Regnault, Armelle; Crampes, Michel; Ranwez, Vincent
2012-01-25
Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings is the basis of biomedical publication indexing and the information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation for their adequacy to the query is provided. Users may thus be confused by the selection and have no idea of how to adapt their queries so that the results match their expectations. This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess document adequacy with respect to a query. The selection of documents is displayed in a semantic map that provides graphical indications making explicit to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating query concept weighting and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/. This environment is a first step towards a user-centred application in which the system highlights relevant information to provide decision help.
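One common way to compute the semantic proximities mentioned above is edge counting in an is-a hierarchy; the tiny hierarchy and the 1/(1 + distance) form below are illustrative assumptions, not OBIRS's actual aggregating model:

```python
# Toy is-a hierarchy (assumed for illustration; OBIRS works over
# real ontologies such as MeSH or the Gene Ontology).
parents = {
    "transcription factor": ["protein"],
    "protein": ["molecule"],
    "gene": ["molecule"],
    "molecule": [],
}

def path_to_root(concept):
    """Chain of ancestors from a concept up to the hierarchy root."""
    path = [concept]
    while parents[path[-1]]:
        path.append(parents[path[-1]][0])  # single-parent chain assumed
    return path

def proximity(a, b):
    """Edge-counting semantic proximity: 1 / (1 + shortest is-a distance)."""
    pa, pb = path_to_root(a), path_to_root(b)
    shared = set(pa) & set(pb)
    dist = min(pa.index(c) + pb.index(c) for c in shared)
    return 1.0 / (1.0 + dist)

print(proximity("transcription factor", "gene"))  # -> 0.25
```

Scoring a document then amounts to aggregating such proximities between its annotation concepts and the weighted query concepts, which is what drives the semantic map display.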
User centered and ontology based information retrieval system for life sciences
2012-01-01
Background Because of the increasing number of electronic resources, designing efficient tools to retrieve and exploit them is a major challenge. Some improvements have been offered by semantic Web technologies and applications based on domain ontologies. In life science, for instance, the Gene Ontology is widely exploited in genomic applications and the Medical Subject Headings is the basis of biomedical publication indexing and the information retrieval process proposed by PubMed. However, current search engines suffer from two main drawbacks: there is limited user interaction with the list of retrieved resources, and no explanation for their adequacy to the query is provided. Users may thus be confused by the selection and have no idea of how to adapt their queries so that the results match their expectations. Results This paper describes an information retrieval system that relies on a domain ontology to widen the set of relevant documents that is retrieved and that uses a graphical rendering of query results to favor user interactions. Semantic proximities between ontology concepts and aggregating models are used to assess document adequacy with respect to a query. The selection of documents is displayed in a semantic map that provides graphical indications making explicit to what extent they match the user's query; this man/machine interface favors a more interactive and iterative exploration of the data corpus by facilitating query concept weighting and visual explanation. We illustrate the benefit of using this information retrieval system on two case studies, one of which aims at collecting human genes related to transcription factors involved in the hemopoiesis pathway. Conclusions The ontology-based information retrieval system described in this paper (OBIRS) is freely available at: http://www.ontotoolkit.mines-ales.fr/ObirsClient/.
This environment is a first step towards a user-centred application in which the system highlights relevant information to provide decision help. PMID:22373375
ERIC Educational Resources Information Center
Hilley, Robert
This curriculum guide contains teacher and student materials for a course on outboard-engine boat systems and service for power product equipment technician occupations. The course contains the following four units of instruction: (1) Outboard-Engine Design and Identification; (2) Operation and Service of Engine-Support Systems; (3) Operation and…
An overview of the National Space Science data Center Standard Information Retrieval System (SIRS)
NASA Technical Reports Server (NTRS)
Shapiro, A.; Blecher, S.; Verson, E. E.; King, M. L. (Editor)
1974-01-01
A general overview is given of the National Space Science Data Center (NSSDC) Standard Information Retrieval System. It describes, in general terms, the information system that contains the data files and the software system that processes and manipulates the files maintained at the Data Center. Emphasis is placed on providing users with an overview of the capabilities and uses of the NSSDC Standard Information Retrieval System (SIRS). The examples given are taken from the files at the Data Center. Detailed information about NSSDC data files is documented in a set of File Users Guides, with one user's guide prepared for each file processed by SIRS. Detailed information about SIRS itself is presented in the SIRS Users Guide.
NSWC-NADC interactive communication links for AN/UYS-1 loadtape creation and retrieval
NASA Astrophysics Data System (ADS)
Greathouse, D. M.
1984-09-01
This report contains an alternative method of communication (interactive vs. remote batch) with the Naval Air Development Center for the creation and retrieval of AN/UYS-1 Advanced Signal Processor (ASP) operational software loadtapes. Operational software for the Digital Acoustic Sensor Simulator (DASS) program is developed and maintained at the Naval Air Development Center (NADC). The Facility for Automated Software Production (FASP), an NADC-resident software generation facility, provides the support tools necessary for data base creation, software development and maintenance, and loadtape generation. Once a loadtape file is generated at NADC, it must be retrieved via telephone transmission and placed in a format suitable for loading into the AN/UYS-1 Advanced Signal Processor (ASP).
50 CFR 300.109 - Gear disposal.
Code of Federal Regulations, 2011 CFR
2011-10-01
... articles and substances include, but are not limited to, fishing gear, net scraps, bale straps, plastic bags, oil drums, petroleum containers, oil, toxic chemicals or any manmade items retrieved in a...
50 CFR 300.109 - Gear disposal.
Code of Federal Regulations, 2010 CFR
2010-10-01
... articles and substances include, but are not limited to, fishing gear, net scraps, bale straps, plastic bags, oil drums, petroleum containers, oil, toxic chemicals or any manmade items retrieved in a...
4. INTERIOR OF ENGINE ROOM, CONTAINING UNITED-TOD TWIN-TANDEM ENGINE, FOR ...
4. INTERIOR OF ENGINE ROOM, CONTAINING UNITED-TOD TWIN-TANDEM ENGINE, FOR 40" BLOOMING MILL; AS SEEN FROM THE UPPER LEVEL BRIDGE CRANE, THIS ENGINE WAS THE DIRECT DRIVE TO THE 40" BLOOMING MILL LOCATED IN THE ADJACENT ROOM TO THE LEFT. THE UNITED-TOD ENGINE, A TWIN TANDEM COMPOUND STEAM ENGINE, WAS RATED AT 20,000 HP. IN 1946 NEW HIGH PRESSURE CYLINDERS WERE INSTALLED AND THE ENGINE RAN ON 200 PSI STEAM, WITH A 44"X76"X60" STROKE, TO A BUILT-UP COUNTER-BALANCED CENTER CRANK. - Republic Iron & Steel Company, Youngstown Works, Blooming Mill & Blooming Mill Engines, North of Poland Avenue, Youngstown, Mahoning County, OH
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campos, E; Sisterson, Douglas
The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. The ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements; this study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks.
As a result, this study will address the first steps towards reporting ARM measurement uncertainty: 1) identifying how the uncertainty of individual ARM measurements is currently expressed, 2) identifying a consistent approach to measurement uncertainty, and then 3) reclassifying ARM instrument measurement uncertainties in a common framework.
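As one concrete (assumed) convention for the proposed total, independent instrument, field, and retrieval components are often combined in quadrature; the report itself does not prescribe a formula, so this is only a sketch:

```python
import math

def total_uncertainty(u_instrument, u_field, u_retrieval):
    """Combine independent uncertainty components in quadrature
    (root-sum-square) -- a common convention, assumed here."""
    return math.sqrt(u_instrument**2 + u_field**2 + u_retrieval**2)

# Hypothetical component values for a single measurement
print(round(total_uncertainty(0.3, 0.4, 0.0), 2))  # -> 0.5
```

Quadrature addition is only valid when the components are uncorrelated; correlated components would need covariance terms as well.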
The Study on Collaborative Manufacturing Platform Based on Agent
NASA Astrophysics Data System (ADS)
Zhang, Xiao-yan; Qu, Zheng-geng
To address the knowledge-intensive trend in collaborative manufacturing development, we describe a multi-agent architecture supporting a knowledge-based collaborative manufacturing development platform. By virtue of the wrapper services and communication capabilities that agents provide, the proposed architecture facilitates the organization and collaboration of multi-disciplinary individuals and tools. By effectively supporting the formal representation, capture, retrieval, and reuse of manufacturing knowledge, a generalized knowledge repository based on an ontology library enables engineers to meaningfully exchange information and pass knowledge across boundaries. Intelligent agent technology increases the efficiency and interoperability of traditional KBE systems and provides comprehensive design environments for engineers.
RIM as the data base management system for a material properties data base
NASA Technical Reports Server (NTRS)
Karr, P. H.; Wilson, D. J.
1984-01-01
Relational Information Management (RIM) was selected as the data base management system for a prototype engineering materials data base. The data base provides a central repository for engineering material properties data, which facilitates their control. Numerous RIM capabilities are exploited to satisfy prototype data base requirements. Numerical, text, tabular, and graphical data and references are being stored for five material types. Data retrieval will be accomplished both interactively and through a FORTRAN interface. The experience gained in creating and exercising the prototype will be used in specifying requirements for a production system.
Semantic Clustering of Search Engine Results
Soliman, Sara Saad; El-Sayed, Maged F.; Hassan, Yasser F.
2015-01-01
This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantic similarities among documents and applies an activation-spreading technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype was implemented and several experiments were conducted to test the proposed solution. The results of the experiments confirmed that the proposed solution achieves remarkable results in terms of precision. PMID:26933673
SLUDGE RETRIEVAL FROM HANFORD K WEST BASIN SETTLER TANKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
ERPENBECK EG; LESHIKAR GA
In 2010, an innovative, remotely operated retrieval system was deployed to successfully retrieve over 99.7% of the radioactive sludge from ten submerged tanks in Hanford's K-West Basin. As part of K-West Basin cleanup, the accumulated sludge needed to be removed from the 0.5 meter diameter by 5 meter long settler tanks and transferred approximately 45 meters to an underwater container for sampling and waste treatment. The abrasive, dense, non-homogeneous sludge was the product of the washing process of corroded nuclear fuel. It consists of small (less than 600 micron) particles of uranium metal, uranium oxide, and various other constituents, potentially agglomerated or cohesive after 10 years of storage. The Settler Tank Retrieval System (STRS) was developed to access, mobilize and pump out the sludge from each tank using a standardized process of retrieval head insertion, periodic high pressure water spray, retraction, and continuous pumping of the sludge. Blind operations were guided by monitoring flow rate, radiation levels in the sludge stream, and solids concentration. The technology developed and employed in the STRS can potentially be adapted to similar problematic waste tanks or pipes that must be remotely accessed to achieve mobilization and retrieval of the sludge within.
Heteroassociative storage of hippocampal pattern sequences in the CA3 subregion
Recio, Renan S.; Reyes, Marcelo B.
2018-01-01
Background Recent research suggests that the CA3 subregion of the hippocampus has properties of both autoassociative network, due to its ability to complete partial cues, tolerate noise, and store associations between memories, and heteroassociative one, due to its ability to store and retrieve sequences of patterns. Although there are several computational models of the CA3 as an autoassociative network, more detailed evaluations of its heteroassociative properties are missing. Methods We developed a model of the CA3 subregion containing 10,000 integrate-and-fire neurons with both recurrent excitatory and inhibitory connections, and which exhibits coupled oscillations in the gamma and theta ranges. We stored thousands of pattern sequences using a heteroassociative learning rule with competitive synaptic scaling. Results We showed that a purely heteroassociative network model can (i) retrieve pattern sequences from partial cues with external noise and incomplete connectivity, (ii) achieve homeostasis regarding the number of connections per neuron when many patterns are stored when using synaptic scaling, (iii) continuously update the set of retrievable patterns, guaranteeing that the last stored patterns can be retrieved and older ones can be forgotten. Discussion Heteroassociative networks with synaptic scaling rules seem sufficient to achieve many desirable features regarding connectivity homeostasis, pattern sequence retrieval, noise tolerance and updating of the set of retrievable patterns. PMID:29312826
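The heteroassociative storage and retrieval loop described above can be sketched with a simple outer-product rule on bipolar patterns; this toy model omits the paper's integrate-and-fire dynamics, inhibition, oscillations, and synaptic scaling:

```python
import numpy as np

def store_sequence(patterns):
    """Heteroassociative (outer-product) rule: link each pattern to its
    successor, W += x_{t+1} x_t^T."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for x_t, x_next in zip(patterns, patterns[1:]):
        W += np.outer(x_next, x_t)
    return W

def recall(W, cue, steps):
    """Iteratively retrieve the stored sequence from a cue pattern."""
    out = [cue]
    x = cue
    for _ in range(steps):
        x = np.sign(W @ x)   # threshold back to a bipolar pattern
        out.append(x)
    return out

# Three bipolar patterns forming one stored sequence
p = [np.array([1, -1, 1, -1]),
     np.array([1, 1, -1, -1]),
     np.array([-1, 1, 1, -1])]
W = store_sequence(p)
seq = recall(W, p[0], 2)
print(all(np.array_equal(a, b) for a, b in zip(seq, p)))  # -> True
```

Because the weight matrix maps each pattern onto the next rather than onto itself, a single cue replays the whole sequence, which is the heteroassociative behavior the model attributes to CA3.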
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, Franklin R.; Clayson, Carol Anne
2012-01-01
Improved estimates of near-surface air temperature and air humidity are critical to the development of more accurate turbulent surface heat fluxes over the ocean. Recent progress in retrieving these parameters has been made through the application of artificial neural networks (ANN) and the use of multi-sensor passive microwave observations. Details are provided on the development of an improved retrieval algorithm that applies the nonlinear statistical ANN methodology to a set of observations from the Advanced Microwave Scanning Radiometer (AMSR-E) and the Advanced Microwave Sounding Unit (AMSU-A) that are currently available from the NASA AQUA satellite platform. Statistical inversion techniques require an adequate training dataset to properly capture embedded physical relationships. The development of multiple training datasets containing only in-situ observations, only synthetic observations produced using the Community Radiative Transfer Model (CRTM), or a mixture of each is discussed. An intercomparison of results using each training dataset is provided to highlight the relative advantages and disadvantages of each methodology. Particular emphasis will be placed on the development of retrievals in cloudy versus clear-sky conditions. Near-surface air temperature and humidity retrievals using the multi-sensor ANN algorithms are compared to previous linear and non-linear retrieval schemes.
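The nonlinear statistical inversion the abstract describes maps multi-channel brightness temperatures to a geophysical quantity. As an illustrative sketch only, with invented weights and a made-up three-channel input (a real retrieval would be trained on in-situ or CRTM-simulated observations), a one-hidden-layer network has this form:

```python
import math

# Toy forward pass of a one-hidden-layer ANN retrieval.
# Weights are illustrative, not a trained AMSR-E/AMSU-A model.
def tanh_layer(x, weights, biases):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def retrieve_air_temperature(brightness_temps):
    """brightness_temps: normalized multi-channel values (hypothetical)."""
    W1 = [[0.5, -0.3, 0.2], [-0.1, 0.4, 0.6]]  # hidden layer (2 units)
    b1 = [0.0, 0.1]
    W2 = [1.2, -0.8]                           # linear output unit
    b2 = 15.0                                  # offset in deg C
    h = tanh_layer(brightness_temps, W1, b1)
    return sum(w * hi for w, hi in zip(W2, h)) + b2

print(round(retrieve_air_temperature([0.2, 0.1, 0.3]), 2))
```

The tanh hidden layer is what lets the scheme capture the nonlinear relationships that the earlier linear retrieval schemes mentioned in the abstract cannot.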
Ten Most Searched Databases by a Business Generalist--Part 1 or A Day in the Life of....
ERIC Educational Resources Information Center
Meredith, Meri
1986-01-01
Describes databases frequently used in Business Information Center, Cummins Engine Company (Columbus, Indiana): Dun and Bradstreet Business Information Report System, Newsearch, Dun and Bradstreet Market Identifiers, Trade and Industry Index, PTS PROMT, Bureau of Labor Statistics files, ABI/INFORM, Magazine Index, NEXIS, Dow Jones News/Retrieval.…
Engineering study for the functional design of a multiprocessor system
NASA Technical Reports Server (NTRS)
Miller, J. S.; Vandever, W. H.; Stanten, S. F.; Avakian, A. E.; Kosmala, A. L.
1972-01-01
The results are presented of a study to generate a functional system design of a multiprocessing computer system capable of satisfying the computational requirements of a space station. These data management system requirements were specified to include: (1) real time control, (2) data processing and storage, (3) data retrieval, and (4) remote terminal servicing.
Data systems and computer science programs: Overview
NASA Technical Reports Server (NTRS)
Smith, Paul H.; Hunter, Paul
1991-01-01
An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.
Knowledge Retrieval as Specialized Inference.
1987-05-01
by both the Science and Engineering Research Council and the Alvey Directorate under grant SERC GR/D/16062. ...Accommodating value assignments at this point facilitates the incorporation of quantifiers...
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest most users do not employ advanced search features, and that the linguistic structure of queries often resembles a human-human communication model that is not always successful in human-computer communication.…
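A transaction log analysis of this kind typically scans each logged query for markers of advanced syntax. The sketch below is hypothetical (the log format, marker set, and function names are invented, not WebCrawler's), but shows the basic counting step:

```python
# Toy query-log scan: count queries using "advanced" features such as
# Boolean operators or phrase quoting. Marker set is illustrative.
def uses_advanced_features(query):
    tokens = query.upper().split()
    boolean = any(op in tokens for op in ("AND", "OR", "NOT"))
    phrase = '"' in query
    return boolean or phrase

log = ["dog training tips",
       'recipes AND "apple pie"',
       "weather toronto"]
advanced = sum(uses_advanced_features(q) for q in log)
print(f"{advanced}/{len(log)} queries used advanced features")  # 1/3
```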
On-Line Analysis of Southern FIA Data
Michael P. Spinney; Paul C. Van Deusen; Francis A. Roesch
2006-01-01
The Southern On-Line Estimator (SOLE) is a web-based FIA database analysis tool designed with an emphasis on modularity. The Java-based user interface is simple and intuitive to use and the R-based analysis engine is fast and stable. Each component of the program (data retrieval, statistical analysis and output) can be individually modified to accommodate major...
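The modularity the abstract emphasizes, where retrieval, analysis, and output can each be modified independently, can be sketched as a pipeline of swappable callables. Everything below (data, stage names, the mean-volume statistic) is invented for illustration and is not SOLE's actual interface:

```python
# Hypothetical sketch of SOLE-style modularity: each stage is a plain
# callable that can be replaced without touching the others.
def retrieve(plot_ids):
    # Stand-in for a database query; returns (plot_id, volume) rows.
    table = {1: 120.0, 2: 98.5, 3: 143.2}
    return [(p, table[p]) for p in plot_ids if p in table]

def analyze(rows):
    values = [v for _, v in rows]
    return sum(values) / len(values)

def report(result):
    return f"mean volume: {result:.1f}"

def run_pipeline(plot_ids, retrieve=retrieve, analyze=analyze,
                 report=report):
    return report(analyze(retrieve(plot_ids)))

print(run_pipeline([1, 2, 3]))  # mean volume: 120.6
```

Passing a different `analyze` or `report` callable swaps out one component, which is the design property the abstract highlights.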
ERIC Educational Resources Information Center
Micco, Mary; Popp, Rich
Techniques for building a world-wide information infrastructure by reverse engineering existing databases to link them in a hierarchical system of subject clusters to create an integrated database are explored. The controlled vocabulary of the Library of Congress Subject Headings is used to ensure consistency and group similar items. Each database…
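The grouping step described above, using a controlled vocabulary so similar items land in the same subject cluster, reduces to keying records by their assigned heading. The records and headings below are invented examples, not actual LCSH data:

```python
from collections import defaultdict

# Toy clustering of records by controlled subject heading; a real system
# would draw headings from the Library of Congress Subject Headings.
def cluster_by_subject(records):
    clusters = defaultdict(list)
    for title, heading in records:
        clusters[heading].append(title)
    return dict(clusters)

records = [("Intro to Frogs", "Amphibians"),
           ("Salamander Atlas", "Amphibians"),
           ("Engine Repair", "Machinery")]
print(cluster_by_subject(records)["Amphibians"])
```

The controlled vocabulary matters because free-text subject terms ("frogs", "anura") would scatter similar items across clusters instead of merging them.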
Finding Relevant Data in a Sea of Languages
2016-04-26
full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence... and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken... the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language
Do Family Physicians Retrieve Synopses of Clinical Research Previously Read as Email Alerts?
Pluye, Pierre; Johnson-Lafleur, Janique; Granikov, Vera; Shulha, Michael; Bartlett, Gillian; Marlow, Bernard
2011-01-01
Background A synopsis of new clinical research highlights important aspects of one study in a brief structured format. When delivered as email alerts, synopses enable clinicians to become aware of new developments relevant for practice. Once read, a synopsis can become a known item of clinical information. In time-pressured situations, remembering a known item may facilitate information retrieval by the clinician. However, exactly how synopses first delivered as email alerts influence retrieval at some later time is not known. Objectives We examined searches for clinical information in which a synopsis previously read as an email alert was retrieved (defined as a dyad). Our study objectives were to (1) examine whether family physicians retrieved synopses they previously read as email alerts and then to (2) explore whether family physicians purposefully retrieved these synopses. Methods We conducted a mixed-methods study in which a qualitative multiple case study explored the retrieval of email alerts within a prospective longitudinal cohort of practicing family physicians. Reading of research-based synopses was tracked in two contexts: (1) push, meaning to read on email and (2) pull, meaning to read after retrieval from one electronic knowledge resource. Dyads, defined as synopses first read as email alerts and subsequently retrieved in a search of a knowledge resource, were prospectively identified. Participants were interviewed about all of their dyads. Outcomes were the total number of dyads and their type. Results Over a period of 341 days, 194 unique synopses delivered to 41 participants resulted in 4937 synopsis readings. In all, 1205 synopses were retrieved over an average of 320 days. Of the 1205 retrieved synopses, 21 (1.7%) were dyads made by 17 family physicians. Of the 1205 retrieved synopses, 6 (0.5%) were known item type dyads. However, dyads also occurred serendipitously. 
Conclusion In the single knowledge resource we studied, email alerts containing research-based synopses were rarely retrieved. Our findings help us to better understand the effect of push on pull and to improve the integration of research-based information within electronic resources for clinicians. PMID:22130465
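The study's central unit of analysis, the dyad, pairs a synopsis first read as an email alert (push) with a later retrieval of the same synopsis from the knowledge resource (pull). A minimal sketch of that matching logic, with invented IDs and dates rather than the study's tracking data:

```python
from datetime import date

# Illustrative dyad detection: pair each pull with an earlier first push
# of the same synopsis. Event data below is invented.
def find_dyads(push_events, pull_events):
    """Each event: (synopsis_id, date). Returns (synopsis_id, pull_date)
    pairs where the pull happened after the first push of that synopsis."""
    first_push = {}
    for sid, day in push_events:
        if sid not in first_push or day < first_push[sid]:
            first_push[sid] = day
    return [(sid, day) for sid, day in pull_events
            if sid in first_push and day > first_push[sid]]

push = [("syn-12", date(2010, 1, 5)), ("syn-34", date(2010, 2, 1))]
pull = [("syn-12", date(2010, 3, 9)), ("syn-99", date(2010, 3, 10))]
print(find_dyads(push, pull))  # [('syn-12', datetime.date(2010, 3, 9))]
```

Distinguishing purposeful from serendipitous dyads, as the study does, requires interviewing the clinicians; the log matching alone only establishes that the temporal pairing occurred.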