Sample records for search strategy development

  1. Routine development of objectively derived search strategies.

    PubMed

    Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael

    2012-02-29

    Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This process consists of the following steps: generation of a test set, followed by the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, namely a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
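
The development/validation split described above can be sketched in a few lines. Everything below is invented for illustration (toy references, toy term groups, a naive substring matcher), not the actual IQWiG strategy:

```python
# A toy "objectively derived" strategy check: develop a Boolean strategy on a
# development set of known-relevant references, then confirm it also retrieves
# a held-out validation set. All records and terms here are invented.
references = [
    "Brachytherapy outcomes in localized prostate cancer",
    "Permanent seed implantation for prostate carcinoma",
    "Radiotherapy versus brachytherapy for prostate cancer",
    "Quality of life after prostate brachytherapy",
]
development_set, validation_set = references[:3], references[3:]

# Strategy = AND of term groups, each group OR-ed internally.
strategy = [("brachytherapy", "seed implantation"), ("prostate",)]

def matches(record, strategy):
    text = record.lower()
    return all(any(term in text for term in group) for group in strategy)

# The strategy must retrieve 100% of the development set before validation.
assert all(matches(r, strategy) for r in development_set)
recall = sum(matches(r, strategy) for r in validation_set) / len(validation_set)
print(f"validation recall: {recall:.0%}")
```

In the paper's terms, a strategy only counts as validated once the held-out references are also completely identified, as happens here.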

  2. Validation of search filters for identifying pediatric studies in PubMed.

    PubMed

    Leclercq, Edith; Leeflang, Mariska M G; van Dalen, Elvira C; Kremer, Leontien C M

    2013-03-01

    To identify and validate PubMed search filters for retrieving studies including children and to develop a new pediatric search filter for PubMed. We developed 2 different datasets of studies to evaluate the performance of the identified pediatric search filters, expressed in terms of sensitivity, precision, specificity, accuracy, and number needed to read (NNR). An optimal search filter will have a high sensitivity and high precision with a low NNR. In addition to the PubMed Limits: All Child: 0-18 years filter (in May 2012 renamed to PubMed Filter Child: 0-18 years), 6 search filters for identifying studies including children were identified: 3 developed by Kastner et al, 1 developed by BestBets, 1 by the Child Health Field, and 1 by the Cochrane Childhood Cancer Group. Three search filters (Cochrane Childhood Cancer Group, Child Health Field, and BestBets) had the highest sensitivity (99.3%, 99.5%, and 99.3%, respectively) but a lower precision (64.5%, 68.4%, and 66.6%, respectively) compared with the other search filters. Two Kastner search filters had a high precision (93.0% and 93.7%, respectively) but a low sensitivity (58.5% and 44.8%, respectively). They failed to identify many pediatric studies in our datasets. The search terms responsible for false-positive results in the reference dataset were determined. With these data, we developed a new search filter for identifying studies with children in PubMed with an optimal sensitivity (99.5%) and precision (69.0%). Search filters to identify studies including children either have a low sensitivity or a low precision with a high NNR. A new pediatric search filter with a high sensitivity and a low NNR has been developed. Copyright © 2013 Mosby, Inc. All rights reserved.
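
The performance measures named in this abstract all derive from a retrieval confusion matrix in the standard way. A minimal sketch, with made-up counts chosen only to resemble the figures reported:

```python
# Filter performance from a confusion matrix. The counts are invented for
# illustration; they are not the datasets used in the study.
def filter_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)            # share of relevant records retrieved
    precision = tp / (tp + fp)              # share of retrieved records that are relevant
    specificity = tn / (tn + fp)            # share of irrelevant records excluded
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    nnr = 1 / precision                     # records to read per relevant record found
    return sensitivity, precision, specificity, accuracy, nnr

sens, prec, spec, acc, nnr = filter_metrics(tp=199, fp=90, fn=1, tn=710)
print(f"sensitivity={sens:.1%} precision={prec:.1%} NNR={nnr:.2f}")
```

This makes the trade-off in the abstract concrete: driving sensitivity up usually admits more false positives, which lowers precision and raises NNR.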

  3. Standardization of search methods for guideline development: an international survey of evidence-based guideline development groups.

    PubMed

    Deurenberg, Rikie; Vlayen, Joan; Guillo, Sylvie; Oliver, Thomas K; Fervers, Beatrice; Burgers, Jako

    2008-03-01

    Effective literature searching is particularly important for clinical practice guideline development. Sophisticated searching and filtering mechanisms are needed to help ensure that all relevant research is reviewed. To assess the methods used for the selection of evidence for guideline development by evidence-based guideline development organizations. A semistructured questionnaire assessing the databases, search filters and evaluation methods used for literature retrieval was distributed to eight major organizations involved in evidence-based guideline development. All of the organizations used search filters as part of guideline development. The medline database was the primary source accessed for literature retrieval. The OVID or SilverPlatter interfaces were used in preference to the freely accessed PubMed interface. The Cochrane Library, embase, cinahl and psycinfo databases were also frequently used by the organizations. All organizations reported the intention to improve and validate their filters for finding literature specifically relevant for guidelines. In the first international survey of its kind, eight major guideline development organizations indicated a strong interest in identifying, improving and standardizing search filters to improve guideline development. It is to be hoped that this will result in the standardization of, and open access to, search filters, an improvement in literature searching outcomes and greater collaboration among guideline development organizations.

  4. ISE: An Integrated Search Environment. The manual

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan

    1992-01-01

    Integrated Search Environment (ISE), a software package that implements hierarchical searches with meta-control, is described in this manual. ISE is a collection of problem-independent routines to support solving searches. Mainly, these routines are core routines for solving a search problem and they handle the control of searches and maintain the statistics related to searches. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. Potential users of ISE are designers of new application solvers and new search algorithms, and users of experimental application solvers and search algorithms. The ISE is designed to be user-friendly and information rich. In this manual, the organization of ISE is described and several experiments carried out on ISE are also described.

  5. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges.

    PubMed

    Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James

    2017-09-01

    Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Development and Validation of Search Filters to Identify Articles on Family Medicine in Online Medical Databases

    PubMed Central

    Pols, David H.J.; Bramer, Wichor M.; Bindels, Patrick J.E.; van de Laar, Floris A.; Bohnen, Arthur M.

    2015-01-01

    Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. PMID:26195683

  7. Development of Health Information Search Engine Based on Metadata and Ontology

    PubMed Central

    Song, Tae-Min; Jin, Dal-Lae

    2014-01-01

    Objectives The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers. PMID:24872907

  8. Development of health information search engine based on metadata and ontology.

    PubMed

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.
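
The schema idea in this abstract (Dublin Core elements plus an added target-audience element) might look roughly as follows. Apart from the Dublin Core element names themselves, every field value below is invented for illustration:

```python
from dataclasses import dataclass, asdict

# Sketch of a health-information metadata record: a subset of Dublin Core
# elements extended with an "audience" element, as the study describes.
# All concrete values are invented examples.
@dataclass
class HealthInfoRecord:
    title: str
    creator: str
    subject: str          # ideally a term mapped to SNOMED CT
    description: str
    date: str
    identifier: str
    audience: str         # the element added beyond the Dublin Core set

record = HealthInfoRecord(
    title="Managing type 2 diabetes",
    creator="Example Health Agency",
    subject="Diabetes mellitus type 2 (SNOMED CT mapping)",
    description="Patient-oriented overview of diet and medication.",
    date="2014-04-01",
    identifier="https://example.org/health/diabetes",
    audience="patients",
)
print(asdict(record)["audience"])
```

Tagging the audience lets a search engine return consumer-level pages to patients and clinical material to professionals from the same index, which is the interoperability point the study makes.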

  9. Improving e-book access via a library-developed full-text search tool.

    PubMed

    Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N

    2007-01-01

    This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.

  10. Improving e-book access via a library-developed full-text search tool*

    PubMed Central

    Foust, Jill E.; Bergen, Phillip; Maxeiner, Gretchen L.; Pawlowski, Peter N.

    2007-01-01

    Purpose: This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. Setting: The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. Brief Description: The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single “Google-style” query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. Results/Evaluation: A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. Conclusion: This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products. PMID:17252065

  11. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

    The large increase in the size of patent collections has led to the need for efficient search strategies. But the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular applications addressing the needs of the pharmaceutical and biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy for improving prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that proper tuning of the system to adapt to the various search tasks clearly increases its effectiveness. We conclude that different search tasks demand different information retrieval engine settings in order to yield optimal end-user retrieval.
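
Top-precision can be read as the fraction of queries whose first-ranked result is relevant. The sketch below assumes that reading (the paper's exact definition may differ) and uses invented ranked lists:

```python
# Top-precision sketch: count queries where the rank-1 result is relevant.
# The exact metric definition in the paper may differ; all ranked lists and
# relevance sets here are invented.
def top_precision(ranked_results, relevant):
    hits = sum(1 for q, results in ranked_results.items()
               if results and results[0] in relevant[q])
    return hits / len(ranked_results)

ranked = {
    "q1": ["patentA", "patentB"],
    "q2": ["patentC", "patentD"],
    "q3": ["patentE"],
}
rel = {"q1": {"patentA"}, "q2": {"patentD"}, "q3": {"patentE"}}
print(f"top-precision: {top_precision(ranked, rel):.1%}")
```

Such a metric suits known-item tasks, where a single target patent exists, which helps explain why the known-item variant scores far higher (46.02%) than prior art search (6.76%).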

  12. Development and tuning of an original search engine for patent libraries in medicinal chemistry

    PubMed Central

    2014-01-01

    Background The large increase in the size of patent collections has led to the need for efficient search strategies. But the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular applications addressing the needs of the pharmaceutical and biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. Methods We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. Results The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy for improving prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web application within Novartis Pharma. The application is briefly described in the report. Conclusions We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that proper tuning of the system to adapt to the various search tasks clearly increases its effectiveness. We conclude that different search tasks demand different information retrieval engine settings in order to yield optimal end-user retrieval. PMID:24564220

  13. Search Strategy Development in a Flipped Library Classroom: A Student-Focused Assessment

    ERIC Educational Resources Information Center

    Goates, Michael C.; Nelson, Gregory M.; Frost, Megan

    2017-01-01

    Librarians at Brigham Young University compared search statement development between traditional lecture and flipped instruction sessions. Students in lecture sessions scored significantly higher on developing search statements than those in flipped sessions. However, student evaluations show a strong preference for pedagogies that incorporate…

  14. DOE Research and Development Accomplishments Help

    Science.gov Websites

    Help pages for the DOE Research and Development Accomplishments database, describing how to search, locate, access, and electronically download full-text research and development (R&D) documents. Topics covered include browsing; downloading, viewing, and/or searching full-text documents and pages; and searching the database, whose search features cover both the OCRed full text of each document and its bibliographic information.

  15. Development and Validation of Search Filters to Identify Articles on Family Medicine in Online Medical Databases.

    PubMed

    Pols, David H J; Bramer, Wichor M; Bindels, Patrick J E; van de Laar, Floris A; Bohnen, Arthur M

    2015-01-01

    Physicians and researchers in the field of family medicine often need to find relevant articles in online medical databases for a variety of reasons. Because a search filter may help improve the efficiency and quality of such searches, we aimed to develop and validate search filters to identify research studies of relevance to family medicine. Using a new and objective method for search filter development, we developed and validated 2 search filters for family medicine. The sensitive filter had a sensitivity of 96.8% and a specificity of 74.9%. The specific filter had a specificity of 97.4% and a sensitivity of 90.3%. Our new filters should aid literature searches in the family medicine field. The sensitive filter may help researchers conducting systematic reviews, whereas the specific filter may help family physicians find answers to clinical questions at the point of care when time is limited. © 2015 Annals of Family Medicine, Inc.

  16. Quality of Life

    Science.gov Websites

    USPACOM (U.S. Pacific Command) quality-of-life page covering Communication and Information, Student Resiliency and Leadership Development (including a strategy group working with the Hawaii public school system), and Partnerships and Collaboration.

  17. Development of a Computerized Visual Search Test

    ERIC Educational Resources Information Center

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are the features of visual perception, essential for attending and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information including the format of the test will be described. The test was designed…

  18. Mirador: A Simple, Fast Search Interface for Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Strub, Richard; Seiler, Edward; Joshi, Talak; MacHarrie, Peter

    2008-01-01

    A major challenge for remote sensing science researchers is searching and acquiring relevant data files for their research projects based on content, space and time constraints. Several structured query (SQ) and hierarchical navigation (HN) search interfaces have been developed to satisfy this requirement, yet the dominant search engines in the general domain are based on free-text search. The Goddard Earth Sciences Data and Information Services Center has developed a free-text search interface named Mirador that supports space-time queries, including a gazetteer and geophysical event gazetteer. In order to compensate for a slightly reduced search precision relative to SQ and HN techniques, Mirador uses several search optimizations to return results quickly. The quick response enables a more iterative search strategy than is available with many SQ and HN techniques.
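
After free-text matching, a space-time query of the kind described here reduces to a bounding-box and date-range filter over candidate granules. A minimal sketch, with invented granule records and an invented bounding box:

```python
from datetime import date

# Space-time filtering sketch: keep granules whose point location falls inside
# a bounding box and whose date falls inside a range. Records are invented.
def in_query(granule, bbox, start, end):
    west, south, east, north = bbox
    return (west <= granule["lon"] <= east
            and south <= granule["lat"] <= north
            and start <= granule["date"] <= end)

granules = [
    {"id": "g1", "lon": -60.0, "lat": -3.0, "date": date(2008, 1, 10)},
    {"id": "g2", "lon": 120.0, "lat": 35.0, "date": date(2008, 1, 12)},
]
# Bounding box roughly over the Amazon basin, January 2008.
hits = [g["id"] for g in granules
        if in_query(g, (-80, -20, -40, 10), date(2008, 1, 1), date(2008, 1, 31))]
print(hits)
```

A gazetteer, as mentioned in the abstract, simply maps a place or event name to such a bounding box (and, for events, a time window) before this filter runs.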

  19. [Development of domain specific search engines].

    PubMed

    Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T

    2000-01-01

    As cyberspace explodes at a pace that nobody had ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines, and many commercial search engines are already on the market. However, these engines return results so cumbersome that domain experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response to chemical hazards. They have been available on our Web site for testing.

  20. Development and Validation of a Self-reported Questionnaire for Measuring Internet Search Dependence

    PubMed Central

    Wang, Yifan; Wu, Lingdan; Zhou, Hongli; Xu, Jiaojing; Dong, Guangheng

    2016-01-01

    Internet search has become the most common way that people deal with issues and problems in everyday life. The wide use of Internet search has largely changed the way people search for and store information. There is a growing interest in the impact of Internet search on users’ affect, cognition, and behavior. Thus, it is essential to develop a tool to measure the changes in psychological characteristics as a result of long-term use of Internet search. The aim of this study is to develop a Questionnaire on Internet Search Dependence (QISD) and test its reliability and validity. We first proposed a preliminary structure and items of the QISD based on literature review, supplemental investigations, and interviews. And then, we assessed the psychometric properties and explored the factor structure of the initial version via exploratory factor analysis (EFA). The EFA results indicated that four dimensions of the QISD were very reliable, i.e., habitual use of Internet search, withdrawal reaction, Internet search trust, and external storage under Internet search. Finally, we tested the factor solution obtained from EFA through confirmatory factor analysis (CFA). The results of CFA confirmed that the four dimensions model fits the data well. In all, this study suggests that the 12-item QISD is of high reliability and validity and can serve as a preliminary tool to measure the features of Internet search dependence. PMID:28066753

  21. Development and Validation of a Self-reported Questionnaire for Measuring Internet Search Dependence.

    PubMed

    Wang, Yifan; Wu, Lingdan; Zhou, Hongli; Xu, Jiaojing; Dong, Guangheng

    2016-01-01

    Internet search has become the most common way that people deal with issues and problems in everyday life. The wide use of Internet search has largely changed the way people search for and store information. There is a growing interest in the impact of Internet search on users' affect, cognition, and behavior. Thus, it is essential to develop a tool to measure the changes in psychological characteristics as a result of long-term use of Internet search. The aim of this study is to develop a Questionnaire on Internet Search Dependence (QISD) and test its reliability and validity. We first proposed a preliminary structure and items of the QISD based on literature review, supplemental investigations, and interviews. And then, we assessed the psychometric properties and explored the factor structure of the initial version via exploratory factor analysis (EFA). The EFA results indicated that four dimensions of the QISD were very reliable, i.e., habitual use of Internet search, withdrawal reaction, Internet search trust, and external storage under Internet search. Finally, we tested the factor solution obtained from EFA through confirmatory factor analysis (CFA). The results of CFA confirmed that the four dimensions model fits the data well. In all, this study suggests that the 12-item QISD is of high reliability and validity and can serve as a preliminary tool to measure the features of Internet search dependence.

  22. Social Work Literature Searching: Current Issues with Databases and Online Search Engines

    ERIC Educational Resources Information Center

    McGinn, Tony; Taylor, Brian; McColgan, Mary; McQuilkan, Janice

    2016-01-01

    Objectives: To compare the performance of a range of search facilities; and to illustrate the execution of a comprehensive literature search for qualitative evidence in social work. Context: Developments in literature search methods and comparisons of search facilities help facilitate access to the best available evidence for social workers.…

  23. Mercury- Distributed Metadata Management, Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL- developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.
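
The harvest-then-index pattern this abstract describes can be sketched as a centralized inverted index built from distributed provider records, so that searches never touch the remote servers. All provider names and records below are invented stand-ins:

```python
from collections import defaultdict

# Harvest-then-index sketch: metadata pulled from distributed providers is
# merged into one inverted index; queries hit only the central index while
# the providers keep ownership of the underlying data. Records are invented.
harvested = {
    "provider-a": [{"id": "d1", "title": "Amazon forest biomass"}],
    "provider-b": [{"id": "d2", "title": "Invasive species occurrences"}],
}

index = defaultdict(set)
for provider, records in harvested.items():
    for rec in records:
        for term in rec["title"].lower().split():
            index[term].add((provider, rec["id"]))

def search(term):
    # Returns (provider, record id) pairs; fielded/spatial search would add
    # further per-record filters on top of this lookup.
    return sorted(index.get(term.lower(), set()))

print(search("biomass"))
```

This is why the abstract can claim extremely fast results alongside provider ownership: only lightweight metadata is centralized, and the full data stays at the source.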

  24. ReSEARCH: A Requirements Search Engine: Progress Report 2

    DTIC Science & Technology

    2008-09-01

    The web application provides a convenient user interface for the search process; ideally, it would be based on Tomcat, a free Java Servlet and JSP container. Lucene Java is an Open Source project, available under the Apache License, which provides an accessible API for the development of search applications and is documented on the Apache Lucene website (Lucene-java Wiki, 2008). A search application developed with Lucene consists of the same two major components.

  25. Search Pathways: Modeling GeoData Search Behavior to Support Usable Application Development

    NASA Astrophysics Data System (ADS)

    Yarmey, L.; Rosati, A.; Tressel, S.

    2014-12-01

    Recent technical advances have enabled development of new scientific data discovery systems. Metadata brokering, linked data, and other mechanisms allow users to discover scientific data of interest across growing volumes of heterogeneous content. As discovery technologies are matched to this complex content, people looking for scientific data are presented with an ever-growing array of features to sort, filter, subset, and scan through search returns to help them find what they are looking for. This paper examines the applicability of available technologies in connecting searchers with the data of interest. What metrics can be used to track success given shifting baselines of content and technology? How well do existing technologies map to steps in user search patterns? Taking a user-driven development approach, the team behind the Arctic Data Explorer interdisciplinary data discovery application invested heavily in usability testing and user search behavior analysis. Building on earlier library community search behavior work, models were developed to better define the diverse set of thought processes and steps users took to find data of interest, here called 'search pathways'. This research builds a deeper understanding of the user community that seeks to reuse scientific data. This approach ensures that development decisions are driven by clearly articulated user needs instead of ad hoc technology trends. Initial results from this research will be presented along with lessons learned for other discovery platform development and future directions for informatics research into search pathways.

  6. A Model of Price Search Behavior in Electronic Marketplace.

    ERIC Educational Resources Information Center

    Jiang, Pingjun

    2002-01-01

    Discussion of online consumer behavior focuses on the development of a conceptual model and a set of propositions to explain the main factors influencing online price search. Integrates the psychological search literature into the context of online searching by incorporating ability and cost to search for information into perceived search…

  7. Developing a Systematic Patent Search Training Program

    ERIC Educational Resources Information Center

    Zhang, Li

    2009-01-01

    This study aims to develop a systematic patent training program using patent analysis and citation analysis techniques applied to patents held by the University of Saskatchewan. The results indicate that the target audience will be researchers in life sciences, and aggregated patent database searching and advanced search techniques should be…

  8. Environmental Information Management For Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Giriprakash, P.

    2011-01-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury was developed during 2007 and released in early 2008. This new version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.

  9. Evidence-based Medicine Search: a customizable federated search engine.

    PubMed

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  10. Evidence-based Medicine Search: a customizable federated search engine

    PubMed Central

    Bracke, Paul J.; Howse, David K.; Keim, Samuel M.

    2008-01-01

    Purpose: This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. Brief Description: The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Outcomes/Conclusion: Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center. PMID:18379665

  11. The Development of Information Search Expertise of Research Students

    ERIC Educational Resources Information Center

    Kai-Wah Chu, Samuel; Law, Nancy

    2008-01-01

    This study identifies the development of information search expertise of 12 beginning research students (six in education and six in engineering) who were provided with a set of systematic search training sessions over a period of one year. The study adopts a longitudinal approach in investigating whether there were different stages in the…

  12. A Modular Simulation Framework for Assessing Swarm Search Models

    DTIC Science & Technology

    2014-09-01

    Numerical studies demonstrate the ability to leverage the developed simulation and analysis framework to investigate three canonical swarm search models ... as benchmarks for future exploration of more sophisticated swarm search scenarios. (Author: Blake M. Wanier. Subject terms: swarm search, search theory, modeling framework.)

  13. Identifying nurse staffing research in Medline: development and testing of empirically derived search strategies with the PubMed interface

    PubMed Central

    2010-01-01

    Background The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. Methods A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). Results The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%). Conclusions As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, which retrieves a large number of references, and high precision, which carries an increased risk of missing relevant references.
More standardized terminology (e.g. by consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach. PMID:20731858
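    The sensitivity and precision figures reported in this kind of filter evaluation follow from simple set counts. The sketch below shows the calculation with made-up reference IDs; it is not the study's actual data.

    ```python
    # Sensitivity and precision of a search strategy against a gold-standard
    # test set, as in the filter evaluations above. Reference IDs are made up.
    def evaluate(retrieved, gold_standard):
        """Return (sensitivity, precision) of a retrieval as percentages.

        sensitivity = relevant retrieved / all relevant
        precision   = relevant retrieved / all retrieved
        """
        retrieved = set(retrieved)
        gold = set(gold_standard)
        hits = retrieved & gold
        sensitivity = 100.0 * len(hits) / len(gold)
        precision = 100.0 * len(hits) / len(retrieved)
        return sensitivity, precision

    gold = {"r1", "r2", "r3", "r4"}                 # known relevant references
    search_result = {"r1", "r2", "r3", "x1", "x2"}  # what the strategy returned

    sens, prec = evaluate(search_result, gold)
    print(f"sensitivity {sens:.1f}%, precision {prec:.1f}%")  # sensitivity 75.0%, precision 60.0%
    ```

    The trade-off the abstract describes is visible here: broadening the strategy adds hits (raising sensitivity) but also noise (lowering precision), and vice versa.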

  14. Identifying nurse staffing research in Medline: development and testing of empirically derived search strategies with the PubMed interface.

    PubMed

    Simon, Michael; Hausner, Elke; Klaus, Susan F; Dunton, Nancy E

    2010-08-23

    The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%). As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, which retrieves a large number of references, and high precision, which carries an increased risk of missing relevant references. More standardized terminology (e.g. by consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach.

  15. The Front-End to Google for Teachers' Online Searching

    ERIC Educational Resources Information Center

    Seyedarabi, Faezeh

    2006-01-01

    This paper reports on ongoing work in designing and developing a personalised search tool for teachers' online searching, using the Google search engine (repository) for the implementation and testing of the first research prototype.

  16. Graphical Representations of Electronic Search Patterns.

    ERIC Educational Resources Information Center

    Lin, Xia; And Others

    1991-01-01

    Discussion of search behavior in electronic environments focuses on the development of GRIP (Graphic Representor of Interaction Patterns), a graphing tool based on HyperCard that produces graphic representations of search patterns. Search state spaces are explained, and forms of data available from electronic searches are described. (34…

  17. Search strategies for identifying qualitative studies in CINAHL.

    PubMed

    Wilczynski, Nancy L; Marks, Susan; Haynes, R Brian

    2007-05-01

    Nurses, allied health professionals, clinicians, and researchers increasingly use online access to evidence in the course of patient care or when conducting reviews on a particular topic. Qualitative research has an important role in evidence-based health care. Online searching for qualitative studies can be difficult, however, resulting in the need to develop search filters. The objective of this study was to develop optimal search strategies to retrieve qualitative studies in CINAHL for the 2000 publishing year. The authors conducted an analytic survey comparing hand searches of journals with retrievals from CINAHL for candidate search terms and combinations. Combinations of search terms reached peak sensitivities of 98.9% and peak specificities of 99.5%. Combining search terms optimized both sensitivity and specificity at 94.2%. Empirically derived search strategies combining indexing terms and textwords can achieve high sensitivity and high specificity for retrieving qualitative studies from CINAHL.
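    The way combinations of candidate terms trade sensitivity against specificity, as in the CINAHL filter above, can be shown with a toy corpus. The records, terms, and relevance labels below are invented for the illustration; real filter development uses hand-searched gold standards.

    ```python
    # Toy illustration of how OR- vs AND-combining candidate search terms
    # trades sensitivity against specificity. Records and terms are invented.
    records = [
        # (id, text, is_qualitative_study)
        ("1", "a grounded theory interview study", True),
        ("2", "thematic analysis of interview data", True),
        ("3", "randomized trial of drug dosing", False),
        ("4", "survey with a thematic checklist", False),
    ]

    def broad(text):   # OR-combination: match either candidate term
        return "interview" in text or "thematic" in text

    def narrow(text):  # AND-combination: require both candidate terms
        return "interview" in text and "thematic" in text

    def evaluate(strategy):
        """sensitivity = relevant retrieved / all relevant;
        specificity = irrelevant excluded / all irrelevant."""
        tp = sum(1 for _, text, rel in records if rel and strategy(text))
        fp = sum(1 for _, text, rel in records if not rel and strategy(text))
        relevant = sum(1 for _, _, rel in records if rel)
        irrelevant = len(records) - relevant
        return 100.0 * tp / relevant, 100.0 * (irrelevant - fp) / irrelevant

    print(evaluate(broad))   # (100.0, 50.0): every relevant record found, more noise
    print(evaluate(narrow))  # (50.0, 100.0): cleaner results, one relevant record missed
    ```

    Empirical filter development, as in the study above, searches for the combination that pushes both numbers as high as possible at once.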

  18. Online Patent Searching: Guided by an Expert System.

    ERIC Educational Resources Information Center

    Ardis, Susan B.

    1990-01-01

    Describes the development of an expert system for online patent searching that uses menu driven software to interpret the user's knowledge level and the general nature of the search problem. The discussion covers the rationale for developing such a system, current system functions, cost effectiveness, user reactions, and plans for future…

  19. A Detailed Analysis of End-User Search Behaviors.

    ERIC Educational Resources Information Center

    Wildemuth, Barbara M.; And Others

    1991-01-01

    Discussion of search strategy formulation focuses on a study at the University of North Carolina at Chapel Hill that analyzed how medical students developed and revised search strategies for microbiology database searches. Implications for future research on search behavior, for system interface design, and for end user training are suggested. (16…

  20. A proposed model of psychodynamic psychotherapy linked to Erik Erikson's eight stages of psychosocial development.

    PubMed

    Knight, Zelda Gillian

    2017-09-01

    Just as Freud used stages of psychosexual development to ground his model of psychoanalysis, it is possible to do the same with Erik Erikson's stages of development with regards to a model of psychodynamic psychotherapy. This paper proposes an eight-stage model of psychodynamic psychotherapy linked to Erik Erikson's eight stages of psychosocial development. Various suggestions are offered. One such suggestion is that as each of Erikson's developmental stages is triggered by a crisis, in therapy it is triggered by the client's search. The resolution of the search often leads to the development of another search, which implies that the therapy process comprises a series of searches. This idea of a series of searches and resolutions leads to the understanding that identity is developmental and therapy is a space in which a new sense of identity may emerge. The notion of hope is linked to Erikson's stage of Basic Trust and the proposed model of therapy views hope and trust as essential for the therapy process. Two clinical vignettes are offered to illustrate these ideas. Psychotherapy can be approached as an eight-stage process and linked to Erikson's eight stages model of development. Psychotherapy may be viewed as a series of searches and thus as a developmental stage resolution process, which leads to the understanding that identity is ongoing throughout the life span. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Developing a search engine for pharmacotherapeutic information that is not published in biomedical journals.

    PubMed

    Do Pazo-Oubiña, F; Calvo Pita, C; Puigventós Latorre, F; Periañez-Párraga, L; Ventayol Bosch, P

    2011-01-01

    To identify publishers of pharmacotherapeutic information not found in biomedical journals that focuses on evaluating and providing advice on medicines and to develop a search engine to access this information. Compiling web sites that publish information on the rational use of medicines and have no commercial interests. Free-access web sites in Spanish, Galician, Catalan or English. Designing a search engine using the Google "custom search" application. Overall 159 internet addresses were compiled and were classified into 9 labels. We were able to recover the information from the selected sources using a search engine, which is called "AlquimiA" and available from http://www.elcomprimido.com/FARHSD/AlquimiA.htm. The main sources of pharmacotherapeutic information not published in biomedical journals were identified. The search engine is a useful tool for searching and accessing "grey literature" on the internet. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.

  2. A Multi-Level Model of Information Seeking in the Clinical Domain

    PubMed Central

    Hung, Peter W.; Johnson, Stephen B.; Kaufman, David R.; Mendonça, Eneida A.

    2008-01-01

    Objective: Clinicians often have difficulty translating information needs into effective search strategies to find appropriate answers. Information retrieval systems employing an intelligent search agent that generates adaptive search strategies based on human search expertise could be helpful in meeting clinician information needs. A prerequisite for creating such systems is an information seeking model that facilitates the representation of human search expertise. The purpose of developing such a model is to provide guidance to information seeking system development and to shape an empirical research program. Design: The information seeking process was modeled as a complex problem-solving activity. After considering how similarly complex activities had been modeled in other domains, we determined that modeling context-initiated information seeking across multiple problem spaces allows the abstraction of search knowledge into functionally consistent layers. The knowledge layers were identified in the information science literature and validated through our observations of searches performed by health science librarians. Results: A hierarchical multi-level model of context-initiated information seeking is proposed. Each level represents (1) a problem space that is traversed during the online search process, and (2) a distinct layer of knowledge that is required to execute a successful search. Grand strategy determines what information resources will be searched, for what purpose, and in what order. The strategy level represents an overall approach for searching a single resource. Tactics are individual moves made to further a strategy. Operations are mappings of abstract intentions to information resource-specific concrete input. Assessment is the basis of interaction within the strategic hierarchy, influencing the direction of the search. 
Conclusion: The described multi-level model provides a framework for future research and the foundation for development of an automated information retrieval system that uses an intelligent search agent to bridge clinician information needs and human search expertise. PMID:18006383

  3. Development of intelligent semantic search system for rubber research data in Thailand

    NASA Astrophysics Data System (ADS)

    Kaewboonma, Nattapong; Panawong, Jirapong; Pianhanuruk, Ekkawit; Buranarach, Marut

    2017-10-01

    The rubber production of Thailand has grown not only through strong demand from the world market but also through the Thai Government's replanting program, which has run since 1961. With the continuous growth of rubber research data volume on the Web, the search for information has become a challenging task. Ontologies are used to improve the accuracy of information retrieval from the web by incorporating a degree of semantic analysis during the search. In this context, we propose an intelligent semantic search system for rubber research data in Thailand. The research methods included 1) analyzing domain knowledge, 2) ontology development, and 3) development of an intelligent semantic search system, so that research data curated in trusted digital repositories may be shared among the wider Thailand rubber research community.

  4. Library Search Prefilters for Vehicle Manufacturers to Assist in the Forensic Examination of Automotive Paints.

    PubMed

    Lavine, Barry K; White, Collin G; Ding, Tao

    2018-03-01

    Pattern recognition techniques have been applied to the infrared (IR) spectral libraries of the Paint Data Query (PDQ) database to differentiate between nonidentical but similar IR spectra of automotive paints. To tackle the problem of library searching, search prefilters were developed to identify the vehicle make from IR spectra of the clear coat, surfacer-primer, and e-coat layers. To develop these search prefilters with the appropriate degree of accuracy, IR spectra from the PDQ database were preprocessed using the discrete wavelet transform to enhance subtle but significant features in the IR spectral data. Wavelet coefficients characteristic of vehicle make were identified using a genetic algorithm for pattern recognition and feature selection. Search prefilters to identify automotive manufacturer through IR spectra obtained from a paint chip recovered at a crime scene were developed using 1596 original manufacturer's paint systems spanning six makes (General Motors, Chrysler, Ford, Honda, Nissan, and Toyota) within a limited production year range (2000-2006). Search prefilters for vehicle manufacturer that were developed as part of this study were successfully validated using IR spectra obtained directly from the PDQ database. Information obtained from these search prefilters can serve to quantify the discrimination power of original automotive paint encountered in casework and further efforts to succinctly communicate trace evidential significance to the courts.
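    The preprocessing step above, a discrete wavelet transform whose coefficients feed a feature-selection stage, can be sketched with a single-level Haar transform. This is only a stand-in under stated assumptions: the study does not specify its wavelet family, and the "spectrum" values below are invented, not real IR data.

    ```python
    # Single-level Haar discrete wavelet transform, a minimal stand-in for the
    # preprocessing described above: approximation coefficients smooth the
    # spectrum, detail coefficients enhance subtle local features that a
    # feature-selection step (e.g., a genetic algorithm) could then rank.
    # The "spectrum" values are invented, not real IR data.
    import math

    def haar_dwt(signal):
        """One level of the Haar DWT; len(signal) must be even.
        Returns (approximation, detail) coefficient lists."""
        s = math.sqrt(2.0)
        approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
        detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
        return approx, detail

    spectrum = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
    approx, detail = haar_dwt(spectrum)
    print([round(a, 3) for a in approx])  # [7.071, 15.556, 9.899, 7.071]
    print([round(d, 3) for d in detail])  # [-1.414, -1.414, 1.414, 0.0]
    ```

    The detail coefficients are large exactly where neighboring points differ, which is why wavelet preprocessing can surface subtle but significant spectral features before classification.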

  5. Validation of a search strategy to identify nutrition trials in PubMed using the relative recall method.

    PubMed

    Durão, Solange; Kredo, Tamara; Volmink, Jimmy

    2015-06-01

    To develop, assess, and maximize the sensitivity of a search strategy to identify diet and nutrition trials in PubMed using relative recall. We developed a search strategy to identify diet and nutrition trials in PubMed. We then constructed a gold standard reference set to validate the identified trials using the relative recall method. Relative recall was calculated by dividing the number of references from the gold standard our search strategy identified by the total number of references in the gold standard. Our gold standard comprised 298 trials, derived from 16 included systematic reviews. The initial search strategy identified 242 of 298 references, with a relative recall of 81.2% [95% confidence interval (CI): 76.3%, 85.5%]. We analyzed titles and abstracts of the 56 missed references for possible additional terms. We then modified the search strategy accordingly. The relative recall of the final search strategy was 88.6% (95% CI: 84.4%, 91.9%). We developed a search strategy to identify diet and nutrition trials in PubMed with a high relative recall (sensitivity). This could be useful for establishing a nutrition trials register to support the conduct of future research, including systematic reviews. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
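    Relative recall as used above is a simple ratio, and the point estimate for the initial strategy (242 of 298 gold-standard references) can be reproduced directly. The sketch computes only the point estimate; the paper's confidence interval comes from a binomial interval method not specified here.

    ```python
    # Relative recall: the share of a gold-standard reference set that a
    # search strategy retrieves. Counts match the initial strategy above
    # (242 of 298 gold-standard trials identified).
    def relative_recall(identified, gold_total):
        return 100.0 * identified / gold_total

    initial = relative_recall(242, 298)
    print(f"initial strategy: {initial:.1f}%")  # initial strategy: 81.2%
    ```

    Analyzing the 56 missed references and adding the terms they suggested is what lifted the final strategy's relative recall to the reported 88.6%.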

  6. Dynamics of the job search process: developing and testing a mediated moderation model.

    PubMed

    Sun, Shuhua; Song, Zhaoli; Lim, Vivien K G

    2013-09-01

    Taking a self-regulatory perspective, we develop a mediated moderation model explaining how within-person changes in job search efficacy and chronic regulatory focus interactively affect the number of job interview offers and whether job search effort mediates the cross-level interactive effects. A sample of 184 graduating college students provided monthly reports of their job search activities over a period of 8 months. Findings supported the hypothesized relationships. Specifically, at the within-person level, job search efficacy was positively related with the number of interview offers for job seekers with strong prevention focus and negatively related with the number of interview offers for job seekers with strong promotion focus. Results show that job search effort mediated the moderated relationships. Findings enhance understandings of the complex self-regulatory processes underlying job search. PsycINFO Database Record (c) 2013 APA, all rights reserved

  7. The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool

    ERIC Educational Resources Information Center

    Liaw, Shu-Sheng

    2004-01-01

    Search engines have been developed for helping learners to seek online information. Based on theory of planned behaviour approach, this research intends to investigate the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction of search engine, search engines as an information…

  8. Searching in clutter : visual attention strategies of expert pilots

    DOT National Transportation Integrated Search

    2012-10-22

    Clutter can slow visual search. However, experts may develop attention strategies that alleviate the effects of clutter on search performance. In the current study we examined the effects of global and local clutter on visual search performance and a...

  9. Applying systematic review search methods to the grey literature: a case study examining guidelines for school-based breakfast programs in Canada.

    PubMed

    Godin, Katelyn; Stapleton, Jackie; Kirkpatrick, Sharon I; Hanning, Rhona M; Leatherdale, Scott T

    2015-10-22

    Grey literature is an important source of information for large-scale review syntheses. However, there are many characteristics of grey literature that make it difficult to search systematically. Further, there is no 'gold standard' for rigorous systematic grey literature search methods and few resources on how to conduct this type of search. This paper describes systematic review search methods that were developed and applied to complete a case study systematic review of grey literature that examined guidelines for school-based breakfast programs in Canada. A grey literature search plan was developed to incorporate four different searching strategies: (1) grey literature databases, (2) customized Google search engines, (3) targeted websites, and (4) consultation with contact experts. These complementary strategies were used to minimize the risk of omitting relevant sources. Since abstracts are often unavailable in grey literature documents, items' abstracts, executive summaries, or tables of contents (whichever was available) were screened. Screening of publications' full text followed. Data were extracted on the publishing organization, year published, developer, intended audience, goal/objectives of the document, sources of evidence/resources cited, meals mentioned in the guidelines, and recommendations for program delivery. The search strategies for identifying and screening publications for inclusion in the case study review were found to be manageable, comprehensive, and intuitive when applied in practice. The four search strategies of the grey literature search plan yielded 302 potentially relevant items for screening. Following the screening process, 15 publications that met all eligibility criteria remained and were included in the case study systematic review. The high-level findings of the case study systematic review are briefly described. 
This article demonstrated a feasible and seemingly robust method for applying systematic search strategies to identify web-based resources in the grey literature. The search strategy we developed and tested is amenable to adaptation to identify other types of grey literature from other disciplines and answering a wide range of research questions. This method should be further adapted and tested in future research syntheses.

  10. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature.

    PubMed

    Song, Michael M; Simonsen, Cheryl K; Wilson, Joanna D; Jenkins, Marjorie R

    2016-02-01

    An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH.

  11. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature

    PubMed Central

    Song, Michael M.; Simonsen, Cheryl K.; Wilson, Joanna D.

    2016-01-01

Abstract Background: An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. Methods: PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Results: Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. Conclusions: The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity for sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH. PMID:26555409

  12. Teaching Non-Recursive Binary Searching: Establishing a Conceptual Framework.

    ERIC Educational Resources Information Center

    Magel, E. Terry

    1989-01-01

    Discusses problems associated with teaching non-recursive binary searching in computer language classes, and describes a teacher-directed dialog based on dictionary use that helps students use their previous searching experiences to conceptualize the binary search process. Algorithmic development is discussed and appropriate classroom discussion…
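
    The dictionary analogy described in this record maps directly onto the iterative (non-recursive) form of the algorithm. A standard sketch, not taken from the article itself:

    ```python
    def binary_search(items, target):
        """Iterative binary search over a sorted sequence.

        Returns the index of target, or -1 if absent -- like repeatedly
        opening a dictionary near the middle of the remaining pages.
        """
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1   # target must be in the upper half
            else:
                hi = mid - 1   # target must be in the lower half
        return -1

    words = ["apple", "grape", "mango", "pear", "plum"]
    print(binary_search(words, "mango"))  # 2
    ```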

  13. Finding Your Voice: Talent Development Centers and the Academic Talent Search

    ERIC Educational Resources Information Center

    Rushneck, Amy S.

    2012-01-01

    Talent Development Centers are just one of many tools every family, teacher, and gifted advocate should have in their tool box. To understand the importance of Talent Development Centers, it is essential to also understand the Academic Talent Search Program. Talent Search participants who obtain scores comparable to college-bound high school…

  14. Design and Empirical Evaluation of Search Software for Legal Professionals on the WWW.

    ERIC Educational Resources Information Center

    Dempsey, Bert J.; Vreeland, Robert C.; Sumner, Robert G., Jr.; Yang, Kiduk

    2000-01-01

    Discussion of effective search aids for legal researchers on the World Wide Web focuses on the design and evaluation of two software systems developed to explore models for browsing and searching across a user-selected set of Web sites. Describes crawler-enhanced search engines, filters, distributed full-text searching, and natural language…

  15. Introducing PALETTE: an iterative method for conducting a literature search for a review in palliative care.

    PubMed

    Zwakman, Marieke; Verberne, Lisa M; Kars, Marijke C; Hooft, Lotty; van Delden, Johannes J M; Spijker, René

    2018-06-02

In the rapidly developing specialty of palliative care, literature reviews have become increasingly important to inform and improve the field. When widely used literature review methods developed for intervention studies are applied to palliative care, challenges are encountered, such as the heterogeneity of palliative care in practice (a wide range of domains in patient characteristics, stages of illness and stakeholders), the explorative character of review questions, and poorly defined keywords and concepts. To overcome these challenges and to provide guidance for researchers conducting a literature search for a review in palliative care, the Palliative cAre Literature rEview iTeraTive mEthod (PALETTE), a pragmatic framework, was developed. We present PALETTE with a detailed description. PALETTE consists of four phases: developing the review question, building the search strategy, validating the search strategy and performing the search. The framework incorporates different information retrieval techniques (contacting experts, pearl growing, citation tracking and Boolean searching) in a transparent way to maximize the retrieval of literature relevant to the topic of interest. The different components and techniques are repeated until no new articles qualify for inclusion. The phases within PALETTE are interconnected by a recurrent process of validation on 'golden bullets' (articles that undoubtedly should be part of the review), citation tracking and concept terminology reflecting the review question. To give insight into the value of PALETTE, we compared it with the recommended search method for reviews of intervention studies. By using PALETTE on two palliative care literature reviews, we were able to improve our review questions and search strategies. Moreover, in comparison with the recommended search for intervention reviews, the number of articles that needed to be screened decreased while more relevant articles were retrieved. 
Overall, PALETTE helped us in gaining a thorough understanding of the topic of interest and made us confident that the included studies comprehensively represented the topic. PALETTE is a coherent and transparent pragmatic framework to overcome the challenges of performing a literature review in palliative care. The method enables researchers to improve question development and to maximise both sensitivity and precision in their search process.
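
    The loop at the heart of PALETTE, which repeats its retrieval techniques until no new articles qualify, is a fixed-point iteration. A schematic sketch with a stub expansion step standing in for one round of pearl growing, citation tracking and Boolean searching (not the authors' implementation):

    ```python
    def palette_search(seed_articles, expand):
        """Iterate retrieval rounds until no new articles qualify.

        `expand` is a placeholder for one round of retrieval techniques
        applied to the current set of included articles.
        """
        included = set(seed_articles)
        while True:
            new = expand(included) - included
            if not new:            # fixed point reached: search is saturated
                return included
            included |= new

    # Stub expansion: a tiny fixed citation graph.
    CITATIONS = {"a": {"b"}, "b": {"c"}, "c": set()}
    expand = lambda s: set().union(*(CITATIONS[x] for x in s))
    print(sorted(palette_search({"a"}, expand)))  # ['a', 'b', 'c']
    ```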

  16. Exploring Contextual Models in Chemical Patent Search

    NASA Astrophysics Data System (ADS)

    Urbain, Jay; Frieder, Ophir

    We explore the development of probabilistic retrieval models for integrating term statistics with entity search using multiple levels of document context to improve the performance of chemical patent search. A distributed indexing model was developed to enable efficient named entity search and aggregation of term statistics at multiple levels of patent structure including individual words, sentences, claims, descriptions, abstracts, and titles. The system can be scaled to an arbitrary number of compute instances in a cloud computing environment to support concurrent indexing and query processing operations on large patent collections.
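
    Aggregating term statistics at multiple levels of patent structure, as described here, can be sketched as per-level counting (a toy patent with hypothetical field names, not the authors' indexing model):

    ```python
    from collections import Counter

    # Toy patent with a few hypothetical structural levels.
    patent = {
        "title": "catalytic converter",
        "abstract": "a catalytic converter for exhaust treatment",
        "claims": ["a converter comprising a catalyst",
                   "the catalyst of claim 1"],
    }

    def term_stats(patent):
        """Count term occurrences separately at each structural level."""
        stats = {}
        for level, text in patent.items():
            parts = text if isinstance(text, list) else [text]
            stats[level] = Counter(w for p in parts for w in p.split())
        return stats

    stats = term_stats(patent)
    print(stats["claims"]["catalyst"])  # 2
    ```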

  17. The development of search filters for adverse effects of surgical interventions in medline and Embase.

    PubMed

    Golder, Su; Wright, Kath; Loke, Yoon Kong

    2018-06-01

    Search filter development for adverse effects has tended to focus on retrieving studies of drug interventions. However, a different approach is required for surgical interventions. To develop and validate search filters for medline and Embase for the adverse effects of surgical interventions. Systematic reviews of surgical interventions where the primary focus was to evaluate adverse effect(s) were sought. The included studies within these reviews were divided randomly into a development set, evaluation set and validation set. Using word frequency analysis we constructed a sensitivity maximising search strategy and this was tested in the evaluation and validation set. Three hundred and fifty eight papers were included from 19 surgical intervention reviews. Three hundred and fifty two papers were available on medline and 348 were available on Embase. Generic adverse effects search strategies in medline and Embase could achieve approximately 90% relative recall. Recall could be further improved with the addition of specific adverse effects terms to the search strategies. We have derived and validated a novel search filter that has reasonable performance for identifying adverse effects of surgical interventions in medline and Embase. However, we appreciate the limitations of our methods, and recommend further research on larger sample sizes and prospective systematic reviews. © 2018 The Authors Health Information and Libraries Journal published by John Wiley & Sons Ltd on behalf of Health Libraries Group.
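
    The word frequency analysis used to build a sensitivity-maximising strategy can be sketched as document-frequency counting over the development set (toy titles and a hypothetical stop-word list, not the authors' code):

    ```python
    from collections import Counter
    import re

    STOPWORDS = {"of", "the", "and", "in", "a", "after", "following"}

    def candidate_terms(titles, top_n=5):
        """Rank words by how many development-set records they appear in."""
        doc_freq = Counter()
        for title in titles:
            words = set(re.findall(r"[a-z]+", title.lower()))
            doc_freq.update(words - STOPWORDS)
        return [term for term, _ in doc_freq.most_common(top_n)]

    development_set = [
        "Complications after laparoscopic cholecystectomy",
        "Adverse events following hernia repair",
        "Postoperative complications of appendectomy",
    ]
    # 'complications' ranks first (it appears in 2 of 3 titles).
    print(candidate_terms(development_set, top_n=3))
    ```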

  18. A highly sensitive search strategy for clinical trials in Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) was developed.

    PubMed

    Manríquez, Juan J

    2008-04-01

Systematic reviews should include as many articles as possible. However, many systematic reviews use only databases with high English language content as sources of trials. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) is an underused source of trials, and there is no validated strategy for searching for clinical trials in this database. The objective of this study was to develop a sensitive search strategy for clinical trials in LILACS. An analytical survey was performed. Several single and multiple-term search strategies were tested for their ability to retrieve clinical trials in LILACS. Sensitivity, specificity, and accuracy of each single and multiple-term strategy were calculated using the results of a hand-search of 44 Chilean journals as the gold standard. After combining the most sensitive, specific, and accurate single and multiple-term search strategies, a strategy with a sensitivity of 97.75% (95% confidence interval [CI] = 95.98-99.53) and a specificity of 61.85% (95% CI = 61.19-62.51) was obtained. LILACS is a source of trials that could improve systematic reviews. A new highly sensitive search strategy for clinical trials in LILACS has been developed. It is hoped that this search strategy will improve and increase the utilization of LILACS in future systematic reviews.
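
    Sensitivity and specificity here are computed against the hand-search gold standard. A sketch using the normal approximation for the 95% confidence interval (the abstract does not state which interval method was used, and the counts below are illustrative, not the study's data):

    ```python
    import math

    def proportion_ci(successes, total, z=1.96):
        """Point estimate and normal-approximation 95% CI, as percentages."""
        p = successes / total
        half = z * math.sqrt(p * (1 - p) / total)
        return 100 * p, 100 * (p - half), 100 * (p + half)

    # e.g. a strategy retrieving 87 of 89 hand-searched trials
    sens, lo, hi = proportion_ci(87, 89)
    print(f"sensitivity {sens:.2f}% (95% CI {lo:.2f}-{hi:.2f})")
    ```

    For proportions near 100%, a Wilson or exact interval would be a better choice than this simple approximation.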

  19. Assessing the performance of methodological search filters to improve the efficiency of evidence information retrieval: five literature reviews and a qualitative study.

    PubMed

    Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne

    2017-11-01

    Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. 
Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly. Searchers find filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. 
The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496).

  20. Assessing the performance of methodological search filters to improve the efficiency of evidence information retrieval: five literature reviews and a qualitative study.

    PubMed Central

    Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne

    2017-01-01

    BACKGROUND Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. OBJECTIVES This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. METHODS Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. RESULTS The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. 
Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were predominantly used to reduce large numbers of retrieved records and to introduce focus. The Inter Technology Appraisal Support Collaboration (InterTASC) Information Specialists' Sub-Group (ISSG) Search Filters Resource was most frequently mentioned by both groups as the resource consulted to select a filter. Randomised controlled trial (RCT) and systematic review filters, in particular the Cochrane RCT and the McMaster Hedges filters, were most frequently mentioned. The majority indicated that they used different filters depending on the requirement for sensitivity or precision. Over half of the respondents used the filters available in databases. Interviewees used various approaches when using and adapting search filters. Respondents suggested that the main factors that would make choosing a filter easier were the availability of critical appraisals and more detailed performance information. Provenance and having the filter available in a central storage location were also important. LIMITATIONS The questionnaire could have been shorter and could have included more multiple choice questions, and the reviews of filter performance focused on only four study designs. CONCLUSIONS Search filter studies should use a representative reference standard and explicitly report methods and results. Performance measures should be presented systematically and clearly. Searchers find filters useful in certain circumstances but expressed a need for more user-friendly performance information to aid filter choice. We suggest approaches to use, adapt and report search filter performance. Future work could include research around search filters and performance measures for study designs not addressed here, exploration of alternative methods of displaying performance results and numerical synthesis of performance comparison results. 
FUNDING The National Institute for Health Research (NIHR) Health Technology Assessment programme and Medical Research Council-NIHR Methodology Research Programme (grant number G0901496). PMID:29188764

  1. Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces

    DTIC Science & Technology

    2011-02-28

Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30... focused on developing high level general purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical

  2. Development of a Search Strategy for an Evidence Based Retrieval Service

    PubMed Central

    Ho, Gah Juan; Liew, Su May; Ng, Chirk Jenn; Hisham Shunmugam, Ranita; Glasziou, Paul

    2016-01-01

Background Physicians are often encouraged to locate answers for their clinical queries via an evidence-based literature search approach. The methods used are often not clearly specified. Inappropriate search strategies, time constraints and contradictory information complicate evidence retrieval. Aims Our study aimed to develop a search strategy to answer clinical queries among physicians in a primary care setting. Methods Six clinical questions on different medical conditions seen in primary care were formulated. A series of experimental searches to answer each question was conducted on 3 commonly advocated medical databases. We compared search results from a PICO (patients, intervention, comparison, outcome) framework for questions using different combinations of PICO elements. We also compared outcomes from searches using text words, Medical Subject Headings (MeSH), or a combination of both. All searches were documented using screenshots and saved search strategies. Results Answers to all 6 questions using the PICO framework were found. A higher number of systematic reviews was obtained using a 2 PICO element search compared to a 4 element search. A more optimal choice of search is a combination of both text words and MeSH terms. Despite searching using the Systematic Review filter, many non-systematic or narrative reviews were found in PubMed. There was poor overlap between the outcomes of searches using different databases. The duration of search and screening for the 6 questions ranged from 1 to 4 hours. Conclusion This strategy has been shown to be feasible and can provide evidence for doctors’ clinical questions. It has the potential to be incorporated into an interventional study to determine the impact of an online evidence retrieval system. PMID:27935993
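
    A 2-PICO-element search combining text words and MeSH terms, as compared in this study, reduces to a query-string builder. A sketch with hypothetical terms ([tiab] and [MeSH] are PubMed's title/abstract and MeSH field tags; the pairings below are illustrative, not the study's searches):

    ```python
    def pico_query(population, intervention):
        """Combine two PICO elements, each as a text word OR a MeSH term."""
        def block(text_word, mesh_term):
            return f'({text_word}[tiab] OR "{mesh_term}"[MeSH])'
        return " AND ".join([block(*population), block(*intervention)])

    query = pico_query(
        population=("hypertension", "Hypertension"),
        intervention=("exercise", "Exercise Therapy"),
    )
    print(query)
    ```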

  3. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement.

    PubMed

    McGowan, Jessie; Sampson, Margaret; Salzwedel, Douglas M; Cogo, Elise; Foerster, Vicki; Lefebvre, Carol

    2016-07-01

    To develop an evidence-based guideline for Peer Review of Electronic Search Strategies (PRESS) for systematic reviews (SRs), health technology assessments, and other evidence syntheses. An SR, Web-based survey of experts, and consensus development forum were undertaken to identify checklists that evaluated or validated electronic literature search strategies and to determine which of their elements related to search quality or errors. Systematic review: No new search elements were identified for addition to the existing (2008-2010) PRESS 2015 Evidence-Based Checklist, and there was no evidence refuting any of its elements. Results suggested that structured PRESS could identify search errors and improve the selection of search terms. Web-based survey of experts: Most respondents felt that peer review should be undertaken after the MEDLINE search had been prepared but before it had been translated to other databases. Consensus development forum: Of the seven original PRESS elements, six were retained: translation of the research question; Boolean and proximity operators; subject headings; text word search; spelling, syntax and line numbers; and limits and filters. The seventh (skilled translation of the search strategy to additional databases) was removed, as there was consensus that this should be left to the discretion of searchers. An updated PRESS 2015 Guideline Statement was developed, which includes the following four documents: PRESS 2015 Evidence-Based Checklist, PRESS 2015 Recommendations for Librarian Practice, PRESS 2015 Implementation Strategies, and PRESS 2015 Guideline Assessment Form. The PRESS 2015 Guideline Statement should help to guide and improve the peer review of electronic literature search strategies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Playing shooter and driving videogames improves top-down guidance in visual search.

    PubMed

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.

  5. A summary report on the search for current technologies and developers to develop depth profiling/physical parameter end effectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Q.H.

    1994-09-12

This report documents the search strategies and results for available technologies and developers to develop tank waste depth profiling/physical parameter sensors. Sources searched include worldwide research reports, technical papers, journals, private industries, and work at Westinghouse Hanford Company (WHC) at the Richland site. Tank waste physical parameters of interest are: abrasiveness, compressive strength, corrosiveness, density, pH, particle size/shape, porosity, radiation, settling velocity, shear strength, shear wave velocity, tensile strength, temperature, viscosity, and viscoelasticity. A list of related articles or sources for each physical parameter is provided.

  6. The BIOSIS data base: Evaluation of its indexes and the STRATBLDR, CHEMFILE, STAIRS and DIALOG systems for on-line searching

    NASA Technical Reports Server (NTRS)

    Nees, M.; Green, H. O.

    1977-01-01

    An IBM-developed program, STAIRS, was selected for performing a search on the BIOSIS file. The evaluation of the hardware and search systems and the strategies used are discussed. The searches are analyzed by type of end user.

  7. Can people find patient decision aids on the Internet?

    PubMed

    Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette

    2008-12-01

To determine if people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines. We chose five medical conditions for which English language PtDAs were available from at least three different developers. The search engines used were: Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first page results linked to informational Web pages about the condition; only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). The searches using the Yahoo and Google search engines were more likely to find PtDAs. The following combination of search terms: condition, treatment, decision (e.g. breast cancer surgery decision) was most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved with the use of standardized labelling, providing patients with specific Web site addresses or access to an independent PtDA clearinghouse.

  8. Developing a distributed HTML5-based search engine for geospatial resource discovery

    NASA Astrophysics Data System (ADS)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users who use different browsers and devices may have very different user experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed and HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various and distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; (3) the search engine, based on HTML5, provides unified access capabilities for users with different devices (e.g. tablet and smartphone).
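
    The brokering and asynchronous retrieval pattern described here can be sketched with asyncio, with stubbed catalogue endpoints standing in for real, remote GCIs (the names and record lists are invented):

    ```python
    import asyncio

    async def query_catalogue(name, delay, records):
        """Stub for one remote GCI catalogue request."""
        await asyncio.sleep(delay)          # simulated network latency
        return [(name, r) for r in records]

    async def broker(catalogues):
        """Fan out to all catalogues concurrently; merge results as they arrive."""
        tasks = [query_catalogue(*c) for c in catalogues]
        results = []
        for coro in asyncio.as_completed(tasks):
            results.extend(await coro)      # records stream in asynchronously
        return results

    catalogues = [("gci-a", 0.02, ["dem"]), ("gci-b", 0.01, ["landsat"])]
    merged = asyncio.run(broker(catalogues))
    print(sorted(merged))  # [('gci-a', 'dem'), ('gci-b', 'landsat')]
    ```

    The slowest catalogue no longer gates the first results shown to the user, which is the point of the asynchronous retrieval mode.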

  9. Expert searching in health librarianship: a literature review to identify international issues and Australian concerns.

    PubMed

    Lasserre, Kaye

    2012-03-01

    The traditional role of health librarians as expert searchers is under challenge. The purpose of this review is to establish health librarians' views, practices and educational processes on expert searching. The search strategy was developed in LISTA and then customised for ten other databases: ALISA, PubMed, Embase, Scopus, Web of Science, CINAHL, ERIC, PsycINFO, Cochrane Library and Google Scholar. The search terms were (expert search* OR expert retriev* OR mediated search* OR information retriev*) AND librar*. The searches, completed in December 2010 and repeated in May 2011, were limited to English language publications from 2000 to 2011 (unless seminal works). Expert searching remains a key role for health librarians, especially for those supporting systematic reviews or employed as clinical librarians answering clinical questions. Although clients tend to be satisfied with searches carried out for them, improvements are required to effectively position the profession. Evidence-based guidelines, adherence to transparent standards, review of entry-level education requirements and a commitment to accredited, rigorous, ongoing professional development will ensure best practice. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.

  10. SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study

    PubMed Central

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-01-01

    Background With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Results Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. Conclusion SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine. PMID:16321145
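
    SLIM's PubMed connection goes through the Entrez Programming Utilities (E-Utilities). The esearch call it relies on can be sketched as URL construction (no request is made here; `db`, `term`, `retmax` and `retmode` are real esearch parameters, while the query string is just an example):

    ```python
    from urllib.parse import urlencode

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def esearch_url(term, retmax=20):
        """Build an E-Utilities esearch URL for a PubMed query."""
        params = {"db": "pubmed", "term": term,
                  "retmax": retmax, "retmode": "json"}
        return f"{EUTILS}?{urlencode(params)}"

    print(esearch_url("asthma AND review[pt]", retmax=5))
    ```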

  11. SLIM: an alternative Web interface for MEDLINE/PubMed searches - a preliminary study.

    PubMed

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-12-01

    With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine.
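    The Entrez Programming Utilities (E-Utilities) connection described above can be sketched as follows. This is a minimal illustration of building an ESearch request URL against the public NCBI endpoint; the query term and retmax value are hypothetical, and this is not SLIM's actual PHP implementation.

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def build_esearch_url(query, db="pubmed", retmax=20):
    """Build an ESearch request URL for the NCBI Entrez E-Utilities.

    In a SLIM-style interface, slider settings (limits, filters, MeSH
    options) would be folded into the query string before this point.
    """
    params = urlencode({"db": db, "term": query,
                        "retmax": retmax, "retmode": "json"})
    return f"{EUTILS_BASE}/esearch.fcgi?{params}"

# Hypothetical query, in the spirit of a slider-configured search.
url = build_esearch_url("prostate cancer AND brachytherapy", retmax=10)
print(url)
```

    Fetching the URL returns a JSON list of matching PMIDs, which a front end can then render and refine.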

  12. Understanding the Effects of One's Actions upon Hidden Objects and the Development of Search Behaviour in 7-Month-Old Infants

    ERIC Educational Resources Information Center

    O'Connor, Richard J.; Russell, James

    2015-01-01

    Infants' understanding of how their actions affect the visibility of hidden objects may be a crucial aspect of the development of search behaviour. To investigate this possibility, 7-month-old infants took part in a two-day training study. At the start of the first session, and at the end of the second, all infants performed a search task with a…

  13. Policy implications for familial searching

    PubMed Central

    2011-01-01

    In the United States, several states have made policy decisions regarding whether and how to use familial searching of the Combined DNA Index System (CODIS) database in criminal investigations. Familial searching pushes DNA typing beyond merely identifying individuals to detecting genetic relatedness, an application previously reserved for missing persons identifications and custody battles. The intentional search of CODIS for partial matches to an item of evidence offers law enforcement agencies a powerful tool for developing investigative leads, apprehending criminals, revitalizing cold cases and exonerating wrongfully convicted individuals. As familial searching involves a range of logistical, social, ethical and legal considerations, states are now grappling with policy options for implementing familial searching to balance crime fighting with its potential impact on society. When developing policies for familial searching, legislators should take into account the impact of familial searching on select populations and the need to minimize personal intrusion on relatives of individuals in the DNA database. This review describes the approaches used to narrow a suspect pool from a partial match search of CODIS and summarizes the economic, ethical, logistical and political challenges of implementing familial searching. We examine particular US state policies and the policy options adopted to address these issues. The aim of this review is to provide objective background information on the controversial approach of familial searching to inform policy decisions in this area. Herein we highlight key policy options and recommendations regarding effective utilization of familial searching that minimize harm to and afford maximum protection of US citizens. PMID:22040348

  14. Policy implications for familial searching.

    PubMed

    Kim, Joyce; Mammo, Danny; Siegel, Marni B; Katsanis, Sara H

    2011-11-01

    In the United States, several states have made policy decisions regarding whether and how to use familial searching of the Combined DNA Index System (CODIS) database in criminal investigations. Familial searching pushes DNA typing beyond merely identifying individuals to detecting genetic relatedness, an application previously reserved for missing persons identifications and custody battles. The intentional search of CODIS for partial matches to an item of evidence offers law enforcement agencies a powerful tool for developing investigative leads, apprehending criminals, revitalizing cold cases and exonerating wrongfully convicted individuals. As familial searching involves a range of logistical, social, ethical and legal considerations, states are now grappling with policy options for implementing familial searching to balance crime fighting with its potential impact on society. When developing policies for familial searching, legislators should take into account the impact of familial searching on select populations and the need to minimize personal intrusion on relatives of individuals in the DNA database. This review describes the approaches used to narrow a suspect pool from a partial match search of CODIS and summarizes the economic, ethical, logistical and political challenges of implementing familial searching. We examine particular US state policies and the policy options adopted to address these issues. The aim of this review is to provide objective background information on the controversial approach of familial searching to inform policy decisions in this area. Herein we highlight key policy options and recommendations regarding effective utilization of familial searching that minimize harm to and afford maximum protection of US citizens.

  15. Strategic Plan 2011 to 2016

    DTIC Science & Technology

    2011-02-01

    search capability for Air Force Research Information Management System (AFRIMS) data as a part of federated search under DTIC Online Access...provide vetted requests to dataset owners. • Develop a federated search capability for databases containing limited distribution material. • Deploy

  16. Protocols for Teaching Students How to Search for, Discover, and Evaluate Innovations

    ERIC Educational Resources Information Center

    Norton, William I., Jr.; Hale, Dena H.

    2011-01-01

    The authors introduce and develop protocols to guide aspiring entrepreneurs' behaviors in searching for and discovering innovative ideas that may have commercial potential. Systematic search has emerged as a theory-based, prescriptive framework to guide innovative behavior. Grounded in Fiet's theory of search and discovery, this article provides…

  17. Computer Use of a Medical Dictionary to Select Search Words.

    ERIC Educational Resources Information Center

    O'Connor, John

    1986-01-01

    Explains an experiment in text-searching retrieval for cancer questions which developed and used computer procedures (via human simulation) to select search words from medical dictionaries. This study is based on an earlier one in which search words were humanly selected, and the recall results of the two studies are compared. (Author/LRW)

  18. Millennial Students' Mental Models of Search: Implications for Academic Librarians and Database Developers

    ERIC Educational Resources Information Center

    Holman, Lucy

    2011-01-01

    Today's students exhibit generational differences in the way they search for information. Observations of first-year students revealed a proclivity for simple keyword or phrases searches with frequent misspellings and incorrect logic. Although no students had strong mental models of search mechanisms, those with stronger models did construct more…

  19. Be Happy, Don't Wait: The Role of Trait Affect in Job Search

    ERIC Educational Resources Information Center

    Turban, Daniel B.; Lee, Felissa K.; Veiga, Serge P. da Motta; Haggard, Dana L.; Wu, Sharon Y.

    2013-01-01

    In this study we developed and tested a self-regulatory model of trait affect in job search. Specifically, we theorized that trait positive and negative affect would influence both motivation control and procrastination, and these mediating variables would, in turn, influence job search outcomes through job search intensity. Using longitudinal…

  20. Search times and probability of detection in time-limited search

    NASA Astrophysics Data System (ADS)

    Wilson, David; Devitt, Nicole; Maurer, Tana

    2005-05-01

    When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. Developing the relationship between detection probability and search time as a differential equation is explored. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search, and a simple mathematical relationship between these two mean times is derived.
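    As a sketch of the relationship described above, assume the classic exponential detection-time model P(t) = P_inf * (1 - exp(-t/tau)) commonly used in search modeling (an assumption for illustration; the paper's exact formula may differ). Conditioning on detection before the time limit T yields a time-limited mean that is shorter than the unlimited mean tau and approaches tau as T grows:

```python
import math

def mean_time_limited(tau, T):
    """Mean time to detect, conditional on detection within time limit T,
    for an exponential detection-time model whose time-unlimited mean is tau.

    Derived from m(T) = (integral of t*exp(-t/tau) over [0, T])
                      / (integral of   exp(-t/tau) over [0, T]).
    """
    q = math.exp(-T / tau)
    return tau - T * q / (1.0 - q)

tau = 10.0  # mean time to detect in time-unlimited search (hypothetical)
for T in (5.0, 20.0, 100.0):
    print(T, mean_time_limited(tau, T))
```

    The printed means increase with T but always stay below tau, matching the qualitative claim in the abstract.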

  1. Mediating Role of Career Coaching on Job-Search Behavior of Older Generations.

    PubMed

    Lim, Doo Hun; Oh, Eunjung; Ju, Boreum; Kim, Hae Na

    2018-01-01

    This study focuses on career development processes and options for older workers in South Korea and explores how career coaching supports their career development efforts and transition needs. The purpose of this study is to investigate the structural relationship between older employees' goal-setting, self-efficacy, and job-search behavior as mediated by career coaching. A total of 249 participants were recruited in a metropolitan city in South Korea. Based on the literature review, hypotheses were developed and tested on the structural model, revealing the following findings. First, the findings indicate a positive effect of self-efficacy on older workers' job-search behavior. Second, the value of career coaching was found to affect older workers' job-search behavior in the South Korean context. Third, career-goal commitment alone did not have a significant positive effect on job-search behavior, but it was influential through the mediating process of the perceived quality of the career coaching program provided by an employment center in South Korea.

  2. Development of public science archive system of Subaru Telescope

    NASA Astrophysics Data System (ADS)

    Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatochi; Watanabe, Masaru; Okumura, Shin-Ichiro; Ozawa, Tomohiko; Yamamoto, Naotaka; Hamabe, Masaru

    2002-09-01

    We have developed a public science archive system, the Subaru-Mitaka-Okayama-Kiso Archive system (SMOKA), as a successor to the Mitaka-Okayama-Kiso Archive (MOKA) system. SMOKA provides access to the public data of Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory of the University of Tokyo. Since 1997, we have worked to compile a dictionary of FITS header keywords. Completion of the dictionary enabled us to construct a unified public archive of the data obtained with various instruments at the telescopes. SMOKA has two kinds of user interfaces: Simple Search and Advanced Search. Novices can search data by simply selecting the name of the target with the Simple Search interface. Experts would prefer to set detailed constraints on the query using the Advanced Search interface. In order to improve the efficiency of searching, several new features are implemented, such as archive status plots, calibration data search, an annotation system, and an improved Quick Look Image browsing system. We can efficiently develop and operate SMOKA by adopting a three-tier model for the system. Java servlets and Java Server Pages (JSP) are useful to separate the front-end presentation from the middle and back-end tiers.

  3. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  4. Algebraic Algorithm Design and Local Search

    DTIC Science & Technology

    1996-12-01

    method for performing algorithm design that is more purely algebraic than that of KIDS. This method is then applied to local search. Local search is a...synthesis. Our approach was to follow KIDS in spirit, but to adopt a pure algebraic formalism, supported by Kestrel’s SPECWARE environment (79), that...design was developed that is more purely algebraic than that of KIDS. This method was then applied to local search. A general theory of local search was

  5. Querying archetype-based EHRs by search ontology-based XPath engineering.

    PubMed

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search-ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. Such a query specification method is practically introduced and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search-ontology-based XPath engineering that reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by search-ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects Search Terms to certain parts of XML documents and enables an ontology-based definition of queries. Search-ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
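    The concept-to-XPath mapping at the heart of this approach can be sketched as follows. The EHR fragment, element names, and concept table below are hypothetical stand-ins for illustration, not the authors' search ontology or archetype structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical, radically simplified EHR fragment; real archetype-based
# documents are far richer.
doc = ET.fromstring("""
<ehr>
  <section name="diagnosis"><entry>prostate carcinoma</entry></section>
  <section name="medication"><entry>tamsulosin</entry></section>
</ehr>""")

# A toy "search ontology": each search concept maps to an XPath that
# constrains the query to the relevant part of the document, avoiding
# the false positives a free-text search over the whole document yields.
CONCEPT_TO_XPATH = {
    "diagnosis": ".//section[@name='diagnosis']/entry",
    "medication": ".//section[@name='medication']/entry",
}

def concept_query(concept):
    """Resolve a search concept to its XPath and run it on the document."""
    return [e.text for e in doc.findall(CONCEPT_TO_XPATH[concept])]

print(concept_query("diagnosis"))
```

    A term like "tamsulosin" found under the diagnosis concept would be ignored, which is exactly the precision gain the abstract describes.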

  6. Setting the public agenda for online health search: a white paper and action agenda.

    PubMed

    Greenberg, Liza; D'Andrea, Guy; Lorence, Dan

    2004-06-08

    Searches for health information are among the most common reasons that consumers use the Internet. Both consumers and quality experts have raised concerns about the quality of information on the Web and the ability of consumers to find accurate information that meets their needs. To produce a national stakeholder-driven agenda for research, technical improvements, and education that will improve the results of consumer searches for health information on the Internet. URAC, a national accreditation organization, and Consumer WebWatch (CWW), a project of Consumers Union (a consumer advocacy organization), conducted a review of factors influencing the results of online health searches. The organizations convened two stakeholder groups of consumers, quality experts, search engine experts, researchers, health-care providers, informatics specialists, and others. Meeting participants reviewed existing information and developed recommendations for improving the results of online consumer searches for health information. Participants were not asked to vote on or endorse the recommendations. Our working definition of a quality Web site was one that contained accurate, reliable, and complete information. The Internet has greatly improved access to health information for consumers. There is great variation in how consumers seek information via the Internet, and in how successful they are in searching for health information. Further, there is variation among Web sites, both in quality and accessibility. Many Web site features affect the capability of search engines to find and index them. Research is needed to define quality elements of Web sites that could be retrieved by search engines and understand how to meet the needs of different types of searchers. Technological research should seek to develop more sophisticated approaches for tagging information, and to develop searches that "learn" from consumer behavior. 
Finally, education initiatives are needed to help consumers search more effectively and to help them critically evaluate the information they find.

  7. Setting the Public Agenda for Online Health Search: A White Paper and Action Agenda

    PubMed Central

    D'Andrea, Guy; Lorence, Dan

    2004-01-01

    Background: Searches for health information are among the most common reasons that consumers use the Internet. Both consumers and quality experts have raised concerns about the quality of information on the Web and the ability of consumers to find accurate information that meets their needs. Objective: To produce a national stakeholder-driven agenda for research, technical improvements, and education that will improve the results of consumer searches for health information on the Internet. Methods: URAC, a national accreditation organization, and Consumer WebWatch (CWW), a project of Consumers Union (a consumer advocacy organization), conducted a review of factors influencing the results of online health searches. The organizations convened two stakeholder groups of consumers, quality experts, search engine experts, researchers, health-care providers, informatics specialists, and others. Meeting participants reviewed existing information and developed recommendations for improving the results of online consumer searches for health information. Participants were not asked to vote on or endorse the recommendations. Our working definition of a quality Web site was one that contained accurate, reliable, and complete information. Results: The Internet has greatly improved access to health information for consumers. There is great variation in how consumers seek information via the Internet, and in how successful they are in searching for health information. Further, there is variation among Web sites, both in quality and accessibility. Many Web site features affect the capability of search engines to find and index them. Conclusions: Research is needed to define quality elements of Web sites that could be retrieved by search engines and understand how to meet the needs of different types of searchers. Technological research should seek to develop more sophisticated approaches for tagging information, and to develop searches that "learn" from consumer behavior. 
Finally, education initiatives are needed to help consumers search more effectively and to help them critically evaluate the information they find. PMID:15249267

  8. [Progress in the spectral library based protein identification strategy].

    PubMed

    Yu, Derui; Ma, Jie; Xie, Zengyan; Bai, Mingze; Zhu, Yunping; Shu, Kunxian

    2018-04-25

    Mass spectrometry (MS) data have grown exponentially as MS-based proteomics has developed rapidly. It is a great challenge to develop quick, accurate and repeatable methods to identify peptides and proteins. Nowadays, spectral library searching has become a mature strategy for tandem-mass-spectra-based protein identification in proteomics: it searches the experimental spectra against a collection of confidently identified MS/MS spectra that have been observed previously, and fully utilizes peak abundances in the spectrum, peaks from non-canonical fragment ions, and other features. This review provides a comprehensive overview of the spectral library search strategy and its two key steps, spectral library construction and spectral library searching, and discusses the progress and challenges of the library search strategy.
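    A minimal sketch of the scoring step in spectral library searching, using a normalized dot product between binned intensity vectors. The peptides and intensity values are invented for illustration; production library-search tools use richer, tuned scoring functions.

```python
import math

def cosine(a, b):
    """Normalized dot product between two binned intensity vectors,
    a common core of spectrum-to-spectrum scoring in library search."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy library: peptide -> binned MS/MS intensity vector (hypothetical data).
library = {
    "PEPTIDE_A": [0.0, 5.0, 1.0, 0.0, 3.0],
    "PEPTIDE_B": [4.0, 0.0, 0.0, 2.0, 0.0],
}
query = [0.0, 4.8, 1.2, 0.0, 2.9]  # experimental spectrum, binned

# Identify the query by its best-matching library spectrum.
best = max(library, key=lambda p: cosine(query, library[p]))
print(best)
```

    Because the library stores observed abundances rather than theoretical fragment lists, even this crude score exploits intensity information that sequence-database search engines typically ignore.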

  9. In Search of Speedier Searches.

    ERIC Educational Resources Information Center

    Peterson, Ivars

    1984-01-01

    Methods to make computer searching as simple and efficient as possible have led to the development of various data structures. Data structures specify the items involved in searching and what can be done to them. The nature and advantages of using "self-adjusting" data structures (self-adjusting binary search trees) are discussed. (JN)

  10. Automatically finding relevant citations for clinical guideline development.

    PubMed

    Bui, Duy Duc An; Jonnalagadda, Siddhartha; Del Fiol, Guilherme

    2015-10-01

    Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. In the age of information technology, the process of literature search is still conducted manually; it is therefore costly, slow and subject to human error. In this research, we sought to improve the traditional search approach using innovative query expansion and citation ranking approaches. We developed a citation retrieval system composed of query expansion and citation ranking methods. The methods are unsupervised and easily integrated over the PubMed search engine. To validate the system, we developed a gold standard consisting of citations that were systematically searched and screened to support the development of cardiovascular clinical practice guidelines. The expansion and ranking methods were evaluated separately and compared with baseline approaches. Compared with the baseline PubMed expansion, the query expansion algorithm improved recall (80.2% vs. 51.5%) with a small loss in precision (0.4% vs. 0.6%). The algorithm could find all citations used to support a larger number of guideline recommendations than the baseline approach (64.5% vs. 37.2%, p<0.001). In addition, the citation ranking approach performed better than PubMed's "most recent" ranking (average precision +6.5%, recall@k +21.1%, p<0.001), PubMed's rank by "relevance" (average precision +6.1%, recall@k +14.8%, p<0.001), and the machine learning classifier that identifies scientifically sound studies from MEDLINE citations (average precision +4.9%, recall@k +4.2%, p<0.001). Our unsupervised query expansion and ranking techniques are more flexible and effective than PubMed's default search engine behavior and the machine learning classifier. Automated citation finding is promising to augment the traditional literature search. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Optimizing Search Patterns for Multiple Searchers Prosecuting a Single Contact In the South China Sea

    DTIC Science & Technology

    2016-09-01

    searching for lost car keys in a parking lot to prosecuting a submarine in the South China Sea. This research draws on oceanographic properties to...search area based on the oceanographic properties at 21N 119E. 14. SUBJECT TERMS Search Theory, Undersea Warfare, South China Sea, Anti- Submarine ...lot to prosecuting a submarine in the South China Sea. This research draws on oceanographic properties to develop a search radii for two surface ships

  12. Sundanese ancient manuscripts search engine using probability approach

    NASA Astrophysics Data System (ADS)

    Suryani, Mira; Hadi, Setiawan; Paulus, Erick; Nurma Yulita, Intan; Supriatna, Asep K.

    2017-10-01

    Today, Information and Communication Technology (ICT) has become a regular part of every aspect of life, including culture and heritage. Sundanese ancient manuscripts, as Sundanese heritage, are in damaged condition, as is the information they contain. To preserve the information in Sundanese ancient manuscripts and make them easier to search, a search engine has been developed. The search engine must have good computing ability. To obtain the best computation in the developed search engine, three types of probabilistic approaches were compared in this study: the Bayesian Networks Model, Divergence from Randomness with the PL2 distribution, and DFR-PL2F as a derivative of DFR-PL2. The three probabilistic approaches are supported by a document index and three different weighting methods: term occurrence, term frequency, and TF-IDF. The experiment involved 12 Sundanese ancient manuscripts containing 474 distinct terms. The developed search engine was tested with 50 random queries for three types of query. The experimental results showed that for both single and multiple queries, the best search performance was given by the combination of the PL2F approach and the TF-IDF weighting method, with an average response time of about 0.08 seconds and a Mean Average Precision (MAP) of about 0.33.
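    The TF-IDF weighting mentioned above can be sketched as follows. The documents and query are toy stand-ins for manuscript transcriptions; this shows the baseline weighting scheme, not the paper's DFR-PL2F probabilistic model.

```python
import math
from collections import Counter

# Toy stand-ins for manuscript transcriptions (hypothetical text).
docs = {
    "ms1": "carita raja sunda carita",
    "ms2": "raja galuh",
    "ms3": "naskah kuno sunda",
}

N = len(docs)
tf = {d: Counter(text.split()) for d, text in docs.items()}   # term frequencies
df = Counter(t for c in tf.values() for t in c)               # document frequencies

def tfidf_score(query, doc):
    """Sum of tf * idf over query terms; idf = log(N / df)."""
    return sum(
        tf[doc][t] * math.log(N / df[t])
        for t in query.split() if df[t]
    )

ranking = sorted(docs, key=lambda d: tfidf_score("carita sunda", d), reverse=True)
print(ranking)
```

    Rare terms ("carita" appears in one manuscript) dominate the score, while common terms contribute little, which is exactly what distinguishes TF-IDF from raw term frequency.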

  13. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines

    PubMed Central

    Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.

    2010-01-01

    Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy, and tend to be too large for manual inspection, therefore probabilistic methods must be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in results sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search engine independent scoring framework based on FDR which allows peptide identifications from different search engines to be combined, called the FDRScore. We observe that peptide identifications made by all three search engines are infrequently false positives, and identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score based on the FDR within peptide identifications grouped according to the set of search engines that have made the identification, called the combined FDRScore. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
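    The decoy-database FDR estimation that underpins the FDRScore can be sketched as follows. The peptide-spectrum-match (PSM) scores below are invented, and this shows only the basic target-decoy estimate, not the combined FDRScore itself.

```python
def fdr_at_threshold(scores, threshold):
    """Estimate FDR at a score threshold from target/decoy labels.

    scores: list of (score, is_decoy) pairs from a search against a
    concatenated target+decoy database; higher scores are better.
    Decoy hits above the threshold estimate the number of false
    positives among the target hits above it.
    """
    targets = sum(1 for s, d in scores if s >= threshold and not d)
    decoys = sum(1 for s, d in scores if s >= threshold and d)
    return decoys / targets if targets else 0.0

# Toy PSM scores (hypothetical).
psms = [(9.1, False), (8.7, False), (8.2, False), (7.9, True),
        (7.5, False), (6.8, True), (6.1, False)]
print(fdr_at_threshold(psms, 7.0))
```

    Sweeping the threshold and recording the FDR at each PSM is what lets scores from different engines be mapped onto one common, engine-independent scale.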

  14. SymDex: increasing the efficiency of chemical fingerprint similarity searches for comparing large chemical libraries by using query set indexing.

    PubMed

    Tai, David; Fang, Jianwen

    2012-08-27

    The large sizes of today's chemical databases require efficient algorithms to perform similarity searches. It can be very time-consuming to compare two large chemical databases. This paper seeks to build upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms, creating heuristics for searching an individual chemical against a chemical library by detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals so that redundant calculations that arise in sequential searches are eliminated. We implement this novel algorithm in a similarity search program called Symmetric inDexing, or SymDex. SymDex shows a maximum speedup of over 232% compared to the state-of-the-art single-query search algorithm over real data for various fingerprint lengths. Considerable speedup is even seen for batch searches where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
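    Fingerprint similarity search of the kind SymDex accelerates typically rests on the Tanimoto coefficient, and indexing heuristics prune candidates using an on-bit-count bound. A minimal sketch under those assumptions follows; the fingerprints are invented, and this is not SymDex's algorithm.

```python
def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two fingerprints given as sets of
    on-bit positions: |intersection| / |union|."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

def tanimoto_upper_bound(n1, n2):
    """Count bound: with on-bit counts n1 and n2, the Tanimoto
    coefficient is at most min(n1, n2) / max(n1, n2), so candidates
    whose counts already violate a similarity threshold can be skipped
    without computing the full comparison."""
    return min(n1, n2) / max(n1, n2)

# Toy fingerprints (hypothetical on-bit positions).
mol_a = {1, 4, 7, 9, 12}
mol_b = {1, 4, 7, 13}

print(tanimoto(mol_a, mol_b))
print(tanimoto_upper_bound(len(mol_a), len(mol_b)))
```

    Pruning by bit counts is what single-query indexes exploit; indexing the whole query set, as SymDex does, additionally shares work that sequential searches would repeat for every query.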

  15. A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.

    PubMed

    Huang, Shiping; Wu, Zhifeng; Misra, Anil

    2017-12-11

    Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently and simultaneously eliminate bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema and singularities and of choosing initial values. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
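    A minimal sketch of solving range-based localization equations with a Gauss-Newton iteration, a standard least-squares variant of Newton's Method. The anchor positions and range measurements are invented, and this omits the paper's geometric intersection model and bad-measurement correction.

```python
import math

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # known beacon positions
true_pos = (3.0, 4.0)                               # hypothetical target
dists = [math.dist(true_pos, a) for a in anchors]   # ideal range measurements

def gauss_newton(x, y, iters=50):
    """Minimize sum of (|p - a_i| - d_i)^2 over positions p = (x, y)."""
    for _ in range(iters):
        # Accumulate J^T J and J^T r for residuals r_i = |p - a_i| - d_i.
        h11 = h12 = h22 = g1 = g2 = 0.0
        for (ax, ay), d in zip(anchors, dists):
            r = math.dist((x, y), (ax, ay))
            if r == 0:
                continue  # gradient undefined exactly at an anchor
            jx, jy = (x - ax) / r, (y - ay) / r
            res = r - d
            h11 += jx * jx; h12 += jx * jy; h22 += jy * jy
            g1 += jx * res; g2 += jy * res
        det = h11 * h22 - h12 * h12
        if abs(det) < 1e-12:
            break  # near-singular normal equations
        # Solve the 2x2 normal equations H * step = -g by Cramer's rule.
        dx = (-g1 * h22 + g2 * h12) / det
        dy = (-g2 * h11 + g1 * h12) / det
        x, y = x + dx, y + dy
    return x, y

est = gauss_newton(1.0, 1.0)  # initial guess inside the search area
print(est)
```

    A geometric intersection step, as in the paper, would supply a tight initial guess and bounded domain, which is precisely what makes Newton-type iterations like this converge reliably.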

  16. Toward building a comprehensive data mart

    NASA Astrophysics Data System (ADS)

    Boulware, Douglas; Salerno, John; Bleich, Richard; Hinman, Michael L.

    2004-04-01

    To uncover new relationships or patterns one must first build a corpus of data, or what some call a data mart. How can we make sure we have collected all the pertinent data and have maximized coverage? There are hundreds of search engines available for use on the Internet today. Which one is best? Is one better for one problem and a second better for another? Are meta-search engines better than individual search engines? In this paper we look at one possible approach to developing a methodology for comparing a number of search engines. Before we present this methodology, we first provide our motivation regarding the need for increased coverage. We next investigate how we can obtain ground truth and what the ground truth can tell us about the Internet and search engine capabilities. We then conclude by developing a methodology for comparing a number of the search engines and show how we can increase overall coverage and thus build a more comprehensive data mart.

  17. "Google Reigns Triumphant"?: Stemming the Tide of Googlitis via Collaborative, Situated Information Literacy Instruction

    ERIC Educational Resources Information Center

    Leibiger, Carol A.

    2011-01-01

    Googlitis, the overreliance on search engines for research and the resulting development of poor searching skills, is a recognized problem among today's students. Google is not an effective research tool because, in addition to encouraging keyword searching at the expense of more powerful subject searching, it only accesses the Surface Web and is…

  18. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
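
    The distinguishing feature of RGS is that candidate cut points are drawn from the data itself rather than from a fixed grid. The minimal sketch below captures that idea for simple one-sided rectangular cuts scored by S/sqrt(S+B); the function names, the cut shape, and the figure of merit are illustrative, not the published RGS code.

```python
import random

def random_grid_search(signal, background, n_trials=200, seed=1):
    """Random grid search over rectangular cuts, in the spirit of RGS:
    each sampled signal event defines a candidate cut (x >= x0 and
    y >= y0 here), scored by a simple S / sqrt(S + B) figure of merit."""
    rng = random.Random(seed)
    best = (None, -1.0)
    for _ in range(n_trials):
        x0, y0 = rng.choice(signal)  # a signal event defines the cut point
        s = sum(1 for x, y in signal if x >= x0 and y >= y0)
        b = sum(1 for x, y in background if x >= x0 and y >= y0)
        score = s / (s + b) ** 0.5 if s + b else 0.0
        if score > best[1]:
            best = ((x0, y0), score)
    return best
```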

  19. OrChem - An open source chemistry search engine for Oracle(R).

    PubMed

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.

  20. Automated discovery of local search heuristics for satisfiability testing.

    PubMed

    Fukunaga, Alex S

    2008-01-01

    The development of successful metaheuristic algorithms such as local search for a difficult problem such as satisfiability testing (SAT) is a challenging task. We investigate an evolutionary approach to automating the discovery of new local search heuristics for SAT. We show that several well-known SAT local search algorithms such as Walksat and Novelty are composite heuristics that are derived from novel combinations of a set of building blocks. Based on this observation, we developed CLASS, a genetic programming system that uses a simple composition operator to automatically discover SAT local search heuristics. New heuristics discovered by CLASS are shown to be competitive with the best Walksat variants, including Novelty+. Evolutionary algorithms have previously been applied to directly evolve a solution for a particular SAT instance. We show that the heuristics discovered by CLASS are also competitive with these previous, direct evolutionary approaches for SAT. We also analyze the local search behavior of the learned heuristics using the depth, mobility, and coverage metrics proposed by Schuurmans and Southey.
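
    The building blocks CLASS composes (random walk moves, greedy "break count" moves, and so on) are the ingredients of classic SAT local search. As context, here is a minimal WalkSAT-style sketch, not CLASS itself or any discovered heuristic; names are illustrative.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Minimal WalkSAT-style local search for CNF SAT.

    clauses: list of clauses; a literal v > 0 means variable v is true,
    v < 0 means variable v is false. Returns a model dict or None."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))  # random-walk building block
        else:
            # Greedy building block: flip the variable in the clause that
            # leaves the fewest clauses unsatisfied afterwards.
            def broken(v):
                assign[v] = not assign[v]
                n = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return n
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None  # no model found within the flip budget
```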

  1. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  2. Efficient multifeature index structures for music data retrieval

    NASA Astrophysics Data System (ADS)

    Lee, Wegin; Chen, Arbee L. P.

    1999-12-01

    In this paper, we propose four index structures for music data retrieval. Based on suffix trees, we develop two index structures, called the combined suffix tree and independent suffix trees. These methods still show shortcomings for some search functions. Hence we develop another index, called Twin Suffix Trees, to overcome these problems. However, the Twin Suffix Trees lack scalability when the amount of music data becomes large. Therefore we propose a fourth index, called Grid-Twin Suffix Trees, to provide scalability and flexibility for a large amount of music data. For each index, we can use different search functions, such as exact search and approximate search, on different music features, such as melody, rhythm, or both. We compare the performance of the different search functions applied to each index structure through a series of experiments.
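
    The exact-match query these suffix-tree indexes answer can be illustrated with a much simpler cousin, a suffix array over a sequence of melody symbols (for example, pitch intervals). This is only a rough stand-in for the paper's structures, and the names are illustrative.

```python
def build_suffix_array(seq):
    """Suffix array over a symbol sequence (e.g. a melody's pitch intervals)."""
    return sorted(range(len(seq)), key=lambda i: seq[i:])

def find_occurrences(seq, sa, pattern):
    """All start positions of pattern in seq, by binary search over the
    suffix array -- the exact-match query a suffix-tree index also answers."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:  # leftmost suffix whose length-m prefix is >= pattern
        mid = (lo + hi) // 2
        if seq[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    first, hi = lo, len(sa)
    while lo < hi:  # first suffix whose length-m prefix is > pattern
        mid = (lo + hi) // 2
        if seq[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(first, lo))
```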

  3. "Rocky Mountain Talent Search" at the University of Denver

    ERIC Educational Resources Information Center

    Rigby, Kristin

    2005-01-01

    The "Rocky Mountain Talent Search" (RMTS) at the University of Denver was based on the talent search model developed by Dr Julian Stanley of Johns Hopkins University. This article summarizes the establishment of RMTS and outlines its contemporary programs. Guided by the philosophy that gifted students have unique needs, require academic…

  4. How to improve your PubMed/MEDLINE searches: 3. advanced searching, MeSH and My NCBI.

    PubMed

    Fatehi, Farhad; Gray, Leonard C; Wootton, Richard

    2014-03-01

    Although the basic PubMed search is often helpful, the results may sometimes be non-specific. For more control over the search process you can use the Advanced Search Builder interface. This allows a targeted search in specific fields, with the convenience of being able to select the intended search field from a list. It also provides a history of your previous searches. The search history is useful to develop a complex search query by combining several previous searches using Boolean operators. For indexing the articles in MEDLINE, the NLM uses a controlled vocabulary system called MeSH. This standardised vocabulary solves the problem of authors, researchers and librarians who may use different terms for the same concept. To be efficient in a PubMed search, you should start by identifying the most appropriate MeSH terms and use them in your search where possible. My NCBI is a personal workspace facility available through PubMed and makes it possible to customise the PubMed interface. It provides various capabilities that can enhance your search performance.
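
    A MeSH-tagged Boolean query of the kind built in the Advanced Search Builder can also be issued programmatically through NCBI's E-utilities ESearch endpoint. The sketch below only composes the query string and URL (ESearch and its db/term/retmax parameters are real; the helper names and the example terms are illustrative).

```python
from urllib.parse import urlencode

# NCBI's real ESearch endpoint for programmatic PubMed searches.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query(mesh_terms, operator="AND"):
    """Join MeSH headings into one Boolean query, tagging each with [MeSH]."""
    return f" {operator} ".join(f'"{t}"[MeSH]' for t in mesh_terms)

def esearch_url(query, retmax=20):
    """URL that would return PubMed IDs matching the query."""
    return ESEARCH + "?" + urlencode({"db": "pubmed", "term": query, "retmax": retmax})
```

    For example, `build_query(["Brachytherapy", "Prostatic Neoplasms"])` yields `"Brachytherapy"[MeSH] AND "Prostatic Neoplasms"[MeSH]`, which can then be passed to `esearch_url`.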

  5. A New Approximate Chimera Donor Cell Search Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop a chimera-based full potential methodology compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver, and to develop a fast donor cell search algorithm compatible with the chimera full potential approach. This work presents a new donor cell search algorithm suitable for use with a chimera-based full potential solver. The algorithm was found to be extremely fast and simple, producing donor cells at rates as high as 60,000 per second.

  6. The Department of Defense Net-Centric Data Strategy: Implementation Requires a Joint Community of Interest (COI) Working Group and Joint COI Oversight Council

    DTIC Science & Technology

    2007-05-17

    metadata formats, metadata repositories, enterprise portals and federated search engines that make data visible, available, and usable to users...develop an enterprise-wide data sharing plan, establishment of mission area governance processes for CIOs, DISA development of federated search specifications

  7. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

    Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. 
Conclusions By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762

  8. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization). For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. 
By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.

  9. Users’ guide to the surgical literature: how to perform a high-quality literature search

    PubMed Central

    Waltho, Daniel; Kaur, Manraj Nirmal; Haynes, R. Brian; Farrokhyar, Forough; Thoma, Achilleas

    2015-01-01

    Summary The article “Users’ guide to the surgical literature: how to perform a literature search” was published in 2003, but the continuing technological developments in databases and search filters have rendered that guide out of date. The present guide fills an existing gap in this area; it provides the reader with strategies for developing a searchable clinical question, creating an efficient search strategy, accessing appropriate databases, and skillfully retrieving the best evidence to address the research question. PMID:26384150

  10. OpenSearch (ECHO-ESIP) & REST API for Earth Science Data Access

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will provide a brief technical overview of OpenSearch, the Earth Science Information Partners (ESIP) Federated Search framework, and the REST architecture; discuss NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) implementation lessons learned; and demonstrate the simplified usage of these technologies. SOAP, as a framework for web service communication has numerous advantages for Enterprise applications and Java/C# type programming languages. As a technical solution, SOAP has been a reliable framework on top of which many applications have been successfully developed and deployed. However, as interest grows for quick development cycles and more intriguing “mashups,” the SOAP API loses its appeal. Lightweight and simple are the vogue characteristics that are sought after. Enter the REST API architecture and OpenSearch format. Both of these items provide a new path for application development addressing some of the issues unresolved by SOAP. ECHO has made available all of its discovery, order submission, and data management services through a publicly accessible SOAP API. This interface is utilized by a variety of ECHO client and data partners to provide valuable capabilities to end users. As ECHO interacted with current and potential partners looking to develop Earth Science tools utilizing ECHO, it became apparent that the development overhead required to interact with the SOAP API was a growing barrier to entry. ECHO acknowledged the technical issues that were being uncovered by its partner community and chose to provide two new interfaces for interacting with the ECHO metadata catalog. The first interface is built upon the OpenSearch format and ESIP Federated Search framework. Leveraging these two items, a client (ECHO-ESIP) was developed with a focus on simplified searching and results presentation. The second interface is built upon the Representational State Transfer (REST) architecture. 
Leveraging the REST architecture, a new API has been made available that provides access to the entire SOAP API suite of services. The results of these development activities have not only positioned ECHO to engage in the thriving world of mashup applications, but have also provided an excellent real-world case study of how to successfully leverage these emerging technologies.

  11. Patient safety and systematic reviews: finding papers indexed in MEDLINE, EMBASE and CINAHL.

    PubMed

    Tanon, A A; Champagne, F; Contandriopoulos, A-P; Pomey, M-P; Vadeboncoeur, A; Nguyen, H

    2010-10-01

    To develop search strategies for identifying papers on patient safety in MEDLINE, EMBASE and CINAHL. Six journals were electronically searched for papers on patient safety published between 2000 and 2006. Identified papers were divided into two gold standards: one to build and the other to validate the search strategies. Candidate terms for strategy construction were identified using a word frequency analysis of titles, abstracts and keywords used to index the papers in the databases. Searches were run for each one of the selected terms independently in every database. Sensitivity, precision and specificity were calculated for each candidate term. Terms with sensitivity greater than 10% were combined to form the final strategies. The search strategies developed were run against the validation gold standard to assess their performance. A final step in the validation process was to compare the performance of each strategy to those of other strategies found in the literature. We developed strategies for all three databases that were highly sensitive (range 95%-100%), precise (range 40%-60%) and balanced (the product of sensitivity and precision being in the range of 30%-40%). The strategies were very specific and outperformed those found in the literature. The strategies we developed can meet the needs of users aiming to maximise either sensitivity or precision, or seeking a reasonable compromise between sensitivity and precision, when searching for papers on patient safety in MEDLINE, EMBASE or CINAHL.
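
    The performance figures above follow directly from the standard retrieval contingency table. As a small illustrative helper (not the authors' code), sensitivity, precision and specificity can be computed from sets of record IDs like so:

```python
def filter_metrics(retrieved, relevant, universe):
    """Sensitivity, precision and specificity of a search strategy.

    retrieved: records the strategy returns; relevant: the gold standard;
    universe: all records searched. All three are collections of IDs."""
    retrieved, relevant, universe = set(retrieved), set(relevant), set(universe)
    tp = len(retrieved & relevant)   # relevant records found
    fp = len(retrieved - relevant)   # irrelevant records returned
    fn = len(relevant - retrieved)   # relevant records missed
    tn = len(universe - retrieved - relevant)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, precision, specificity
```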

  12. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  13. Developing Information Storage and Retrieval Systems on the Internet: A Knowledge Management Approach

    DTIC Science & Technology

    2011-09-01

    search engines to find information. Most commercial search engines (Google, Yahoo, Bing, etc.) provide their indexing and search services...at no cost. The DoD can achieve large gains at a small cost by making public documents available to search engines. This can be achieved through the...were organized on the website dodreports.com. The results of this research revealed improvement gains of 8-20% for finding reports through commercial search engines during the first six months of

  14. Dialysis search filters for PubMed, Ovid MEDLINE, and Embase databases.

    PubMed

    Iansavichus, Arthur V; Haynes, R Brian; Lee, Christopher W C; Wilczynski, Nancy L; McKibbon, Ann; Shariff, Salimah Z; Blake, Peter G; Lindsay, Robert M; Garg, Amit X

    2012-10-01

    Physicians frequently search bibliographic databases, such as MEDLINE via PubMed, for best evidence for patient care. The objective of this study was to develop and test search filters to help physicians efficiently retrieve literature related to dialysis (hemodialysis or peritoneal dialysis) from all other articles indexed in PubMed, Ovid MEDLINE, and Embase. A diagnostic test assessment framework was used to develop and test robust dialysis filters. The reference standard was a manual review of the full texts of 22,992 articles from 39 journals to determine whether each article contained dialysis information. Next, 1,623,728 unique search filters were developed, and their ability to retrieve relevant articles was evaluated. The high-performance dialysis filters consisted of up to 65 search terms in combination. These terms included the words "dialy" (truncated), "uremic," "catheters," and "renal transplant wait list." These filters reached peak sensitivities of 98.6% and specificities of 98.5%. The filters' performance remained robust in an independent validation subset of articles. These empirically derived and validated high-performance search filters should enable physicians to effectively retrieve dialysis information from PubMed, Ovid MEDLINE, and Embase.

  15. An FMRI-compatible Symbol Search task.

    PubMed

    Liebel, Spencer W; Clark, Uraina S; Xu, Xiaomeng; Riskin-Jones, Hannah H; Hawkshead, Brittany E; Schwarz, Nicolette F; Labbe, Donald; Jerskey, Beth A; Sweet, Lawrence H

    2015-03-01

    Our objective was to determine whether a Symbol Search paradigm developed for functional magnetic resonance imaging (FMRI) is a reliable and valid measure of cognitive processing speed (CPS) in healthy older adults. As all older adults are expected to experience cognitive declines due to aging, and CPS is one of the domains most affected by age, establishing a reliable and valid measure of CPS that can be administered inside an MR scanner may prove invaluable in future clinical and research settings. We evaluated the reliability and construct validity of a newly developed FMRI Symbol Search task by comparing participants' performance in and outside of the scanner and to the widely used and standardized Symbol Search subtest of the Wechsler Adult Intelligence Scale (WAIS). A brief battery of neuropsychological measures was also administered to assess the convergent and discriminant validity of the FMRI Symbol Search task. The FMRI Symbol Search task demonstrated high test-retest reliability when compared to performance on the same task administered out of the scanner (r=.791; p<.001). The criterion validity of the new task was supported, as it exhibited a strong positive correlation with the WAIS Symbol Search (r=.717; p<.001). Predicted convergent and discriminant validity patterns of the FMRI Symbol Search task were also observed. The FMRI Symbol Search task is a reliable and valid measure of CPS in healthy older adults and exhibits expected sensitivity to the effects of age on CPS performance.

  16. LigSearch: a knowledge-based web server to identify likely ligands for a protein target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene

    LigSearch is a web server aimed at predicting ligands that might bind to and stabilize a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.

  17. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  18. Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.

    PubMed

    Muin, Michael; Fontelo, Paul

    2006-11-03

    The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.

  19. Developing topic-specific search filters for PubMed with click-through data.

    PubMed

    Li, J; Lu, Z

    2013-01-01

    Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. We present an automated method to develop topic-specific filters on the basis of users' search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/
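
    The statistical term-selection step described above can be sketched with a simple smoothed log-odds score contrasting topic-relevant documents against a background set. This is a generic stand-in for the authors' statistic, and all names are illustrative.

```python
import math
from collections import Counter

def informative_terms(relevant_docs, background_docs, k=5):
    """Rank terms by smoothed log-odds of appearing in a topic-relevant
    document versus a background document. Docs are lists of token strings."""
    def doc_freq(docs):
        df = Counter()
        for d in docs:
            df.update(set(d))  # count each term once per document
        return df
    rel, bg = doc_freq(relevant_docs), doc_freq(background_docs)
    n_rel, n_bg = len(relevant_docs), len(background_docs)
    scores = {}
    for term in rel:
        p = (rel[term] + 0.5) / (n_rel + 1.0)       # smoothed P(term | relevant)
        q = (bg.get(term, 0) + 0.5) / (n_bg + 1.0)  # smoothed P(term | background)
        scores[term] = math.log(p / (1 - p)) - math.log(q / (1 - q))
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

    The top-ranked terms would then be combined with Boolean operators and evaluated on benchmark data, as the abstract describes.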

  20. Developing Topic-Specific Search Filters for PubMed with Click-Through Data

    PubMed Central

    Li, Jiao; Lu, Zhiyong

    2013-01-01

    Summary Objectives Search filters have been developed and demonstrated for better information access to the immense and ever-growing body of publications in the biomedical domain. However, to date the number of filters remains quite limited because the current filter development methods require significant human efforts in manual document review and filter term selection. In this regard, we aim to investigate automatic methods for generating search filters. Methods We present an automated method to develop topic-specific filters on the basis of users’ search logs in PubMed. Specifically, for a given topic, we first detect its relevant user queries and then include their corresponding clicked articles to serve as the topic-relevant document set accordingly. Next, we statistically identify informative terms that best represent the topic-relevant document set using a background set composed of topic irrelevant articles. Lastly, the selected representative terms are combined with Boolean operators and evaluated on benchmark datasets to derive the final filter with the best performance. Results We applied our method to develop filters for four clinical topics: nephrology, diabetes, pregnancy, and depression. For the nephrology filter, our method obtained performance comparable to the state of the art (sensitivity of 91.3%, specificity of 98.7%, precision of 94.6%, and accuracy of 97.2%). Similarly, high-performing results (over 90% in all measures) were obtained for the other three search filters. Conclusion Based on PubMed click-through data, we successfully developed a high-performance method for generating topic-specific search filters that is significantly more efficient than existing manual methods. All data sets (topic-relevant and irrelevant document sets) used in this study and a demonstration system are publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/downloads/CQ_filter/ PMID:23666447
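
    The statistical term-selection step described in this record (identifying terms that best separate the topic-relevant document set from a background set) can be sketched with a chi-square association score. This is a minimal sketch under my own assumptions; the paper's exact statistic, tokenization and thresholds are not specified here.

```python
from collections import Counter

def score_terms(relevant_docs, background_docs):
    """Rank candidate filter terms by chi-square association with the
    topic-relevant set. Each document is a list of tokens. This is an
    illustrative sketch, not the paper's actual implementation."""
    n_rel, n_bg = len(relevant_docs), len(background_docs)
    rel_df = Counter(t for doc in relevant_docs for t in set(doc))
    bg_df = Counter(t for doc in background_docs for t in set(doc))
    scores = {}
    for term, a in rel_df.items():      # a: relevant docs containing the term
        b = bg_df.get(term, 0)          # b: background docs containing the term
        c, d = n_rel - a, n_bg - b      # docs in each set lacking the term
        n = a + b + c + d
        denom = (a + b) * (c + d) * (a + c) * (b + d)
        if denom:
            scores[term] = n * (a * d - b * c) ** 2 / denom
    return sorted(scores, key=scores.get, reverse=True)
```

    The top-ranked terms would then be combined with Boolean OR/AND operators and evaluated against a benchmark set, as the abstract describes.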

  1. The NASA SETI sky survey - Recent developments

    NASA Technical Reports Server (NTRS)

    Klein, Michael J.; Gulkis, Samuel; Olsen, Edward T.; Renzetti, Nicholas A.

    1988-01-01

    NASA's Search for Extraterrestrial Intelligence (SETI) project utilizes two complementary search strategies: a sky survey and a targeted search. The SETI team at the Jet Propulsion Laboratory has primary responsibility for developing and carrying out the sky survey part of the Microwave Observing Project. The paper describes progress made in developing the major elements of the survey, including a two-million-channel wideband spectrum analyzer system that is being developed and constructed by JPL for the Deep Space Network. The new system will be a multiuser instrument that will serve as a prototype for the SETI Sky Survey processor. This system will be used to test the signal detection and observational strategies on Deep Space Network antennas in the near future.

  2. Path integration mediated systematic search: a Bayesian model.

    PubMed

    Vickerstaff, Robert J; Merkle, Tobias

    2012-08-21

    The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in the North African desert ant Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches, a method to quantify search efficiency is established, as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ants' searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
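
    The core idea of a search heuristic that "optimises the short-term nest detection probability" can be sketched greedily: place a probability density over the nest position and repeatedly visit the unsearched cell with the most remaining probability mass. This is a hypothetical simplification on a discrete grid with a Gaussian belief, not the paper's actual model.

```python
import math

def greedy_search_path(sigma, grid_radius=10, steps=5):
    """Greedy Bayesian search sketch: a Gaussian belief (std. dev. sigma,
    in grid cells) over the nest position is laid on a square grid; each
    step visits the unsearched cell with the highest remaining probability
    mass. A larger sigma (more positional uncertainty) spreads the mass
    and hence broadens the resulting search pattern."""
    cells = {}
    for x in range(-grid_radius, grid_radius + 1):
        for y in range(-grid_radius, grid_radius + 1):
            cells[(x, y)] = math.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
    path = []
    for _ in range(steps):
        best = max(cells, key=cells.get)
        path.append(best)
        del cells[best]      # searched and found empty: mass removed
    return path
```

    Renormalising the belief after each unsuccessful visit would make this a proper Bayesian update; it is omitted here because it does not change which cell is visited next.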

  3. Socio-Psycho-Linguistic Determined Expert-Search System (SPLDESS) Development with Multimedia Illustration Elements

    NASA Astrophysics Data System (ADS)

    Ponomarev, Vasily

    SPLDESS development, with elements of multimedia illustration of the traditional hypertext results returned by Internet search engines, supports research into the innovative effect of information propagation during the experimental-stage formation of public-access information-recruiting networks of information kiosks, with mirrors at a constantly updated portal for Internet users. The author emphasises the condition that search-engine results be pertinent to the user's overall inquiry, providing a politically correct, non-usurping social-network data-mining effect for urgent monitoring. The development of access via new types of communication devices, using the newest technologies for data transmission, multimedia and information exchange from the first innovation-line usage-support portal, is also presented (including the mechanism of socio-psycho-linguistic determination according to the author's conception).

  4. Guiding Students to Answers: Query Recommendation

    ERIC Educational Resources Information Center

    Yilmazel, Ozgur

    2011-01-01

    This paper reports on a guided navigation system built on the textbook search engine developed at Anadolu University to support distance education students. The search engine uses Turkish Language specific language processing modules to enable searches over course material presented in Open Education Faculty textbooks. We implemented a guided…

  5. Development of a One-Stop Data Search and Discovery Engine using Ontologies for Semantic Mappings (HydroSeek)

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Beran, B.

    2007-12-01

    Search engines have changed the way we see the Internet. The ability to find the information by just typing in keywords was a big contribution to the overall web experience. While the conventional search engine methodology worked well for textual documents, locating scientific data remains a problem since they are stored in databases not readily accessible by search engine bots. Considering different temporal, spatial and thematic coverage of different databases, especially for interdisciplinary research it is typically necessary to work with multiple data sources. These sources can be federal agencies which generally offer national coverage or regional sources which cover a smaller area with higher detail. However for a given geographic area of interest there often exists more than one database with relevant data. Thus being able to query multiple databases simultaneously is a desirable feature that would be tremendously useful for scientists. Development of such a search engine requires dealing with various heterogeneity issues. In scientific databases, systems often impose controlled vocabularies which ensure that they are generally homogeneous within themselves but are semantically heterogeneous when moving between different databases. This defines the boundaries of possible semantic related problems making it easier to solve than with the conventional search engines that deal with free text. We have developed a search engine that enables querying multiple data sources simultaneously and returns data in a standardized output despite the aforementioned heterogeneity issues between the underlying systems. This application relies mainly on metadata catalogs or indexing databases, ontologies and webservices with virtual globe and AJAX technologies for the graphical user interface. Users can trigger a search of dozens of different parameters over hundreds of thousands of stations from multiple agencies by providing a keyword, a spatial extent, i.e. 
a bounding box, and a temporal bracket. As part of this development we have also added an environment that allows users to do some of the semantic tagging, i.e. the linkage of a variable name (which can be anything they desire) to defined concepts in the ontology structure which in turn provides the backbone of the search engine.
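
    The semantic-tagging idea described above (linking each source's local variable names to shared ontology concepts so one keyword can query many databases) can be sketched with two small mappings. The structures and names below are hypothetical illustrations; the real HydroSeek ontology and metadata catalogs are far richer.

```python
def expand_query(keyword, concept_synonyms, source_mappings):
    """Resolve a free-text keyword to an ontology concept, then to each
    data source's local variable names.

    concept_synonyms: {concept: set of synonym strings}
    source_mappings:  {source: {local variable name: concept}}
    Both structures are hypothetical placeholders for an ontology and
    per-source semantic tags."""
    keyword = keyword.lower()
    concept = next((c for c, syns in concept_synonyms.items()
                    if keyword == c or keyword in syns), None)
    if concept is None:
        return {}
    return {source: [name for name, c in mapping.items() if c == concept]
            for source, mapping in source_mappings.items()}
```

    A federated search engine would then issue one query per source using that source's own variable names, restricted by the user's bounding box and time bracket.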

  6. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
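
    The defining trick of RGS is that candidate cut points are drawn from the data events themselves rather than from a fixed grid. A minimal one-dimensional sketch, assuming a single cut of the form x > threshold and the common figure of merit s/sqrt(s+b) (the actual RGS code supports many variables and cut types):

```python
import random

def random_grid_search(signal, background, n_points=200, seed=7):
    """RGS sketch in 1-D: sample cut thresholds from the signal events,
    count signal (s) and background (b) events passing each cut, and keep
    the cut maximising the figure of merit s / sqrt(s + b). Illustrative
    simplification, not the enhanced multi-cut-type implementation."""
    rng = random.Random(seed)
    best_cut, best_fom = None, -1.0
    for _ in range(n_points):
        cut = rng.choice(signal)              # cut values taken from data
        s = sum(1 for x in signal if x > cut)
        b = sum(1 for x in background if x > cut)
        if s + b > 0:
            fom = s / (s + b) ** 0.5
            if fom > best_fom:
                best_cut, best_fom = cut, fom
    return best_cut, best_fom
```

    Because the cut points come from real events, the search automatically concentrates where the data actually live, which is what makes the method efficient in many dimensions.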

  7. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  8. Optimizing event selection with the random grid search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  9. CALIL.JP, a new web service that provides one-stop searching of Japan-wide libraries' collections

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Ryuuji

    Calil.JP is a new free online service that enables federated searching, marshalling and integration of Web-OPAC data on the collections of libraries from around Japan. It offers search results through a user-friendly interface. Developed with the concept of accelerating the discovery of fun-to-read books and motivating users to head for libraries, Calil was initially designed mainly for public library users. It now extends to cover university libraries and special libraries. This article presents Calil's basic capabilities, concept, progress made thus far, and plans for further development, as viewed by an engineering development manager.

  10. Googling DNA sequences on the World Wide Web.

    PubMed

    Hajibabaei, Mehrdad; Singer, Gregory A C

    2009-11-10

    New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm, and implemented it, for searching species-specific genomic sequences (DNA barcodes) using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm that divides a sequence library (DNA barcodes) and the query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages, developing pre- and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data. It provides a robust and efficient solution for sequence search on the web. The integration of our search method with large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
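
    The word-decomposition idea (splitting sequences into fixed-length words so a plain text search engine can match them, with no alignment step) can be sketched as below. The word length and step are my own hypothetical choices; the paper does not fix them in this abstract.

```python
def to_words(seq, k=8, step=8):
    """Split a DNA sequence into fixed-length, non-overlapping words so a
    conventional keyword search engine can index and match them."""
    seq = seq.upper().replace(" ", "")
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

def word_match_score(query, reference, k=8):
    """Alignment-free similarity: the fraction of the query's words that
    occur anywhere in the reference's word set. Illustrative sketch of
    the character-based matching idea, not the authors' exact scoring."""
    q, r = to_words(query, k), set(to_words(reference, k))
    return sum(1 for w in q if w in r) / len(q) if q else 0.0
```

    In the described system, the library sequences' words are what the text search tool indexes; the query's words become ordinary search terms.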

  11. Sample size determination for bibliographic retrieval studies

    PubMed Central

    Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian

    2008-01-01

    Background Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
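
    The confidence-interval reasoning above can be made concrete with the standard Wald approximation: a sensitivity p estimated from n articles has a 95% interval of total width W = 2 * 1.96 * sqrt(p * (1 - p) / n), so the minimum n for a target width follows by rearrangement. This is a textbook approximation for illustration, not necessarily the exact calculation used in the study.

```python
import math

def articles_needed(p, width, z=1.96):
    """Minimum number of high-quality articles n so that the 95% Wald
    confidence interval for sensitivity p has total width <= width:
    n = (2 * z)**2 * p * (1 - p) / width**2."""
    return math.ceil((2 * z) ** 2 * p * (1 - p) / width ** 2)
```

    The required n shrinks as the expected sensitivity moves away from 0.5, which is why subsets with around a hundred high-quality articles could suffice for the high-sensitivity strategies discussed above.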

  12. Methodological developments in searching for studies for systematic reviews: past, present and future?

    PubMed

    Lefebvre, Carol; Glanville, Julie; Wieland, L Susan; Coles, Bernadette; Weightman, Alison L

    2013-09-25

    The Cochrane Collaboration was established in 1993, following the opening of the UK Cochrane Centre in 1992, at a time when searching for studies for inclusion in systematic reviews was not well-developed. Review authors largely conducted their own searches or depended on medical librarians, who often possessed limited awareness and experience of systematic reviews. Guidance on the conduct and reporting of searches was limited. When work began to identify reports of randomized controlled trials (RCTs) for inclusion in Cochrane Reviews in 1992, there were only approximately 20,000 reports indexed as RCTs in MEDLINE and none indexed as RCTs in Embase. No search filters had been developed with the aim of identifying all RCTs in MEDLINE or other major databases. This presented The Cochrane Collaboration with a considerable challenge in identifying relevant studies.Over time, the number of studies indexed as RCTs in the major databases has grown considerably and the Cochrane Central Register of Controlled Trials (CENTRAL) has become the best single source of published controlled trials, with approximately 700,000 records, including records identified by the Collaboration from Embase and MEDLINE. Search filters for various study types, including systematic reviews and the Cochrane Highly Sensitive Search Strategies for RCTs, have been developed. There have been considerable advances in the evidence base for methodological aspects of information retrieval. The Cochrane Handbook for Systematic Reviews of Interventions now provides detailed guidance on the conduct and reporting of searches. 
Initiatives across The Cochrane Collaboration to improve the quality inter alia of information retrieval include: the recently introduced Methodological Expectations for Cochrane Intervention Reviews (MECIR) programme, which stipulates 'mandatory' and 'highly desirable' standards for various aspects of review conduct and reporting including searching, the development of Standard Training Materials for Cochrane Reviews and work on peer review of electronic search strategies. Almost all Cochrane Review Groups and some Cochrane Centres and Fields now have a Trials Search Co-ordinator responsible for study identification and medical librarians and other information specialists are increasingly experienced in searching for studies for systematic reviews.Prospective registration of clinical trials is increasing and searching trials registers is now mandatory for Cochrane Reviews, where relevant. Portals such as the WHO International Clinical Trials Registry Platform (ICTRP) are likely to become increasingly attractive, given concerns about the number of trials which may not be registered and/or published. The importance of access to information from regulatory and reimbursement agencies is likely to increase. Cross-database searching, gateways or portals and improved access to full-text databases will impact on how searches are conducted and reported, as will services such as Google Scholar, Scopus and Web of Science. Technologies such as textual analysis, semantic analysis, text mining and data linkage will have a major impact on the search process but efficient and effective updating of reviews may remain a challenge.In twenty years' time, we envisage that the impact of universal social networking, as well as national and international legislation, will mean that all trials involving humans will be registered at inception and detailed trial results will be routinely available to all. 
Challenges will remain, however, to ensure the discoverability of relevant information in diverse and often complex sources and the availability of metadata to provide the most efficient access to information. We envisage an ongoing role for information professionals as experts in identifying new resources, researching efficient ways to link or mine them for relevant data and managing their content for the efficient production of systematic reviews.

  13. Methodological developments in searching for studies for systematic reviews: past, present and future?

    PubMed Central

    2013-01-01

    The Cochrane Collaboration was established in 1993, following the opening of the UK Cochrane Centre in 1992, at a time when searching for studies for inclusion in systematic reviews was not well-developed. Review authors largely conducted their own searches or depended on medical librarians, who often possessed limited awareness and experience of systematic reviews. Guidance on the conduct and reporting of searches was limited. When work began to identify reports of randomized controlled trials (RCTs) for inclusion in Cochrane Reviews in 1992, there were only approximately 20,000 reports indexed as RCTs in MEDLINE and none indexed as RCTs in Embase. No search filters had been developed with the aim of identifying all RCTs in MEDLINE or other major databases. This presented The Cochrane Collaboration with a considerable challenge in identifying relevant studies. Over time, the number of studies indexed as RCTs in the major databases has grown considerably and the Cochrane Central Register of Controlled Trials (CENTRAL) has become the best single source of published controlled trials, with approximately 700,000 records, including records identified by the Collaboration from Embase and MEDLINE. Search filters for various study types, including systematic reviews and the Cochrane Highly Sensitive Search Strategies for RCTs, have been developed. There have been considerable advances in the evidence base for methodological aspects of information retrieval. The Cochrane Handbook for Systematic Reviews of Interventions now provides detailed guidance on the conduct and reporting of searches. 
Initiatives across The Cochrane Collaboration to improve the quality inter alia of information retrieval include: the recently introduced Methodological Expectations for Cochrane Intervention Reviews (MECIR) programme, which stipulates 'mandatory’ and 'highly desirable’ standards for various aspects of review conduct and reporting including searching, the development of Standard Training Materials for Cochrane Reviews and work on peer review of electronic search strategies. Almost all Cochrane Review Groups and some Cochrane Centres and Fields now have a Trials Search Co-ordinator responsible for study identification and medical librarians and other information specialists are increasingly experienced in searching for studies for systematic reviews. Prospective registration of clinical trials is increasing and searching trials registers is now mandatory for Cochrane Reviews, where relevant. Portals such as the WHO International Clinical Trials Registry Platform (ICTRP) are likely to become increasingly attractive, given concerns about the number of trials which may not be registered and/or published. The importance of access to information from regulatory and reimbursement agencies is likely to increase. Cross-database searching, gateways or portals and improved access to full-text databases will impact on how searches are conducted and reported, as will services such as Google Scholar, Scopus and Web of Science. Technologies such as textual analysis, semantic analysis, text mining and data linkage will have a major impact on the search process but efficient and effective updating of reviews may remain a challenge. In twenty years’ time, we envisage that the impact of universal social networking, as well as national and international legislation, will mean that all trials involving humans will be registered at inception and detailed trial results will be routinely available to all. 
Challenges will remain, however, to ensure the discoverability of relevant information in diverse and often complex sources and the availability of metadata to provide the most efficient access to information. We envisage an ongoing role for information professionals as experts in identifying new resources, researching efficient ways to link or mine them for relevant data and managing their content for the efficient production of systematic reviews. PMID:24066664

  14. Screening for Reading Problems: The Utility of SEARCH.

    ERIC Educational Resources Information Center

    Morrison, Delmont; And Others

    1988-01-01

    The accuracy of SEARCH for identifying children at risk for developing learning disabilities was evaluated with 1,107 kindergarten children. Children identified as at risk were of average intelligence. SEARCH scores were significantly correlated with sequential and simultaneous information processing skills. SEARCH predicted adequacy of…

  15. Monte Carlo-based searching as a tool to study carbohydrate structure

    USDA-ARS?s Scientific Manuscript database

    A torsion angle-based Monte-Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various functions for evaluat...

  16. Peer teaching and information retrieval: the role of the NICE Evidence search student champion scheme in enhancing students' confidence.

    PubMed

    Sbaffi, Laura; Hallsworth, Elaine; Weist, Anne

    2018-03-01

    This research reports on the first five years of activity (2011-2016) of the NICE Evidence search (ES) student champion scheme (SCS), in terms of its impact on health care undergraduate students' information search skills and search confidence. A review of students' evaluations of the scheme was carried out to chart changes in attitude towards NICE Evidence search as an online health care information source and to monitor students' approach to information seeking. This study is based on the results of questionnaires distributed to students before and after attending a training session on NICE Evidence search delivered by their own peers. The exercise was implemented in health-related universities in England over five consecutive academic years. (i) Students' search confidence improved considerably after the training; (ii) ES was perceived as an increasingly useful resource of evidence-based information for their studies; (iii) the training helped students develop discerning search skills and use evidence-based information sources more consistently and critically. The NICE SCS improves confidence in approaching information tasks amongst health care undergraduate students. Future developments could involve offering the training at the onset of a course of study and adopting online delivery formats to expand its geographical reach. © 2018 Health Libraries Group.

  17. Job Search as Goal-Directed Behavior: Objectives and Methods

    ERIC Educational Resources Information Center

    Van Hoye, Greet; Saks, Alan M.

    2008-01-01

    This study investigated the relationship between job search objectives (finding a new job/turnover, staying aware of job alternatives, developing a professional network, and obtaining leverage against an employer) and job search methods (looking at job ads, visiting job sites, networking, contacting employment agencies, contacting employers, and…

  18. Remote Sensing Capabilities to Detect Maritime Vessels in Distress

    NASA Technical Reports Server (NTRS)

    Larsen, Rudolph K.; Green, John M.; Huxtable, Barton D.; Rais, Houra

    2004-01-01

    The National Aeronautics and Space Administration (NASA) has the responsibility for conducting research and development for search and rescue as charged under the National Search and Rescue Plan. For over two decades this task has been undertaken by the Search and Rescue Mission Office at the NASA Goddard Space Flight Center (GSFC). The technology used by the highly successful beacon locating satellite system, Cospas-Sarsat, was conceived and developed at GSFC and is managed by the National Oceanographic and Atmospheric Administration (NOAA). Using beacon-less remote sensing to find people and vessels in distress complements the demonstrated life saving capabilities of this satellite system. The Search and Rescue Mission Office has been investigating the use of fully polarimetric synthetic aperture radar to locate crashed aircraft. An overview of this effort and potential maritime applications of Search and Rescue Synthetic Aperture Radar (SAR) will be presented. The Mission Office has also developed a Laser search and rescue system called L-SAR. The prototype instrument was designed and built by SenSyTech Inc. It specifically targets the location of novel retro-reflective material easily applied to rescue equipment and vessels in distress. An overview of this effort will also be presented.

  19. OrChem - An open source chemistry search engine for Oracle®

    PubMed Central

    2009-01-01

    Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
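
    Similarity searching of the kind OrChem provides is conventionally scored with the Tanimoto coefficient over binary fingerprints. A minimal sketch of that score, treating fingerprints as sets of "on" bit positions (an illustration of the standard metric, not OrChem's actual Java/PL-SQL code):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints given as
    iterables of 'on' bit positions: |A & B| / |A | B|. Two empty
    fingerprints are treated as identical."""
    a, b = set(fp_a), set(fp_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0
```

    A similarity search then amounts to computing this score between the query fingerprint and each registered compound and returning those above the chosen cut-off.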

  20. Development and use of a content search strategy for retrieving studies on patients' views and preferences.

    PubMed

    Selva, Anna; Solà, Ivan; Zhang, Yuan; Pardo-Hernandez, Hector; Haynes, R Brian; Martínez García, Laura; Navarro, Tamara; Schünemann, Holger; Alonso-Coello, Pablo

    2017-08-30

    Identifying scientific literature addressing patients' views and preferences is complex due to the wide range of studies that can be informative and the poor indexing of this evidence. Given the lack of guidance, we developed a search strategy to retrieve this type of evidence. We assembled an initial list of terms from several sources, including a review of the terms and indexing of topic-related studies, methods research literature, and other relevant projects and systematic reviews. We used the relative recall approach, evaluating the capacity of the designed search strategy to retrieve studies included in relevant systematic reviews on the topic. We then implemented the final version of the search strategy in practice for conducting systematic reviews and guidelines, and calculated the search's precision and the number of references needed to read (NNR). The initial version of the search strategy had a relative recall of 87.4% (yield of 132 out of 151 studies). We then added some additional terms from the studies not initially identified, and re-tested this improved version against the studies included in a new set of systematic reviews, reaching a relative recall of 85.8% (151 out of 176 studies, 95% CI 79.9 to 90.2). This final version of the strategy includes two sets of terms related to two domains: "Patient Preferences and Decision Making" and "Health State Utilities Values". When we used the search strategy for the development of systematic reviews and clinical guidelines, we obtained low precision values (ranging from 2% to 5%), with NNRs from 20 to 50. This search strategy fills an important research gap in this field. It will help systematic reviewers, clinical guideline developers, and policy-makers to retrieve published research on patients' views and preferences. In turn, this will facilitate the inclusion of this critical aspect when formulating health care decisions, including recommendations.
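
    The two evaluation measures used in this record are simple to compute: relative recall is the share of the gold-standard studies (those included in the reference systematic reviews) that the strategy retrieves, and NNR is the reciprocal of precision. A minimal sketch:

```python
def relative_recall(retrieved_ids, gold_ids):
    """Relative recall: fraction of gold-standard study IDs (studies
    included in the reference systematic reviews) that were retrieved."""
    gold = set(gold_ids)
    return len(gold & set(retrieved_ids)) / len(gold)

def number_needed_to_read(precision):
    """NNR = 1 / precision: references screened per relevant one found."""
    return 1.0 / precision
```

    For example, retrieving 151 of 176 gold-standard studies reproduces the 85.8% relative recall reported above, and a precision of 2% corresponds to an NNR of 50.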

  1. FOAMSearch.net: A custom search engine for emergency medicine and critical care.

    PubMed

    Raine, Todd; Thoma, Brent; Chan, Teresa M; Lin, Michelle

    2015-08-01

    The number of online resources read by and pertinent to clinicians has increased dramatically. However, most healthcare professionals still use mainstream search engines as their primary port of entry to the resources on the Internet. These search engines use algorithms that do not make it easy to find clinician-oriented resources. FOAMSearch, a custom search engine (CSE), was developed to find relevant, high-quality online resources for emergency medicine and critical care (EMCC) clinicians. Using Google™ algorithms, it searches a vetted list of >300 blogs, podcasts, wikis, knowledge translation tools, clinical decision support tools and medical journals. Utilisation has increased progressively to >3000 users/month since its launch in 2011. Further study of the role of CSEs to find medical resources is needed, and it might be possible to develop similar CSEs for other areas of medicine. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  2. Impact of guided exploration and enactive exploration on self-regulatory mechanisms and information acquisition through electronic search.

    PubMed

    Debowski, S; Wood, R E; Bandura, A

    2001-12-01

    Following instruction in basic skills for electronic search, participants who practiced in a guided exploration mode developed stronger self-efficacy and greater satisfaction than those who practiced in a self-guided exploratory mode. Intrinsic motivation was not affected by exploration mode. On 2 post-training tasks, guided exploration participants produced more effective search strategies, expended less effort, made fewer errors, rejected fewer lines of search, and achieved higher performance. Relative lack of support for self-regulatory factors as mediators of exploration-mode impacts was attributed to the uninformative feedback from electronic search, which causes most people to remain at a novice level and to require external guidance for the development of self-efficacy and skills. Self-guided learning will be more effective on structured tasks with more informative feedback and for individuals with greater expertise on dynamic tasks.

  3. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    PubMed

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative for creating personalized and enhanced search experiences. This work therefore introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabularies. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. The construction of the Smart Drug Search is described as a proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and related topics. The keyword-based queries of the users are transformed into concepts, and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of a concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e. concepts related by other intermediary concepts, can be useful for integrating information from different studies and looking into non-trivial relations. The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are its modularity and extensibility in terms of software design, the use of open-source consolidated Web technologies, and the ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search holds over 1,186,000 documents, containing more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
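    The co-occurrence reasoning the abstract describes can be sketched with plain counting; the documents and concepts below are invented for illustration, not taken from the Smart Drug Search:

```python
from itertools import combinations
from collections import Counter

# Hypothetical annotated search results: each document maps to the
# domain concepts a tagger found in it.
results = [
    {"E. coli", "ampicillin", "blaTEM-1"},
    {"E. coli", "ampicillin"},
    {"ampicillin", "blaTEM-1"},
    {"E. coli", "ciprofloxacin"},
]

# Concept frequency: importance of each concept to the query.
freq = Counter(c for doc in results for c in doc)

# Direct relations: strength = number of documents where both co-occur.
cooc = Counter(pair for doc in results for pair in combinations(sorted(doc), 2))

def indirectly_related(a, b):
    """Concepts never seen together but linked via an intermediary."""
    if cooc[tuple(sorted((a, b)))] > 0:
        return False
    return any(
        cooc[tuple(sorted((a, m)))] and cooc[tuple(sorted((m, b)))]
        for m in freq if m not in (a, b)
    )

print(freq["ampicillin"])                                # 3
print(cooc[("E. coli", "ampicillin")])                   # 2
print(indirectly_related("blaTEM-1", "ciprofloxacin"))   # True
```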

  4. Human Resource Development, Social Capital, Emotional Intelligence: Any Link to Productivity?

    ERIC Educational Resources Information Center

    Brooks, Kit; Nafukho, Fredrick Muyia

    2006-01-01

    Purpose: This article aims to offer a theoretical framework that attempts to show the integration among human resource development (HRD), social capital (SC), emotional intelligence (EI) and organizational productivity. Design/methodology/approach: The literature search included the following: a computerized search of accessible and available…

  5. Teaching Teachers to Search Electronically.

    ERIC Educational Resources Information Center

    Smith, Nancy H. G.

    1992-01-01

    Describes an inservice teacher training program developed to teach secondary school teachers how to search CD-ROMs, laser disks, and automated catalogs. Training sessions held during faculty meetings are described, computer activities are explained, a sample worksheet for searching an electronic encyclopedia is included, and sources for CD-ROMs…

  6. Building a better search engine for earth science data

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Yang, C. P.; Moroni, D. F.; McGibbney, L. J.; Jiang, Y.; Huang, T.; Greguska, F. R., III; Li, Y.; Finch, C. J.

    2017-12-01

    Free-text data searching of earth science datasets has been implemented with varying degrees of success and completeness across the spectrum of the 12 NASA earth science data centers. At the JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC), the search engine has been developed around the Solr/Lucene platform. Others have chosen popular enterprise search platforms like Elasticsearch. Regardless, the default implementations of these search engines, which leverage factors such as dataset popularity, term frequency and inverse document frequency, do not fully meet the needs of precise relevancy and ranking of earth science search results. For the PO.DAAC, this shortcoming has been identified for several years by its external User Working Group, which has issued several recommendations to improve the relevancy and discoverability of datasets related to remotely sensed sea surface temperature, ocean wind, waves, salinity, height and gravity, comprising over 500 publicly available datasets. Recently, the PO.DAAC has teamed with an effort led by George Mason University to improve the search and relevancy ranking of oceanographic data via a simple search interface and powerful backend services called MUDROD (Mining and Utilizing Dataset Relevancy from Oceanographic Datasets to Improve Data Discovery), funded by the NASA AIST program. MUDROD has mined and utilized the combination of PO.DAAC earth science dataset metadata, usage metrics, and user feedback and search history to objectively extract relevance for improved data discovery and access. In addition to improved dataset relevance and ranking, the MUDROD search engine also returns recommendations of related datasets and related user queries. This presentation will report on the use cases that drove the architecture and development, and the success metrics and improvements in search precision and recall that MUDROD has demonstrated over the existing PO.DAAC search interfaces.
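    One plausible way to blend textual relevance with mined usage signals, in the spirit of what the abstract describes, is a weighted score; the weights, field names and numbers below are illustrative assumptions, not MUDROD's actual model:

```python
import math

# Hypothetical blend of metadata text relevance with usage signals
# (downloads, clicks following similar queries).
def blended_score(text_score, downloads, clicks,
                  w_text=0.6, w_use=0.25, w_click=0.15):
    usage = math.log1p(downloads)     # dampen heavy-tailed usage counts
    feedback = math.log1p(clicks)
    return w_text * text_score + w_use * usage + w_click * feedback

datasets = [
    ("OceanWind_L3", 0.90, 1_500, 20),      # strong text match, little use
    ("SST_L4_GHRSST", 0.82, 120_000, 950),  # popular, heavily clicked
]
ranked = sorted(datasets, key=lambda d: blended_score(*d[1:]), reverse=True)
print([name for name, *_ in ranked])  # the popular dataset rises to the top
```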

  7. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2008-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open-source packages and custom-developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and the world. The harvester software is packaged in such a way that all the Mercury projects use the same harvester scripts, with each project driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal searches. This backend component is supported by a flexible, easy-to-use graphical user interface driven by cascading style sheets, which makes reusable design implementation even simpler. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
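    The "one shared engine, per-project configuration" reuse pattern can be sketched as follows; the project name, metadata format and endpoints are hypothetical, illustrating the pattern rather than Mercury's code:

```python
import json

# One shared harvester engine, parameterized only by a per-project
# configuration file.
def harvest(config):
    """Pretend-harvest each configured endpoint into metadata records."""
    records = []
    for endpoint in config["endpoints"]:
        # a real harvester would fetch and parse remote metadata here
        records.append({"source": endpoint, "format": config["format"]})
    return records

ornl_cfg = json.loads("""
{"project": "ORNL_DAAC", "format": "FGDC",
 "endpoints": ["https://example.org/meta/a", "https://example.org/meta/b"]}
""")

records = harvest(ornl_cfg)
print(len(records), records[0]["format"])  # 2 FGDC
```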

  8. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet

    2008-01-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open-source packages and custom-developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and the world. The harvester software is packaged in such a way that all the Mercury projects use the same harvester scripts, with each project driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal searches. This backend component is supported by a flexible, easy-to-use graphical user interface driven by cascading style sheets, which makes reusable design implementation even simpler. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.

  9. Development of a computerized visual search test.

    PubMed

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-09-01

    Visual attention and visual search are features of visual perception essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed to provide an alternative to existing cancellation tests. Data from two pilot studies will be reported that examined some aspects of the test's validity. To date, our assessment of the test shows that it discriminates between healthy and head-injured persons. More research and development work is required to examine task performance changes in relation to task complexity. It is suggested that the conceptual design for the test is worthy of further investigation.

  10. Pounding the Pavement. [A Job-Search Gaming-Simulation].

    ERIC Educational Resources Information Center

    Aiken, Rebecca; Lutrick, Angie; Kirk, James J.; Nickerson, Lisa; Wilder, Ginny

    This manual is a gaming simulation that career development professionals can use to promote awareness of and sensitivity to the job search experience encountered by their clientele. Goals of the simulation are to approximate a real life job search experience from different perspectives, while at the same time making it fun and interactive. Players…

  11. Epsilon-Q: An Automated Analyzer Interface for Mass Spectral Library Search and Label-Free Protein Quantification.

    PubMed

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki

    2017-12-01

    Mass spectrometry (MS) is a widely used proteome analysis tool for biomedical science. In an MS-based bottom-up proteomic approach to protein identification, sequence database (DB) searching has been routinely used because of its simplicity and convenience. However, searching a sequence DB with multiple variable modification options can increase processing time and false-positive errors in large and complicated MS data sets. Spectral library searching is an alternative solution that avoids the limitations of sequence DB searching and allows the detection of more peptides with high sensitivity. Unfortunately, this technique has lower proteome coverage, limiting the detection of novel and whole peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually combines searches of multiple reference and simulated spectral libraries to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance the functions of both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery-rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins by supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec Search method, called the Epsilon-Q system, it outperforms other sequence DB search engines for identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.
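    False-discovery-rate control of this kind generally builds on target-decoy counting; a minimal sketch of a per-class threshold computation ("class-specific" meaning it would be run separately per spectral-library class). The scores and cutoff logic are invented, not Epsilon-Q's actual model:

```python
# Minimal target-decoy FDR sketch: walk hits from best to worst score
# and keep the largest score prefix whose estimated FDR stays <= alpha.
def fdr_threshold(hits, alpha=0.01):
    """hits: (score, is_decoy) pairs; return the lowest score kept."""
    decoys = targets = 0
    cutoff = None
    for score, is_decoy in sorted(hits, key=lambda h: h[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= alpha:
            cutoff = score
    return cutoff

hits = [(9.1, False), (8.7, False), (8.2, False), (7.9, True), (7.5, False)]
print(fdr_threshold(hits, alpha=0.5))   # 7.5: the single decoy is tolerated
print(fdr_threshold(hits, alpha=0.0))   # 8.2: no decoys allowed
```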

  12. SA-Search: a web tool for protein structure mining based on a Structural Alphabet

    PubMed Central

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-01-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search. PMID:15215446

  13. SA-Search: a web tool for protein structure mining based on a Structural Alphabet.

    PubMed

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-07-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search.
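    Once conformations are encoded as 1D strings over a structural alphabet, finding exact structural "words" reduces to plain substring matching; a minimal sketch using a set of k-letter words in place of SA-Search's suffix tree (the alphabet letters and library strings are invented):

```python
# Exact structural "word" search over 1D structural-alphabet strings.
def shared_words(query, library, k=4):
    """Return library entries containing any length-k word of the query."""
    words = {query[i:i + k] for i in range(len(query) - k + 1)}
    hits = {}
    for name, sa_string in library.items():
        found = [w for w in words if w in sa_string]
        if found:
            hits[name] = sorted(found)
    return hits

library = {
    "1abc": "AABBGDDKAABB",
    "2xyz": "GGDDKKAA",
    "3pqr": "BBBBBBBB",
}
print(shared_words("AABBGDDK", library, k=4))
```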

  14. Maternal docosahexaenoic acid intake levels during pregnancy and infant performance on a novel object search task at 22 months.

    PubMed

    Rees, Alison; Sirois, Sylvain; Wearden, Alison

    2014-01-01

    This study investigated maternal prenatal docosahexaenoic acid (DHA) intake and infant cognitive development at 22 months. Estimates for second- and third-trimester maternal DHA intake levels were obtained using a comprehensive Food Frequency Questionnaire. Infants (n = 67) were assessed at 22 months on a novel object search task. Mothers' DHA intake levels were divided into high or low groups, with analyses revealing a significant positive effect of third-trimester DHA on object search task performance. The third trimester appears to be a critical time for ensuring adequate maternal DHA levels to facilitate optimum cognitive development in late infancy. © 2014 The Authors. Child Development published by Wiley Periodicals, Inc. on behalf of Society for Research in Child Development.

  15. ODISEES Data Portal Announcement

    Atmospheric Science Data Center

    2015-11-13

    ... The Ontology-Driven Interactive Search Environment for Earth Science, developed at the Atmospheric Science Data Center ...

  16. Mass spectrometry-based protein identification by integrating de novo sequencing with database searching.

    PubMed

    Wang, Penghao; Wilson, Susan R

    2013-01-01

    Mass spectrometry-based protein identification is a very challenging task. The main identification approaches include de novo sequencing and database searching. Both approaches have shortcomings, so an integrative approach has been developed. The integrative approach first infers partial peptide sequences, known as tags, directly from tandem spectra through de novo sequencing, and then puts these sequences into a database search to see if a close peptide match can be found. However, the current implementation of this integrative approach has several limitations. First, simplistic de novo sequencing is applied and only very short sequence tags are used. Second, most integrative methods apply an algorithm similar to BLAST to search for exact sequence matches and do not accommodate sequence errors well. Third, with these methods the integrated de novo sequencing makes a limited contribution to the scoring model, which is still largely based on database searching. We have developed a new integrative protein identification method which integrates de novo sequencing more efficiently into database searching. Evaluated on large real datasets, our method outperforms popular identification methods.
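    The tag-based integration idea can be sketched as matching de novo tags into a protein database while tolerating a sequencing error, so that an imperfect tag does not discard the hit; the tags and sequences below are invented:

```python
# Match short de novo sequence tags into protein sequences, allowing
# up to max_mismatch substitutions to absorb de novo sequencing errors.
def matches_with_errors(tag, sequence, max_mismatch=1):
    """True if `tag` aligns anywhere in `sequence` with <= max_mismatch."""
    for i in range(len(sequence) - len(tag) + 1):
        mism = sum(a != b for a, b in zip(tag, sequence[i:i + len(tag)]))
        if mism <= max_mismatch:
            return True
    return False

database = {
    "P001": "MKTAYIAKQRQISFVK",
    "P002": "GGLNDIFEAQKIEWHE",
}
tags = ["AYIVK", "QKIEW"]  # de novo tags, possibly containing errors

for tag in tags:
    hits = [pid for pid, seq in database.items()
            if matches_with_errors(tag, seq)]
    print(tag, hits)
```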

  17. Evidential significance of automotive paint trace evidence using a pattern recognition based infrared library search engine for the Paint Data Query Forensic Database.

    PubMed

    Lavine, Barry K; White, Collin G; Allen, Matthew D; Fasasi, Ayuba; Weakley, Andrew

    2016-10-01

    A prototype library search engine has been further developed to search the infrared spectral libraries of the paint data query database to identify the line and model of a vehicle from the clear coat, surfacer-primer, and e-coat layers of an intact paint chip. For this study, search prefilters were developed from 1181 automotive paint systems spanning three manufacturers: General Motors, Chrysler, and Ford. The best match between each unknown and the spectra in the hit list generated by the search prefilters was identified using a cross-correlation library search algorithm that performed both a forward and backward search. In the forward search, spectra were divided into intervals and further subdivided into windows (which correspond to the time lags for the comparison) within those intervals. The top five hits identified in each search window were compiled; a histogram was computed that summarized the frequency of occurrence for each library sample, with the IR spectra most similar to the unknown flagged. The backward search computed the frequency and occurrence of each line and model without regard to the identity of the individual spectra. Only those lines and models with a frequency of occurrence greater than or equal to 20% were included in the final hit list. If there was agreement between the forward and backward search results, the specific line and model common to both hit lists was always the correct assignment. Samples assigned to the same line and model by both searches are always well represented in the library and correlate well on an individual basis to specific library samples. For these samples, one can have confidence in the accuracy of the match. This was not the case for the results obtained using commercial library search algorithms, as the hit quality index scores for the top twenty hits were always greater than 99%. Copyright © 2016 Elsevier B.V. All rights reserved.
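    The forward/backward agreement check can be sketched with simple vote counting; the spectra, search windows and vehicle models below are invented for illustration:

```python
from collections import Counter

# Forward search: vote for individual library spectra across windows.
# Backward search: tally line/model frequency regardless of spectrum.
# An assignment is accepted only when the two agree.
spectrum_to_model = {
    "s1": "GM Impala", "s2": "GM Impala",
    "s3": "Ford Focus", "s4": "GM Malibu",
}

# Top hits from three hypothetical search windows:
window_hits = [["s1", "s2", "s3"], ["s2", "s1", "s4"], ["s1", "s2", "s3"]]

forward = Counter(s for hits in window_hits for s in hits)
top_spectrum = forward.most_common(1)[0][0]
forward_model = spectrum_to_model[top_spectrum]

backward = Counter(spectrum_to_model[s] for hits in window_hits for s in hits)
total = sum(backward.values())
# keep only lines/models with >= 20% frequency of occurrence
backward_models = {m for m, n in backward.items() if n / total >= 0.2}

if forward_model in backward_models:
    print("assignment:", forward_model)
```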

  18. A Comparative Analysis of the Influence of Weather on the Flight Altitudes of Birds.

    NASA Astrophysics Data System (ADS)

    Shamoun-Baranes, Judy; van Loon, Emiel; van Gasteren, Hans; van Belle, Jelmer; Bouten, Willem; Buurma, Luit

    2006-01-01

    Birds pose a serious risk to flight safety worldwide. A Bird Avoidance Model (BAM) is being developed in the Netherlands to reduce the risk of bird-aircraft collisions. In order to develop a temporally and spatially dynamic model of bird densities, data are needed on the flight-altitude distribution of birds and how this is influenced by weather. This study focuses on the dynamics of flight altitudes of several species of birds during local flights over land in relation to meteorological conditions. We measured flight altitudes of several species in the southeastern Netherlands using tracking radar during spring and summer 2000. Representatives of different flight strategy groups included four species: a soaring species (buzzard), an obligatory aerial forager (swift Apus apus), a flapping and gliding species (black-headed gull Larus ridibundus), and a flapping species (starling Sturnus vulgaris). Maximum flight altitudes varied among species, during the day and among days. Weather significantly influenced the flight altitudes of all species studied. Factors such as temperature, relative humidity, atmospheric instability, cloud cover, and sea level pressure were related to flight altitudes. Different combinations of factors explained 40%-70% of the variance in maximum flight altitudes. Weather affected flight strategy groups differently. Compared to flapping species, buzzards and swifts showed stronger variations in maximum daily altitude and flew higher under conditions reflecting stronger thermal convection. The dynamic vertical distributions of birds are important for risk assessment and mitigation measures in flight safety as well as wind turbine studies.


  19. View-Based Searching Systems--Progress Towards Effective Disintermediation.

    ERIC Educational Resources Information Center

    Pollitt, A. Steven; Smith, Martin P.; Treglown, Mark; Braekevelt, Patrick

    This paper presents the background and then reports progress made in the development of two view-based searching systems--HIBROWSE for EMBASE, searching Europe's most important biomedical bibliographic database, and HIBROWSE for EPOQUE, improving access to the European Parliament's Online Query System. The HIBROWSE approach to searching promises…

  20. Maintaining the momentum of Open Search in Earth Science Data discovery

    NASA Astrophysics Data System (ADS)

    Newman, D. J.; Lynnes, C.

    2013-12-01

    Federated search for Earth observation data has been a hallmark of EOSDIS (Earth Observing System Data and Information System) for two decades. Originally, the EOSDIS Version 0 system provided both data-collection-level and granule/file-level search in the mid 1990s with EOSDIS-specific socket protocols and message formats. Since that time, the advent of several standards has helped to simplify EOSDIS federated search, beginning with HTTP as the transfer protocol. Most recently, OpenSearch (www.opensearch.org) was employed for the EOS Clearinghouse (ECHO), based on a set of conventions that had been developed within the Earth Science Information Partners (ESIP) Federation. The ECHO OpenSearch API has evolved to encompass the ESIP RFC and the Open Geospatial Consortium (OGC) OpenSearch standard. Uptake of the ECHO OpenSearch API has been significant and has made ECHO accessible to client developers that found the previous ECHO SOAP API and current REST API too complex. Client adoption of the OpenSearch API appears to be largely driven by the simplicity of the OpenSearch convention. This simplicity is thus important to retain as the standard and convention evolve. For example, ECHO metrics indicate that the vast majority of ECHO users favor the following search criteria when using the REST API:
    - Spatial: bounding box, polygon, line and point
    - Temporal: start and end time
    - Keywords: free text
    Fewer than 10% of searches use additional constraints, particularly those requiring a controlled vocabulary, such as instrument, sensor, etc. This suggests that ongoing standardization efforts around OpenSearch usage for Earth observation data may be more productive if oriented toward improving support for the spatial, temporal and keyword search aspects. Areas still requiring improvement include support of:
    - Concrete requirements for keyword constraints
    - Phrasal search for keyword constraints
    - Temporal constraint relations
    - Terminological symmetry between search URLs and response documents for both temporal and spatial terms
    - Best practices for both servers and clients
    Over the past year we have seen several ongoing efforts to further standardize OpenSearch in the earth science domain, such as:
    - Federation of Earth Science Information Partners (ESIP)
    - Open Geospatial Consortium (OGC)
    - Committee on Earth Observation Satellites (CEOS)
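    Part of OpenSearch's appeal for client developers is that a search is just an HTTP GET with a few parameters; a hedged sketch of building such a query URL (the endpoint and parameter names are illustrative, not the exact ECHO API):

```python
from urllib.parse import urlencode

# Build a hypothetical OpenSearch-style granule query from the three
# constraint families the abstract says dominate usage.
def build_search_url(base, keyword, bbox, start, end):
    params = {
        "keyword": keyword,                       # free-text constraint
        "boundingBox": ",".join(map(str, bbox)),  # west,south,east,north
        "startTime": start,                       # temporal constraints
        "endTime": end,
    }
    return base + "?" + urlencode(params)

url = build_search_url(
    "https://example.org/opensearch/granules",
    "sea surface temperature",
    (-180, -90, 180, 90),
    "2013-01-01T00:00:00Z",
    "2013-12-31T23:59:59Z",
)
print(url)
```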

  1. From features to dimensions: cognitive and motor development in pop-out search in children and young adults.

    PubMed

    Grubert, Anna; Indino, Marcello; Krummenacher, Joseph

    2014-01-01

    In an experiment involving a total of 124 participants, divided into eight age groups (6-, 8-, 10-, 12-, 14-, 16-, 18-, and 20-year-olds), the development of the processing components underlying visual search for pop-out targets was tracked. Participants indicated the presence or absence of color or orientation feature singleton targets. Observers also solved a detection task, in which they responded to the onset of search arrays. There were two main results. First, analyses of inter-trial effects revealed differences in the search strategies of the 6-year-old participants compared to older age groups. Participants older than 8 years based target detection on feature-less dimensional salience signals (indicated by cross-trial RT costs in target dimension change relative to repetition trials), whereas the 6-year-olds accessed the target feature to make a target present or absent decision (cross-trial RT costs in target feature change relative to feature repetition trials). The result agrees with predictions derived from the Dimension Weighting account and previous investigations of inter-trial effects in adult observers (Müller et al., 1995; Found and Müller, 1996). The results are also in line with theories of cognitive development suggesting that the ability to abstract specific visual features into feature categories is developed after the age of 7 years. Second, overall search RTs decreased with increasing age in a decelerated fashion. RT differences between consecutive age groups can be explained by sensory-motor maturation up to the age of 10 years (as indicated by RTs in the onset detection task). Expedited RTs in older age groups (10- vs. 12-year-olds; 14- vs. 16-year-olds), but also in the 6- vs. 8-year-olds, are due to the development of search-related (cognitive) processes. Overall, the results suggest that the level of adult performance in visual search for pop-out targets is achieved by the age of 16.

  2. From features to dimensions: cognitive and motor development in pop-out search in children and young adults

    PubMed Central

    Grubert, Anna; Indino, Marcello; Krummenacher, Joseph

    2014-01-01

    In an experiment involving a total of 124 participants, divided into eight age groups (6-, 8-, 10-, 12-, 14-, 16-, 18-, and 20-year-olds), the development of the processing components underlying visual search for pop-out targets was tracked. Participants indicated the presence or absence of color or orientation feature singleton targets. Observers also solved a detection task, in which they responded to the onset of search arrays. There were two main results. First, analyses of inter-trial effects revealed differences in the search strategies of the 6-year-old participants compared to older age groups. Participants older than 8 years based target detection on feature-less dimensional salience signals (indicated by cross-trial RT costs in target dimension change relative to repetition trials), whereas the 6-year-olds accessed the target feature to make a target present or absent decision (cross-trial RT costs in target feature change relative to feature repetition trials). This result agrees with predictions derived from the Dimension Weighting account and previous investigations of inter-trial effects in adult observers (Müller et al., 1995; Found and Müller, 1996). The results are also in line with theories of cognitive development suggesting that the ability to abstract specific visual features into feature categories is developed after the age of 7 years. Second, overall search RTs decreased with increasing age in a decelerated fashion. RT differences between consecutive age groups can be explained by sensory-motor maturation up to the age of 10 years (as indicated by RTs in the onset detection task). Expedited RTs in older age groups (10- vs. 12-year-olds; 14- vs. 16-year-olds), but also in the 6- vs. 8-year-olds, are due to the development of search-related (cognitive) processes. Overall, the results suggest that the level of adult performance in visual search for pop-out targets is achieved by the age of 16. PMID:24910627
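
    The inter-trial analysis described above can be sketched as a small computation. The following Python fragment is illustrative only (the trial records and RT values are invented, not the study's data): it computes the mean RT on dimension-change trials minus the mean RT on dimension-repetition trials.

```python
# Illustrative sketch (not the authors' code): cross-trial RT costs
# from a list of consecutive singleton-search trials. Field names
# and RT values are hypothetical.

def inter_trial_costs(trials):
    """Mean RT on dimension-change trials minus mean RT on
    dimension-repetition trials, across consecutive trials."""
    change_rts, repeat_rts = [], []
    for prev, curr in zip(trials, trials[1:]):
        if prev["dimension"] == curr["dimension"]:
            repeat_rts.append(curr["rt"])
        else:
            change_rts.append(curr["rt"])
    mean = lambda xs: sum(xs) / len(xs)
    return mean(change_rts) - mean(repeat_rts)

trials = [
    {"dimension": "color", "rt": 420},
    {"dimension": "color", "rt": 400},        # repetition
    {"dimension": "orientation", "rt": 460},  # change
    {"dimension": "orientation", "rt": 410},  # repetition
    {"dimension": "color", "rt": 455},        # change
]
print(inter_trial_costs(trials))  # → 52.5 ms change cost
```

    A positive value is a cross-trial change cost, the signature used to infer dimension-based (rather than feature-based) processing.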

  3. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, which makes quantitatively fitting the model to human data computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  4. The History of the Internet Search Engine: Navigational Media and the Traffic Commodity

    NASA Astrophysics Data System (ADS)

    van Couvering, E.

    This chapter traces the economic development of the search engine industry over time, beginning with the earliest Web search engines and ending with the domination of the market by Google, Yahoo! and MSN. Specifically, it focuses on the ways in which search engines are similar to and different from traditional media institutions, and how the relations between traditional and Internet media have changed over time. In addition to its historical overview, a core contribution of this chapter is the analysis of the industry using a media value chain based on audiences rather than on content, and the development of traffic as the core unit of exchange. It shows that traditional media companies failed when they attempted to create vertically integrated portals in the late 1990s, based on the idea of controlling Internet content, while search engines succeeded in creating huge "virtually integrated" networks based on control of Internet traffic rather than Internet content.

  5. Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches

    PubMed Central

    Muin, Michael; Fontelo, Paul

    2006-01-01

    Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM), which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur client-side, allowing instant feedback without reloading or refreshing the page and resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
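
    For readers unfamiliar with E-Utilities, the kind of request such a backend issues can be sketched as follows. This is an illustrative Python fragment (PubMed Interact's engine is written in PHP; the helper name `esearch_url` and the sample query are invented), and only the request URL is constructed, with no network call:

```python
# Sketch of an NCBI E-Utilities ESearch request URL, as a backend like
# PubMed Interact's might issue it. Illustrative only; no network call.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term, retmax=20):
    """Build an ESearch URL that returns matching PMIDs as JSON."""
    params = {"db": "pubmed", "term": term,
              "retmax": retmax, "retmode": "json"}
    return EUTILS + "?" + urlencode(params)

print(esearch_url("brachytherapy AND prostate cancer"))
```

    The backend would fetch this URL, parse the returned ID list, and then retrieve citation details with a follow-up EFetch or ESummary call.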

  6. Seeking health information on the web: positive hypothesis testing.

    PubMed

    Kayhan, Varol Onur

    2013-04-01

    The goal of this study is to investigate positive hypothesis testing among consumers of health information when they search the Web. After demonstrating the extent of positive hypothesis testing using Experiment 1, we conduct Experiment 2 to test the effectiveness of two debiasing techniques. A total of 60 undergraduate students searched a tightly controlled online database developed by the authors to test the validity of a hypothesis. The database had four abstracts that confirmed the hypothesis and three abstracts that disconfirmed it. Findings of Experiment 1 showed that the majority of participants (85%) exhibited positive hypothesis testing. In Experiment 2, we found that the recommendation technique was not effective in reducing positive hypothesis testing, since none of the participants assigned to this server could retrieve disconfirming evidence. Experiment 2 also showed that the incorporation technique successfully reduced positive hypothesis testing, since 75% of the participants could retrieve disconfirming evidence. Positive hypothesis testing on the Web is an understudied topic. More studies are needed to validate the effectiveness of the debiasing techniques discussed in this study and to develop new techniques. Search engine developers should consider developing new options for users so that both confirming and disconfirming evidence can be presented in search results as users test hypotheses using search engines. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Search and detection modeling of military imaging systems

    NASA Astrophysics Data System (ADS)

    Maurer, Tana; Wilson, David L.; Driggers, Ronald G.

    2013-04-01

    For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying the science behind the human processes of searching and detecting, and using that knowledge to develop and refine its models for military imaging systems. Modeling how human observers perform military tasks while using imaging systems in the field and linking that model with the physics of the systems has resulted in the comprehensive sensor models we have today. These models are used by the government, military, industry, and academia for sensor development, sensor system acquisition, military tactics development, and war-gaming. From the original hypothesis put forth by John Johnson in 1958, to modeling time-limited search, to modeling the impact of motion on target detection, to modeling target acquisition performance in different spectral bands, the concept of search has a wide-ranging history. Our purpose is to present a snapshot of that history; as such, it will begin with a description of the search-modeling task, followed by a summary of highlights from the early years, and concluding with a discussion of search and detection modeling today and the changing battlefield. Some of the topics to be discussed will be classic search, clutter, computational vision models and the ACQUIRE model with its variants. We do not claim to present a complete history here, but rather a look at some of the work that has been done, and this is meant to be an introduction to an extensive amount of work on a complex topic. That said, it is hoped that this overview of the history of search and detection modeling of military imaging systems pursued by NVESD directly, or in association with other government agencies or contractors, will provide both the novice and experienced search modeler with a useful historical summary and an introduction to current issues and future challenges.

  8. Sweep Width Estimation for Ground Search and Rescue

    DTIC Science & Technology

    2004-12-30

    Develop data compatible with search planning and probability of detection (POD) estimation methods that are designed to use sweep width data. An experimental...important for Park Rangers and man-trackers. Search experience was expected to be a significant correction factor. However, the results indicate...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None Available

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  10. Metadata Creation, Management and Search System for your Scientific Data

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format, including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0, 2010.
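
    The faceted search Mercury offers can be illustrated with a minimal sketch. The records, field names, and `facet_filter` helper below are hypothetical, not Mercury's API; the point is only that faceting amounts to conjunctive filtering over indexed metadata fields:

```python
# Hypothetical sketch of faceted filtering over harvested metadata
# records (record fields and values are invented for illustration).

def facet_filter(records, **facets):
    """Return the records matching every requested facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

records = [
    {"project": "ProjA", "format": "FGDC", "variable": "soil moisture"},
    {"project": "ProjA", "format": "EML", "variable": "air temperature"},
    {"project": "ProjB", "format": "FGDC", "variable": "soil moisture"},
]
hits = facet_filter(records, format="FGDC", variable="soil moisture")
print(len(hits))  # → 2
```

    A production system would serve such facet counts from a centralized index rather than scanning records, but the selection semantics are the same.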

  11. An open-source, mobile-friendly search engine for public medical knowledge.

    PubMed

    Samwald, Matthias; Hanbury, Allan

    2014-01-01

    The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.

  12. Utilization of a radiology-centric search engine.

    PubMed

    Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan

    2010-04-01

    Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imaging were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered, as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.

  13. A Framework for Integrating Oceanographic Data Repositories

    NASA Astrophysics Data System (ADS)

    Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.

    2010-12-01

    Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.

  14. An optimal search filter for retrieving systematic reviews and meta-analyses

    PubMed Central

    2012-01-01

    Background Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. However, search filters have been developed to assist in searching the extensive body of indexed published literature. Search filters can be designed to find literature related to a certain subject (i.e. content-specific filter) or particular study designs (i.e. methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance to other available systematic review filters. Methods This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate performance on sensitivity, specificity, precision, and the number needed to read for each filter. Results Nineteen of 31 search filters were effective in retrieving a high level of relevant articles (sensitivity scores greater than 85%). The majority achieved a high degree of sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter in comparison to the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. By narrowing the focus, the filter saves considerable time and resources during updates of this online resource, without sacrificing sensitivity. PMID:22512835
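
    The four performance measures named in the Methods can be computed from a standard retrieval contingency table. The counts in this Python sketch are invented for illustration (chosen only so that the relevant set totals 219, matching the validation set size); they are not the paper's results:

```python
# Standard search-filter metrics from retrieval counts. Counts are
# hypothetical: tp/fn = relevant articles retrieved/missed,
# fp/tn = irrelevant articles retrieved/excluded.

def filter_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # share of relevant articles retrieved
    specificity = tn / (tn + fp)  # share of irrelevant articles excluded
    precision = tp / (tp + fp)    # share of retrieved articles relevant
    nnr = 1 / precision           # number needed to read per relevant hit
    return sensitivity, specificity, precision, nnr

sens, spec, prec, nnr = filter_metrics(tp=190, fp=810, fn=29, tn=8971)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"precision={prec:.2f} NNR={nnr:.1f}")
```

    The trade-off the paper reports falls out directly: raising sensitivity usually admits more false positives, lowering precision and raising the number needed to read.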

  15. Neural field model of memory-guided search.

    PubMed

    Kilpatrick, Zachary P; Poll, Daniel B

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
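
    The two-layer dynamics described above can be written, in generic Amari-type neural field notation, roughly as follows. These equations are an illustrative sketch consistent with the description, not the authors' exact model:

```latex
\begin{aligned}
\tau\,\partial_t u(x,t) &= -u(x,t) + \int_{\Omega} w_u(x-y)\, f\!\big(u(y,t)\big)\,dy + I_{\mathrm{vel}}(x,t),\\
\tau\,\partial_t m(x,t) &= -m(x,t) + \int_{\Omega} w_m(x-y)\, f\!\big(m(y,t)\big)\,dy + \alpha\, f\!\big(u(x,t)\big),
\end{aligned}
```

    Here $u$ is the position layer, whose lateral kernel $w_u$ sustains a bump propagated by the velocity input $I_{\mathrm{vel}}$; $m$ is the memory layer, whose kernel $w_m$ sustains a front that expands wherever the coupling term $\alpha f(u)$ delivers excitation; and $f$ is a sigmoidal firing-rate function.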

  16. Neural field model of memory-guided search

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Zachary P.; Poll, Daniel B.

    2017-12-01

    Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.

  17. Developing and Applying a Protocol for a Systematic Review in the Social Sciences

    ERIC Educational Resources Information Center

    Campbell, Allison; Taylor, Brian; Bates, Jessica; O'Connor-Bones, Una

    2018-01-01

    The article reports on a systematic method of undertaking a literature search on the educational impact of being a young carer (16-24 years old). The search methodology applied and described in detail will be of value to academic librarians and to other education researchers who undertake systematic literature searches. Seven bibliographic…

  18. The importance and complexity of regret in the measurement of ‘good’ decisions: a systematic review and a content analysis of existing assessment instruments

    PubMed Central

    Joseph-Williams, Natalie; Edwards, Adrian; Elwyn, Glyn

    2011-01-01

    Abstract Background or context: Regret is a common consequence of decisions, including those decisions related to individuals’ health. Several assessment instruments have been developed that attempt to measure decision regret. However, recent research has highlighted the complexity of regret. Given its relevance to shared decision making, it is important to understand its conceptualization and the instruments used to measure it. Objectives: To review current conceptions of regret. To systematically identify instruments used to measure decision regret and assess whether they capture recent conceptualizations of regret. Search strategy: Five electronic databases were searched in 2008. Search strategies used a combination of MeSH terms (or database equivalent) and free text searching under the following key headings: ‘Decision’ and ‘regret’ and ‘measurement’. Follow-up manual searches were also performed. Inclusion criteria: Articles were included if they reported the development and psychometric testing of an instrument designed to measure decision regret, or the use of a previously developed and tested instrument. Main results: Thirty-two articles were included: 10 report the development and validation of an instrument that measures decision regret and 22 report the use of a previously developed and tested instrument. Content analysis found that existing instruments for the measurement of regret do not capture current conceptualizations of regret and they do not enable the construct of regret to be measured comprehensively. Conclusions: Existing instrumentation requires further development. There is also a need to clarify the purpose for using regret assessment instruments, as this will, and should, focus their future application. PMID:20860776

  19. Classification of Automated Search Traffic

    NASA Astrophysics Data System (ADS)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information, and those generated by automated processes. We categorize these features into two classes, either an interpretation of the physical model of human interactions, or as behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. Performance analyses are then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
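
    As a toy illustration of the behavioral-feature approach (emphatically not the paper's classifier), two such features, query rate and click-through rate, already separate the obvious cases. The function name and thresholds below are invented:

```python
# Toy rule-based session classifier over two behavioral features.
# Thresholds are invented for illustration; a real system would learn
# a classifier over many such features, as the paper describes.

def classify_session(queries_per_minute, click_rate):
    """Label a search session as 'automated' or 'human'."""
    if queries_per_minute > 10 or click_rate < 0.05:
        return "automated"  # inhumanly fast, or never clicks a result
    return "human"

print(classify_session(queries_per_minute=45, click_rate=0.0))  # automated
print(classify_session(queries_per_minute=2, click_rate=0.4))   # human
```

    In practice such hand-set rules are only a starting point; the paper trains binary and multiclass classifiers over many features and uses active learning to refine them.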

  20. Development and Validation of a Multiple Intelligences Assessment Scale for Children.

    ERIC Educational Resources Information Center

    Shearer, C. Branton

    Since Howard Gardner proposed the theory of multiple intelligences as an alternative to the unitary concept of general intelligence, educators have been searching for an acceptable method of assessment. To help with this search, three studies that describe the development and validation of a self- (and parent-) report measure of children's…

  1. Supporting inter-topic entity search for biomedical Linked Data based on heterogeneous relationships.

    PubMed

    Zong, Nansu; Lee, Sungin; Ahn, Jinhyun; Kim, Hong-Gee

    2017-08-01

    The keyword-based entity search restricts the search space based on the preference of the search. When the given keywords and preferences are not related to the same biomedical topic, existing biomedical Linked Data search engines fail to deliver satisfactory results. This research aims to tackle this issue by supporting inter-topic search: improving search when the inputs (keywords and preferences) fall under different topics. This study developed an effective algorithm in which the relations between biomedical entities were used in tandem with a keyword-based entity search, Siren. The algorithm, PERank, which is an adaptation of Personalized PageRank (PPR), uses a pair of inputs, (1) search preferences and (2) entities from a keyword-based entity search with a keyword query, to formalize the search results on-the-fly based on the index of the precomputed Individual Personalized PageRank Vectors (IPPVs). Our experiments were performed over ten linked life datasets for two query sets, one with keyword-preference topic correspondence (intra-topic search) and the other without (inter-topic search). The experiments showed that the proposed method achieved better search results, for example, a 14% increase in precision for the inter-topic search over the baseline keyword-based search engine. The proposed method improved keyword-based biomedical entity search by supporting inter-topic search without affecting intra-topic search, based on the relations between different entities. Copyright © 2017 Elsevier Ltd. All rights reserved.
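
    The Personalized PageRank computation that PERank adapts can be sketched with a few lines of power iteration. The graph, preference (restart) vector, and damping factor in this Python fragment are invented for illustration; PERank itself additionally precomputes and combines per-entity vectors (IPPVs):

```python
# Minimal power-iteration sketch of Personalized PageRank: random walk
# with damping d, restarting into a preference distribution rather than
# uniformly. Graph and restart vector are invented toy data.

def personalized_pagerank(graph, restart, d=0.85, iters=50):
    """graph: node -> list of out-neighbours; restart: preference vector."""
    nodes = list(graph)
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - d) * restart.get(n, 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if out:  # distribute this node's mass to its out-neighbours
                share = d * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
        rank = nxt
    return rank

graph = {"drug": ["gene"], "gene": ["disease"], "disease": ["drug"]}
restart = {"drug": 1.0, "gene": 0.0, "disease": 0.0}
scores = personalized_pagerank(graph, restart)
print(max(scores, key=scores.get))  # mass concentrates near the preference
```

    Biasing the restart vector toward the user's preferred entities is what lets relation structure re-rank keyword hits from a different topic.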

  2. A Personalised Information Support System for Searching Portals and E-Resources

    ERIC Educational Resources Information Center

    Sirisha, B. S.; Jeevan, V. K. J.; Raja Kumar, R. V.; Goswami, A.

    2009-01-01

    Purpose: The purpose of this paper is to describe the development of a personalised information support system to help faculty members to search various portals and e-resources without typing the search terms in different interfaces and to obtain results re-ordered without human intervention. Design/methodology/approach: After a careful survey of…

  3. Pharmacy Research Online. A Guide for Faculty.

    ERIC Educational Resources Information Center

    Parkin, Derral; And Others

    This document is a self-paced training packet developed for a pilot project at the University of Houston-University Park to teach pharmacy faculty members to do their own online searching. The training begins with general topics such as the kinds of searches that can be done effectively online, the selection of appropriate databases to search, and…

  4. Art Research Online. A Guide for Faculty.

    ERIC Educational Resources Information Center

    Parkin, Derral; And Others

    This document is a self-paced training packet developed for a pilot project at the University of Houston-University Park to teach art faculty members to do their own online searching. The training begins with general topics such as the kinds of searches that can be done most effectively online, the selection of appropriate databases to search, and…

  5. 75 FR 35962 - Special Evaluation Assistance for Rural Communities and Households Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-24

    ... and Households (SEARCH) Program as authorized by Section 306(a)(2) of the Consolidated Farm and Rural Development Act (CONACT) (7 U.S.C. 1926(a)(2)). The amendment added the new SEARCH grant program under which... Assistance for Rural Communities and Households Program (SEARCH). This catalog is available on a subscription...

  6. Development of the prototype data management system of the solar H-alpha full disk observation

    NASA Astrophysics Data System (ADS)

    Wei, Ka-Ning; Zhao, Shi-Qing; Li, Qiong-Ying; Chen, Dong

    2004-06-01

    The Solar Chromospheric Telescope at Yunnan Observatory generates about 2 GB of FITS-format data per day. Such large volumes of data are inconvenient to work with directly, so data searching and sharing are important. A prototype data management system for the solar H-alpha full-disk observation was developed to provide data searching, on-line browsing, remote access and download, and was improved with workflow technology. Based on the Windows XP operating system and the MySQL database management system, a browse/server-model prototype was developed in Java and JSP. Real-time data compression, searching, browsing, authorized deletion and download have been achieved.

  7. Evaluation of a Novel Conjunctive Exploratory Navigation Interface for Consumer Health Information: A Crowdsourced Comparative Study

    PubMed Central

    Cui, Licong; Carter, Rebecca

    2014-01-01

    Background Numerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces. Objective This study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT). Methods We collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants. Results Repeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). 
Wilcoxon signed-rank test (one-tailed) between A and B showed that A was significantly easier than B (P<.001). Paired t tests (one-tailed) between A and C showed A was significantly easier than C (P<.001). Participant responses on the preferred search modes showed that 47.8% (43/90) participants preferred A, 25.6% (23/90) preferred B, 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant contents quickly, and supported the exploratory navigation by non-experts or those unsure how to initiate their search. Conclusions We presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone. PMID:24513593

  8. Evaluation of a novel Conjunctive Exploratory Navigation Interface for consumer health information: a crowdsourced comparative study.

    PubMed

    Cui, Licong; Carter, Rebecca; Zhang, Guo-Qiang

    2014-02-10

    Numerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces. This study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT). We collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants. Repeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). 
Wilcoxon signed-rank test (one-tailed) between A and B showed that A was significantly easier than B (P<.001). Paired t tests (one-tailed) between A and C showed A was significantly easier than C (P<.001). Participant responses on the preferred search modes showed that 47.8% (43/90) participants preferred A, 25.6% (23/90) preferred B, 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant contents quickly, and supported the exploratory navigation by non-experts or those unsure how to initiate their search. We presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone.

  9. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information

    PubMed Central

    2013-01-01

    Background Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. Results We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user’s query, advanced data searching based on the specified user’s query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
Conclusions search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The ability to create and save macros in search GenBank is a unique feature with great potential, which will only grow as the network of relationships between data stored in particular databases becomes denser. search GenBank is available for public use at http://sgb.biotools.pl/. PMID:23452691

  10. search GenBank: interactive orchestration and ad-hoc choreography of Web services in the exploration of the biomedical resources of the National Center For Biotechnology Information.

    PubMed

    Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur

    2013-03-01

    Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user's query, advanced data searching based on the specified user's query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The ability to create and save macros in search GenBank is a unique feature with great potential, which will only grow as the network of relationships between data stored in particular databases becomes denser. search GenBank is available for public use at http://sgb.biotools.pl/.
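The eUtils call chains that search GenBank orchestrates can be approximated with plain HTTP requests. Below is a minimal sketch, not the tool's own code: it composes the URLs for a two-step macro (an esearch query followed by an elink traversal between databases). The database names and query terms are illustrative.

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Build an Entrez esearch request URL (step 1 of an eUtils chain)."""
    return f"{EUTILS_BASE}/esearch.fcgi?" + urlencode(
        {"db": db, "term": term, "retmax": retmax, "retmode": "json"})

def elink_url(dbfrom, db, ids):
    """Build an elink URL that traverses from records in one NCBI
    database to related records in another (step 2 of the chain)."""
    return f"{EUTILS_BASE}/elink.fcgi?" + urlencode(
        {"dbfrom": dbfrom, "db": db, "id": ",".join(map(str, ids)),
         "retmode": "json"})

# A macro-like pipeline: search nucleotide, then link a hit to PubMed.
step1 = esearch_url("nucleotide", "BRCA1[Gene] AND human[Organism]")
step2 = elink_url("nucleotide", "pubmed", [1732746])
```

Fetching each URL (e.g. with `urllib.request`) and feeding the IDs from one response into the next call is what a saved macro automates.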

  11. Task demands determine the specificity of the search template.

    PubMed

    Bravo, Mary J; Farid, Hany

    2012-01-01

    When searching for an object, an observer holds a representation of the target in mind while scanning the scene. If the observer repeats the search, performance may become more efficient as the observer hones this target representation, or "search template," to match the specific demands of the search task. An effective search template must have two characteristics: It must reliably discriminate the target from the distractors, and it must tolerate variability in the appearance of the target. The present experiment examined how the tolerance of the search template is affected by the search task. Two groups of 18 observers trained on the same set of stimuli blocked either by target image (block-by-image group) or by target category (block-by-category group). One or two days after training, both groups were tested on a related search task. The pattern of test results revealed that the two groups of observers had developed different search templates, and that the templates of the block-by-category observers better captured the general characteristics of the category. These results demonstrate that observers match their search templates to the demands of the search task.

  12. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1998-01-01

    This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. From the total budget of $5,000, Tricia and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to a dataset that was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm (GA) based search programs, which were written in C++ and used to demonstrate the capability of GAs to search for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search. Wavelet processing is proposed to create a coarse-resolution representation of the data, providing two advantages in a GA-based search: 1. We will have less data to begin with when forming search sub-spaces. 2. The search will be robust against noise, because at every level of wavelet-based decomposition the signal is decomposed by low-pass and high-pass filters.
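The GA-based search described in the record can be illustrated with a short sketch. This is not the C++ program mentioned above, just a generic genetic algorithm (tournament selection, uniform crossover, Gaussian mutation) minimizing a noisy test objective; all names and parameter values are illustrative.

```python
import random

def ga_minimize(f, dim, pop_size=30, gens=60, mut_rate=0.2, seed=0):
    """Toy genetic algorithm: tournament selection, uniform crossover,
    Gaussian mutation. Minimizes f over real vectors in [-5, 5]^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if f(a) < f(b) else b

    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            # Uniform crossover: each gene comes from either parent.
            child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]
            if rng.random() < mut_rate:
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=f)

# A noisy objective, in the spirit of searching noisy datasets:
# a sphere function plus small Gaussian measurement noise.
_noise = random.Random(42)

def noisy_sphere(x):
    return sum(v * v for v in x) + _noise.gauss(0, 0.01)

best = ga_minimize(noisy_sphere, dim=3)
```

Because selection only compares candidates pairwise, small evaluation noise mostly perturbs rather than derails the search, which is the robustness property the record relies on.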

  13. mTM-align: a server for fast protein structure database search and multiple protein structure alignment.

    PubMed

    Dong, Runze; Pan, Shuo; Peng, Zhenling; Zhang, Yang; Yang, Jianyi

    2018-05-21

    With the rapid increase in the number of protein structures in the Protein Data Bank, it has become urgent to develop algorithms for efficient protein structure comparison. In this article, we present the mTM-align server, which consists of two closely related modules: one for structure database search and the other for multiple protein structure alignment. The database search is sped up by a heuristic algorithm and a hierarchical organization of the structures in the database. The multiple structure alignment is performed using the recently developed algorithm mTM-align. Benchmark tests demonstrate that our algorithms outperform peer methods in both modules, in terms of speed and accuracy. One of the unique features of the server is the interplay between database search and multiple structure alignment. The server provides service not only for performing fast database search, but also for making accurate multiple structure alignments with the structures found by the search. For the database search, it takes about 2-5 min for a structure of medium size (∼300 residues). For the multiple structure alignment, it takes a few seconds for ∼10 structures of medium size. The server is freely available at: http://yanglab.nankai.edu.cn/mTM-align/.

  14. Information Discovery and Retrieval Tools

    DTIC Science & Technology

    2004-12-01

    information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of…such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.

  15. Information Discovery and Retrieval Tools

    DTIC Science & Technology

    2003-04-01

    information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of…such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.

  16. SearchGUI: A Highly Adaptable Common Interface for Proteomics Search and de Novo Engines.

    PubMed

    Barsnes, Harald; Vaudel, Marc

    2018-05-25

    Mass-spectrometry-based proteomics has become the standard approach for identifying and quantifying proteins. A vital step consists of analyzing experimentally generated mass spectra to identify the underlying peptide sequences for later mapping to the originating proteins. We here present the latest developments in SearchGUI, a common open-source interface for the most frequently used freely available proteomics search and de novo engines that has evolved into a central component in numerous bioinformatics workflows.

  17. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.

  18. The medline UK filter: development and validation of a geographic search filter to retrieve research about the UK from OVID medline.

    PubMed

    Ayiku, Lynda; Levay, Paul; Hudson, Tom; Craven, Jenny; Barrett, Elizabeth; Finnegan, Amy; Adams, Rachel

    2017-07-13

    A validated geographic search filter for the retrieval of research about the United Kingdom (UK) from bibliographic databases had not previously been published. To develop and validate a geographic search filter to retrieve research about the UK from OVID medline with high recall and precision. Three gold standard sets of references were generated using the relative recall method. The sets contained references to studies about the UK which had informed National Institute for Health and Care Excellence (NICE) guidance. The first and second sets were used to develop and refine the medline UK filter. The third set was used to validate the filter. Recall, precision and number-needed-to-read (NNR) were calculated using a case study. The validated medline UK filter demonstrated 87.6% relative recall against the third gold standard set. In the case study, the medline UK filter demonstrated 100% recall, 11.4% precision and a NNR of nine. A validated geographic search filter to retrieve research about the UK with high recall and precision has been developed. The medline UK filter can be applied to systematic literature searches in OVID medline for topics with a UK focus. © 2017 Crown copyright. Health Information and Libraries Journal © 2017 Health Libraries Group. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
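The filter's performance measures are straightforward to compute from sets of record IDs. A minimal sketch, with invented IDs chosen so the numbers match the case study above (100% recall, 11.4% precision, NNR of nine):

```python
def filter_metrics(retrieved, relevant):
    """Recall, precision, and number-needed-to-read (NNR = 1/precision)
    for a search filter, given sets of record IDs."""
    tp = len(retrieved & relevant)
    recall = tp / len(relevant)
    precision = tp / len(retrieved)
    return recall, precision, round(1 / precision)

# 79 retrieved records that include all 9 relevant ones (IDs are invented).
relevant = set(range(9))
retrieved = set(range(79))
recall, precision, nnr = filter_metrics(retrieved, relevant)
```

NNR is simply the reciprocal of precision: at 11.4% precision a searcher reads about nine records for every relevant one found.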

  19. Exploratory power of the harmony search algorithm: analysis and improvements for global numerical optimization.

    PubMed

    Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K

    2011-02-01

    The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
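The improvisation loop at the heart of HS can be sketched in a few lines. This is the generic classical algorithm, not the paper's modified variant, and the parameter values are common defaults rather than the paper's settings.

```python
import random

def harmony_search(f, dim, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=1000, lo=-5.0, hi=5.0, seed=1):
    """Classical harmony search: improvise a new harmony from memory
    (rate hmcr), pitch-adjust it (rate par, bandwidth bw), and replace
    the worst memory member whenever the new harmony is better."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:        # consider harmony memory
                x = rng.choice(hm)[j]
                if rng.random() < par:     # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                          # random re-initialization
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):
            hm[worst] = new
    return min(hm, key=f)

# Minimize a 5-dimensional sphere function.
best = harmony_search(lambda x: sum(v * v for v in x), dim=5)
```

The population-variance analysis in the paper concerns exactly this loop: how hmcr, par and bw shape the spread of the harmony memory over successive improvisations.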

  20. New Martian satellite search

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The approach pictures taken by the Viking 1 and Viking 2 spacecraft two days before their Mars orbital insertion maneuvers were analyzed in order to search for new satellites within the orbit of Phobos. To accomplish this task, a search procedure and analysis strategy were formulated, developed and executed using the substantial image processing capabilities of the Image Processing Laboratory at the Jet Propulsion Laboratory. The development of these new search capabilities should prove to be valuable to NASA in the processing of image data obtained from other spacecraft missions. The result of applying the search procedures to the Viking approach pictures was as follows: no new satellites of comparable size (approx. 20 km) and brightness to Phobos or Deimos were detected within the orbit of Phobos.

  1. A novel computational model to probe visual search deficits during motor performance

    PubMed Central

    Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy

    2016-01-01

    Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. 
We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596

  2. The NASA SETI sky survey: Recent developments

    NASA Technical Reports Server (NTRS)

    Klein, M. J.; Gulkis, S.; Olsen, E. T.; Renzetti, N. A.

    1989-01-01

    NASA's Search for Extraterrestrial Intelligence (SETI) project utilizes two complementary search strategies: a sky survey and a targeted search. The SETI team at the Jet Propulsion Laboratory (JPL) in Pasadena, California, has primary responsibility to develop and carry out the sky survey part. Described here is progress that has been made developing the major elements of the survey including a 2-million channel wideband spectrum analyzer system that is being designed and constructed by JPL for the Deep Space Network (DSN). The system will be a multiuser instrument; it will serve as a prototype for the SETI sky survey processor. This prototype system will be used to test the signal detection and observational strategies on DSN antennas in the near future.

  3. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  4. DOE Research and Development Accomplishments Website Policies/Important

    Science.gov Websites


  5. Why Is Visual Search Superior in Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Joseph, Robert M.; Keehn, Brandon; Connolly, Christine; Wolfe, Jeremy M.; Horowitz, Todd S.

    2009-01-01

    This study investigated the possibility that enhanced memory for rejected distractor locations underlies the superior visual search skills exhibited by individuals with autism spectrum disorder (ASD). We compared the performance of 21 children with ASD and 21 age- and IQ-matched typically developing (TD) children in a standard static search task…

  6. A Woman's Job Search: Five Strategies for Success.

    ERIC Educational Resources Information Center

    Reis, Susan L.

    An alternate approach to traditional job search methods which may be helpful to women is presented. The following five strategies are considered: (1) know what you want; (2) develop a network of professional contacts to help identify the hidden job market; (3) be selective in the job search; (4) research job openings thoroughly before deciding to…

  7. Managing the Grey Literature of a Discipline through Collaboration: AgEcon Search

    ERIC Educational Resources Information Center

    Kelly, Julia; Letnes, Louise

    2005-01-01

    AgEcon Search, http://www.agecon.lib.umn.edu, is an important and ground-breaking example of an alternative method of delivering current research results to many potential users. AgEcon Search, through a distributed model, collects and disseminates the grey literature of the fields of agricultural and resource economics. The development of this…

  8. Search for Artificial Stellar Sources of Infrared Radiation.

    PubMed

    Dyson, F J

    1960-06-03

    If extraterrestrial intelligent beings exist and have reached a high level of technical development, one by-product of their energy metabolism is likely to be the large-scale conversion of starlight into far-infrared radiation. It is proposed that a search for sources of infrared radiation should accompany the recently initiated search for interstellar radio communications.

  9. Inhibitory control differentiates rare target search performance in children.

    PubMed

    Li, Hongting; Chan, John S Y; Cheung, Sui-Yin; Yan, Jin H

    2012-02-01

    Age-related differences in rare-target search are primarily explained by the speed-accuracy trade-off, primed responses, or decision making. The goal was to examine how motor inhibition influences visual search. Children pressed a key when a rare target was detected. On no-target trials, children withheld reactions. Response time (RT), hits, misses, correct rejection, and false alarms were measured. Tapping tests assessed motor control. Older children tapped faster, were more sensitive to rare targets (higher d'), and reacted more slowly than younger ones. Girls outperformed boys in search sensitivity but not in RT. Motor speed was closely associated with hit rate and RT. Results suggest that development of inhibitory control plays a key role in visual detection. The potential implications for cognitive-motor development and individual differences are discussed.
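The sensitivity measure d' reported in the record is computed from hit and false-alarm rates. A sketch using only the standard library; the trial counts below are invented for illustration, not taken from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    The +0.5 log-linear correction keeps z finite for perfect scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented counts for an older and a younger child
# (20 target trials and 80 no-target trials each):
older = d_prime(hits=18, misses=2, false_alarms=3, correct_rejections=77)
younger = d_prime(hits=14, misses=6, false_alarms=10, correct_rejections=70)
```

Because d' separates sensitivity from response bias, it lets the authors distinguish genuine detection ability from a speed-accuracy trade-off.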

  10. SCOOP: A Measurement and Database of Student Online Search Behavior and Performance

    ERIC Educational Resources Information Center

    Zhou, Mingming

    2015-01-01

    The ability to access and process massive amounts of online information is required in many learning situations. In order to develop a better understanding of student online search process especially in academic contexts, an online tool (SCOOP) is developed for tracking mouse behavior on the web to build a more extensive account of student web…

  11. An advanced search engine for patent analytics in medicinal chemistry.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnykova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Patent collections contain a substantial amount of medically related knowledge, but existing tools have been reported to lack useful functionalities. We present here the development of TWINC, an advanced search engine dedicated to patent retrieval in the domain of health and life sciences. Our tool embeds two search modes: an ad hoc search, which retrieves relevant patents given a short query, and a related-patent search, which retrieves similar patents given a patent. Both search modes rely on tuning experiments performed during several patent retrieval competitions. Moreover, TWINC is enhanced with interactive modules, such as chemical query expansion, which is of prime importance for coping with the various ways of naming biomedical entities. While the related-patent search showed promising performance, the ad hoc search produced fairly mixed results. Nonetheless, TWINC performed well during the Chemathlon task of the PatOlympics competition, and experts appreciated its usability.

  12. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    PubMed

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to dynamically acquire accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to remotely access useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
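
    The abstract names a semantic-based PageRank score but does not give its formula. A minimal sketch of the general idea, assuming semantic similarity is expressed as edge weights on the link graph (the graph and weights below are illustrative assumptions, not the published score):

```python
def weighted_pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a weighted link graph.

    `links[u]` maps each outgoing neighbour v of u to a semantic-similarity
    weight; weights are normalised per source node so each node distributes
    its full rank across its out-links.
    """
    nodes = set(links) | {v for ws in links.values() for v in ws}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for u, ws in links.items():
            total = sum(ws.values())
            for v, w in ws.items():
                new[v] += damping * rank[u] * w / total
        # Nodes with no out-links spread their rank uniformly.
        dangling = sum(rank[n] for n in nodes if n not in links)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank
```

    On a toy three-page graph where pages a and c both point to b with high weight, b accumulates the highest rank; a semantic variant would set the weights from concept similarity rather than raw link counts.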

  13. Development and empirical user-centered evaluation of semantically-based query recommendation for an electronic health record search engine.

    PubMed

    Hanauer, David A; Wu, Danny T Y; Yang, Lei; Mei, Qiaozhu; Murkowski-Steffy, Katherine B; Vydiswaran, V G Vinod; Zheng, Kai

    2017-03-01

    The utility of biomedical information retrieval environments can be severely limited when users lack expertise in constructing effective search queries. To address this issue, we developed a computer-based query recommendation algorithm that suggests semantically interchangeable terms based on an initial user-entered query. In this study, we assessed the value of this approach, which has broad applicability in biomedical information retrieval, by demonstrating its application as part of a search engine that facilitates retrieval of information from electronic health records (EHRs). The query recommendation algorithm utilizes MetaMap to identify medical concepts from search queries and indexed EHR documents. Synonym variants from UMLS are used to expand the concepts along with a synonym set curated from historical EHR search logs. The empirical study involved 33 clinicians and staff who evaluated the system through a set of simulated EHR search tasks. User acceptance was assessed using the widely used technology acceptance model. The search engine's performance was rated consistently higher with the query recommendation feature turned on vs. off. The relevance of computer-recommended search terms was also rated high, and in most cases the participants had not thought of these terms on their own. The questions on perceived usefulness and perceived ease of use received overwhelmingly positive responses. A vast majority of the participants wanted the query recommendation feature to be available to assist in their day-to-day EHR search tasks. Challenges persist for users to construct effective search queries when retrieving information from biomedical documents including those from EHRs. This study demonstrates that semantically-based query recommendation is a viable solution to addressing this challenge. Published by Elsevier Inc.
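
    The recommendation step can be pictured with a toy synonym table standing in for the MetaMap/UMLS resources and curated synonym set used in the study (the entries and matching logic below are illustrative assumptions, not the deployed system):

```python
# Toy stand-in for a UMLS-style synonym resource; real tables are far larger.
SYNONYMS = {
    "heart attack": {"myocardial infarction", "mi"},
    "high blood pressure": {"hypertension", "htn"},
}

def recommend_terms(query):
    """Suggest semantically interchangeable terms for phrases in a query.

    Naive substring matching is used for brevity; a real system would map
    the query to concepts before expanding.
    """
    q = query.lower()
    suggestions = set()
    for phrase, alts in SYNONYMS.items():
        if phrase in q:
            suggestions |= alts
        elif any(a in q for a in alts):
            # A variant was typed: suggest the canonical phrase and siblings.
            suggestions |= ({phrase} | alts) - {a for a in alts if a in q}
    return sorted(suggestions)

print(recommend_terms("history of heart attack"))
# ['mi', 'myocardial infarction']
```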

  14. Evidence From Web-Based Dietary Search Patterns to the Role of B12 Deficiency in Non-Specific Chronic Pain: A Large-Scale Observational Study

    PubMed Central

    Giat, Eitan

    2018-01-01

    Background Profound vitamin B12 deficiency is a known cause of disease, but the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms, as well as the relationship between eating meat and B12 levels, is unclear. Objective The objective of our study was to investigate the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms. Methods We used food-related Internet search patterns from a sample of 8.5 million people based in the US as a proxy for B12 intake and correlated these searches with Internet searches related to possible effects of B12 deficiency. Results Food-related search patterns were highly correlated with known consumption and food-related searches (ρ=.69). Awareness of B12 deficiency was associated with a higher consumption of B12-rich foods and with queries for B12 supplements. Searches for terms related to neurological disorders were correlated with searches for B12-poor foods, in contrast with control terms. Popular medicines, those having fewer indications, and those which are predominantly used to treat pain, were more strongly correlated with the ability to predict neuropathic pain queries using the B12 contents of food. Conclusions Our findings show that Internet search patterns are a useful way of investigating health questions in large populations, and suggest that low B12 intake may be associated with a broader spectrum of neurological disorders than previously thought. PMID:29305340
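
    The ρ=.69 reported above is a Spearman rank correlation. A self-contained sketch of how such a coefficient is computed from two paired series (for example, search frequencies for a food versus its known consumption; the study's actual data pipeline is not reproduced here):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            # Average the rank over any run of tied values.
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```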

  15. Strategies to assess the validity of recommendations: a study protocol

    PubMed Central

    2013-01-01

    Background Clinical practice guidelines (CPGs) quickly become outdated and require periodic reassessment of the evidence to maintain their validity. However, there is little research on this topic. Our project will provide evidence for some of the most pressing questions in this field: 1) what is the average time for recommendations to become out of date?; 2) what is the comparative performance of two restricted search strategies for evaluating the need to update recommendations?; and 3) what is the feasibility of a more regular monitoring and updating strategy compared to usual practice? In this protocol we will focus on questions one and two. Methods The CPG Development Programme of the Spanish Ministry of Health developed 14 CPGs between 2008 and 2009. We will stratify guidelines by topic and by publication year, and include one CPG per stratum. We will develop a strategy to assess the validity of CPG recommendations, which includes a baseline survey of clinical experts, an update of the original exhaustive literature searches, the identification of key references (references that trigger a potential recommendation update), and the assessment of the potential changes in each recommendation. We will run two alternative search strategies to efficiently identify important new evidence: 1) a PLUS search based on the McMaster Premium LiteratUre Service (PLUS) database; and 2) a Restrictive Search (ReSe) based on the smallest number of MeSH terms and free-text words needed to locate all the references of each original recommendation. We will perform a survival analysis of recommendations using the Kaplan-Meier method and use the log-rank test to analyse differences between survival curves according to topic, purpose, strength of recommendation and turnover. We will retrieve key references from the exhaustive search and evaluate their presence in the PLUS and ReSe search results. 
Discussion Our project, using a highly structured and transparent methodology, will provide guidance of when recommendations are likely to be at risk of being out of date. We will also assess two novel restrictive search strategies which could reduce the workload without compromising rigour when CPGs developers check for the need of updating. PMID:23967896
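
    The Kaplan-Meier method named in the protocol estimates the fraction of recommendations still valid at each point in time, treating recommendations that have not yet become outdated as censored observations. A minimal sketch with illustrative data (the protocol's analysis will of course use standard statistical software):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    `times[i]` is follow-up time for recommendation i (e.g. years until it
    became outdated, or until last assessment); `events[i]` is True if it
    actually became outdated (False = censored, i.e. still valid).
    Returns (time, S(t)) pairs at each time where an event occurred.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            at_t += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk  # step down at each event time
            curve.append((t, surv))
        n_at_risk -= at_t
    return curve

print(kaplan_meier([1, 2, 3], [True, True, False]))
```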

  16. Online Information Search Performance and Search Strategies in a Health Problem-Solving Scenario.

    PubMed

    Sharit, Joseph; Taha, Jessica; Berkowsky, Ronald W; Profita, Halley; Czaja, Sara J

    2015-01-01

    Although access to Internet health information can be beneficial, solving complex health-related problems online is challenging for many individuals. In this study, we investigated the performance of a sample of 60 adults ages 18 to 85 years in using the Internet to resolve a relatively complex health information problem. The impact of age, Internet experience, and cognitive abilities on measures of search time, amount of search, and search accuracy was examined, and a model of Internet information seeking was developed to guide the characterization of participants' search strategies. Internet experience was found to have no impact on performance measures. Older participants exhibited longer search times and lower amounts of search but similar search accuracy performance as their younger counterparts. Overall, greater search accuracy was related to an increased amount of search but not to increased search duration and was primarily attributable to higher cognitive abilities, such as processing speed, reasoning ability, and executive function. There was a tendency for those who were younger, had greater Internet experience, and had higher cognitive abilities to use a bottom-up (i.e., analytic) search strategy, although use of a top-down (i.e., browsing) strategy was not necessarily unsuccessful. Implications of the findings for future studies and design interventions are discussed.

  17. Online Information Search Performance and Search Strategies in a Health Problem-Solving Scenario

    PubMed Central

    Sharit, Joseph; Taha, Jessica; Berkowsky, Ronald W.; Profita, Halley; Czaja, Sara J.

    2017-01-01

    Although access to Internet health information can be beneficial, solving complex health-related problems online is challenging for many individuals. In this study, we investigated the performance of a sample of 60 adults ages 18 to 85 years in using the Internet to resolve a relatively complex health information problem. The impact of age, Internet experience, and cognitive abilities on measures of search time, amount of search, and search accuracy was examined, and a model of Internet information seeking was developed to guide the characterization of participants’ search strategies. Internet experience was found to have no impact on performance measures. Older participants exhibited longer search times and lower amounts of search but similar search accuracy performance as their younger counterparts. Overall, greater search accuracy was related to an increased amount of search but not to increased search duration and was primarily attributable to higher cognitive abilities, such as processing speed, reasoning ability, and executive function. There was a tendency for those who were younger, had greater Internet experience, and had higher cognitive abilities to use a bottom-up (i.e., analytic) search strategy, although use of a top-down (i.e., browsing) strategy was not necessarily unsuccessful. Implications of the findings for future studies and design interventions are discussed. PMID:29056885

  18. Searching for life in the universe: lessons from the earth

    NASA Technical Reports Server (NTRS)

    Nealson, K. H.

    2001-01-01

    Space programs will soon allow us to search for life in situ on Mars and to return samples for analysis. A major focal point is to search for evidence of present or past life in these samples, evidence that, if found, would have far-reaching consequences for both science and religion. A search strategy will consider the entire gamut of life on our own planet, using that information to frame a search that would recognize life even if it were fundamentally different from that we know on Earth. We discuss here how the lessons learned from the study of life on Earth can be used to allow us to develop a general strategy for the search for life in the Universe.

  19. Searching for life in the universe: lessons from the earth.

    PubMed

    Nealson, K H

    2001-12-01

    Space programs will soon allow us to search for life in situ on Mars and to return samples for analysis. A major focal point is to search for evidence of present or past life in these samples, evidence that, if found, would have far-reaching consequences for both science and religion. A search strategy will consider the entire gamut of life on our own planet, using that information to frame a search that would recognize life even if it were fundamentally different from that we know on Earth. We discuss here how the lessons learned from the study of life on Earth can be used to allow us to develop a general strategy for the search for life in the Universe.

  20. Eagle-i: Making Invisible Resources, Visible

    PubMed Central

    Haendel, M.; Wilson, M.; Torniai, C.; Segerdell, E.; Shaffer, C.; Frost, R.; Bourges, D.; Brownstein, J.; McInnerney, K.

    2010-01-01

    RP-134 The eagle-i Consortium – Dartmouth College, Harvard Medical School, Jackson State University, Morehouse School of Medicine, Montana State University, Oregon Health and Science University (OHSU), the University of Alaska, the University of Hawaii, and the University of Puerto Rico – aims to make invisible resources for scientific research visible by developing a searchable network of resource repositories at research institutions nationwide. The system is now in early development, and it is hoped that it will scale beyond the consortium at the end of the two-year pilot. Data Model & Ontology: The eagle-i ontology development team at the OHSU Library is generating the data model and ontologies necessary for resource indexing and querying. Our indexing system will enable cores and research labs to represent resources within a defined vocabulary, leading to more effective searches and better linkage between data types. This effort is being guided by active discussions within the ontology community (http://RRontology.tk) that bring together relevant preexisting ontologies in a logical framework. The goal of these discussions is to provide context for interoperability and domain-wide standards for resource types used throughout biomedical research. Research community feedback is welcomed. Architecture Development, led by a team at Harvard, includes four main components: tools for data collection, management and curation; an institutional resource repository; a federated network; and a central search application. Each participating institution will populate and manage its repository locally, using data collection and curation tools. To help improve search performance, data tools will support the semi-automatic annotation of resources. A central search application will use a federated protocol to broadcast queries to all repositories and display aggregated results. 
The search application will leverage the eagle-i ontologies to help guide users to valid queries via auto-suggestions and taxonomy browsing and improve search result quality via concept-based search and synonym expansion. Website: http://eagle-i.org. NIH/NCRR ARRA award #U24RR029825

  1. A high-speed drug interaction search system for ease of use in the clinical environment.

    PubMed

    Takada, Masahiro; Inada, Hiroshi; Nakazawa, Kazuo; Tani, Shoko; Iwata, Michiaki; Sugimoto, Yoshihisa; Nagata, Satoru

    2012-12-01

    With the advancement of pharmaceutical development, drug interactions have become increasingly complex. As a result, a computer-based drug interaction search system is required to organize the full body of drug interaction data. To overcome problems faced by existing systems, we developed a drug interaction search system using a hash table, which offers higher processing speeds and easier maintenance than a relational database (RDB). To compare the performance of our system and a MySQL RDB in terms of search speed, drug interaction searches were repeated for all 45 possible combinations of two out of a group of 10 drugs, for two datasets of 5,604 and 56,040 drug interaction records. As the principal result, our system processed the searches approximately 19 times faster than the system using the MySQL RDB. Our system also has several other merits, such as that drug interaction data can be created in comma-separated value (CSV) format, thereby facilitating data maintenance. Although our system uses the well-known method of a hash table, it is expected to resolve problems common to existing systems and to be an effective system that enables the safe management of drugs.
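
    The core idea, a hash table keyed on the unordered drug pair and populated from CSV, can be sketched as follows (the field names and severity values are assumptions for illustration; the abstract does not describe the actual schema):

```python
import csv
import io

# Illustrative pairwise interaction data in CSV form (drug_a, drug_b, severity).
CSV_DATA = """warfarin,aspirin,major
warfarin,amiodarone,major
ibuprofen,lisinopril,moderate
"""

def load_interactions(text):
    """Build a hash table keyed by the unordered drug pair for O(1) lookup."""
    table = {}
    for a, b, severity in csv.reader(io.StringIO(text)):
        table[frozenset((a, b))] = severity  # frozenset ignores drug order
    return table

table = load_interactions(CSV_DATA)
print(table.get(frozenset(("aspirin", "warfarin"))))   # major (order-independent)
print(table.get(frozenset(("aspirin", "ibuprofen"))))  # None: no recorded interaction
```

    Checking all 45 pairs from a group of 10 drugs, the workload used in the paper's speed comparison, is then just 45 constant-time lookups over `itertools.combinations(drugs, 2)`.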

  2. Visual Search Performance in the Autism Spectrum II: The Radial Frequency Search Task with Additional Segmentation Cues

    ERIC Educational Resources Information Center

    Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…

  3. 24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...

  4. 24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...

  5. 24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...

  6. 24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...

  7. 24 CFR 982.629 - Homeownership option: Additional PHA requirements for family search and purchase.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... PHA requirements for family search and purchase. 982.629 Section 982.629 Housing and Urban Development...: Additional PHA requirements for family search and purchase. (a) The PHA may establish the maximum time for a family to locate a home, and to purchase the home. (b) The PHA may require periodic family reports on the...

  8. Development of a Search and Rescue Simulation to Study the Effects of Prolonged Isolation on Team Decision Making

    NASA Technical Reports Server (NTRS)

    Entin, Elliot E.; Kerrigan, Caroline; Serfaty, Daniel; Young, Philip

    1998-01-01

    The goals of this project were to identify and investigate aspects of team and individual decision-making and risk-taking behaviors hypothesized to be most affected by prolonged isolation. A key premise driving our research approach is that the effects of stressors that impact individual and team cognitive processes in an isolated, confined, and hazardous environment will be projected onto the performance of a simulation task. To elicit and investigate these team behaviors, we developed a search and rescue task concept as a scenario domain that would be relevant for isolated crews. We modified the Distributed Dynamic Decision-making (DDD) simulator, a platform that has been extensively used for empirical research on team processes and taskwork performance, to portray the features of a search and rescue scenario and present the task components incorporated into that scenario. The resulting software is called DDD-Search and Rescue (Version 1.0). To support the use of the DDD-Search and Rescue simulator in isolated experiment settings, we wrote a player's manual for teaching team members to operate the simulator and play the scenario. We then developed a research design and experiment plan that would allow quantitative measures of individual and team decision-making skills using the DDD-Search and Rescue simulator as the experiment platform. A description of these activities and the associated materials produced under this contract is contained in this report.

  9. An ontology-based search engine for protein-protein interactions

    PubMed Central

    2010-01-01

    Background Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. Results We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Conclusion Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology. PMID:20122195

  10. An ontology-based search engine for protein-protein interactions.

    PubMed

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
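
    The encoding described above can be illustrated in a few lines: assign each GO term its own prime, represent a protein's annotation set as the product of its primes, and answer queries by divisibility. The term-to-prime assignments below are illustrative, not the published modified Gödel numbering:

```python
# Illustrative prime assignment for a handful of GO terms.
PRIMES = {"binding": 2, "kinase activity": 3, "nucleus": 5, "membrane": 7}

def encode(go_terms):
    """Encode a set of GO terms as the product of their primes."""
    n = 1
    for t in go_terms:
        n *= PRIMES[t]
    return n

def satisfies(protein_code, query_terms):
    """A protein matches iff the product of the query primes divides its code."""
    return protein_code % encode(query_terms) == 0

proteins = {
    "P1": encode(["binding", "nucleus"]),
    "P2": encode(["kinase activity", "membrane"]),
}
hits = [p for p, code in proteins.items() if satisfies(code, ["nucleus"])]
print(hits)  # ['P1']
```

    To reproduce the behaviour claimed in the abstract, where a query term also matches proteins annotated only with more specific terms, a real implementation would additionally propagate each annotation up the GO is-a hierarchy before encoding; this sketch omits that step.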

  11. Large-scale feature searches of collections of medical imagery

    NASA Astrophysics Data System (ADS)

    Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.

    1993-09-01

    Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.

  12. National Rehabilitation Information Center

    MedlinePlus

    NARIC offers a search of its website and of several databases, including projects conducting research and/or development (NIDILRR Program Database) and organizations, agencies, and online resources that support people …

  13. Design implications for task-specific search utilities for retrieval and re-engineering of code

    NASA Astrophysics Data System (ADS)

    Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif

    2017-05-01

    The importance of information retrieval systems is unquestionable in modern society, and both individuals and enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context, such as development language, technology framework, goal of the project, project complexity and the developer's domain expertise. They also impose an additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and with general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed to support their work-related activities. To investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance-feedback-based systems built on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation achieved promising initial results on the precision and recall performance of the system.

  14. Visual search for features and conjunctions in development.

    PubMed

    Lobaugh, N J; Cole, S; Rovet, J F

    1998-12-01

    Visual search performance was examined in three groups of children 7 to 12 years of age and in young adults. Colour and orientation feature searches and a conjunction search were conducted. Reaction time (RT) showed the expected improvements in processing speed with age. Comparisons of RTs on target-present and target-absent trials were consistent with parallel search in the two feature conditions and with serial search in the conjunction condition. The RT results indicated that feature and conjunction searches were treated similarly by children and adults. However, the youngest children missed more targets at the largest array sizes, most strikingly in conjunction search. Based on an analysis of speed-accuracy trade-offs, we suggest that low target-distractor discriminability leads to an undersampling of array elements and is responsible for the high number of misses in the youngest children.

  15. OmniSearch: a semantic search system based on the Ontology for MIcroRNA Target (OMIT) for microRNA-target gene interaction data.

    PubMed

    Huang, Jingshan; Gutierrez, Fernando; Strachan, Harrison J; Dou, Dejing; Huang, Weili; Smith, Barry; Blake, Judith A; Eilbeck, Karen; Natale, Darren A; Lin, Yu; Wu, Bin; Silva, Nisansa de; Wang, Xiaowei; Liu, Zixing; Borchert, Glen M; Tan, Ming; Ruttenberg, Alan

    2016-01-01

As a special class of non-coding RNAs (ncRNAs), microRNAs (miRNAs) perform important roles in numerous biological and pathological processes. The realization of miRNA functions depends largely on how miRNAs regulate specific target genes. It is therefore critical to identify, analyze, and cross-reference miRNA-target interactions to better explore and delineate miRNA functions. Semantic technologies can help in this regard. We previously developed a miRNA domain-specific application ontology, Ontology for MIcroRNA Target (OMIT), whose goal was to serve as a foundation for semantic annotation, data integration, and semantic search in the miRNA field. In this paper we describe our continuing effort to develop the OMIT, and demonstrate its use within a semantic search system, OmniSearch, designed to facilitate knowledge capture of miRNA-target interaction data. Important changes in the current version of OMIT are summarized as follows: (1) following a modularized ontology design (with 2559 terms imported from the NCRO ontology); (2) encoding all 1884 human miRNAs (vs. 300 in previous versions); and (3) setting up a GitHub project site along with an issue tracker for more effective community collaboration on the ontology development. The OMIT ontology is free and open to all users, accessible at: http://purl.obolibrary.org/obo/omit.owl. The OmniSearch system is also free and open to all users, accessible at: http://omnisearch.soc.southalabama.edu/index.php/Software.

  16. Lacustrine flow (divers, side scan sonar, hydrogeology, water penetrating radar) used to understand the location of a drowned person

    NASA Astrophysics Data System (ADS)

    Ruffell, Alastair

    2014-05-01

An unusual application of hydrological understanding to a police search is described. The lacustrine search for a missing person provided reports of bottom-water currents in the lake and contradictory indications from cadaver dogs. A hydrological model of the area was developed using pre-existing information from side scan sonar, a desktop hydrogeological study and deployment of water penetrating radar (WPR). These provided a hydrological theory for the initial search involving subaqueous groundwater flow, focused on an area of bedrock surrounded by sediment on the lake floor. The work shows the value of a hydrological explanation to a police search operation (and equally to search and rescue). With hindsight, the desktop study should have preceded the search, allowing better understanding of water conditions. The ultimate reason for lacustrine flow in this location is still not proven, but the hydrological model explained the problems encountered in the initial search.

  17. Building a gold standard to construct search filters: a case study with biomarkers for oral cancer.

    PubMed

    Frazier, John J; Stein, Corey D; Tseytlin, Eugene; Bekhuis, Tanja

    2015-01-01

    To support clinical researchers, librarians and informationists may need search filters for particular tasks. Development of filters typically depends on a "gold standard" dataset. This paper describes generalizable methods for creating a gold standard to support future filter development and evaluation using oral squamous cell carcinoma (OSCC) as a case study. OSCC is the most common malignancy affecting the oral cavity. Investigation of biomarkers with potential prognostic utility is an active area of research in OSCC. The methods discussed here should be useful for designing quality search filters in similar domains. The authors searched MEDLINE for prognostic studies of OSCC, developed annotation guidelines for screeners, ran three calibration trials before annotating the remaining body of citations, and measured inter-annotator agreement (IAA). We retrieved 1,818 citations. After calibration, we screened the remaining citations (n = 1,767; 97.2%); IAA was substantial (kappa = 0.76). The dataset has 497 (27.3%) citations representing OSCC studies of potential prognostic biomarkers. The gold standard dataset is likely to be high quality and useful for future development and evaluation of filters for OSCC studies of potential prognostic biomarkers. The methodology we used is generalizable to other domains requiring a reference standard to evaluate the performance of search filters. A gold standard is essential because the labels regarding relevance enable computation of diagnostic metrics, such as sensitivity and specificity. Librarians and informationists with data analysis skills could contribute to developing gold standard datasets and subsequent filters tuned for their patrons' domains of interest.
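The inter-annotator agreement reported above (kappa = 0.76) is conventionally Cohen's kappa: observed agreement corrected for chance agreement. A minimal sketch with invented screener labels, not the study's data:

```python
# Hedged sketch: Cohen's kappa for two screeners labelling citations as
# relevant ("rel") or irrelevant ("irr"), as in gold-standard construction.
# The label lists below are invented for demonstration.

def cohens_kappa(a, b):
    """Cohen's kappa for two parallel lists of categorical labels."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n        # raw agreement
    expected = sum((a.count(l) / n) * (b.count(l) / n)      # chance agreement
                   for l in labels)
    return (observed - expected) / (1 - expected)

ann1 = ["rel", "rel", "irr", "irr", "rel", "irr", "irr", "irr", "rel", "irr"]
ann2 = ["rel", "rel", "irr", "irr", "irr", "irr", "irr", "irr", "rel", "irr"]
print(round(cohens_kappa(ann1, ann2), 2))
```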

  18. DOE Research and Development Accomplishments: Nobel Chemists Associated with the DOE and Predecessors

    Science.gov Websites

  19. The Development of Automaticity in Short-Term Memory Search: Item-Response Learning and Category Learning

    ERIC Educational Resources Information Center

    Cao, Rui; Nosofsky, Robert M.; Shiffrin, Richard M.

    2017-01-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across…

  20. Guided Text Search Using Adaptive Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Symons, Christopher T; Senter, James K

This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.

  1. The development of PubMed search strategies for patient preferences for treatment outcomes.

    PubMed

    van Hoorn, Ralph; Kievit, Wietske; Booth, Andrew; Mozygemba, Kati; Lysdahl, Kristin Bakke; Refolo, Pietro; Sacchini, Dario; Gerhardus, Ansgar; van der Wilt, Gert Jan; Tummers, Marcia

    2016-07-29

The importance of respecting patients' preferences when making treatment decisions is increasingly recognized. Efficiently retrieving papers from the scientific literature reporting on the presence and nature of such preferences can help to achieve this goal. The objective of this study was to create a search filter for PubMed to help retrieve evidence on patient preferences for treatment outcomes. A total of 27 journals were hand-searched for articles on patient preferences for treatment outcomes published in 2011. Selected articles served as a reference set. To develop optimal search strategies to retrieve this set, all articles in the reference set were randomly split into a development and a validation set. MeSH terms and keywords retrieved using PubReMiner were tested individually and as combinations in PubMed and evaluated for retrieval performance (e.g. sensitivity (Se) and specificity (Sp)). Of 8238 articles, 22 were considered to report empirical evidence on patient preferences for specific treatment outcomes. The best search filters reached Se of 100 % [95 % CI 100-100] with Sp of 95 % [94-95 %] and Sp of 97 % [97-98 %] with Se of 75 % [74-76 %]. In the validation set these queries reached values of Se of 90 % [89-91 %] with Sp of 94 % [93-95 %] and Se of 80 % [79-81 %] with Sp of 97 % [96-96 %], respectively. Narrow and broad search queries were developed which can help in retrieving literature on patient preferences for treatment outcomes. Identifying such evidence may in turn enhance the incorporation of patient preferences in clinical decision making and health technology assessment.
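Filter performance here is scored the standard way: sensitivity is the fraction of the labelled relevant references the filter retrieves, and specificity the fraction of irrelevant records it correctly excludes. A minimal sketch with invented counts (not those reported in the study):

```python
# Hedged sketch of scoring a search filter against a labelled reference set.
# The record IDs and counts below are illustrative only.

def filter_performance(retrieved, relevant, total):
    """retrieved/relevant are sets of record IDs; total is the corpus size."""
    tp = len(retrieved & relevant)          # relevant records found
    fn = len(relevant - retrieved)          # relevant records missed
    fp = len(retrieved - relevant)          # irrelevant records retrieved
    tn = total - tp - fn - fp               # irrelevant records excluded
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

relevant = set(range(22))                   # e.g. 22 reference-set articles
retrieved = set(range(20)) | {100, 101}     # filter finds 20 of them + 2 extras
se, sp = filter_performance(retrieved, relevant, total=8238)
print(f"Se = {se:.0%}, Sp = {sp:.1%}")
```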

  2. Applicability of internet search index for asthma admission forecast using machine learning.

    PubMed

    Luo, Li; Liao, Chengcheng; Zhang, Fengyi; Zhang, Wei; Li, Chunyang; Qiu, Zhixin; Huang, Debin

    2018-04-15

This study aimed to determine whether a search index could provide insight into trends in asthma admission in China. An Internet search index is a powerful tool to monitor and predict epidemic outbreaks. However, whether using an internet search index can significantly improve asthma admissions forecasts remains unknown. The long-term goal is to develop a surveillance system to help early detection and interventions for asthma and to avoid asthma health care resource shortages in advance. In this study, we used a search index combined with air pollution data, weather data, and historical admissions data to forecast asthma admissions using machine learning. Results demonstrated that the best area under the curve achieved in the test set was 0.832, using all predictors mentioned earlier. A search index is a powerful predictor in asthma admissions forecasts, and a recent search index can reflect current asthma admissions with a lag effect to a certain extent. The addition of a real-time, easily accessible search index improves forecasting capabilities and demonstrates the predictive potential of a search index. Copyright © 2018 John Wiley & Sons, Ltd.
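The area under the ROC curve (AUC) reported above can be computed without plotting the curve at all, via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A sketch with invented labels and scores:

```python
# Illustrative sketch (invented data): AUC for a binary admissions forecast,
# computed by pairwise comparison of positive vs negative predicted scores.

def auc(labels, scores):
    """AUC via the Mann-Whitney pairwise-win count; ties count half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.3, 0.7, 0.6, 0.4, 0.2, 0.5, 0.6]
print(round(auc(labels, scores), 3))
```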

  3. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    PubMed

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

Patients with visual impairment are constantly facing challenges to achieve an independent and productive life, which depends upon both good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of the overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy-six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  4. The Several-Circled Search for Self

    ERIC Educational Resources Information Center

    Copeland, Evelyn

    1973-01-01

Reports on a sample mini-course in the humanities entitled "A Several-Circled Search for Self," which employs the circus as a theme while stressing the importance of student involvement and the development of self-concept. (RB)

  5. Modeling Group Interactions via Open Data Sources

    DTIC Science & Technology

    2011-08-30

data. The state-of-the-art search engines are designed to help general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ

  6. Protocol: a systematic review of studies developing and/or evaluating search strategies to identify prognosis studies.

    PubMed

    Corp, Nadia; Jordan, Joanne L; Hayden, Jill A; Irvin, Emma; Parker, Robin; Smith, Andrea; van der Windt, Danielle A

    2017-04-20

    Prognosis research is on the rise, its importance recognised because chronic health conditions and diseases are increasingly common and costly. Prognosis systematic reviews are needed to collate and synthesise these research findings, especially to help inform effective clinical decision-making and healthcare policy. A detailed, comprehensive search strategy is central to any systematic review. However, within prognosis research, this is challenging due to poor reporting and inconsistent use of available indexing terms in electronic databases. Whilst many published search filters exist for finding clinical trials, this is not the case for prognosis studies. This systematic review aims to identify and compare existing methodological filters developed and evaluated to identify prognosis studies of any of the three main types: overall prognosis, prognostic factors, and prognostic [risk prediction] models. Primary studies reporting the development and/or evaluation of methodological search filters to retrieve any type of prognosis study will be included in this systematic review. Multiple electronic bibliographic databases will be searched, grey literature will be sought from relevant organisations and websites, experts will be contacted, and citation tracking of key papers and reference list checking of all included papers will be undertaken. Titles will be screened by one person, and abstracts and full articles will be reviewed for inclusion independently by two reviewers. Data extraction and quality assessment will also be undertaken independently by two reviewers with disagreements resolved by discussion or by a third reviewer if necessary. Filters' characteristics and performance metrics reported in the included studies will be extracted and tabulated. To enable comparisons, filters will be grouped according to database, platform, type of prognosis study, and type of filter for which it was intended. 
This systematic review will identify all existing validated prognosis search filters and synthesise evidence about their applicability and performance. These findings will identify if current filters provide a proficient means of searching electronic bibliographic databases or if further prognosis filters are needed and can feasibly be developed for systematic searches of prognosis studies.

  7. Pipeline synthetic aperture radar data compression utilizing systolic binary tree-searched architecture for vector quantization

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung (Inventor); Fang, Wai-Chi (Inventor); Curlander, John C. (Inventor)

    1995-01-01

A system for data compression utilizing systolic array architecture for Vector Quantization (VQ) is disclosed for both full-searched and tree-searched modes. For a tree-searched VQ, the special case of a Binary Tree-Search VQ (BTSVQ) is disclosed with identical Processing Elements (PE) in the array for both a Raw-Codebook VQ (RCVQ) and a Difference-Codebook VQ (DCVQ) algorithm. A fault tolerant system is disclosed which allows a PE that has developed a fault to be bypassed in the array and replaced by a spare at the end of the array, with codebook memory assignment shifted one PE past the faulty PE of the array.
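The tree-searched scheme above trades a little distortion for a large speedup: at each level the input vector is compared with only two node codewords and descends toward the nearer one, so an L-level search costs 2L distance computations instead of 2^L for a full codebook search. A toy sketch (the tree, codewords, and vectors are invented; the patented systolic/fault-tolerant hardware is not modelled):

```python
# Hedged sketch of binary tree-searched vector quantization (BTSVQ):
# descend a codeword tree, emitting one bit per level; the bit path is the
# transmitted VQ index. All data below are invented for demonstration.

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def btsvq_encode(vector, tree):
    """tree: nested dict with 'left'/'right' codewords and 'child' subtrees.
    Returns the list of bits chosen at each level."""
    path = []
    node = tree
    while node is not None:
        bit = 0 if dist2(vector, node["left"]) <= dist2(vector, node["right"]) else 1
        path.append(bit)
        node = node["child"][bit]
    return path

# Depth-2 toy tree over 2-D vectors (4 leaf codewords).
tree = {
    "left": (0.0, 0.0), "right": (1.0, 1.0),
    "child": [
        {"left": (0.0, 0.2), "right": (0.3, 0.0), "child": [None, None]},
        {"left": (0.7, 1.0), "right": (1.0, 0.6), "child": [None, None]},
    ],
}
print(btsvq_encode((0.9, 0.7), tree))
```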

  8. DRUMS: a human disease related unique gene mutation search engine.

    PubMed

    Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan

    2011-10-01

With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data by a search engine approach. In this work, we have developed DRUMS, a search engine for human disease related unique gene mutations, as a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information were stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases were indexed by the uniform resource identifier from the LSDB, or another central database. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides a competitively efficient interface for searching and retrieving mutation data and their relationships to diseases. The present system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.

  9. Google search behavior for status epilepticus.

    PubMed

    Brigo, Francesco; Trinka, Eugen

    2015-08-01

    Millions of people surf the Internet every day as a source of health-care information looking for materials about symptoms, diagnosis, treatments and their possible adverse effects, or diagnostic procedures. Google is the most popular search engine and is used by patients and physicians to search for online health-related information. This study aimed to evaluate changes in Google search behavior occurring in English-speaking countries over time for the term "status epilepticus" (SE). Using Google Trends, data on global search queries for the term SE between the 1st of January 2004 and 31st of December 2014 were analyzed. Search volume numbers over time (downloaded as CSV datasets) were analyzed by applying the "health" category filter. The research trends for the term SE remained fairly constant over time. The greatest search volume for the term SE was reported in the United States, followed by India, Australia, the United Kingdom, Canada, the Netherlands, Thailand, and Germany. Most terms associated with the search queries were related to SE definition, symptoms, subtypes, and treatment. The volume of searches for some queries (nonconvulsive, focal, and refractory SE; SE definition; SE guidelines; SE symptoms; SE management; SE treatment) was enormously increased over time (search popularity has exceeded a 5000% growth since 2004). Most people use search engines to look for the term SE to obtain information on its definition, subtypes, and management. The greatest search volume occurred not only in developed countries but also in developing countries where raising awareness about SE still remains a challenging task and where there is reduced public knowledge of epilepsy. Health information seeking (the extent to which people search for health information online) reflects the health-related information needs of Internet users for a specific disease. 
Google Trends shows that Internet users have a great demand for information concerning some aspects of SE (definition, subtypes, symptoms, treatment, and guidelines). Policy makers and neurological scientific societies have the responsibility to try to meet these information needs and to better target public information campaigns on SE to the general population. This article is part of a Special Issue entitled "Status Epilepticus". Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Recent developments in MrBUMP: better search-model preparation, graphical interaction with search models, and solution improvement and assessment.

    PubMed

    Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J

    2018-03-01

    Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.

  12. Place Attachment, Place Identity and the Development of the Child's Self-Identity: Searching the Literature to Develop an Hypothesis

    ERIC Educational Resources Information Center

    Spencer, Christopher

    2005-01-01

This is part of a campaign to encourage educational researchers, geographers in particular, to spread their literature searches beyond their immediate subject area. The question of place attachment and identity is reviewed through the psychological literature. The hypothesis is offered and supported that place, in a geographical sense, is also…

  13. Iconicity Influences How Effectively Minimally Verbal Children with Autism and Ability-Matched Typically Developing Children Use Pictures as Symbols in a Search Task

    ERIC Educational Resources Information Center

    Hartley, Calum; Allen, Melissa L.

    2015-01-01

    Previous word learning studies suggest that children with autism spectrum disorder may have difficulty understanding pictorial symbols. Here we investigate the ability of children with autism spectrum disorder and language-matched typically developing children to contextualize symbolic information communicated by pictures in a search task that did…

  14. A hybrid, auto-adaptive and rule-based multi-agent approach using evolutionary algorithms for improved searching

    NASA Astrophysics Data System (ADS)

    Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael

    2016-08-01

    Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.

  15. Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet

    NASA Astrophysics Data System (ADS)

    Hess, A.

    Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.

  16. Development of user-centered interfaces to search the knowledge resources of the Virginia Henderson International Nursing Library.

    PubMed

    Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane

    2003-01-01

This poster describes the development of user-centered interfaces to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from library to web-based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.

  17. Health literacy and usability of clinical trial search engines.

    PubMed

    Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K

    2014-01-01

    Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.

  18. Multi-source and ontology-based retrieval engine for maize mutant phenotypes

    PubMed Central

    Green, Jason M.; Harnsomburana, Jaturon; Schaeffer, Mary L.; Lawrence, Carolyn J.; Shyu, Chi-Ren

    2011-01-01

Model Organism Databases, including the various plant genome databases, collect and enable access to massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While a variety of basic browsing and search capabilities are available to allow researchers to query and peruse the names and attributes of phenotypic data, next-generation search mechanisms that allow querying and ranking of text descriptions are much less common. In addition, the plant community needs an innovative way to leverage the existing links in these databases to search groups of text descriptions simultaneously. Furthermore, though much time and effort have been afforded to the development of plant-related ontologies, the knowledge embedded in these ontologies remains largely unused in available plant search mechanisms. Addressing these issues, we have developed a unique search engine for mutant phenotypes from MaizeGDB. This advanced search mechanism integrates various text description sources in MaizeGDB to aid a user in retrieving desired mutant phenotype information. Currently, descriptions of mutant phenotypes, loci and gene products are utilized collectively for each search, though expansion of the search mechanism to include other sources is straightforward. The retrieval engine, to our knowledge, is the first engine to exploit the content and structure of available domain ontologies, currently the Plant and Gene Ontologies, to expand and enrich retrieval results in major plant genomic databases. Database URL: http://www.PhenomicsWorld.org/QBTA.php PMID:21558151
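Ontology-driven expansion of the kind described above typically means widening a query term with its descendants in an is-a hierarchy before retrieval, so a search for a general term also matches records annotated with more specific ones. A minimal sketch (the mini-hierarchy is invented, not taken from the Plant or Gene Ontology):

```python
# Hypothetical sketch of ontology-based query expansion: expand a query term
# with all of its is-a descendants. The hierarchy below is invented.

ISA = {  # child term -> parent term
    "leaf primordium": "leaf",
    "juvenile leaf": "leaf",
    "leaf": "shoot system",
}

def descendants(term):
    """All terms below `term` in the is-a hierarchy, recursively."""
    out = set()
    for child, parent in ISA.items():
        if parent == term:
            out |= {child} | descendants(child)
    return out

def expand_query(term):
    """Query term plus every more specific term it subsumes."""
    return {term} | descendants(term)

print(sorted(expand_query("leaf")))
```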

  19. 3D Protein structure prediction with genetic tabu search algorithm

    PubMed Central

    2010-01-01

    Background Protein structure prediction (PSP) has important applications in different fields, such as drug design and disease prediction. In protein structure prediction, two issues are important: the design of the structure model and the design of the optimization technology. Because of the complexity of realistic protein structures, the structure model adopted in this paper is a simplified one, the off-lattice AB model. Once the structure model is assumed, optimization technology is needed to search for the best conformation of a protein sequence under that model. However, PSP is an NP-hard problem even for the simplest models, and many algorithms have been developed to solve the resulting global optimization problem. In this paper, a hybrid algorithm combining a genetic algorithm (GA) and a tabu search (TS) algorithm is developed to complete this task. Results In order to develop an efficient optimization algorithm, several improved strategies are introduced into the proposed genetic tabu search algorithm, and their combined use improves its efficiency. Among these strategies, tabu search introduced into the crossover and mutation operators improves the local search capability, a variable population size maintains the diversity of the population, and ranking selection increases the chance that an individual with a low energy value enters the next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. Conclusions The hybrid algorithm has the advantages of both the genetic algorithm and tabu search.
It makes use of the multiple search points of the genetic algorithm and overcomes the poor hill-climbing capability of the conventional genetic algorithm through the flexible memory functions of TS. Compared with some previous algorithms, the GATS algorithm has better global optimization performance and can predict 3D protein structure more effectively. PMID:20522256
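
The hybrid GA-plus-tabu idea can be sketched on a toy problem. The off-lattice AB energy is omitted; the bitstring "energy" (minimized when all bits are 1), population sizes, and bounded tabu memory below are simplified assumptions, not the paper's actual model or parameters.

```python
# Minimal genetic-algorithm-with-tabu-search sketch on a toy energy
# function. Illustrates tabu-guided mutation, ranking selection, and
# one-point crossover; all parameters are invented for illustration.
import random

random.seed(0)
N = 12  # chromosome length

def energy(bits):
    # Toy energy: lower is better; minimized (0) when all bits are 1.
    return N - sum(bits)

def mutate_with_tabu(bits, tabu):
    """Flip one bit, choosing the best flip not on the tabu list."""
    candidates = []
    for i in range(N):
        child = bits[:]
        child[i] ^= 1
        if tuple(child) not in tabu:
            candidates.append(child)
    if not candidates:          # aspiration: all moves tabu, keep current
        candidates = [bits[:]]
    best = min(candidates, key=energy)
    tabu.append(tuple(best))
    if len(tabu) > 20:          # bounded tabu memory
        tabu.pop(0)
    return best

def gats(pop_size=10, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    tabu = []
    for _ in range(generations):
        pop.sort(key=energy)                      # ranking selection
        survivors = pop[: pop_size // 2]
        # Always give the current best a tabu-guided local search step.
        children = [mutate_with_tabu(survivors[0], tabu)]
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N)          # one-point crossover
            children.append(mutate_with_tabu(a[:cut] + b[cut:], tabu))
        pop = survivors + children
    return min(pop, key=energy)

print(energy(gats()))  # 0: the toy optimum (all ones) is reached
```

The tabu list keeps recently visited states from being revisited during mutation, which is the memory function that compensates for the GA's weak hill-climbing.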

  20. A unified architecture for biomedical search engines based on semantic web technologies.

    PubMed

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    The volume of published biomedical research has grown enormously in recent years, and many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine was also designed and implemented based on the proposed architecture. The search engine was evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
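
The evaluation measures mentioned above (precision vs. recall and mean average precision) are standard and easy to sketch. The ranked result lists and relevance judgments below are invented examples, not the paper's test collections.

```python
# Sketch of mean average precision (MAP) over ranked retrieval runs.

def average_precision(ranked, relevant):
    """AP: mean of precision values at each rank where a relevant doc appears."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over a set of (ranked results, relevant set) query pairs."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [
    (["d1", "d2", "d3", "d4"], {"d1", "d3"}),   # AP = (1/1 + 2/3) / 2 = 5/6
    (["d5", "d6", "d7"], {"d6"}),               # AP = 1/2
]
print(mean_average_precision(runs))  # (5/6 + 1/2) / 2 ≈ 0.6667
```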

  1. Beyond the search surface: visual search and attentional engagement.

    PubMed

    Duncan, J; Humphreys, G

    1992-05-01

    Treisman (1991) described a series of visual search studies testing feature integration theory against an alternative (Duncan & Humphreys, 1989) in which feature and conjunction search are basically similar. Here the latter account is noted to have 2 distinct levels: (a) a summary of search findings in terms of stimulus similarities, and (b) a theory of how visual attention is brought to bear on relevant objects. Working at the 1st level, Treisman found that even when similarities were calibrated and controlled, conjunction search was much harder than feature search. The theory, however, can only really be tested at the 2nd level, because the 1st is an approximation. An account of the findings is developed at the 2nd level, based on the 2 processes of input-template matching and spreading suppression. New data show that, when both of these factors are controlled, feature and conjunction search are equally difficult. Possibilities for unification of the alternative views are considered.

  2. Hand-held microwave search detector

    NASA Astrophysics Data System (ADS)

    Daniels, David J.; Philippakis, Mike

    2005-05-01

    This paper describes the further development of a patented, novel, low-cost microwave search detector using noise radar technology operating in the 27-40 GHz frequency range, initially reported in SPIE 2004. Initial experiments have shown that plastic explosives, ceramics and plastic material hidden on the body can be detected with the system. This paper considers the basic physics of the technique, reports on the development of an initial prototype system for hand search of suspects, and addresses the work carried out on optimisation of the probability of detection (PD) and false alarm rate (FAR). The radar uses a novel lens system, and its design and modelling for optimum depth of field of focus are reported.

  3. To Boolean or Not To Boolean.

    ERIC Educational Resources Information Center

    Hildreth, Charles R.

    1983-01-01

    This editorial addresses the issue of whether or not to provide free-text, keyword/boolean search capabilities in the information retrieval mechanisms of online public access catalogs and discusses online catalogs developed prior to 1980--keyword searching, phrase searching, and precoordination and postcoordination. (EJS)

  4. Preservation of biological information in thermal spring deposits - Developing a strategy for the search for fossil life on Mars

    NASA Technical Reports Server (NTRS)

    Walter, M. R.; Des Marais, David J.

    1993-01-01

    Paleobiological experience on Earth is used here to develop a search strategy for fossil life on Mars. In particular, the exploration of thermal spring deposits is proposed as a way to maximize the chance of finding fossil life on Mars. As a basis for this suggestion, the characteristics of thermal springs are discussed in some detail.

  5. New Capabilities in the Astrophysics Multispectral Archive Search Engine

    NASA Astrophysics Data System (ADS)

    Cheung, C. Y.; Kelley, S.; Roussopoulos, N.

    The Astrophysics Multispectral Archive Search Engine (AMASE) uses object-oriented database techniques to provide a uniform multi-mission and multi-spectral interface to search for data in the distributed archives. We describe our experience of porting AMASE from Illustra object-relational DBMS to the Informix Universal Data Server. New capabilities and utilities have been developed, including a spatial datablade that supports Nearest Neighbor queries.

  6. Contextual cueing impairment in patients with age-related macular degeneration.

    PubMed

    Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan

    2013-09-12

    Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.

  7. Analysis of Online Information Searching for Cardiovascular Diseases on a Consumer Health Information Portal

    PubMed Central

    Jadhav, Ashutosh; Sheth, Amit; Pathak, Jyotishman

    2014-01-01

    Since the early 2000s, Internet usage for health information searching has increased significantly. Studying search queries can help us understand users' “information need” and how they formulate search queries (“expression of information need”). Although cardiovascular diseases (CVD) affect a large percentage of the population, few studies have investigated how and what users search for regarding CVD. We address this knowledge gap in the community by analyzing a large corpus of 10 million CVD-related search queries from MayoClinic.com. Using UMLS MetaMap and UMLS semantic types/concepts, we developed a rule-based approach to categorize the queries into 14 health categories. We analyzed the structural properties, types (keyword-based/Wh-questions/Yes-No questions) and linguistic structure of the queries. Our results show that the most searched health categories are ‘Diseases/Conditions’, ‘Vital-Signs’, ‘Symptoms’ and ‘Living-with’. CVD queries are longer and are predominantly keyword-based. This study extends our knowledge about online health information searching and provides useful insights for Web search engines and health websites. PMID:25954380
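
A drastically simplified sketch of the rule-based categorization step: the category names echo those in the study, but the keyword rules below are hypothetical stand-ins for the UMLS MetaMap and semantic-type mappings the authors actually used.

```python
# Toy rule-based query categorizer: map query keywords to health
# categories via a small rule table. Rules are invented illustrations.
RULES = {
    "Diseases/Conditions": ["disease", "failure", "attack", "arrhythmia"],
    "Symptoms": ["pain", "shortness of breath", "dizziness"],
    "Vital-Signs": ["blood pressure", "heart rate", "pulse"],
    "Living-with": ["diet", "exercise", "lifestyle"],
}

def categorize(query):
    """Return every category whose keywords appear in the query."""
    q = query.lower()
    return [cat for cat, kws in RULES.items() if any(k in q for k in kws)]

print(categorize("chest pain and high blood pressure"))
# ['Symptoms', 'Vital-Signs']
```

A real system would map query text to UMLS concepts first and categorize by semantic type, which is far more robust than raw substring rules.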

  8. The Development of Landmark and Beacon Use in Young Children: Evidence from a Touchscreen Search Task

    ERIC Educational Resources Information Center

    Sutton, Jennifer E.

    2006-01-01

    Children ages 2, 3 and 4 years participated in a novel hide-and-seek search task presented on a touchscreen monitor. On beacon trials, the target hiding place could be located using a beacon cue, but on landmark trials, searching required the use of a nearby landmark cue. In Experiment 1, 2-year-olds performed less accurately than older children…

  9. A Comparison of Costs of Searching the Machine-Readable Data Bases ERIC and "Psychological Abstracts" in an Annual Subscription Rate System Against Costs Estimated for the Same Searches Done in the Lockheed DIALOG System and the System Development Corporation for ERIC, and the Lockheed DIALOG System and PASAT for "Psychological Abstracts."

    ERIC Educational Resources Information Center

    Palmer, Crescentia

    A comparison of costs for computer-based searching of Psychological Abstracts and Educational Resources Information Center (ERIC) systems by the New York State Library at Albany was produced by combining data available from search request forms and from bills from the contract subscription service, the State University of New…

  10. Expert Search Strategies: The Information Retrieval Practices of Healthcare Information Professionals.

    PubMed

    Russell-Rose, Tony; Chamberlain, Jon

    2017-10-02

    Healthcare information professionals play a key role in closing the knowledge gap between medical research and clinical practice. Their work involves meticulous searching of literature databases using complex search strategies that can consist of hundreds of keywords, operators, and ontology terms. This process is prone to error and can lead to inefficiency and bias if performed incorrectly. The aim of this study was to investigate the search behavior of healthcare information professionals, uncovering their needs, goals, and requirements for information retrieval systems. A survey was distributed to healthcare information professionals via professional association email discussion lists. It investigated the search tasks they undertake, their techniques for search strategy formulation, their approaches to evaluating search results, and their preferred functionality for searching library-style databases. The popular literature search system PubMed was then evaluated to determine the extent to which their needs were met. The 107 respondents indicated that their information retrieval process relied on the use of complex, repeatable, and transparent search strategies. On average it took 60 minutes to formulate a search strategy, with a search task taking 4 hours and consisting of 15 strategy lines. Respondents reviewed a median of 175 results per search task, far more than they would ideally like (100). The most desired features of a search system were merging search queries and combining search results. Healthcare information professionals routinely address some of the most challenging information retrieval problems of any profession. However, their needs are not fully supported by current literature search systems and there is demand for improved functionality, in particular regarding the development and management of search strategies. ©Tony Russell-Rose, Jon Chamberlain. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.10.2017.

  11. Expert Search Strategies: The Information Retrieval Practices of Healthcare Information Professionals

    PubMed Central

    2017-01-01

    Background Healthcare information professionals play a key role in closing the knowledge gap between medical research and clinical practice. Their work involves meticulous searching of literature databases using complex search strategies that can consist of hundreds of keywords, operators, and ontology terms. This process is prone to error and can lead to inefficiency and bias if performed incorrectly. Objective The aim of this study was to investigate the search behavior of healthcare information professionals, uncovering their needs, goals, and requirements for information retrieval systems. Methods A survey was distributed to healthcare information professionals via professional association email discussion lists. It investigated the search tasks they undertake, their techniques for search strategy formulation, their approaches to evaluating search results, and their preferred functionality for searching library-style databases. The popular literature search system PubMed was then evaluated to determine the extent to which their needs were met. Results The 107 respondents indicated that their information retrieval process relied on the use of complex, repeatable, and transparent search strategies. On average it took 60 minutes to formulate a search strategy, with a search task taking 4 hours and consisting of 15 strategy lines. Respondents reviewed a median of 175 results per search task, far more than they would ideally like (100). The most desired features of a search system were merging search queries and combining search results. Conclusions Healthcare information professionals routinely address some of the most challenging information retrieval problems of any profession. However, their needs are not fully supported by current literature search systems and there is demand for improved functionality, in particular regarding the development and management of search strategies. PMID:28970190

  12. A systematic literature review of the key challenges for developing the structure of public health economic models.

    PubMed

    Squires, Hazel; Chilcott, James; Akehurst, Ronald; Burr, Jennifer; Kelly, Michael P

    2016-04-01

    To identify the key methodological challenges for public health economic modelling and set an agenda for future research. An iterative literature search identified papers describing methodological challenges for developing the structure of public health economic models. Additional multidisciplinary literature searches helped expand upon important ideas raised within the review. Fifteen articles were identified within the formal literature search, highlighting three key challenges: inclusion of non-healthcare costs and outcomes; inclusion of equity; and modelling complex systems and multi-component interventions. Based upon these and multidisciplinary searches about dynamic complexity, the social determinants of health, and models of human behaviour, six areas for future research were specified. Future research should focus on: the use of systems approaches within health economic modelling; approaches to assist the systematic consideration of the social determinants of health; methods for incorporating models of behaviour and social interactions; consideration of equity; and methodology to help modellers develop valid, credible and transparent public health economic model structures.

  13. Search for extraterrestrial intelligence/High Resolution Microwave Survey team member

    NASA Technical Reports Server (NTRS)

    Steffes, Paul G.

    1994-01-01

    This final report summarizes activities conducted during the three years of the NASA High Resolution Microwave Survey (HRMS). With primary interest in the Sky Survey activity, the principal investigator attended nine Working Group meetings and traveled independently to conduct experiments or present results at other meetings. The major activity involved evaluating the effects of spaceborne radio frequency interference (RFI) on both the SETI sky survey and targeted search. The development of a database of all unclassified earth-orbiting and deep space transmitters, along with accompanying search software, was a key accomplishment. The software provides information about potential sources of interference and gives complete information regarding the frequencies, positions and levels of interference generated by the spacecraft. A complete description of this search system (called HRS, or HRMS RFI Search) is provided. Other accomplishments include development of a 32,000 channel Fast-Fourier-Transform Spectrum analyzer for use in studies of interference from satellites and in a 1.4 mm SETI observational study. The latest revision of HRS has now been distributed to the extended radio astronomy and SETI community.

  14. Development of an on-site screening system for amphetamine-type stimulant tablets with a portable attenuated total reflection Fourier transform infrared spectrometer.

    PubMed

    Tsujikawa, Kenji; Kuwayama, Kenji; Miyaguchi, Hajime; Kanamori, Tatsuyuki; Iwata, Yuko T; Yoshida, Takemi; Inoue, Hiroyuki

    2008-02-04

    We sought to develop a library search system using a portable, attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectrometer for on-site identification of 3,4-methylenedioxymethamphetamine (MDMA) and 3,4-methylenedioxyamphetamine (MDA) tablets. The library consisted of the spectra of mixtures of controlled drugs (e.g. MDMA and ketamine), adulterants (e.g. caffeine), and diluents (e.g. lactose). Of the seven library search algorithms, the derivative correlation coefficient showed the best discriminant capability, which was further enhanced by segmentation of the search area. The optimized search algorithm was validated with positive samples (n=154, e.g. standard mixtures containing the controlled drugs, and confiscated MDMA/MDA tablets) and negative samples (n=56, e.g. medicinal tablets). All but four of the validation samples were correctly classified. Final criteria for positive identification were then decided on the basis of the validation results. In conclusion, a portable ATR-FT-IR spectrometer with our library search system would be a useful tool for on-site identification of amphetamine-type stimulant tablets.
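
Library matching by derivative correlation coefficient, the best-performing algorithm in the study, can be sketched as: differentiate each spectrum, then rank library entries by Pearson correlation of the derivatives. The spectra below are synthetic toy arrays, not real ATR-FT-IR data, and a real system would also restrict the comparison to selected wavenumber segments.

```python
# Sketch of derivative-correlation spectral library search.
import math

def derivative(spectrum):
    """First difference as a simple derivative approximation."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_match(query, library):
    """Rank library entries by derivative correlation with the query."""
    dq = derivative(query)
    scores = {name: correlation(dq, derivative(s)) for name, s in library.items()}
    return max(scores, key=scores.get), scores

library = {
    "sample_A": [0.1, 0.5, 0.9, 0.4, 0.2, 0.1],
    "sample_B": [0.9, 0.8, 0.2, 0.3, 0.7, 0.9],
}
query = [0.2, 0.6, 1.0, 0.5, 0.3, 0.2]  # sample_A shifted by a baseline offset
name, scores = best_match(query, library)
print(name)  # sample_A: the derivatives are identical despite the offset
```

Working on derivatives makes the match insensitive to constant baseline offsets, which is one reason this algorithm discriminates well on field-collected spectra.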

  15. Identifying contributors of two-person DNA mixtures by familial database search.

    PubMed

    Chung, Yuk-Ka; Fung, Wing K

    2013-01-01

    The role of familial database search as a crime-solving tool has been increasingly recognized by forensic scientists. As an enhancement to the existing familial search approach for single-source cases, this article presents our current progress in exploring the potential use of familial search in mixture cases. A novel method was established to predict the outcome of the search, from which a simple strategy for determining an appropriate scale of investigation by the police force is developed. Illustrated by an example using Swedish data, our approach is shown to have the potential for assisting the police force in deciding on the scale of investigation, thereby achieving a desirable crime-solving rate at reasonable cost.

  16. National Centers for Environmental Prediction

    Science.gov Websites


  17. Transportation research methods : a guide to searching for funding opportunities.

    DOT National Transportation Integrated Search

    2017-03-01

    This project developed a training methodology focused on external funding. This hands-on training presented the basics of external funding identification, teambuilding and collaborative partners, and proposal element design. Real-time searches and tu...

  18. TESS Data Processing and Quick-look Pipeline

    NASA Astrophysics Data System (ADS)

    Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office

    2018-01-01

    We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.

  19. Biomedical and Health Informatics Education – the IMIA Years

    PubMed Central

    2016-01-01

    Summary Objective This paper presents the development of medical informatics education from the establishment of the International Medical Informatics Association (IMIA) until today. Method A literature search was performed using search engines and appropriate keywords, together with a manual selection of papers. The search covered English-language papers and was limited to paper titles and abstracts only. Results The aggregated papers were analyzed by subject area, origin, time span, and curriculum development, and conclusions were drawn. Conclusions From the results, it is evident that IMIA has played a major role in comparing and integrating the Biomedical and Health Informatics educational efforts across the different levels of education and the regional distribution of educators and institutions. A large selection of references is presented, facilitating future work in the field of education in biomedical and health informatics. PMID:27488405

  20. IntegromeDB: an integrated system and biological search engine.

    PubMed

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  1. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
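
The core idea above, pruning regression model terms to prevent overfitting, can be sketched as backward elimination guided by adjusted R-squared. The data, candidate terms, and stopping rule below are illustrative assumptions; the NASA algorithm's actual statistical quality criteria and search constraints are more elaborate.

```python
# Sketch of regression model search with term reduction: fit candidate
# models by least squares and drop terms while adjusted R^2 does not
# get worse. Data and candidate terms are invented illustrations.

def solve(A, b):
    """Gauss-Jordan elimination for the normal equations (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit(X, y):
    """Least-squares coefficients via normal equations X^T X beta = X^T y."""
    cols = list(zip(*X))
    A = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]
    rhs = [sum(a * b for a, b in zip(c, y)) for c in cols]
    return solve(A, rhs)

def adjusted_r2(X, y):
    beta = fit(X, y)
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    n, p = len(y), len(X[0])
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def backward_eliminate(terms, rows, y):
    """Drop one term at a time while adjusted R^2 does not decrease."""
    keep = list(range(len(terms)))
    def design(idx): return [[row[i] for i in idx] for row in rows]
    best = adjusted_r2(design(keep), y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for i in list(keep):
            trial = [j for j in keep if j != i]
            score = adjusted_r2(design(trial), y)
            if score >= best - 1e-9:   # tolerate float noise on exact fits
                keep, best, improved = trial, score, True
                break
    return [terms[i] for i in keep], best

# y depends only on the intercept and x; the x^2 term should be pruned.
terms = ["1", "x", "x^2"]
rows = [[1, x, x * x] for x in range(6)]
y = [2 + 3 * x for x in range(6)]          # exact linear relation
kept, score = backward_eliminate(terms, rows, y)
print(kept)  # ['1', 'x']: the superfluous x^2 term is removed
```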

  2. Toward a human-centered hyperlipidemia management system: the interaction between internal and external information on relational data search.

    PubMed

    Gong, Yang; Zhang, Jiajie

    2011-04-01

    In a distributed information search task, data representation and cognitive distribution jointly affect user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered framework, we proposed a search model and task taxonomy. The model defines its application in the context of a healthcare setting. The taxonomy clarifies the legitimate operations for each type of search task on relational data. We then developed experimental prototypes of hyperlipidemia data displays. Based on the displays, we tested search task performance through two experiments. The experiments used a within-subject design with a random sample of 24 participants. The results support our hypotheses and validate the predictions of the model and task taxonomy. In this study, representation dimensions, data scales, and search task types are the main factors in determining search efficiency and effectiveness. Specifically, the more external representations the interface provides, the better users' search task performance. The results also suggest that the ideal search performance occurs when the question type and its corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which could be more effectively designed in electronic medical records.

  3. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of configuration files.
The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API, used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as the Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include filtering and dynamic sorting of search results, bookmarkable search results, and the ability to save, retrieve, and modify search criteria.
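
The harvest-then-index-then-search flow described above can be sketched as a tiny fielded metadata index. The record fields, identifiers, and queries below are invented examples; the real system harvests structured XML/FGDC/ISO-19115 records and supports spatial as well as temporal filtering.

```python
# Toy harvest -> index -> search pipeline for metadata records.
from collections import defaultdict

class MetadataIndex:
    def __init__(self):
        self.by_term = defaultdict(set)   # keyword -> record ids
        self.records = {}

    def harvest(self, record_id, record):
        """'Harvest' one metadata record and index its text fields."""
        self.records[record_id] = record
        for field in ("title", "abstract"):
            for term in record.get(field, "").lower().split():
                self.by_term[term].add(record_id)

    def search(self, term, start=None, end=None):
        """Keyword search with an optional temporal filter on 'year'."""
        hits = self.by_term.get(term.lower(), set())
        out = []
        for rid in sorted(hits):
            year = self.records[rid].get("year")
            if start is not None and (year is None or year < start):
                continue
            if end is not None and (year is None or year > end):
                continue
            out.append(rid)
        return out

idx = MetadataIndex()
idx.harvest("r1", {"title": "Soil carbon flux", "year": 2005})
idx.harvest("r2", {"title": "Ocean carbon uptake", "year": 2012})
print(idx.search("carbon", start=2010))  # ['r2']
```

Separating the harvester (per-project configuration), the index, and the query interface is what lets one codebase serve many projects, as the abstract describes.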

  4. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers

    PubMed Central

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents. PMID:27855179
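
The study's central point, that the best spam-detection features vary with page language, can be illustrated with a toy content feature. The keyword-repetition feature and the per-language thresholds below are invented for illustration; the actual study evaluated many features against labeled Arabic and English corpora.

```python
# Toy language-sensitive spam feature: the same keyword-repetition
# measure thresholded differently per language. Thresholds are
# hypothetical stand-ins for values tuned on labeled corpora.
from collections import Counter

def repetition_ratio(text):
    """Fraction of the page occupied by its single most frequent word."""
    words = text.lower().split()
    if not words:
        return 0.0
    return Counter(words).most_common(1)[0][1] / len(words)

# Hypothetical per-language thresholds.
THRESHOLDS = {"en": 0.30, "ar": 0.20}

def is_spam(text, lang):
    return repetition_ratio(text) > THRESHOLDS.get(lang, 0.30)

page = "buy pills buy pills buy pills cheap"
print(is_spam(page, "en"))  # True: one word fills 3/7 of the page
```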

  6. When do I quit? The search termination problem in visual search.

    PubMed

    Wolfe, Jeremy M

    2012-01-01

In visual search tasks, observers look for targets in displays or scenes containing distracting, non-target items. Most of the research on this topic has concerned the finding of those targets. Search termination is a less thoroughly studied topic. When is it time to abandon the current search? The answer is fairly straightforward when the one and only target has been found (There are my keys.). The problem is more vexed if nothing has been found (When is it time to stop looking for a weapon at the airport checkpoint?) or when the number of targets is unknown (Have we found all the tumors?). This chapter reviews the development of ideas about quitting time in visual search and offers an outline of our current theory.

  7. A Study of the Organization and Search of Bibliographic Holdings Records in On-Line Computer Systems: Phase I. Final Report.

    ERIC Educational Resources Information Center

    Cunningham, Jay L.; And Others

    This report presents the results of the initial phase of the File Organization Project, a study which focuses upon the on-line maintenance and search of the library's catalog holdings record. The focus of the project is to develop a facility for research and experimentation with the many issues of on-line file organizations and search. The first…

  8. Slowed Search in the Context of Unimpaired Grouping in Autism: Evidence from Multiple Conjunction Search.

    PubMed

    Keehn, Brandon; Joseph, Robert M

    2016-03-01

    In multiple conjunction search, the target is not known in advance but is defined only with respect to the distractors in a given search array, thus reducing the contributions of bottom-up and top-down attentional and perceptual processes during search. This study investigated whether the superior visual search skills typically demonstrated by individuals with autism spectrum disorder (ASD) would be evident in multiple conjunction search. Thirty-two children with ASD and 32 age- and nonverbal IQ-matched typically developing (TD) children were administered a multiple conjunction search task. Contrary to findings from the large majority of studies on visual search in ASD, response times of individuals with ASD were significantly slower than those of their TD peers. Evidence of slowed performance in ASD suggests that the mechanisms responsible for superior ASD performance in other visual search paradigms are not available in multiple conjunction search. Although the ASD group failed to exhibit superior performance, they showed efficient search and intertrial priming levels similar to the TD group. Efficient search indicates that ASD participants were able to group distractors into distinct subsets. In summary, while demonstrating grouping and priming effects comparable to those exhibited by their TD peers, children with ASD were slowed in their performance on a multiple conjunction search task, suggesting that their usual superior performance in visual search tasks is specifically dependent on top-down and/or bottom-up attentional and perceptual processes. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  9. Effects of an Employer-Based Intervention on Employment Outcomes for Youth with Significant Support Needs Due to Autism

    ERIC Educational Resources Information Center

    Wehman, Paul; Schall, Carol M.; McDonough, Jennifer; Graham, Carolyn; Brooke, Valerie; Riehle, J. Erin; Brooke, Alissa; Ham, Whitney; Lau, Stephanie; Allen, Jaclyn; Avellone, Lauren

    2017-01-01

    The purpose of this study was to develop and investigate an employer-based 9-month intervention for high school youth with autism spectrum disorder to learn job skills and acquire employment. The intervention modified a program titled Project SEARCH and incorporated the use of applied behavior analysis to develop Project SEARCH plus Autism…

  10. Improving sensitivity in proteome studies by analysis of false discovery rates for multiple search engines.

    PubMed

    Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W

    2009-03-01

LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets, so employing more than one search engine could result in an increased number of peptides (and proteins) being identified, if an appropriate mechanism for combining data can be defined. We have developed a search-engine-independent score based on FDR, called the FDR Score, which allows peptide identifications from different search engines to be combined. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines, or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that made the identification, and re-assigns the score (the combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
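
The abstract does not give formulas, but the target-decoy machinery it builds on is standard: decoy hits above a score threshold approximate the number of false target hits above it. The sketch below shows only this generic estimate with hypothetical scores, not the paper's grouped combined FDR Score:

```python
def estimated_fdr(target_scores, decoy_scores, threshold):
    """Target-decoy FDR estimate: decoy hits above the threshold
    approximate the number of false target hits above it."""
    n_target = sum(s >= threshold for s in target_scores)
    n_decoy = sum(s >= threshold for s in decoy_scores)
    return n_decoy / n_target if n_target else 0.0

def running_fdr(decoy_flags):
    """Walk a combined hit list from best score to worst and record
    the FDR estimate at each position (True marks a decoy hit)."""
    fdrs, targets, decoys = [], 0, 0
    for is_decoy in decoy_flags:
        decoys, targets = decoys + is_decoy, targets + (not is_decoy)
        fdrs.append(decoys / targets if targets else 1.0)
    return fdrs

print(estimated_fdr([10, 9, 8, 7], [9, 2], threshold=8))  # → 0.3333...
print(running_fdr([False, False, True, False]))           # → [0.0, 0.0, 0.5, 0.3333...]
```

The paper's contribution is assigning such FDR-based scores per group of agreeing search engines, so that identifications supported by multiple engines are credited appropriately.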

  11. How to achieve universal coverage of cataract surgical services in developing countries: lessons from systematic reviews of other services.

    PubMed

    Blanchet, Karl; Gordon, Iris; Gilbert, Clare E; Wormald, Richard; Awan, Haroon

    2012-12-01

Since the Declaration of Alma Ata, universal coverage has been at the heart of international health. The purpose of this study was to review the evidence on factors and interventions that are effective in promoting coverage of and access to cataract and other health services, focusing on developing countries. A thorough literature search for systematic reviews was conducted. The information resources searched were Medline, The Cochrane Library, and the Health Systems Evidence database. Medline was searched from January 1950 to June 2010. The Cochrane Library search consisted of identifying all systematic reviews produced by the Cochrane Eyes and Vision Group and the Cochrane Effective Practice and Organisation of Care Group. These reviews were assessed for potential inclusion in the review. The Health Systems Evidence database hosted by McMaster University was searched to identify overviews of systematic reviews. No reviews met the inclusion criteria for cataract surgery. The literature search on other health sectors identified 23 systematic reviews providing robust evidence on the main factors facilitating universal coverage. The main enabling factors influencing access to services in developing countries were peer education, the deployment of staff to rural areas, task shifting, integration of services, supervision of health staff, eliminating user fees, and scaling up of health insurance schemes. There are significant research gaps in eye care. There is a pressing need for further high-quality primary research on health systems-related factors to understand how the delivery of eye care services and health systems' capacities are interrelated.

  12. The Search for Life in the Universe: The Past Through the Future

    NASA Astrophysics Data System (ADS)

    Lebofsky, L. A.; Lebofsky, A.; Lebofsky, M.; Lebofsky, N. R.

    2003-05-01

"Are we alone?" This is a question that humans have asked for thousands of years. More than any other topic in science, the search for life in the Universe has captured the imagination. Now, for the first time in history, we are on the verge of answering this question. The search for life beyond the Earth can be traced as far back as the 17th-century writings of Bishops F. Godwin and J. Wilkins and S. Cyrano de Bergerac, up to the early 20th century's H. G. Wells. From a scientific perspective, this search led to the formulation of the Drake Equation, which in turn has led to a number of projects searching for signs of intelligent life beyond the Earth, the Search for Extraterrestrial Intelligence (SETI). SETI@home reaches millions of users, including thousands of K-12 teachers across the nation. We are developing a project that will enhance the SETI@home web site located at UC Berkeley. The project unites the resources of the SETI@home distributed computing community web site, university settings, and informal science learning centers. It will reach approximately 100,000 learners. The goal is to increase public understanding of math and science and to create and strengthen the connections between informal and formal learning communities. We will present a variety of ways that the Drake Equation and SETI@home can enhance public and student understanding of the search for life in the Universe, from its roots in literature, to the development (and evolution) of the Drake Equation, to the actual search for life with SETI.

  13. BCM Search Launcher--an integrated interface to molecular biology data base search and analysis services available on the World Wide Web.

    PubMed

    Smith, R F; Wiese, B A; Wojzynski, M K; Davison, D B; Worley, K C

    1996-05-01

The BCM Search Launcher is an integrated set of World Wide Web (WWW) pages that organize molecular biology-related search and analysis services available on the WWW by function, and provide a single point of entry for related searches. The Protein Sequence Search Page, for example, provides a single sequence entry form for submitting sequences to WWW servers that offer remote access to a variety of different protein sequence search tools, including BLAST, FASTA, Smith-Waterman, BEAUTY, PROSITE, and BLOCKS searches. Other Launch pages provide access to (1) nucleic acid sequence searches, (2) multiple and pair-wise sequence alignments, (3) gene feature searches, (4) protein secondary structure prediction, and (5) miscellaneous sequence utilities (e.g., six-frame translation). The BCM Search Launcher also provides a mechanism to extend the utility of other WWW services by adding supplementary hypertext links to results returned by remote servers. For example, links to the NCBI's Entrez data base and to the Sequence Retrieval System (SRS) are added to search results returned by the NCBI's WWW BLAST server. These links provide easy access to auxiliary information, such as Medline abstracts, that can be extremely helpful when analyzing BLAST data base hits. For new or infrequent users of sequence data base search tools, we have preset the default search parameters to provide the most informative first-pass sequence analysis possible. We have also developed a batch client interface for Unix and Macintosh computers that allows multiple input sequences to be searched automatically as a background task, with the results returned as individual HTML documents directly to the user's system. The BCM Search Launcher and batch client are available on the WWW at URL http://gc.bcm.tmc.edu:8088/search-launcher.html.

  14. PERFORMANCE OF OVID MEDLINE SEARCH FILTERS TO IDENTIFY HEALTH STATE UTILITY STUDIES.

    PubMed

    Arber, Mick; Garcia, Sonia; Veale, Thomas; Edwards, Mary; Shaw, Alison; Glanville, Julie M

    2017-01-01

    This study was designed to assess the sensitivity of three Ovid MEDLINE search filters developed to identify studies reporting health state utility values (HSUVs), to improve the performance of the best performing filter, and to validate resulting search filters. Three quasi-gold standard sets (QGS1, QGS2, QGS3) of relevant studies were harvested from reviews of studies reporting HSUVs. The performance of three initial filters was assessed by measuring their relative recall of studies in QGS1. The best performing filter was then developed further using QGS2. This resulted in three final search filters (FSF1, FSF2, and FSF3), which were validated using QGS3. FSF1 (sensitivity maximizing) retrieved 132/139 records (sensitivity: 95 percent) in the QGS3 validation set. FSF1 had a number needed to read (NNR) of 842. FSF2 (balancing sensitivity and precision) retrieved 128/139 records (sensitivity: 92 percent) with a NNR of 502. FSF3 (precision maximizing) retrieved 123/139 records (sensitivity: 88 percent) with a NNR of 383. We have developed and validated a search filter (FSF1) to identify studies reporting HSUVs with high sensitivity (95 percent) and two other search filters (FSF2 and FSF3) with reasonably high sensitivity (92 percent and 88 percent) but greater precision, resulting in a lower NNR. These seem to be the first validated filters available for HSUVs. The availability of filters with a range of sensitivity and precision options enables researchers to choose the filter which is most appropriate to the resources available for their specific research.
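
The two metrics the abstract reports can be reproduced from its own figures: sensitivity is the fraction of the validation set a filter retrieves, and the number needed to read (NNR) is the reciprocal of precision. The function names below are illustrative:

```python
def sensitivity(relevant_retrieved, total_relevant):
    """Fraction of known relevant records the filter retrieves."""
    return relevant_retrieved / total_relevant

def number_needed_to_read(total_retrieved, relevant_retrieved):
    """NNR = 1 / precision: records screened per relevant record found."""
    return total_retrieved / relevant_retrieved

# FSF1 on the QGS3 validation set: 132 of 139 relevant records retrieved.
print(round(100 * sensitivity(132, 139)))  # → 95

# An NNR of 842 means ~842 records must be read to find one relevant
# record, i.e. precision is 1/842:
print(round(1 / number_needed_to_read(842, 1), 5))  # → 0.00119
```

The trade-off the abstract describes is visible directly in these numbers: FSF3 drops sensitivity to 88 percent but cuts the NNR to 383, less than half the screening burden of FSF1.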

  15. What Do Germans Want to Know About Skin Cancer? A Nationwide Google Search Analysis From 2013 to 2017.

    PubMed

    Seidl, Stefanie; Schuster, Barbara; Rüth, Melvin; Biedermann, Tilo; Zink, Alexander

    2018-05-02

Experts worldwide agree that skin cancer is a global health issue, but only a few studies have reported on world populations' interest in skin cancer. Internet search data can reflect the interest of a population in different topics and thereby identify what the population wants to know. Our aim was to assess the interest of the German population in nonmelanoma skin cancer and melanoma. Google AdWords Keyword Planner was used to identify search terms related to nonmelanoma skin cancer and melanoma in Germany from November 2013 to October 2017. The identified search terms were assessed descriptively using SPSS version 24.0. In addition, the search terms were qualitatively categorized. A total of 646 skin cancer-related search terms were identified, with 19,849,230 Google searches in the period under review. The search terms with the highest search volume were "skin cancer" (n=2,388,500, 12.03%), "white skin cancer" (n=2,056,900, 10.36%), "basalioma" (n=907,000, 4.57%), and "melanoma" (n=717,800, 3.62%). The most searched localizations of nonmelanoma skin cancer were "nose" (n=93,370, 38.99%) and "face" (n=53,270, 22.24%), and the most searched localizations of melanoma were "nails" (n=46,270, 70.61%) and "eye" (n=10,480, 15.99%). The skin cancer-related category with the highest search volume was "forms of skin cancer" (n=10,162,540, 23.28%) followed by "skin alterations" (n=4,962,020, 11.36%). Our study provides insight into terms and fields of interest related to skin cancer relevant to the German population. Furthermore, temporal trends and courses are shown. This information could aid in the development and implementation of effective and sustainable awareness campaigns by developing information sources targeted to the population's broad interest or by implementing new Internet campaigns. ©Stefanie Seidl, Barbara Schuster, Melvin Rüth, Tilo Biedermann, Alexander Zink. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 02.05.2018.
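
The reported percentages are simple shares of the total search volume, which can be checked against the abstract's own figures (term counts as given, total of 19,849,230 searches):

```python
searches = {
    "skin cancer": 2_388_500,
    "white skin cancer": 2_056_900,
    "basalioma": 907_000,
    "melanoma": 717_800,
}
total = 19_849_230  # all skin cancer-related searches in the period

for term, n in searches.items():
    print(f"{term}: {100 * n / total:.2f}%")
# → skin cancer: 12.03%, white skin cancer: 10.36%,
#   basalioma: 4.57%, melanoma: 3.62%
```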

  16. An Analysis of Web Image Queries for Search.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh

    2003-01-01

    Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)

  17. Searching the ASRS Database Using QUORUM Keyword Search, Phrase Search, Phrase Generation, and Phrase Discovery

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W.; Connors, Mary M. (Technical Monitor)

    2001-01-01

    To support Search Requests and Quick Responses at the Aviation Safety Reporting System (ASRS), four new QUORUM methods have been developed: keyword search, phrase search, phrase generation, and phrase discovery. These methods build upon the core QUORUM methods of text analysis, modeling, and relevance-ranking. QUORUM keyword search retrieves ASRS incident narratives that contain one or more user-specified keywords in typical or selected contexts, and ranks the narratives on their relevance to the keywords in context. QUORUM phrase search retrieves narratives that contain one or more user-specified phrases, and ranks the narratives on their relevance to the phrases. QUORUM phrase generation produces a list of phrases from the ASRS database that contain a user-specified word or phrase. QUORUM phrase discovery finds phrases that are related to topics of interest. Phrase generation and phrase discovery are particularly useful for finding query phrases for input to QUORUM phrase search. The presentation of the new QUORUM methods includes: a brief review of the underlying core QUORUM methods; an overview of the new methods; numerous, concrete examples of ASRS database searches using the new methods; discussion of related methods; and, in the appendices, detailed descriptions of the new methods.

  18. BioEve Search: A Novel Framework to Facilitate Interactive Literature Search

    PubMed Central

    Ahmed, Syed Toufeeq; Davulcu, Hasan; Tikves, Sukru; Nair, Radhika; Zhao, Zhongming

    2012-01-01

Background. Recent advances in computational and biological methods over the last two decades have remarkably changed the scale of biomedical research, and with them began unprecedented growth in both the production of biomedical data and the amount of published literature discussing it. An automated extraction system coupled with a cognitive search and navigation service over these document collections would not only save time and effort, but also pave the way to discovering hitherto unknown information implicitly conveyed in the texts. Results. We developed a novel framework (named “BioEve”) that seamlessly integrates faceted search (information retrieval) with an information extraction module to provide an interactive search experience for researchers in the life sciences. It enables guided step-by-step search query refinement by suggesting concepts and entities (such as genes, drugs, and diseases) to quickly filter and modify the search direction, thereby facilitating an enriched paradigm in which users can discover related concepts and keywords while seeking information. Conclusions. The BioEve Search framework makes it easier to enable scalable interactive search over large collections of textual articles and to discover knowledge hidden in thousands of biomedical literature articles with ease. PMID:22693501

  19. [How do authors of systematic reviews restrict their literature searches when only studies from Germany should be included?]

    PubMed

    Pieper, Dawid; Mathes, Tim; Palm, Rebecca; Hoffmann, Falk

    2016-11-01

The use of search filters (e.g. for study types) facilitates the process of literature searching, and regional limits might be helpful depending on the research question. Regional search filters are already available for some regions, but not for Germany. Our aim is to give an overview of the search strategies applied in systematic reviews (SRs) focusing on Germany. We searched Medline (via PubMed) in January 2016, applying a focused search strategy to identify SRs focusing on Germany. Study selection and data extraction were performed by two reviewers independently. The search strategies with a focus on Germany were analyzed in terms of reasonableness and completeness, relying on the Peer Review of Electronic Search Strategies (PRESS) criteria. A narrative evidence synthesis was performed. In total, 36 SRs (13 written in English) were included; 78% were published in 2012 or later. The majority (89%) of SRs used at least two different sources for their search, with databases and checking references being the most common. Seventeen SRs did not use any truncation; ten SRs did not restrict their search to Germany; six SRs reported having searched for "German OR Germany". Only ten articles searched for the term "Germany" (occasionally jointly with the term "Deutschland") without using an adjective such as "German". There is high interest in regionally focused SRs, but the identified search strategies revealed a need for improvement. It would be helpful to develop a regional search filter for Germany that is able to identify studies performed in Germany. Copyright © 2016. Published by Elsevier GmbH.

  20. Microscopy as a statistical, Rényi-Ulam, half-lie game: a new heuristic search strategy to accelerate imaging.

    PubMed

    Drumm, Daniel W; Greentree, Andrew D

    2017-11-07

    Finding a fluorescent target in a biological environment is a common and pressing microscopy problem. This task is formally analogous to the canonical search problem. In ideal (noise-free, truthful) search problems, the well-known binary search is optimal. The case of half-lies, where one of two responses to a search query may be deceptive, introduces a richer, Rényi-Ulam problem and is particularly relevant to practical microscopy. We analyse microscopy in the contexts of Rényi-Ulam games and half-lies, developing a new family of heuristics. We show the cost of insisting on verification by positive result in search algorithms; for the zero-half-lie case bisectioning with verification incurs a 50% penalty in the average number of queries required. The optimal partitioning of search spaces directly following verification in the presence of random half-lies is determined. Trisectioning with verification is shown to be the most efficient heuristic of the family in a majority of cases.
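
The 50% penalty claimed for verification in the zero-half-lie case has a simple intuition: in bisection over a uniformly random target, roughly half of the query answers are positive, and insisting on confirming each positive answer adds one query per confirmation. The simulation below is an illustrative toy model of that intuition, not the paper's analysis:

```python
import random

def bisect_queries(n, target, verify_positives=False):
    """Count yes/no queries needed to locate `target` in range(n) by
    bisection. If verify_positives, every 'yes' answer is confirmed by
    re-asking (a crude model of verification by positive result)."""
    lo, hi = 0, n
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        in_upper = target >= mid   # query: "is the target in [mid, hi)?"
        queries += 1
        if in_upper:
            if verify_positives:
                queries += 1       # re-ask to confirm the positive answer
            lo = mid
        else:
            hi = mid
    return queries

random.seed(0)
n, trials = 1024, 2000
plain = sum(bisect_queries(n, random.randrange(n)) for _ in range(trials)) / trials
verified = sum(bisect_queries(n, random.randrange(n), True) for _ in range(trials)) / trials
print(plain)             # → 10.0 (log2 of 1024, always)
print(verified / plain)  # ≈ 1.5: the ~50% average penalty
```

Under actual half-lies the analysis is richer (this is the Rényi-Ulam setting the paper works in), which is why the optimal partitioning after verification, and the trisection heuristic, differ from naive bisection.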

  1. The feasibility and appropriateness of introducing nursing curricula from developed countries into developing countries: a comprehensive systematic review.

    PubMed

    Jayasekara, Rasika; Schultz, Tim

    2006-09-01

    Objectives  The objective of this review was to appraise and synthesise the best available evidence on the feasibility and appropriateness of introducing nursing curricula from developed countries into developing countries. Inclusion criteria  This review considered quantitative and qualitative research papers that addressed the feasibility and appropriateness of introducing developed countries' nursing curricula into developing countries. Papers of the highest level of evidence rating were given priority. Participants of interest were all levels of nursing staff, nursing students, healthcare consumers and healthcare administrators. Outcomes of interest that are relevant to the evaluation of undergraduate nursing curricula were considered in the review including cost-effectiveness, cultural relevancy, adaptability, consumer satisfaction and student satisfaction. Search strategy  The search strategy sought to find both published and unpublished studies and papers, limited to the English language. An initial limited search of MEDLINE and CINAHL was undertaken followed by an analysis of the text words contained in the title and abstract, and of the index terms used to describe the article. A second extensive search was then undertaken using all identified key words and index terms. Finally, the reference list of all identified reports and articles was searched, the contents pages of a few relevant journals were hand searched and experts in the field were contacted to find any relevant studies missed from the first two searches. Methodological quality  Each paper was assessed by two independent reviewers for methodological quality before inclusion in the review using an appropriate critical appraisal instrument from the System for the Unified Management, Assessment and Review of Information (SUMARI) package. Results  A total of four papers, including one descriptive study and three textual papers, were included in the review. 
Because of the diverse nature of these papers, meta-synthesis of the results was not possible; for this reason, this section of the review is presented in narrative form. In this review, a descriptive study and a textual opinion paper examined the cultural relevancy of borrowed curriculum models and the global influence of American nursing. Another two opinion papers evaluated the adaptability of other countries' curriculum models in their own countries. Conclusion  The evidence regarding the feasibility and appropriateness of introducing developed countries' nursing curricula into developing countries is weak because of the paucity of high-quality studies. However, some lower-level evidence suggests that direct transfer of a curriculum model from one country to another is not appropriate without first assessing the cultural context of both countries. Second, an approach that considers international, regional, and local experiences is more feasible and presumably a more effective strategy for adapting one country's curriculum to a culturally or economically different country.

  2. Online literature-retrieval systems: how to get started.

    PubMed

    Tousignaut, D R

    1983-02-01

Basic information describing online literature-retrieval systems is presented; the power of online searching is also discussed. The equipment, expense involved, and training necessary to perform online searching efficiently are described. An individual searcher needs only a computer terminal and a telephone; by telephone, the searcher connects with an online vendor's computer at another location. The four major U.S. vendors (Dialog, Bibliographic Retrieval Services, Systems Development Corporation, and the National Library of Medicine) are compared. A step-by-step procedure for logging in and searching is presented. Using the International Pharmaceutical Abstracts database as an example, 17 access points for locating an article via an online system are compared with only two (the subject and author index entries) in a printed service. By searching online, one can search the published literature on a specific topic in a matter of minutes. An online search is very useful when limited information is available or when the search question contains a term that is not in a printed index.

  3. Promoting Knowledge to Action through the Study of Environmental Arctic Change (SEARCH) Program

    NASA Astrophysics Data System (ADS)

    Myers, B.; Wiggins, H. V.

    2016-12-01

    The Study of Environmental Arctic Change (SEARCH) is a multi-institutional collaborative U.S. program that advances scientific knowledge to inform societal responses to Arctic change. Currently, SEARCH focuses on how diminishing Arctic sea ice, thawing permafrost, and shrinking land ice impact both Arctic and global systems. Emphasizing "knowledge to action", SEARCH promotes collaborative research, synthesizes research findings, and broadly communicates the resulting knowledge to Arctic researchers, stakeholders, policy-makers, and the public. This poster presentation will highlight recent program products and findings; best practices and challenges for managing a distributed, interdisciplinary program; and plans for cross-disciplinary working groups focused on Arctic coastal erosion, synthesis of methane budgets, and development of Arctic scenarios. A specific focus will include how members of the broader research community can participate in SEARCH activities. http://www.arcus.org/search

  4. Family support for reducing morbidity and mortality in people with HIV/AIDS.

    PubMed

    Mohanan, Padma; Kamath, Asha

    2009-07-08

Care and support play a critical role in assisting people who are HIV-positive to understand the need for prevention and to enable them to protect others. As the HIV/AIDS pandemic progresses and HIV-seropositive individuals contend with devastating illness, it seemed timely to inquire whether they receive support from family members. It also was important to develop a normative idea of how much family support exists and from whom it emanates. The objective was to assess the effect of family support on morbidity, mortality, quality of life, and economics in families with at least one HIV-infected member, in developing countries. The following databases were searched: the Cochrane Central Register of Controlled Trials (CENTRAL), the Cochrane Database of Systematic Reviews, MEDLINE, AIDSLINE, CINAHL, Dissertation Abstracts International (DAI), EMBASE, BIOSIS, SCISEARCH, the Cochrane HIV/AIDS group specialized register, INDMED, ProQuest, and various South Asian abstracting databases. The publication sites of the World Health Organization, the US Centers for Disease Control and Prevention, and other international research and non-governmental organizations were also searched. An extensive search strategy string was developed in consultation with the trial search coordinator of the HIV/AIDS Review Group; numerous relevant keywords were included in the string to obtain an exhaustive electronic literature search. The search was not restricted by language, and articles in other languages were translated into English with the help of experts. A hand search was carried out in many journals and in abstracts of the proceedings of national and international conferences related to AIDS (e.g. the International Conference on HIV/AIDS and STI in Africa [ICASA]). Efforts also were made to contact experts to identify unpublished research and trials still underway. Eligible study designs were intervention studies:
randomized controlled trials (RCTs) and quasi-RCTs involving HIV-infected individuals with family support in developing countries. We independently screened the results of the search to select potentially relevant studies and to retrieve the full articles, and we independently applied the inclusion criteria to the potentially relevant studies. No studies were identified that fulfilled the selection criteria; we were unable to find any trials of family support for reducing morbidity and mortality in HIV-infected persons in developing countries. There is insufficient evidence to determine the effect of family support on the morbidity and mortality of HIV-infected persons in developing countries. This review has highlighted the dearth of high-quality quantitative research on family support. There is a clear need for rigorous studies of the clinical effects of family support on people with HIV in developing countries.

  5. New Evidence for Strategic Differences between Static and Dynamic Search Tasks: An Individual Observer Analysis of Eye Movements

    PubMed Central

    Dickinson, Christopher A.; Zelinsky, Gregory J.

    2013-01-01

    Two experiments are reported that further explore the processes underlying dynamic search. In Experiment 1, observers’ oculomotor behavior was monitored while they searched for a randomly oriented T among oriented L distractors under static and dynamic viewing conditions. Despite similar search slopes, eye movements were less frequent and more spatially constrained under dynamic viewing relative to static, with misses also increasing more with target eccentricity in the dynamic condition. These patterns suggest that dynamic search involves a form of sit-and-wait strategy in which search is restricted to a small group of items surrounding fixation. To evaluate this interpretation, we developed a computational model of a sit-and-wait process hypothesized to underlie dynamic search. In Experiment 2 we tested this model by varying fixation position in the display and found that display positions optimized for a sit-and-wait strategy resulted in higher d′ values relative to a less optimal location. We conclude that different strategies, and therefore underlying processes, are used to search static and dynamic displays. PMID:23372555
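    The d′ comparison in Experiment 2 rests on standard signal-detection arithmetic; a minimal sketch, using hypothetical hit and false-alarm rates rather than the paper's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: 84% hits, 16% false alarms
print(round(d_prime(0.84, 0.16), 2))  # → 1.99
```

A fixation location that raises the hit rate or lowers the false-alarm rate at matched bias yields a higher d′, which is the sense in which the "optimized" display positions outperform the less optimal one.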

  6. ISART: A Generic Framework for Searching Books with Social Information

    PubMed Central

    Cui, Xiao-Ping; Qu, Jiao; Geng, Bin; Zhou, Fang; Song, Li; Hao, Hong-Wei

    2016-01-01

    Effective book search has been discussed for decades and is still future-proof in areas as diverse as computer science, informatics, e-commerce and even culture and arts. A variety of social information contents (e.g., ratings, tags and reviews) emerge with the huge number of books on the Web, but how they are utilized for searching and finding books is seldom investigated. Here we develop an Integrated Search And Recommendation Technology (IsArt), which breaks new ground by providing a generic framework for searching books with rich social information. IsArt comprises a search engine to rank books with book contents and professional metadata, a Generalized Content-based Filtering model to thereafter rerank books with user-generated social contents, and a learning-to-rank technique to finally combine a wide range of diverse reranking results. Experiments show that this technology permits embedding social information to promote book search effectiveness, and IsArt, by making use of it, has the best performance on CLEF/INEX Social Book Search Evaluation datasets of all 4 years (from 2011 to 2014), compared with some other state-of-the-art methods. PMID:26863545

  7. ISART: A Generic Framework for Searching Books with Social Information.

    PubMed

    Yin, Xu-Cheng; Zhang, Bo-Wen; Cui, Xiao-Ping; Qu, Jiao; Geng, Bin; Zhou, Fang; Song, Li; Hao, Hong-Wei

    2016-01-01

    Effective book search has been discussed for decades and is still future-proof in areas as diverse as computer science, informatics, e-commerce and even culture and arts. A variety of social information contents (e.g., ratings, tags and reviews) emerge with the huge number of books on the Web, but how they are utilized for searching and finding books is seldom investigated. Here we develop an Integrated Search And Recommendation Technology (IsArt), which breaks new ground by providing a generic framework for searching books with rich social information. IsArt comprises a search engine to rank books with book contents and professional metadata, a Generalized Content-based Filtering model to thereafter rerank books with user-generated social contents, and a learning-to-rank technique to finally combine a wide range of diverse reranking results. Experiments show that this technology permits embedding social information to promote book search effectiveness, and IsArt, by making use of it, has the best performance on CLEF/INEX Social Book Search Evaluation datasets of all 4 years (from 2011 to 2014), compared with some other state-of-the-art methods.

  8. Mars Analog Research and Technology Experiment (MARTE): A Simulated Mars Drilling Mission to Search for Subsurface Life at the Rio Tinto, Spain

    NASA Technical Reports Server (NTRS)

    Stoker, Carol; Lemke, Larry; Mandell, Humboldt; McKay, David; George, Jeffrey; Gomez-Alvera, Javier; Amils, Ricardo; Stevens, Todd; Miller, David

    2003-01-01

    The MARTE (Mars Astrobiology Research and Technology Experiment) project was selected by the new NASA ASTEP program, which supports field experiments having an equal emphasis on Astrobiology science and technology development relevant to future Astrobiology missions. MARTE will search for a hypothesized subsurface anaerobic chemoautotrophic biosphere in the region of the Tinto River in southwestern Spain while also demonstrating technology needed to search for a subsurface biosphere on Mars. The experiment is informed by the strategy for searching for life on Mars.

  9. Retrieval of overviews of systematic reviews in MEDLINE was improved by the development of an objectively derived and validated search strategy.

    PubMed

    Lunny, Carole; McKenzie, Joanne E; McDonald, Steve

    2016-06-01

    Locating overviews of systematic reviews is difficult because of an absence of appropriate indexing terms and inconsistent terminology used to describe overviews. Our objective was to develop a validated search strategy to retrieve overviews in MEDLINE. We derived a test set of overviews from the references of two method articles on overviews. Two population sets were used to identify discriminating terms, that is, terms that appear frequently in the test set but infrequently in two population sets of references found in MEDLINE. We used text mining to conduct a frequency analysis of terms appearing in the titles and abstracts. Candidate terms were combined and tested in MEDLINE in various permutations, and the performance of strategies measured using sensitivity and precision. Two search strategies were developed: a sensitivity-maximizing strategy, achieving 93% sensitivity (95% confidence interval [CI]: 87, 96) and 7% precision (95% CI: 6, 8), and a sensitivity-and-precision-maximizing strategy, achieving 66% sensitivity (95% CI: 58, 74) and 21% precision (95% CI: 17, 25). The developed search strategies enable users to more efficiently identify overviews of reviews compared to current strategies. Consistent language in describing overviews would aid in their identification, as would a specific MEDLINE Publication Type. Copyright © 2015 Elsevier Inc. All rights reserved.
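    The sensitivity and precision figures reported here follow the usual information-retrieval definitions; a small sketch against a hypothetical validation set (the record IDs are invented for illustration):

```python
def evaluate_strategy(retrieved: set, relevant: set) -> dict:
    """Sensitivity (recall) and precision of a search strategy against a
    test set of records known to be relevant."""
    true_pos = len(retrieved & relevant)
    return {
        "sensitivity": true_pos / len(relevant),
        "precision": true_pos / len(retrieved) if retrieved else 0.0,
    }

# Invented record IDs: the strategy retrieves 1000 records and captures
# 93 of the 100 known-relevant ones.
relevant = set(range(100))
retrieved = set(range(7, 100)) | set(range(1000, 1907))
metrics = evaluate_strategy(retrieved, relevant)
print(metrics)  # → {'sensitivity': 0.93, 'precision': 0.093}
```

The trade-off the abstract quantifies falls directly out of these definitions: broadening the strategy adds relevant records (sensitivity up) but usually adds far more irrelevant ones (precision down).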

  10. The Brazilian Portuguese Lexicon: An Instrument for Psycholinguistic Research

    PubMed Central

    Estivalet, Gustavo L.; Meunier, Fanny

    2015-01-01

    In this article, we present the Brazilian Portuguese Lexicon, a new word-based corpus for psycholinguistic and computational linguistic research in Brazilian Portuguese. We describe the corpus development and the specific characteristics of the internet site and database for user access. We also perform distributional analyses of the corpus and comparisons to other current databases. Our main objective was to provide a large, reliable, and useful word-based corpus with a dynamic, easy-to-use, and intuitive interface with free internet access for word and word-criteria searches. We used the Núcleo Interinstitucional de Linguística Computacional’s corpus as the basic data source and developed the Brazilian Portuguese Lexicon by deriving and adding metalinguistic and psycholinguistic information about Brazilian Portuguese words. We obtained a final corpus with more than 30 million word tokens, 215 thousand word types, and 25 categories of information about each word. This corpus was made available on the internet via a free-access site with two search engines: a simple search and a complex search. The simple engine basically searches for a list of words, while the complex engine accepts all types of criteria in the corpus categories. The output result presents all entries found in the corpus with the criteria specified in the input search and can be downloaded as a .csv file. We created a module in the results that delivers basic statistics about each search. The Brazilian Portuguese Lexicon also provides a pseudoword engine and specific tools for linguistic and statistical analysis. Therefore, the Brazilian Portuguese Lexicon is a convenient instrument for stimulus search, selection, control, and manipulation in psycholinguistic experiments, as well as a powerful database for computational linguistics research and language modeling related to lexicon distribution, functioning, and behavior. PMID:26630138

  11. The Brazilian Portuguese Lexicon: An Instrument for Psycholinguistic Research.

    PubMed

    Estivalet, Gustavo L; Meunier, Fanny

    2015-01-01

    In this article, we present the Brazilian Portuguese Lexicon, a new word-based corpus for psycholinguistic and computational linguistic research in Brazilian Portuguese. We describe the corpus development and the specific characteristics of the internet site and database for user access. We also perform distributional analyses of the corpus and comparisons to other current databases. Our main objective was to provide a large, reliable, and useful word-based corpus with a dynamic, easy-to-use, and intuitive interface with free internet access for word and word-criteria searches. We used the Núcleo Interinstitucional de Linguística Computacional's corpus as the basic data source and developed the Brazilian Portuguese Lexicon by deriving and adding metalinguistic and psycholinguistic information about Brazilian Portuguese words. We obtained a final corpus with more than 30 million word tokens, 215 thousand word types, and 25 categories of information about each word. This corpus was made available on the internet via a free-access site with two search engines: a simple search and a complex search. The simple engine basically searches for a list of words, while the complex engine accepts all types of criteria in the corpus categories. The output result presents all entries found in the corpus with the criteria specified in the input search and can be downloaded as a .csv file. We created a module in the results that delivers basic statistics about each search. The Brazilian Portuguese Lexicon also provides a pseudoword engine and specific tools for linguistic and statistical analysis. Therefore, the Brazilian Portuguese Lexicon is a convenient instrument for stimulus search, selection, control, and manipulation in psycholinguistic experiments, as well as a powerful database for computational linguistics research and language modeling related to lexicon distribution, functioning, and behavior.

  12. Constructing Effective Search Strategies for Electronic Searching.

    ERIC Educational Resources Information Center

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  13. Libraries and Computing Centers: Issues of Mutual Concern.

    ERIC Educational Resources Information Center

    Metz, Paul; Potter, William G.

    1989-01-01

    The first of two articles discusses the advantages of online subject searching, the recall and precision tradeoff, and possible future developments in electronic searching. The second reviews the experiences of academic libraries that offer online searching of bibliographic, full text, and statistical databases in addition to online catalogs. (CLB)

  14. Project SEARCH UK--Evaluating Its Employment Outcomes

    ERIC Educational Resources Information Center

    Kaehne, Axel

    2016-01-01

    Background: The study reports the findings of an evaluation of Project SEARCH UK. The programme develops internships for young people with intellectual disabilities who are about to leave school or college. The aim of the evaluation was to investigate at what rate Project SEARCH provided employment opportunities to participants. Methods: The…

  15. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…

  16. Comparative homology agreement search: An effective combination of homology-search methods

    PubMed Central

    Alam, Intikhab; Dress, Andreas; Rehmsmeier, Marc; Fuellen, Georg

    2004-01-01

    Many methods have been developed to search for homologous members of a protein family in databases, and the reliability of results and conclusions may be compromised if only one method is used, neglecting the others. Here we introduce a general scheme for combining such methods. Based on this scheme, we implemented a tool called comparative homology agreement search (chase) that integrates different search strategies to obtain a combined “E value.” Our results show that a consensus method integrating distinct strategies easily outperforms any of its component algorithms. More specifically, an evaluation based on the Structural Classification of Proteins database reveals that, on average, a coverage of 47% can be obtained in searches for distantly related homologues (i.e., members of the same superfamily but not the same family, which is a very difficult task), accepting only 10 false positives, whereas the individual methods obtain a coverage of 28–38%. PMID:15367730

  17. An analysis of iterated local search for job-shop scheduling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul

    2003-08-01

    Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state of the art in the analysis of meta-heuristics by providing answers to this research question. The authors focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search. The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take as the well-known disjunctive graph distance [MBK99].
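    The two-phase loop described above can be sketched generically. The objective below is a toy permutation-sorting stand-in (counting inversions), not the JSP makespan, and the operator names are illustrative, not from the paper:

```python
import random

def iterated_local_search(cost, neighbors, perturb, start, iters=200, seed=1):
    """Generic ILS: greedy descent with a small-step operator to a local
    optimum, then a large-step perturbation, keeping the best solution."""
    rng = random.Random(seed)

    def descend(s):
        while True:
            best = min(neighbors(s), key=cost, default=s)
            if cost(best) >= cost(s):
                return s  # local optimum reached
            s = best

    incumbent = current = descend(start)
    for _ in range(iters):
        candidate = descend(perturb(current, rng))
        if cost(candidate) <= cost(current):  # acceptance criterion
            current = candidate
        if cost(current) < cost(incumbent):
            incumbent = current
    return incumbent

# Toy stand-in for a scheduling objective: sort a permutation by minimizing
# inversions; adjacent swaps are the small-step move operator.
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def swap_neighbors(p):
    for i in range(len(p) - 1):
        q = list(p); q[i], q[i + 1] = q[i + 1], q[i]
        yield tuple(q)

def shuffle_segment(p, rng):  # large-step perturbation
    i = rng.randrange(len(p) - 3)
    q = list(p); seg = q[i:i + 3]; rng.shuffle(seg); q[i:i + 3] = seg
    return tuple(q)

start = tuple(random.Random(0).sample(range(8), 8))
print(inversions(iterated_local_search(inversions, swap_neighbors, shuffle_segment, start)))  # → 0
```

The paper's point about weak attractor basins corresponds to the `perturb` step: a short random disruption is enough that the next descent often lands in a different basin.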

  18. Searching bioremediation patents through Cooperative Patent Classification (CPC).

    PubMed

    Prasad, Rajendra

    2016-03-01

    Patent classification systems have traditionally evolved independently at each patent jurisdiction to classify the patents handled by their examiners, so as to allow searching of previous patents while dealing with new patent applications. As the patent databases they maintained went online, for free public access as well as for global search of prior art by examiners, the need arose for a common platform and a uniform structure of patent databases. The diversity of classification systems, however, posed problems for integrating and searching relevant patents across patent jurisdictions. To address this problem of comparability of data from different sources and of searching patents, WIPO in the recent past developed what is known as the International Patent Classification (IPC) system, which most countries readily adopted, coding their patents with IPC codes along with their own codes. The Cooperative Patent Classification (CPC) is the latest patent classification system, based on the IPC/European Classification (ECLA) system and developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO), and is likely to become a global standard. This paper discusses this new classification system with reference to patents on bioremediation.

  19. Concept similarity and related categories in information retrieval using formal concept analysis

    NASA Astrophysics Data System (ADS)

    Eklund, P.; Ducrou, J.; Dau, F.

    2012-11-01

    The application of formal concept analysis to the problem of information retrieval has been shown to be useful but has lacked any real analysis of the idea of relevance ranking of search results. SearchSleuth is a program developed to experiment with the automated local analysis of Web search using formal concept analysis. SearchSleuth extends a standard search interface to include a conceptual neighbourhood centred on a formal concept derived from the initial query. This neighbourhood of the concept derived from the search terms is decorated with its upper and lower neighbours, representing more general and more specific concepts, respectively. SearchSleuth is in many ways an archetype of search engines based on formal concept analysis, with some novel features. In SearchSleuth, the notion of related categories - which are themselves formal concepts - is also introduced. This allows the retrieval focus to shift to a new formal concept called a sibling. This movement across the concept lattice needs to relate one formal concept to another in a principled way. This paper presents the issues concerning exploring, searching, and ordering the space of related categories. The focus is on understanding the use and meaning of proximity and semantic distance in the context of information retrieval using formal concept analysis.
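    The formal concept "derived from the initial query" comes from the standard FCA derivation operators; a minimal sketch over an invented page-by-term context (SearchSleuth's actual data model is not shown in the abstract):

```python
def extent(context, attrs):
    """Derivation A': objects possessing every attribute in attrs."""
    return {g for g, a in context.items() if attrs <= a}

def intent(context, objs):
    """Derivation B': attributes shared by every object in objs."""
    sets = [context[g] for g in objs]
    return set.intersection(*sets) if sets else {a for s in context.values() for a in s}

# Invented page-by-term formal context
ctx = {
    "p1": {"search", "engine", "web"},
    "p2": {"search", "engine"},
    "p3": {"search", "lattice"},
}
query = {"search", "engine"}
ext = extent(ctx, query)  # extent of the query concept
print(sorted(ext), sorted(intent(ctx, ext)))  # → ['p1', 'p2'] ['engine', 'search']
```

Applying `intent` to the extent closes the query into a formal concept; upper and lower neighbours of that concept in the lattice are what the interface decorates as more general and more specific categories.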

  20. Hiding and finding: the relationship between visual concealment and visual search.

    PubMed

    Smilek, Daniel; Weinheimer, Laura; Kwan, Donna; Reynolds, Mike; Kingstone, Alan

    2009-11-01

    As an initial step toward developing a theory of visual concealment, we assessed whether people would use factors known to influence visual search difficulty when the degree of concealment of objects among distractors was varied. In Experiment 1, participants arranged search objects (shapes, emotional faces, and graphemes) to create displays in which the targets were in plain sight but were either easy or hard to find. Analyses of easy and hard displays created during Experiment 1 revealed that the participants reliably used factors known to influence search difficulty (e.g., eccentricity, target-distractor similarity, presence/absence of a feature) to vary the difficulty of search across displays. In Experiment 2, a new participant group searched for the targets in the displays created by the participants in Experiment 1. Results indicated that search was more difficult in the hard than in the easy condition. In Experiments 3 and 4, participants used presence versus absence of a feature to vary search difficulty with several novel stimulus sets. Taken together, the results reveal a close link between the factors that govern concealment and the factors known to influence search difficulty, suggesting that a visual search theory can be extended to form the basis of a theory of visual concealment.

  1. GWFASTA: server for FASTA search in eukaryotic and microbial genomes.

    PubMed

    Issac, Biju; Raghava, G P S

    2002-09-01

    Similarity searches are a powerful method for solving important biological problems such as database scanning, evolutionary studies, gene prediction, and protein structure prediction. FASTA is a widely used sequence comparison tool for rapid database scanning. Here we describe the GWFASTA server, which was developed to assist the FASTA user in similarity searches against partially and/or completely sequenced genomes. GWFASTA covers more than 60 microbial genomes, eight eukaryote genomes, and the proteomes of annotated genomes. In fact, it provides the maximum number of databases for similarity searching from a single platform. GWFASTA allows the submission of more than one sequence as a single query for a FASTA search. It also provides integrated post-processing of FASTA output, including compositional analysis of proteins, multiple sequence alignment, and phylogenetic analysis. Furthermore, it summarizes the search results organism-wise for prokaryotes and chromosome-wise for eukaryotes. Thus, the integration of different tools for sequence analyses makes GWFASTA a powerful tool for biologists.

  2. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    PubMed

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
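    MSblender's probability model itself is more involved than a snippet can show, but the "same false discovery rate" comparison rests on standard target-decoy FDR estimation, which can be sketched as follows (the scores are invented):

```python
def accept_at_fdr(psms, fdr_cap=0.01):
    """Target-decoy FDR estimation: walking down by score, the FDR at a
    cutoff is estimated as (#decoy hits) / (#target hits) above it.
    Return the target PSM scores in the largest score-ranked prefix whose
    estimated FDR stays within fdr_cap.
    psms: iterable of (score, is_decoy) pairs."""
    ranked = sorted(psms, key=lambda p: -p[0])
    best_cut = targets = decoys = 0
    for i, (_, is_decoy) in enumerate(ranked, 1):
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= fdr_cap:
            best_cut = i
    return [score for score, is_decoy in ranked[:best_cut] if not is_decoy]

# Ten confident target hits plus one low-scoring decoy: at a 10% FDR cap
# the decoy is tolerated and all ten targets are accepted.
psms = [(s, False) for s in range(10, 20)] + [(5, True)]
print(len(accept_at_fdr(psms, fdr_cap=0.10)))  # → 10
```

Comparing search engines (or their combination) "at the same FDR" means fixing `fdr_cap` and counting how many target PSMs each method accepts; MSblender's gain is a larger accepted set at the same cap.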

  3. MSblender: a probabilistic approach for integrating peptide identifications from multiple database search engines

    PubMed Central

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.

    2011-01-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses. PMID:21488652

  4. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    PubMed

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical: videos may not be sufficiently semantically annotated, suitable training data may be lacking, and the search requirements of the user may frequently change across tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the videos that must be watched, and visualizing aggregated information about the search results. We demonstrate the system by searching spatiotemporal attributes in sports video to identify key instances of team and player performance.

  5. Developing new mathematical method for search of the time series periodicity with deletions and insertions

    NASA Astrophysics Data System (ADS)

    Korotkov, E. V.; Korotkova, M. A.

    2017-01-01

    The purpose of this study was to detect latent periodicity in the presence of deletions or insertions in the analyzed data, when the points of deletion or insertion are unknown. A mathematical method was developed to search for periodicity in numerical series, using dynamic programming and random matrices. The developed method was applied to search for periodicity in the Euro/Dollar exchange rate since 2001. Periodicity with a period length of 24 h was shown to be present in the analyzed financial series; it can be detected only when insertions and deletions are taken into account. The results also show that the phase of the periodicity shifts depending on the observation time. The reasons for the existence of periodicity in financial series are discussed.
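    The paper's dynamic-programming method is specialized for handling indels and is not reproduced here; as a much simpler point of comparison, a plain autocorrelation scan (which handles no insertions or deletions) recovers a 24-step period in a clean synthetic hourly series:

```python
import math

def autocorrelation(series, lag):
    """Plain autocorrelation at a given lag (no indel handling)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n - lag))
    return cov / var

# Synthetic hourly series with an exact 24-step period
series = [math.sin(2 * math.pi * t / 24) for t in range(24 * 30)]
best_lag = max(range(2, 100), key=lambda lag: autocorrelation(series, lag))
print(best_lag)  # → 24
```

On real data with unknown insertions and deletions this naive scan degrades quickly, which is precisely the gap the paper's dynamic-programming approach is designed to close.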

  6. IntegromeDB: an integrated system and biological search engine

    PubMed Central

    2012-01-01

    Background: With the growth of biological data in volume and heterogeneity, web search engines are becoming key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Description: Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. Conclusions: The IntegromeDB search engine allows scanning of data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback. PMID:22260095

  7. Combinatorial Fusion Analysis for Meta Search Information Retrieval

    NASA Astrophysics Data System (ADS)

    Hsu, D. Frank; Taksa, Isak

    Leading commercial search engines are built as single event systems. In response to a particular search query, the search engine returns a single list of ranked search results. To find more relevant results the user must frequently try several other search engines. A meta search engine was developed to enhance the process of multi-engine querying. The meta search engine queries several engines at the same time and fuses individual engine results into a single search results list. The fusion of multiple search results has been shown (mostly experimentally) to be highly effective. However, the question of why and how the fusion should be done still remains largely unanswered. In this chapter, we utilize the combinatorial fusion analysis proposed by Hsu et al. to analyze combination and fusion of multiple sources of information. A rank/score function is used in the design and analysis of our framework. The framework provides a better understanding of the fusion phenomenon in information retrieval. For example, to improve the performance of the combined multiple scoring systems, it is necessary that each of the individual scoring systems has relatively high performance and the individual scoring systems are diverse. Additionally, we illustrate various applications of the framework using two examples from the information retrieval domain.
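    A rank-combination sketch in the spirit of the rank/score framework (the documents and scores are invented; Hsu's framework also covers score combination and diversity measures, which are omitted here):

```python
def ranks(scores):
    """Rank documents (1 = best) from a {doc: score} map."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {doc: r for r, doc in enumerate(ordered, 1)}

def fuse_by_rank(systems):
    """Combine scoring systems by average rank (lower is better);
    assumes every system scores the same document set."""
    rank_maps = [ranks(s) for s in systems]
    docs = set(rank_maps[0])
    mean_rank = {d: sum(r[d] for r in rank_maps) / len(rank_maps) for d in docs}
    return sorted(docs, key=mean_rank.get)

a = {"d1": 0.9, "d2": 0.5, "d3": 0.1}  # scoring system A
b = {"d1": 0.2, "d2": 0.8, "d3": 0.6}  # scoring system B
print(fuse_by_rank([a, b]))  # → ['d2', 'd1', 'd3']
```

The example illustrates the chapter's diversity point: because A and B rank the documents quite differently, the fused ordering differs from both inputs, and fusion pays off only when each input system is individually reasonable.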

  8. Guidelines and Criteria for the Search Strategy, Evaluation, Selection, and Documentation of Key Data and Supporting Data Used for the Derivation of AEGL Values

    EPA Pesticide Factsheets

    This is Section 2.3 of the Standing Operating Procedures for Developing Acute Exposure Guideline Levels (AEGLs) for Hazardous Chemicals. It discusses methodologies used to search for and select data for development of AEGL values.

  9. Development of a biomarkers database for the National Children's Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobdell, Danelle T.; Mendola, Pauline

    The National Children's Study (NCS) is a federally-sponsored, longitudinal study of environmental influences on the health and development of children across the United States (www.nationalchildrensstudy.gov). Current plans are to study approximately 100,000 children and their families beginning before birth up to age 21 years. To explore potential biomarkers that could be important measurements in the NCS, we compiled the relevant scientific literature to identify both routine or standardized biological markers as well as new and emerging biological markers. Although the search criteria encouraged examination of factors that influence the breadth of child health and development, attention was primarily focused on exposure, susceptibility, and outcome biomarkers associated with four important child health outcomes: autism and neurobehavioral disorders, injury, cancer, and asthma. The Biomarkers Database was designed to allow users to: (1) search the biomarker records compiled by type of marker (susceptibility, exposure or effect), sampling media (e.g., blood, urine, etc.), and specific marker name; (2) search the citations file; and (3) read the abstract evaluations relative to our search criteria. A searchable, user-friendly database of over 2000 articles was created and is publicly available at: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85844. PubMed was the primary source of references with some additional searches of Toxline, NTIS, and other reference databases. Our initial focus was on review articles, beginning as early as 1996, supplemented with searches of the recent primary research literature from 2001 to 2003. We anticipate this database will have applicability for the NCS as well as other studies of children's environmental health.

  10. Space shuttle search and rescue experiment using synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Sivertson, W. E., Jr.; Larson, R. W.; Zelenka, J. S.

    1977-01-01

    The feasibility of a synthetic aperture radar for search and rescue applications was demonstrated with aircraft experiments. One experiment was conducted using the ERIM four-channel radar and several test sites in the Michigan area. In this test simple corner-reflector targets were successfully imaged. Results from this investigation were positive and indicate that the concept can be used to investigate new approaches focused on the development of a global search and rescue system. An orbital experiment to demonstrate the application of synthetic aperture radar to search and rescue is proposed using the space shuttle.

  11. Large-scale database searching using tandem mass spectra: looking up the answer in the back of the book.

    PubMed

    Sadygov, Rovshan G; Cociorva, Daniel; Yates, John R

    2004-12-01

    Database searching is an essential element of large-scale proteomics. Because these methods are widely used, it is important to understand the rationale of the algorithms. Most algorithms are based on concepts first developed in SEQUEST and PeptideSearch. Four basic approaches are used to determine a match between a spectrum and sequence: descriptive, interpretative, stochastic and probability-based matching. We review the basic concepts used by most search algorithms, the computational modeling of peptide identification and current challenges and limitations of this approach for protein identification.

  12. Search prefilters to assist in library searching of infrared spectra of automotive clear coats.

    PubMed

    Lavine, Barry K; Fasasi, Ayuba; Mirjankar, Nikhil; White, Collin; Sandercock, Mark

    2015-01-01

    Clear coat searches of the infrared (IR) spectral library of the paint data query (PDQ) forensic database often generate an unusable number of hits that span multiple manufacturers, assembly plants, and years. To improve the accuracy of the hit list, pattern recognition methods have been used to develop search prefilters (i.e., principal component models) that differentiate between similar but non-identical IR spectra of clear coats on the basis of manufacturer (e.g., General Motors, Ford, Chrysler) or assembly plant. A two-step procedure was employed to develop these search prefilters. First, the discrete wavelet transform was used to decompose each IR spectrum into wavelet coefficients to enhance subtle but significant features in the spectral data. Second, a genetic algorithm for IR spectral pattern recognition was employed to identify wavelet coefficients characteristic of the manufacturer or assembly plant of the vehicle. Even in challenging trials where the paint samples evaluated were all from the same manufacturer (General Motors) within a limited production year range (2000-2006), the respective assembly plant of the vehicle was correctly identified. Search prefilters to identify assembly plants were successfully validated using 10 blind samples provided by the Royal Canadian Mounted Police (RCMP) as part of a study to populate PDQ to current production years, whereas the search prefilter to discriminate among automobile manufacturers was successfully validated using IR spectra obtained directly from the PDQ database. Copyright © 2014 Elsevier B.V. All rights reserved.
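    The first step of the two-step procedure above can be illustrated with a one-level discrete wavelet transform. The record does not name the mother wavelet, so this sketch assumes the simple Haar wavelet, and the toy spectrum is invented: a flat baseline with one small absorption-like dip, which shows up as a nonzero detail coefficient while the baseline maps to zeros.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# Toy "spectrum": flat baseline with one small dip at index 3.
spectrum = [1.0, 1.0, 1.0, 0.6, 1.0, 1.0]
approx, detail = haar_dwt(spectrum)
```

    In the real procedure, the transform is applied recursively and a genetic algorithm then selects the coefficients most diagnostic of manufacturer or plant; the point of the sketch is only that localized spectral features concentrate into a few detail coefficients.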

  13. Brazilian academic search filter: application to the scientific literature on physical activity.

    PubMed

    Sanz-Valero, Javier; Ferreira, Marcos Santos; Castiel, Luis David; Wanden-Berghe, Carmina; Guilam, Maria Cristina Rodrigues

    2010-10-01

    To develop a search filter in order to retrieve scientific publications on physical activity from Brazilian academic institutions. The academic search filter consisted of the descriptor "exercise" combined, via the operator AND, with the names of the respective academic institutions, which were connected by the operator OR. The MEDLINE search was performed with PubMed on 11/16/2008. The institutions were selected according to the classification from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for interuniversity agreements. A total of 407 references were retrieved, corresponding to about 0.9% of all articles about physical activity and 0.5% of the Brazilian academic publications indexed in MEDLINE on the search date. When compared with the manual search undertaken, the search filter (descriptor + institutional filter) showed a sensitivity of 99% and a specificity of 100%. The institutional search filter showed high sensitivity and specificity, and is applicable to other areas of knowledge in health sciences. It is desirable that every Brazilian academic institution establish its "standard name/brand" in order to efficiently retrieve their scientific literature.
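    The filter structure described in this record (a descriptor ANDed with an OR-chain of institution names) can be sketched as follows; the `[Affiliation]` field tag and the institution strings are illustrative assumptions, not the authors' validated list.

```python
def build_academic_filter(descriptor, institutions):
    """Combine a descriptor with an affiliation-based institutional filter."""
    inst_block = " OR ".join(f'"{name}"[Affiliation]' for name in institutions)
    return f"{descriptor}[MeSH Terms] AND ({inst_block})"

query = build_academic_filter(
    "exercise",
    ["Universidade de Sao Paulo", "Universidade Federal do Rio de Janeiro"],
)
print(query)
```

    Sensitivity then depends on how exhaustively each institution's name variants are enumerated, which is why the record calls for a "standard name/brand" per institution.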

  14. Real-time earthquake monitoring using a search engine method.

    PubMed

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.

  15. Real-time earthquake monitoring using a search engine method

    PubMed Central

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-01-01

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake’s parameters in <1 s after receiving the long-period surface wave data. PMID:25472861
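    The core matching idea behind such a search engine can be sketched as a best-fit scan over stored waveform templates with known source parameters. The linear scan, the cosine-similarity measure, and the toy database below are assumptions for illustration only; the actual system uses a fast approximate index, several thousand times faster than this exact search.

```python
import math

def correlate(a, b):
    """Normalized dot product (cosine similarity) of two equal-length waveforms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(observed, database):
    """Return the database record whose template best fits the observed waveform."""
    return max(database, key=lambda rec: correlate(observed, rec["waveform"]))

# Invented templates labeled with invented source parameters.
database = [
    {"params": "shallow strike-slip", "waveform": [0.0, 1.0, 0.0, -1.0]},
    {"params": "deep thrust", "waveform": [1.0, 0.0, -1.0, 0.0]},
]
obs = [0.0, 0.9, 0.1, -1.0]
print(best_match(obs, database)["params"])
```

    Looking up "the answer in the back of the book" this way is what lets the engine report location, magnitude, and focal mechanism together, rather than solving an inverse problem online.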

  16. Sequence search on a supercomputer.

    PubMed

    Gotoh, O; Tagashira, Y

    1986-01-10

    A set of programs was developed for searching nucleic acid and protein sequence data bases for sequences similar to a given sequence. The programs, written in FORTRAN 77, were optimized for vector processing on a Hitachi S810-20 supercomputer. A search of a 500-residue protein sequence against the entire PIR data base Ver. 1.0 (1) (0.5 M residues) is carried out in a CPU time of 45 sec. About 4 min is required for an exhaustive search of a 1500-base nucleotide sequence against all mammalian sequences (1.2M bases) in Genbank Ver. 29.0. The CPU time is reduced to about a quarter with a faster version.
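    Similarity search of this kind rests on dynamic-programming alignment. A minimal sketch (Smith-Waterman local alignment with illustrative unit scores, not the 1986 programs' actual algorithm or parameters) is:

```python
def local_alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    """Best local alignment score between sequences a and b (Smith-Waterman)."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            rows[i][j] = max(0, diag, rows[i - 1][j] + gap, rows[i][j - 1] + gap)
            best = max(best, rows[i][j])
    return best

def search_db(query, database):
    """Rank database sequences by similarity to the query, best first."""
    return sorted(database, key=lambda s: local_alignment_score(query, s), reverse=True)

print(search_db("ACGT", ["TTTT", "ACGA", "ACGT"])[0])
```

    The inner `max` over the three recurrence terms is the part that vectorizes well, which is what made such searches fast on a vector supercomputer.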

  17. KONAGAbase: a genomic and transcriptomic database for the diamondback moth, Plutella xylostella.

    PubMed

    Jouraku, Akiya; Yamamoto, Kimiko; Kuwazaki, Seigo; Urio, Masahiro; Suetsugu, Yoshitaka; Narukawa, Junko; Miyamoto, Kazuhisa; Kurita, Kanako; Kanamori, Hiroyuki; Katayose, Yuichi; Matsumoto, Takashi; Noda, Hiroaki

    2013-07-09

    The diamondback moth (DBM), Plutella xylostella, is one of the most harmful insect pests for crucifer crops worldwide. DBM has rapidly evolved high resistance to most conventional insecticides such as pyrethroids, organophosphates, fipronil, spinosad, Bacillus thuringiensis, and diamides. Therefore, it is important to develop genomic and transcriptomic DBM resources for analysis of genes related to insecticide resistance, both to clarify the mechanism of resistance of DBM and to facilitate the development of insecticides with a novel mode of action for more effective and environmentally less harmful insecticide rotation. To contribute to this goal, we developed KONAGAbase, a genomic and transcriptomic database for DBM (KONAGA is the Japanese word for DBM). KONAGAbase provides (1) transcriptomic sequences of 37,340 ESTs/mRNAs and 147,370 RNA-seq contigs which were clustered and assembled into 84,570 unigenes (30,695 contigs, 50,548 pseudo singletons, and 3,327 singletons); and (2) genomic sequences of 88,530 WGS contigs with 246,244 degenerate contigs and 106,455 singletons from which 6,310 de novo identified repeat sequences and 34,890 predicted gene-coding sequences were extracted. The unigenes and predicted gene-coding sequences were clustered and 32,800 representative sequences were extracted as a comprehensive putative gene set. These sequences were annotated with BLAST descriptions, Gene Ontology (GO) terms, and Pfam descriptions, respectively. KONAGAbase contains rich graphical user interface (GUI)-based web interfaces for easy and efficient searching, browsing, and downloading sequences and annotation data. Five useful search interfaces consisting of BLAST search, keyword search, BLAST result-based search, GO tree-based search, and genome browser are provided. KONAGAbase is publicly available from our website (http://dbm.dna.affrc.go.jp/px/) through standard web browsers. 
KONAGAbase provides DBM comprehensive transcriptomic and draft genomic sequences with useful annotation information with easy-to-use web interfaces, which helps researchers to efficiently search for target sequences such as insect resistance-related genes. KONAGAbase will be continuously updated and additional genomic/transcriptomic resources and analysis tools will be provided for further efficient analysis of the mechanism of insecticide resistance and the development of effective insecticides with a novel mode of action for DBM.

  18. 'Sciencenet'--towards a global search and share engine for all scientific knowledge.

    PubMed

    Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-06-15

    Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.

  19. ECOTOX knowledgebase: Search features and customized reports

    EPA Science Inventory

    The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, publicly available knowledgebase developed and maintained by ORD/NHEERL. It is used for environmental toxicity data on aquatic life, terrestrial plants and wildlife. ECOTOX has the capability to refine and filter search...

  20. The rise and fall of the medical mediated searcher

    PubMed Central

    Atlas, Michel C.

    2000-01-01

    The relationship between the development of mediated online literature searching and the recruitment of medical librarians to fill positions as online searchers was investigated. The history of database searching by medical librarians was outlined and a content analysis of thirty-five years of job advertisements in MLA News from 1961 through 1996 was summarized. Advertisements for online searchers were examined to test the hypothesis that the growth of mediated online searching was reflected in the recruitment of librarians to fill positions as mediated online searchers in medical libraries. The advent of end-user searching was also traced to determine how this trend affected the demand for mediated online searching and job availability of online searchers. Job advertisements were analyzed to determine what skills were in demand as end-user searching replaced mediated online searching as the norm in medical libraries. Finally, the trend away from mediated online searching to support of other library services was placed in the context of new roles for medical librarians. PMID:10658961

  1. WebCSD: the online portal to the Cambridge Structural Database

    PubMed Central

    Thomas, Ian R.; Bruno, Ian J.; Cole, Jason C.; Macrae, Clare F.; Pidcock, Elna; Wood, Peter A.

    2010-01-01

    WebCSD, a new web-based application developed by the Cambridge Crystallographic Data Centre, offers fast searching of the Cambridge Structural Database using only a standard internet browser. Search facilities include two-dimensional substructure, molecular similarity, text/numeric and reduced cell searching. Text, chemical diagrams and three-dimensional structural information can all be studied in the results browser using the efficient entry summaries and embedded three-dimensional viewer. PMID:22477776

  2. B* Probability Based Search

    DTIC Science & Technology

    1994-06-27

    success. The key ideas behind the algorithm are: 1. Stopping when one alternative is clearly better than all the others, and 2. Focusing the search on... search algorithm has been implemented on the chess machine Hitech. En route we have developed effective techniques for: * Dealing with independence of... report describes the implementation, and the results of tests including games played against brute-force programs. The data indicate that B* Hitech is a

  3. Target-motion prediction for robotic search and rescue in wilderness environments.

    PubMed

    Macwan, Ashish; Nejat, Goldie; Benhabib, Beno

    2011-10-01

    This paper presents a novel modular methodology for predicting a lost person's (motion) behavior for autonomous coordinated multirobot wilderness search and rescue. The new concept of isoprobability curves is introduced and developed, which represents a unique mechanism for identifying the target's probable location at any given time within the search area while accounting for influences such as terrain topology, target physiology and psychology, clues found, etc. The isoprobability curves are propagated over time and space. The significant tangible benefit of the proposed target-motion prediction methodology is demonstrated through a comparison to a nonprobabilistic approach, as well as through a simulated realistic wilderness search scenario.

  4. Where to search top-K biomedical ontologies?

    PubMed

    Oliveira, Daniela; Butt, Anila Sahar; Haller, Armin; Rebholz-Schuhmann, Dietrich; Sahay, Ratnesh

    2018-03-20

    Searching for precise terms and terminological definitions in the biomedical data space is problematic, as researchers find overlapping, closely related and even equivalent concepts in a single or multiple ontologies. Search engines that retrieve ontological resources often suggest an extensive list of search results for a given input term, which leads to the tedious task of selecting the best-fit ontological resource (class or property) for the input term and reduces user confidence in the retrieval engines. A systematic evaluation of these search engines is necessary to understand their strengths and weaknesses in different search requirements. We have implemented seven comparable Information Retrieval ranking algorithms to search through ontologies and compared them against four search engines for ontologies. Free-text queries have been performed, the outcomes have been judged by experts and the ranking algorithms and search engines have been evaluated against the expert-based ground truth (GT). In addition, we propose a probabilistic GT that is developed automatically to provide deeper insights and confidence to the expert-based GT as well as evaluating a broader range of search queries. The main outcome of this work is the identification of key search factors for biomedical ontologies together with search requirements and a set of recommendations that will help biomedical experts and ontology engineers to select the best-suited retrieval mechanism in their search scenarios. We expect that this evaluation will allow researchers and practitioners to apply the current search techniques more reliably and that it will help them to select the right solution for their daily work. The source code (of seven ranking algorithms), ground truths and experimental results are available at https://github.com/danielapoliveira/bioont-search-benchmark.
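    One classic ranking scheme of the kind compared in this benchmark can be sketched as TF-IDF scoring of a query against ontology class labels; the labels below are invented examples, not drawn from the benchmark, and this is only one of the seven implemented algorithms.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def rank(query, docs):
    """Score each doc by summed TF-IDF of the query terms; best-scoring first."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(tokenize(d)))
    scores = []
    for d in docs:
        tf = Counter(tokenize(d))       # term frequency within this doc
        score = sum(tf[t] * math.log(n / df[t]) for t in tokenize(query) if df[t])
        scores.append((score, d))
    return [d for score, d in sorted(scores, reverse=True)]

labels = ["heart disease ontology class", "heart", "lung disease"]
print(rank("lung", labels)[0])  # -> 'lung disease'
```

    Evaluating such rankers against an expert-judged ground truth is exactly what distinguishes this study from simply trusting a retrieval engine's hit list.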

  5. Near-infrared spectroscopy as an auxiliary tool in the study of child development

    PubMed Central

    de Oliveira, Suelen Rosa; Machado, Ana Carolina Cabral de Paula; de Miranda, Débora Marques; Campos, Flávio dos Santos; Ribeiro, Cristina Oliveira; Magalhães, Lívia de Castro; Bouzada, Maria Cândida Ferrarez

    2015-01-01

    OBJECTIVE: To investigate the applicability of Near-Infrared Spectroscopy (NIRS) as a cortical hemodynamic assessment tool to aid the study of child development. DATA SOURCE: The search was conducted in the PubMed and Lilacs databases using the following keywords: ''psychomotor performance/child development/growth and development/neurodevelopment/spectroscopy/near-infrared'' and their equivalents in Portuguese and Spanish. The review was performed according to criteria established by Cochrane, and the search was limited to 2003 to 2013. Articles in English, Portuguese and Spanish were included in the search. DATA SYNTHESIS: Of the 484 articles, 19 were selected: 17 cross-sectional and two longitudinal studies, published in non-Brazilian journals. The analyzed articles were grouped into functional and non-functional studies of child development. Functional studies addressed object processing, social skills development, language and cognitive development. Non-functional studies discussed the relationship between cerebral oxygen saturation and neurological outcomes, and the comparison between the cortical hemodynamic responses of preterm and term newborns. CONCLUSIONS: NIRS has become an increasingly feasible alternative and a potentially useful technique for studying functional activity of the infant brain. PMID:25862295

  6. Integrating Multilevel Command and Control into a Service Oriented Architecture to Provide Cross Domain Capability

    DTIC Science & Technology

    2006-06-01

    Horizontal Fusion, the JCDX team developed two web services, a Classification Policy Decision Service (cPDS), and a Federated Search Provider (FSP)... The cPDS web service primarily provides other systems with methods for handling labeled data such as label comparison. The federated search provider... level domains. To provide defense-in-depth, cPDS and the Federated Search Provider are implemented on a separate server known as the JCDX Web

  7. The Search for Life in the Universe: The Past Through the Future

    NASA Astrophysics Data System (ADS)

    Lebofsky, L. A.; Lebofsky, A.; Lebofsky, M.; Lebofsky, N. R.

    2003-05-01

    ``Are we alone?" This is a question that has been asked by humans for thousands of years. More than any other topic in science, the search for life in the Universe has captured everyone's imagination. Now, for the first time in history, we are on the verge of answering this question. The search for life beyond the Earth can be seen as far back as the 16th century writings of J. Kepler, Bishops F. Godwin and J. Wilkins, and S. Cyrano de Bergerac to the early 20th century's H. G. Wells. From a scientific perspective, this search led to the formulation of the Drake Equation which in turn has led to a number of projects that are searching for signs of intelligent life beyond the Earth, the Search for Extraterrestrial Intelligence. SETI@home reaches millions of users, including thousands of K-12 teachers across the nation. We are developing a project that will enhance the SETI@home web site located at UC Berkeley. The project unites the resources of the SETI@home distributed computing community web site, university settings, and informal science learning centers. It will reach approximately 100,000 learners. The goal is to increase public understanding of math and science and to create and strengthen the connections between informal and formal learning communities. We will present a variety of ways that the Drake Equation and SETI@home can enhance the public and student understanding of the search for life in the Universe, from its roots in literature, to the development (and evolution) of the Drake Equation, to the actual search for life with SETI.
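    The Drake Equation mentioned in this record is a simple product of factors, N = R* · fp · ne · fl · fi · fc · L (star formation rate, planet fraction, habitable planets per system, life fraction, intelligence fraction, communication fraction, and civilization lifetime). A sketch with placeholder inputs follows; the values are purely illustrative, not endorsed estimates.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L: estimated number of
    communicative civilizations in the Galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Toy inputs: 1 star/yr, half with planets, 2 habitable planets each,
# life on half, intelligence on 10%, communication by 10%, lasting 1000 yr.
n = drake(1.0, 0.5, 2.0, 0.5, 0.1, 0.1, 1000)
print(n)  # about 5 civilizations under these toy inputs
```

    Much of the equation's pedagogical value, as the record suggests, is in how dramatically N swings as learners vary each factor.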

  8. MIRASS: medical informatics research activity support system using information mashup network.

    PubMed

    Kiah, M L M; Zaidan, B B; Zaidan, A A; Nabi, Mohamed; Ibraheem, Rabiu

    2014-04-01

    The advancement of information technology has facilitated the automation and feasibility of online information sharing. The second generation of the World Wide Web (Web 2.0) enables the collaboration and sharing of online information through Web-serving applications. Data mashup, which is considered a Web 2.0 platform, plays an important role in information and communication technology applications. However, few ideas have been transformed into education and research domains, particularly in medical informatics. The creation of a friendly environment for medical informatics research requires the removal of certain obstacles in terms of search time, resource credibility, and search result accuracy. This paper considers three glitches that researchers encounter in medical informatics research; these glitches include the quality of papers obtained from scientific search engines (particularly, Web of Science and Science Direct), the quality of articles from the indices of these search engines, and the customizability and flexibility of these search engines. A customizable search engine for trusted resources of medical informatics was developed and implemented through data mashup. Results show that the proposed search engine improves the usability of scientific search engines for medical informatics. The Pipe search engine was found to be more efficient than the other engines.

  9. Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine

    ERIC Educational Resources Information Center

    Alghoson, Abdullah

    2017-01-01

    Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…

  10. A systematic review of micro correlates of maternal mortality.

    PubMed

    Yakubu, Yahaya; Mohamed Nor, Norashidah; Abidin, Emilia Zainal

    2018-05-05

    In the year 2000, the World Health Organization launched the Millennium Development Goals (MDGs) which were to be achieved in 2015. Though most of the goals were not achieved, a follow-up post 2015 development agenda, the Sustainable Development Goals (SDGs), was launched in 2015, to be achieved by 2030. Maternal mortality reduction is a focal goal in both the MDGs and SDGs. Achieving the maternal mortality target in the SDGs requires multiple approaches, particularly in developing countries with high maternal mortality. Low-income developing countries rely to a great extent on macro determinants such as public health expenditure, which are spent mostly on curative health and health facilities, to improve population health. To complement the macro determinants, this study employs the systematic review technique to reveal significant micro correlates of maternal mortality. The study searched MEDLINE, PubMed, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Science Direct, and Global Index Medicus of the World Health Organization. The search was restricted to the period from 1 January 2000 to 30 September 2016. In the overall search result, 6758 articles were identified, out of which 33 were found to be eligible for the review. The outcome of the systematic search for relevant literature revealed a concentration of literature on the micro factors and maternal mortality in developing countries. This shows that maternal mortality and micro factors are a major issue in developing countries. The studies reviewed support the significant relationship between the micro factors and maternal mortality. This study therefore suggests that more effort should be channelled to improving the micro factors in developing countries to pave the way for the timely achievement of the SDGs' maternal mortality ratio (MMR) target.

  11. Bengali-English Relevant Cross Lingual Information Access Using Finite Automata

    NASA Astrophysics Data System (ADS)

    Banerjee, Avishek; Bhattacharyya, Swapan; Hazra, Simanta; Mondal, Shatabdi

    2010-10-01

    CLIR techniques search unrestricted texts and typically extract terms and relationships from bilingual electronic dictionaries or bilingual text collections, using them to translate query and/or document representations into a compatible set of representations with a common feature set. In this paper, we focus on a dictionary-based approach that combines a bilingual data dictionary with statistics-based methods to avoid the problem of ambiguity; the development of the human-computer interface aspects of NLP (Natural Language Processing) is also an aim of this paper. Intelligent web search in a regional language such as Bengali depends on two major aspects: CLIA (Cross Language Information Access) and NLP. In our previous work with IIT, KGP we developed content-based CLIA, in which content-based searching is trained on Bengali corpora with the help of a Bengali data dictionary. Here we introduce intelligent search in order to recognize the sense of the meaning of a sentence, which gives a better real-life approach to human-computer interaction.

  12. A method for the design and development of medical or health care information websites to optimize search engine results page rankings on Google.

    PubMed

    Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-08-27

    The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
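    The readability side of the assessment above uses standard scores such as Flesch Reading Ease, which can be sketched as follows. The formula (206.835 − 1.015 × words/sentence − 84.6 × syllables/word) is the standard one; the vowel-group syllable counter is a rough heuristic and an assumption, not the authors' implementation.

```python
import re

def count_syllables(word):
    """Rough heuristic: one syllable per run of vowels (including y)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Standard Flesch Reading Ease: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("The cat sat. The dog ran."))
```

    Pairing such a score with the content-quality criteria of the WQA tool is what allowed the study to correlate both dimensions with Google SERP rankings.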

  13. Effectiveness of job search interventions: a meta-analytic review.

    PubMed

    Liu, Songqi; Huang, Jason L; Wang, Mo

    2014-07-01

    The current meta-analytic review examined the effectiveness of job search interventions in facilitating job search success (i.e., obtaining employment). Major theoretical perspectives on job search interventions, including behavioral learning theory, theory of planned behavior, social cognitive theory, and coping theory, were reviewed and integrated to derive a taxonomy of critical job search intervention components. Summarizing the data from 47 experimentally or quasi-experimentally evaluated job search interventions, we found that the odds of obtaining employment were 2.67 times higher for job seekers participating in job search interventions compared to job seekers in the control group, who did not participate in such intervention programs. Our moderator analysis also suggested that job search interventions that contained certain components, including teaching job search skills, improving self-presentation, boosting self-efficacy, encouraging proactivity, promoting goal setting, and enlisting social support, were more effective than interventions that did not include such components. More important, job search interventions effectively promoted employment only when both skill development and motivation enhancement were included. In addition, we found that job search interventions were more effective in helping younger and older (vs. middle-aged) job seekers, short-term (vs. long-term) unemployed job seekers, and job seekers with special needs and conditions (vs. job seekers in general) to find employment. Furthermore, meta-analytic path analysis revealed that increased job search skills, job search self-efficacy, and job search behaviors partially mediated the positive effect of job search interventions on obtaining employment. Theoretical and practical implications and future research directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
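    The headline effect size in this record is an odds ratio. As a reminder of how such a figure arises from a single study's 2x2 table (employment outcome by intervention vs. control), here is a minimal sketch; the counts are invented for illustration, not taken from the meta-analysis.

```python
def odds_ratio(treat_success, treat_failure, control_success, control_failure):
    """Odds of success in the intervention group divided by odds in the control group."""
    return (treat_success / treat_failure) / (control_success / control_failure)

# 60 of 100 intervention participants employed vs. 40 of 100 controls:
# odds are 60/40 = 1.5 vs 40/60 = 0.667, giving OR = 2.25.
print(odds_ratio(60, 40, 40, 60))
```

    A meta-analysis then pools such per-study ratios (typically on the log scale, weighted by precision) to produce a summary figure like the 2.67 reported here.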

  14. HBVPathDB: a database of HBV infection-related molecular interaction network.

    PubMed

    Zhang, Yi; Bo, Xiao-Chen; Yang, Jing; Wang, Sheng-Qi

    2005-03-21

    To describe molecular and gene interactions between hepatitis B virus (HBV) and its host, in order to understand how viral and host genes and molecules are networked to form a biological system and to elucidate the mechanism of HBV infection. The knowledge of HBV infection-related reactions was organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored in a relational database management system (DBMS), which is currently the most efficient way to manage large amounts of data, and querying is implemented with the Structured Query Language (SQL). The search engine is written using Personal Home Page (PHP) with embedded SQL, and a web retrieval interface for searching is developed in Hypertext Markup Language (HTML). We present the first version of HBVPathDB, an HBV infection-related molecular interaction network database composed of 306 pathways involving 1 050 molecules. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We developed an easy-to-use interface for flexible access to the details of the database. Convenient software is implemented to query and browse the pathway information of HBVPathDB. Four search page layout options (category search, gene search, description search, and unitized search) are supported by the search engine of the database. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. HBVPathDB already contains a considerable amount of HBV infection-related pathway information, which is suitable for in-depth analysis of the molecular interaction network of virus and host. HBVPathDB integrates pathway datasets with convenient software for querying, browsing, and visualization, giving users more opportunity to identify key regulatory molecules as potential drug targets and to explore the possible mechanism of HBV infection based on gene expression datasets.
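    The relational storage and SQL querying described in this record can be sketched with an in-memory database. The schema, table names, and rows below are invented for illustration, not taken from HBVPathDB; the query mimics the "gene search" layout, which finds pathways involving a given molecule.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pathway (id INTEGER PRIMARY KEY, name TEXT, category TEXT)")
conn.execute("CREATE TABLE molecule (id INTEGER, pathway_id INTEGER, symbol TEXT)")
conn.executemany("INSERT INTO pathway VALUES (?, ?, ?)", [
    (1, "HBx-mediated signalling", "signal transduction"),
    (2, "Interferon response", "immune response"),
])
conn.executemany("INSERT INTO molecule VALUES (?, ?, ?)", [
    (1, 1, "HBx"), (2, 1, "TP53"), (3, 2, "STAT1"),
])

# 'Gene search': which pathways involve a given molecule?
rows = conn.execute(
    "SELECT p.name FROM pathway p JOIN molecule m ON m.pathway_id = p.id "
    "WHERE m.symbol = ?", ("TP53",)
).fetchall()
print(rows)
```

    The other search layouts (category, description, unitized) differ only in which columns the WHERE clause constrains.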

  15. Development of a Google-based search engine for data mining radiology reports.

    PubMed

    Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul

    2009-08-01

The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine, and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. A total of 2.9 million text reports, comprising 7.9 GB of disk space, were downloaded from our radiology information system to a file server. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a Practical Extraction and Report Language (Perl) script, which presented ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
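The parse-and-log step the abstract describes (a script interpreting the search engine's XML response while recording each query for HIPAA compliance) can be sketched as follows. This uses Python instead of Perl, and the XML layout below is a simplified invention, not Google Desktop's actual response schema:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Hypothetical, simplified results XML; the real Google Desktop schema
# and these report titles/snippets are not from the study.
SAMPLE_XML = """<results>
  <result><title>CT chest 2008-01-02</title><snippet>small apical pneumothorax</snippet></result>
  <result><title>CXR 2008-03-09</title><snippet>no pneumothorax identified</snippet></result>
</results>"""

audit_log = []  # a real system would use a persistent, access-controlled log

def search(query, reason, xml_response):
    """Parse ranked results from an XML response and record an audit entry."""
    root = ET.fromstring(xml_response)
    hits = [(r.findtext("title"), r.findtext("snippet"))
            for r in root.iter("result")]
    audit_log.append({"time": datetime.now(timezone.utc).isoformat(),
                      "query": query, "reason": reason,
                      "n_results": len(hits)})
    return hits

hits = search("pneumothorax", "operations review", SAMPLE_XML)
print(len(hits), audit_log[-1]["query"])  # 2 pneumothorax
```

Logging the reason for each search alongside the query is what makes the audit trail useful for compliance review, not just debugging.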

  16. A knowledge based search tool for performance measures in health care systems.

    PubMed

    Beyan, Oya D; Baykal, Nazife

    2012-02-01

Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool that identifies appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed to represent domain knowledge and support reasoning, and knowledge-based decision support techniques were applied to cope with uncertainty. The result is a tool that simplifies the performance indicator search process and returns the most relevant indicators by employing knowledge-based systems.

  17. Generating "fragment-based virtual library" using pocket similarity search of ligand-receptor complexes.

    PubMed

    Khashan, Raed S

    2015-01-01

As the number of available ligand-receptor complexes increases, researchers are becoming more dedicated to mining these complexes to aid the drug design and development process. We present free software, developed as a tool for performing similarity searches across ligand-receptor complexes, that identifies binding pockets similar to that of a target receptor. The search is based on the 3D-geometric and chemical similarity of the atoms forming the binding pocket. For each match identified, the ligand fragment(s) corresponding to that binding pocket are extracted, forming a virtual library of fragments (FragVLib) that is useful for structure-based drug design. The program provides a very useful tool for exploring available databases.
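The core idea of matching pockets by combined chemical and 3D-geometric similarity can be illustrated with a toy function. FragVLib's actual algorithm is more sophisticated; the pocket representation, tolerance, scoring, and the assumption of pre-aligned coordinates below are all simplifications for illustration:

```python
from math import dist

# Toy pocket similarity: each pocket is a list of (element, (x, y, z)) atoms,
# assumed already superimposed in a common frame. Score = fraction of atoms
# in pocket_a that have a same-element partner in pocket_b within `tol`
# angstroms. Pockets and tolerance are invented for illustration.
def pocket_similarity(pocket_a, pocket_b, tol=1.0):
    if not pocket_a:
        return 0.0
    matched = 0
    for elem, xyz in pocket_a:
        if any(e == elem and dist(xyz, p) <= tol for e, p in pocket_b):
            matched += 1
    return matched / len(pocket_a)

pocket1 = [("N", (0.0, 0.0, 0.0)), ("O", (1.5, 0.0, 0.0)), ("C", (0.0, 2.0, 0.0))]
pocket2 = [("N", (0.1, 0.0, 0.0)), ("O", (1.4, 0.2, 0.0)), ("S", (5.0, 5.0, 5.0))]
print(round(pocket_similarity(pocket1, pocket2), 2))  # 0.67
```

Requiring both the element to match and the position to fall within tolerance is what distinguishes this from a purely geometric overlap score.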

  18. Development and Testing of a New Area Search Model with Partially Overlapping Target and Searcher Patrol Areas

    DTIC Science & Technology

    2008-12-01

f_T(t) = 2VR/A for t in [0, A/(2VR)]; F_T(t) = 2VRt/A for t in [0, A/(2VR)], and F_T(t) = 1 for t > A/(2VR) (2). The mean time to detection is given by [...] as follows. Exhaustive Search (black line): F_T(t) = 2VRt/A_s. Random Search (pink line): F_T(t) = 1 - (1 - πR²/A_s) exp(-2RVt/A_s).
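The two detection laws quoted in the excerpt above, for a searcher of speed V and detection radius R (sweep width 2R) in a patrol area A_s, can be computed directly. The formulas match standard search theory; the numerical values below are illustrative, not from the report:

```python
import math

# Cumulative probability of detection by time t under the two search laws
# named in the excerpt. V = searcher speed, R = detection radius,
# A_s = patrol area; the parameter values are invented for illustration.
def exhaustive_cdf(t, V, R, A_s):
    """Exhaustive (systematic) sweep: linear until the area is covered."""
    return min(2.0 * V * R * t / A_s, 1.0)

def random_cdf(t, V, R, A_s):
    """Random search of the same area: exponential approach to 1."""
    return 1.0 - (1.0 - math.pi * R**2 / A_s) * math.exp(-2.0 * R * V * t / A_s)

V, R, A_s = 5.0, 0.5, 100.0            # speed, detection radius, area
t_half = A_s / (2 * V * R) / 2         # half the exhaustive sweep time
print(exhaustive_cdf(t_half, V, R, A_s))   # 0.5
print(exhaustive_cdf(2 * A_s, V, R, A_s))  # 1.0
print(round(random_cdf(t_half, V, R, A_s), 3))
```

At any given time the exhaustive sweep dominates the random search, which is the comparison the black and pink curves in the report draw.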

  19. Advances in metaheuristics for gene selection and classification of microarray data.

    PubMed

    Duval, Béatrice; Hao, Jin-Kao

    2010-01-01

    Gene selection aims at identifying a (small) subset of informative genes from the initial data in order to obtain high predictive accuracy for classification. Gene selection can be considered as a combinatorial search problem and thus be conveniently handled with optimization methods. In this article, we summarize some recent developments of using metaheuristic-based methods within an embedded approach for gene selection. In particular, we put forward the importance and usefulness of integrating problem-specific knowledge into the search operators of such a method. To illustrate the point, we explain how ranking coefficients of a linear classifier such as support vector machine (SVM) can be profitably used to reinforce the search efficiency of Local Search and Evolutionary Search metaheuristic algorithms for gene selection and classification.
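The idea of using a linear classifier's ranking coefficients to steer gene selection can be sketched in a few lines. A simple perceptron stands in for the SVM the abstract names, and the expression data are synthetic (gene 0 carries the class signal by construction); this illustrates the principle, not the authors' embedded metaheuristic:

```python
import random

# Rank genes by the magnitude of a trained linear classifier's weights,
# the signal an embedded search operator can exploit. Data are synthetic.
random.seed(0)

def train_linear(X, y, epochs=50, lr=0.1):
    """Perceptron training; returns the weight vector."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else -1
            if pred != yi:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

X = [[2.0 * cls + random.gauss(0, 0.1)] +       # gene 0: informative
     [random.gauss(0, 0.5) for _ in range(3)]   # genes 1-3: noise
     for cls in (1, -1) for _ in range(20)]
y = [1] * 20 + [-1] * 20

w = train_linear(X, y)
ranking = sorted(range(len(w)), key=lambda j: abs(w[j]), reverse=True)
print("top-ranked gene:", ranking[0])  # gene 0 carries the signal
```

In an embedded search, this ranking would bias which genes a local-search move adds or removes, rather than being used once as a filter.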

  20. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.

    PubMed

    Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2017-09-13

    Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.

  1. Use of Emotional Cues for Lexical Learning: A Comparison of Autism Spectrum Disorder and Fragile X Syndrome

    PubMed Central

    Thurman, Angela John; McDuffie, Andrea; Kover, Sara T.; Hagerman, Randi; Channell, Marie Moore; Mastergeorge, Ann; Abbeduto, Leonard

    2014-01-01

The present study evaluated the ability of males with fragile X syndrome (FXS), nonsyndromic autism spectrum disorder (ASD), or typical development to learn new words by using, as a cue to the intended referent, an emotional reaction indicating a successful (excitement) or unsuccessful (disappointment) search for a novel object. Performance for all groups exceeded chance levels in both search conditions. In the Successful Search condition, participants with nonsyndromic ASD performed similarly to participants with FXS after controlling for severity of ASD. In the Unsuccessful Search condition, participants with FXS performed significantly worse than participants with nonsyndromic ASD, after controlling for severity of ASD. Predictors of performance in both search conditions differed among the three groups. Theoretical and clinical implications are discussed. PMID:25318904

  2. A hierarchical transition state search algorithm

    NASA Astrophysics Data System (ADS)

    del Campo, Jorge M.; Köster, Andreas M.

    2008-07-01

A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double-ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.

  3. Factors Affecting Infants’ Manual Search for Occluded Objects and the Genesis of Object Permanence

    PubMed Central

    Moore, M. Keith; Meltzoff, Andrew N.

    2009-01-01

    Two experiments systematically examined factors that influence infants’ manual search for hidden objects (N = 96). Experiment 1 used a new procedure to assess infants’ search for partially versus totally occluded objects. Results showed that 8.75-month-old infants solved partial occlusions by removing the occluder and uncovering the object, but these same infants failed to use this skill on total occlusions. Experiment 2 used sound-producing objects to provide a perceptual clue to the objects’ hidden location. Sound clues significantly increased the success rate on total occlusions for 10-month-olds, but not for 8.75-month-olds. An identity development account is offered for why infants succeed on partial occlusions earlier than total occlusions and why sound helps only the older infants. We propose a mechanism for how infants use object identity as a basis for developing a notion of permanence. Implications are drawn for understanding the dissociation between looking-time and search assessments of object permanence. PMID:18036668

  4. Factors affecting infants' manual search for occluded objects and the genesis of object permanence.

    PubMed

    Moore, M Keith; Meltzoff, Andrew N

    2008-04-01

    Two experiments systematically examined factors that influence infants' manual search for hidden objects (N=96). Experiment 1 used a new procedure to assess infants' search for partially versus totally occluded objects. Results showed that 8.75-month-old infants solved partial occlusions by removing the occluder and uncovering the object, but these same infants failed to use this skill on total occlusions. Experiment 2 used sound-producing objects to provide a perceptual clue to the objects' hidden location. Sound clues significantly increased the success rate on total occlusions for 10-month-olds, but not for 8.75-month-olds. An identity development account is offered for why infants succeed on partial occlusions earlier than total occlusions and why sound helps only the older infants. We propose a mechanism for how infants use object identity as a basis for developing a notion of permanence. Implications are drawn for understanding the dissociation between looking time and search assessments of object permanence.

  5. Sequence tagging reveals unexpected modifications in toxicoproteomics

    PubMed Central

    Dasari, Surendra; Chambers, Matthew C.; Codreanu, Simona G.; Liebler, Daniel C.; Collins, Ben C.; Pennington, Stephen R.; Gallagher, William M.; Tabb, David L.

    2010-01-01

Toxicoproteomic samples are rich in posttranslational modifications (PTMs) of proteins. Identifying these modifications via standard database searching can incur significant performance penalties. Here we describe the latest developments in TagRecon, an algorithm that leverages inferred sequence tags to identify modified peptides in toxicoproteomic data sets. TagRecon identifies known modifications more effectively than the MyriMatch database search engine, and it outperformed state-of-the-art software in recognizing unanticipated modifications from LTQ, Orbitrap, and QTOF data sets. We developed user-friendly software for detecting persistent mass shifts in samples. We follow a three-step strategy for detecting unanticipated PTMs. First, we identify the proteins present in the sample with a standard database search. Next, identified proteins are interrogated for unexpected PTMs with a sequence tag-based search. Finally, additional evidence is gathered for the detected mass shifts with a refinement search. Application of this technology to toxicoproteomic data sets revealed unintended cross-reactions between proteins and sample processing reagents. Twenty-five proteins in rat liver showed signs of oxidative stress when exposed to potentially toxic drugs. These results demonstrate the value of mining toxicoproteomic data sets for modifications. PMID:21214251
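The "persistent mass shift" detection step can be sketched as a binning exercise: collect observed-minus-theoretical peptide mass differences, bin them, and report shifts that recur. The bin width, threshold, and deltas below are illustrative, not TagRecon's actual parameters:

```python
from collections import Counter

# Bin mass deltas (observed minus theoretical, in Da) to the nearest
# bin_width and report shifts seen at least min_count times. The tolerance
# and example deltas are invented for illustration.
def persistent_shifts(deltas, bin_width=0.01, min_count=3):
    """Return (binned_shift, count) pairs occurring at least min_count times."""
    binned = Counter(round(d / bin_width) * bin_width for d in deltas)
    return sorted((round(s, 2), c) for s, c in binned.items() if c >= min_count)

# A ~15.99 Da shift (oxidation) recurs; the other deltas are scattered noise.
deltas = [15.991, 15.992, 15.993, 0.984, 42.011, 15.994, -0.002]
print(persistent_shifts(deltas))  # [(15.99, 4)]
```

A shift that recurs across many peptides is far more likely to be a genuine modification or reagent adduct than a one-off mass error, which is the rationale for the refinement search that follows.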

  6. Getting the fundamentals of movement: a meta-analysis of the effectiveness of motor skill interventions in children.

    PubMed

    Logan, S W; Robinson, L E; Wilson, A E; Lucas, W A

    2012-05-01

The development of fundamental movement skills (FMS) is associated with positive health-related outcomes. Children do not develop FMS naturally through maturational processes; these skills need to be learned, practised and reinforced. The objective was to determine the effectiveness of motor skill interventions in children. The following databases were searched for relevant articles: Academic Search Premier, PsycArticles, PsycInfo, SportDiscus and ERIC. No date range was specified, and each search was conducted to include all possible years of publication specific to each database. Key terms for the search included motor, skill, movement, intervention, programme, or children. Searches were conducted using single and combined terms. Pertinent journals and article reference lists were also manually searched. Studies were included if they reported: (1) implementation of any type of motor skill intervention; (2) pre- and post-qualitative assessment of FMS; and (3) availability of means and standard deviations of motor performance. A significant positive effect of motor skill interventions on the improvement of FMS in children was found (d = 0.39, P < 0.001). Results indicate that object control (d = 0.41, P < 0.001) and locomotor skills (d = 0.45, P < 0.001) improved similarly from pre- to post-intervention. The overall effect size for control groups (i.e. free play) was not significant (d = 0.06, P = 0.33). A Pearson correlation indicated a non-significant (P = 0.296), negative correlation (r = -0.18) between the effect size of pre- to post-intervention improvement of FMS and the duration of the intervention (in minutes). Motor skill interventions are effective in improving FMS in children. Early childhood education centres should implement 'planned' movement programmes as a strategy to promote motor skill development in children. © 2011 Blackwell Publishing Ltd.
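The effect sizes pooled in a meta-analysis like this are standardized mean differences; Cohen's d with a pooled standard deviation is the usual form. The sample scores below are invented for illustration:

```python
from statistics import mean, stdev

# Cohen's d: standardized mean difference between two groups using the
# pooled standard deviation. The pre/post scores are invented.
def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group2) - mean(group1)) / pooled_sd

pre = [10, 12, 11, 13, 9, 12]    # hypothetical pre-intervention FMS scores
post = [13, 14, 12, 15, 12, 14]  # hypothetical post-intervention scores
print(round(cohens_d(pre, post), 2))  # 1.61
```

Dividing by the pooled standard deviation is what makes effect sizes from studies using different assessment instruments comparable enough to pool.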

  7. Installation Restoration Program. Phase II: Stage 1 Problem Confirmation Study, Duluth International Airport, Duluth, Minnesota.

    DTIC Science & Technology

    1984-10-01

Excerpt (table of contents): Appendix A - Acronyms, Definitions, Nomenclature and Units of Measure; Appendix B - Scope of Work. Program phases: Phase I - Problem Identification/Records Search; Phase II - Problem Confirmation and Quantification; Phase III - Technology Base Development; Phase IV - Corrective Action.

  8. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
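The harmony search heuristic the abstract builds on can be shown in minimal form. The objective, bounds, and parameter values below are illustrative (the paper's "almost-parameter-free" variant adapts such parameters automatically, which this sketch does not):

```python
import random

# Minimal harmony search minimizing a toy objective. HMS = harmony memory
# size, HMCR = memory considering rate, PAR = pitch adjusting rate; the
# classic fixed values used here are illustrative.
random.seed(1)

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000):
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if random.random() < hmcr:              # draw from memory
                x = random.choice(memory)[j]
                if random.random() < par:           # pitch adjustment
                    x += random.uniform(-0.05, 0.05) * (hi - lo)
            else:                                   # random consideration
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                     # replace worst harmony
    return min(memory, key=f)

sphere = lambda v: sum(x * x for x in v)
best = harmony_search(sphere, dim=3, bounds=(-5.0, 5.0))
print(sphere(best) < 0.5)
```

In the source-identification setting, f would be the misfit between simulated and observed contaminant concentrations rather than this toy sphere function.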

  9. ACMES: fast multiple-genome searches for short repeat sequences with concurrent cross-species information retrieval

    PubMed Central

    Reneker, Jeff; Shyu, Chi-Ren; Zeng, Peiyu; Polacco, Joseph C.; Gassmann, Walter

    2004-01-01

    We have developed a web server for the life sciences community to use to search for short repeats of DNA sequence of length between 3 and 10 000 bases within multiple species. This search employs a unique and fast hash function approach. Our system also applies information retrieval algorithms to discover knowledge of cross-species conservation of repeat sequences. Furthermore, we have incorporated a part of the Gene Ontology database into our information retrieval algorithms to broaden the coverage of the search. Our web server and tutorial can be found at http://acmes.rnet.missouri.edu. PMID:15215469
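A hash-based repeat search of the kind the server performs can be sketched with a dictionary index: record every k-mer position in each sequence, then report k-mers occurring more than once. The sequences are invented, and ACMES's actual hash function is not reproduced here:

```python
from collections import defaultdict

# Index every k-mer position across multiple sequences, then keep the
# k-mers seen more than once (within or across species). Illustrative data.
def find_repeats(sequences, k):
    """Map each repeated k-mer to its (sequence name, offset) occurrences."""
    index = defaultdict(list)
    for name, seq in sequences.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].append((name, i))
    return {kmer: hits for kmer, hits in index.items() if len(hits) > 1}

seqs = {"spA": "GATTACAGATTACA", "spB": "TTGATTACCG"}
repeats = find_repeats(seqs, 4)
print(repeats["GATT"])  # [('spA', 0), ('spA', 7), ('spB', 2)]
```

Because the index keys carry the sequence name, cross-species conservation of a repeat falls out for free: a k-mer whose hit list spans multiple names is shared between species.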

  10. Search Regimes and the Industrial Dynamics of Science

    ERIC Educational Resources Information Center

    Bonaccorsi, Andrea

    2008-01-01

The article addresses the issue of the dynamics of science, in particular of new sciences born in the twentieth century and developed after the Second World War (information science, materials science, life science). The article develops the notion of a search regime as an abstract characterization of dynamic patterns, based on three dimensions: the rate of…

  11. PIRIA: a general tool for indexing, search, and retrieval of multimedia content

    NASA Astrophysics Data System (ADS)

    Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.

    2004-05-01

The Internet is a continuously expanding source of multimedia content and information. Many products are in development to search, retrieve, and understand multimedia content, but most current image search/retrieval engines rely on an image database manually pre-indexed with keywords. Computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. The indexed images are compared according to several different classifiers, not only keywords but also form, color, and texture, taking into account geometric transformations and invariances such as rotation, symmetry, and mirroring. Form: edges extracted by an efficient segmentation algorithm. Color: histogram, semantic color segmentation, and spatial color relationships. Texture: texture wavelets and local edge patterns. If required, Piria is also able to fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC), or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications are explored and discussed, as well as current and future development.
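Query-by-example ranking over one of the signature types mentioned (the color histogram) can be sketched simply. The "images" below are tiny invented grids of palette indices; Piria's real signatures are far richer:

```python
# Rank database images by L1 distance between normalized color histograms,
# a toy version of one classifier in a query-by-example engine.
def histogram(pixels, n_bins):
    h = [0] * n_bins
    for p in pixels:
        h[p] += 1
    total = sum(h)
    return [c / total for c in h]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def rank_by_similarity(query, database, n_bins=4):
    hq = histogram(query, n_bins)
    scored = [(l1_distance(hq, histogram(img, n_bins)), name)
              for name, img in database.items()]
    return [name for _, name in sorted(scored)]

query = [0, 0, 1, 1, 2, 2, 3, 3]             # balanced palette
db = {"sunset": [2, 2, 2, 2, 3, 3, 3, 3],    # warm palette
      "checker": [0, 1, 0, 1, 2, 3, 2, 3],   # balanced palette
      "night": [0, 0, 0, 0, 0, 0, 0, 1]}     # dark palette
print(rank_by_similarity(query, db))  # ['checker', 'sunset', 'night']
```

Fusing multiple classifiers, as Piria's SISC/SIMC/MISC/MIMC modes do, amounts to combining several such distance rankings rather than relying on one signature alone.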

  12. Economic Recession and Obesity-Related Internet Search Behavior in Taiwan: Analysis of Google Trends Data

    PubMed Central

    2018-01-01

Background: Obesity is highly correlated with the development of chronic diseases and has become a critical public health issue that must be countered by aggressive action. This study determined whether data from Google Trends could provide insight into trends in obesity-related search behaviors in Taiwan. Objective: Using Google Trends, we examined how changes in economic conditions (using business cycle indicators as a proxy) were associated with people's internet search behaviors related to obesity awareness, health behaviors, and fast food restaurants. Methods: Monthly business cycle indicators were obtained from the Taiwan National Development Council. Weekly Taiwan Stock Exchange (TWSE) weighted index data were accessed and downloaded from Yahoo Finance. The weekly relative search volumes (RSV) of obesity-related terms were downloaded from Google Trends. RSVs of obesity-related terms and the TWSE from January 2007 to December 2011 (60 months) were analyzed using correlation analysis. Results: During an economic recession, the RSV of obesity awareness and health behaviors declined (r=.441, P<.001; r=.593, P<.001, respectively); however, the RSV for fast food restaurants increased (r=−.437, P<.001). Findings indicated that when the economy was faltering, people tended to be less likely to search for information related to health behaviors and obesity awareness; moreover, they were more likely to search for fast food restaurants. Conclusions: Macroeconomic conditions can have an impact on people's health-related internet searches. PMID:29625958
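The correlation analysis reported here is a Pearson correlation between two time series. The short series below are invented to mimic the pattern the study reports (health searches falling with the index, fast food searches rising); the study itself used weekly data over 60 months:

```python
from math import sqrt

# Pearson correlation between a stock index series and relative search
# volume (RSV) series. All values below are invented for illustration.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

twse = [7800, 7500, 7200, 6800, 6500]   # falling index (hypothetical)
rsv_health = [62, 60, 55, 50, 47]       # health searches fall with it
rsv_fastfood = [40, 42, 47, 51, 55]     # fast food searches rise
print(round(pearson_r(twse, rsv_health), 2))    # 0.99
print(round(pearson_r(twse, rsv_fastfood), 2))  # -0.99
```

A positive r with the index means the search volume moves with the economy; a negative r, as with fast food searches, means it moves against it.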

  13. Economic Recession and Obesity-Related Internet Search Behavior in Taiwan: Analysis of Google Trends Data.

    PubMed

    Wang, Ho-Wei; Chen, Duan-Rung

    2018-04-06

    Obesity is highly correlated with the development of chronic diseases and has become a critical public health issue that must be countered by aggressive action. This study determined whether data from Google Trends could provide insight into trends in obesity-related search behaviors in Taiwan. Using Google Trends, we examined how changes in economic conditions-using business cycle indicators as a proxy-were associated with people's internet search behaviors related to obesity awareness, health behaviors, and fast food restaurants. Monthly business cycle indicators were obtained from the Taiwan National Development Council. Weekly Taiwan Stock Exchange (TWSE) weighted index data were accessed and downloaded from Yahoo Finance. The weekly relative search volumes (RSV) of obesity-related terms were downloaded from Google Trends. RSVs of obesity-related terms and the TWSE from January 2007 to December 2011 (60 months) were analyzed using correlation analysis. During an economic recession, the RSV of obesity awareness and health behaviors declined (r=.441, P<.001; r=.593, P<.001, respectively); however, the RSV for fast food restaurants increased (r=-.437, P<.001). Findings indicated that when the economy was faltering, people tended to be less likely to search for information related to health behaviors and obesity awareness; moreover, they were more likely to search for fast food restaurants. Macroeconomic conditions can have an impact on people's health-related internet searches. ©Ho-Wei Wang, Duan-Rung Chen. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 06.04.2018.

  14. Time limited field of regard search

    NASA Astrophysics Data System (ADS)

    Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho

    2005-05-01

    Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.

  15. Teaching Data Base Search Strategies.

    ERIC Educational Resources Information Center

    Hannah, Larry

    1987-01-01

    Discusses database searching as a method for developing thinking skills, and describes an activity suitable for fifth grade through high school using a president's and vice president's database. Teaching methods are presented, including student team activities, and worksheets designed for the AppleWorks database are included. (LRW)

  16. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

Adapting novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others, such as the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms, for global optimization. Moreover, the GGS approach has been applied to computational chemistry problems of finding the minimal potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
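The gradient-refinement idea behind GGS (drive each candidate down the analytical gradient to its nearest local minimum, then keep the best) can be illustrated on a one-dimensional double well. The objective, step size, and starting points below are invented, not the paper's benchmark problems, and the population dynamics of the full gravitational search are omitted:

```python
# Each candidate follows the analytical gradient to the nearest local
# minimum of a double-well objective; the best minimum is retained.
# Objective, learning rate, and candidates are illustrative.
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x      # double well, minima near ±1

def grad_f(x):
    return 4.0 * x * (x * x - 1.0) + 0.3     # analytical gradient

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

candidates = [-2.0, -0.5, 0.5, 2.0]          # stand-in for a search population
minima = [descend(x) for x in candidates]
best = min(minima, key=f)
print(round(best, 2))  # the deeper well, near -1
```

The tilt term 0.3x makes the left well deeper, so candidates starting on either side find different minima and the final selection step matters.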

  17. Quantification of Urbanization in Relation to Chronic Diseases in Developing Countries: A Systematic Review

    PubMed Central

    Foster, Charlie; Hutchinson, Lauren; Arambepola, Carukshi

    2008-01-01

During and beyond the twentieth century, urbanization has represented a major demographic shift, particularly in the developed world. The rapid urbanization experienced in the developing world brings increased mortality from lifestyle diseases such as cancer and cardiovascular disease. We set out to understand how urbanization has been measured in studies that examined chronic disease as an outcome. Following a pilot search of PUBMED, a full search strategy was developed to identify papers reporting the effect of urbanization on chronic disease in the developing world. Full searches were conducted in MEDLINE, EMBASE, CINAHL, and GLOBAL HEALTH. Of the 868 titles identified in the initial search, nine studies met the final inclusion criteria. Five of these studies used demographic measures (such as population density) at the area level to measure urbanization. Four studies used more complicated summary measures of individual- and area-level data (such as distance from a city, occupation, and home and land ownership) to define urbanization. The papers reviewed were limited by using simple area-level summary measures (e.g., an urban-rural dichotomy) or by having to rely on preexisting data at the individual level. Further work is needed to develop a measure of urbanization that treats urbanization as a process and is sensitive enough to track changes in “urbanicity” and the subsequent emergence of chronic disease risk factors and mortality. Electronic supplementary material: The online version of this article (doi:10.1007/s11524-008-9325-4) contains supplementary material, which is available to authorized users. PMID:18931915

  18. Rigour of development does not AGREE with recommendations in practice guidelines on the use of ice for acute ankle sprains.

    PubMed

    Van de Velde, S; Heselmans, A; Donceel, P; Vandekerckhove, P; Ramaekers, D; Aertgeerts, B

    2011-09-01

OBJECTIVE: This study evaluated whether the Appraisal of Guidelines Research and Evaluation (AGREE) rigour of development score of practice guidelines on ice for acute ankle sprains is related to the convergence between recommendations. DESIGN: The authors systematically reviewed guidelines on ice for acute ankle sprains. Four appraisers independently used the AGREE instrument to evaluate the rigour of development of the selected guidelines. For each guideline, one reviewer listed the cited evidence on ice and calculated a cited evidence score. The authors plotted the recommended durations and numbers of ice applications over the standardised rigour of development score to explore the relationships. DATA SOURCES: Three reviewers searched for guidelines in Medline, Embase, Sportdiscus, PEDro, the G-I-N Guideline Library, the Trip Database, SumSearch, the National Guideline Clearinghouse and the Health Technology Assessment database, and conducted a web-based search for guideline development organisations. ELIGIBILITY CRITERIA: Eligible guidelines had a development methodology that included a process to search for or use results from scientific studies and the participation of an expert group to formulate recommendations. RESULTS: The authors identified 21 guidelines, containing clinically significant variations in recommended durations and numbers of ice applications. The median standardised rigour of development score was 57% (IQR 18 to 77). Variations occurred evenly among guidelines with low, moderate or high rigour scores. The median evidence citation score in the guidelines was 7% (IQR 0 to 61). CONCLUSIONS: There is no relationship between the rigour of development score and the recommendations in guidelines on ice for acute ankle sprains. The guidelines suffered from methodological problems that were not captured by the AGREE instrument.

  19. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

With petabytes of geodata and thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources among these massive and heterogeneous resources. The past decades of development have produced many operational service components that facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge, for the following reasons: 1) Entry barriers (also called "learning curves") hinder the usability of discovery services for end users. Different portals and catalogues adopt various access protocols, metadata formats, and GUI styles to organize, present, and publish metadata, and it is hard for end users to learn all these technical details and differences. 2) The cost of federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt a periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and the overhead of maintaining data consistency. 3) Heterogeneous semantics complicate data discovery. Since keyword matching is still the primary search method in many operational discovery services, search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution, but integrating them with existing services is challenging due to the expandability limitations of the service frameworks and metadata templates. 4) The capabilities that help users make a final selection are inadequate. Most existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore, and analyze search results.
Furthermore, the presentation of value-added additional information (such as service quality and user feedback), which conveys important decision-supporting information, is missing. To address these issues, we prototyped a distributed search engine, GeoSearch, based on a brokering middleware framework to search, integrate, and visualize heterogeneous geospatial resources. Specifically: 1) A lightweight discovery broker conducts distributed search, retrieving metadata records for geospatial resources and additional information from dispersed services (portals and catalogues) and other systems on the fly. 2) A quality monitoring and evaluation broker (a QoS checker) is integrated to provide quality information for geospatial web services. 3) Semantically assisted search and relevance evaluation functions are implemented by loosely interoperating with the ESIP Testbed component. 4) Sophisticated information and data visualization functionalities and tools are assembled to improve the user experience and assist resource selection.

  20. Search Algorithms as a Framework for the Optimization of Drug Combinations

    PubMed Central

    Coquin, Laurence; Schofield, Jennifer; Feala, Jacob D.; Reed, John C.; McCulloch, Andrew D.; Paternostro, Giovanni

    2008-01-01

    Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but presently they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms—originally developed for digital communication—modified to optimize combinations of therapeutic interventions. In biological experiments measuring the restoration of the decline with age in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs using only one-third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6–9 interventions in 80–90% of tests, compared with 15–30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution. PMID:19112483
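The gain over a fully factorial screen can be illustrated with a toy greedy search. This is a sketch under an invented assay function, not the modified communication-theory algorithms the authors used:

```python
DOSES = [0, 1, 2]   # three dose levels per drug
N_DRUGS = 4

def assay(combo):
    # Invented stand-in for a biological readout, peaking at doses (2, 1, 0, 2).
    target = (2, 1, 0, 2)
    return -sum((c - t) ** 2 for c, t in zip(combo, target))

def greedy_combination_search(start):
    """Adjust one drug's dose at a time, keeping any improvement; this
    tests far fewer combinations than the fully factorial 3**4 = 81."""
    combo, n_assays = list(start), 0
    improved = True
    while improved:
        improved = False
        for drug in range(N_DRUGS):
            for dose in DOSES:
                trial = combo[:drug] + [dose] + combo[drug + 1:]
                n_assays += 1
                if assay(trial) > assay(combo):
                    combo, improved = trial, True
    return tuple(combo), n_assays

best, n_assays = greedy_combination_search([0, 0, 0, 0])
```

On this separable toy objective the greedy search reaches the optimum in two sweeps (24 assays) instead of 81.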

  1. Search Strategy to Identify Dental Survival Analysis Articles Indexed in MEDLINE.

    PubMed

    Layton, Danielle M; Clarke, Michael

    2016-01-01

    Articles reporting survival outcomes (time-to-event outcomes) in patients over time are challenging to identify in the literature. Research shows that the words authors use to describe their dental survival analyses vary, and that the allocation of medical subject headings by MEDLINE indexers is inconsistent. Together, this undermines accurate article identification. The present study aims to develop and validate a search strategy to identify dental survival analyses indexed in MEDLINE (Ovid). A gold standard cohort of articles was identified to derive the search terms, and an independent gold standard cohort of articles was identified to test and validate the proposed search strategies. The first cohort included all 6,955 articles published in the 50 dental journals with the highest impact factors in 2008, of which 95 articles were dental survival articles. The second cohort included all 6,514 articles published in the 50 dental journals with the highest impact factors for 2012, of which 148 were dental survival articles. Each cohort was identified by a systematic hand search. The performance parameters of sensitivity, precision, and number needed to read (NNR) were calculated for each search strategy. Sensitive, precise, and optimized search strategies were developed and validated. The strategy maximizing sensitivity achieved 92% sensitivity, 14% precision, and an NNR of 7.11; the strategy maximizing precision achieved 93% precision, 10% sensitivity, and an NNR of 1.07; and the strategy optimizing the balance between sensitivity and precision achieved 83% sensitivity, 24% precision, and an NNR of 4.13. The methods used to identify search terms were objective, not subjective. The search strategies were validated in an independent group of articles that included different journals and different publication years. Across the three search strategies, dental survival articles can be identified with sensitivity up to 92%, precision up to 93%, and an NNR of less than two articles per relevant record. This research has highlighted the impact that variation in reporting and indexing has on article identification and has improved researchers' ability to identify dental survival articles.
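The three reported parameters are simple functions of the retrieved and relevant article sets. A minimal sketch with invented counts:

```python
def search_performance(retrieved, relevant):
    """Sensitivity, precision, and number needed to read (NNR = 1/precision)
    for a search strategy, given sets of retrieved and relevant article IDs."""
    true_positives = len(retrieved & relevant)
    sensitivity = true_positives / len(relevant)
    precision = true_positives / len(retrieved)
    return sensitivity, precision, 1 / precision

# Toy cohort: the strategy retrieves 20 records and finds 10 of 12 relevant ones.
retrieved = set(range(20))
relevant = set(range(10)) | {100, 101}
sensitivity, precision, nnr = search_performance(retrieved, relevant)
```

Here sensitivity is 10/12, precision is 0.5, and the NNR of 2 means a reader must screen two retrieved records per relevant one found.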

  2. RxnFinder: biochemical reaction search engines using molecular structures, molecular fragments and reaction similarity.

    PubMed

    Hu, Qian-Nan; Deng, Zhe; Hu, Huanan; Cao, Dong-Sheng; Liang, Yi-Zeng

    2011-09-01

    Biochemical reactions play a key role in helping sustain life and allowing cells to grow. RxnFinder was developed to search biochemical reactions in the KEGG reaction database using three search criteria: molecular structures, molecular fragments and reaction similarity. RxnFinder is helpful for obtaining reference reactions for biosynthesis and xenobiotics metabolism. RxnFinder is freely available via: http://sdd.whu.edu.cn/rxnfinder. Contact: qnhu@whu.edu.cn.

  3. Engaging Patients as Partners in Developing Patient-Reported Outcome Measures in Cancer-A Review of the Literature.

    PubMed

    Camuso, Natasha; Bajaj, Prerna; Dudgeon, Deborah; Mitera, Gunita

    2016-08-01

    Tools to collect patient-reported outcome measures (PROMs) are frequently used in the healthcare setting to collect the information that is most meaningful to patients. Because of the discordance between how patients and healthcare providers rank symptoms considered most meaningful to the patient, engaging patients in the development of PROMs is extremely important. This review aimed to identify studies describing how patients are involved in the item generation stage of cancer-specific PROM tools. A literature search was conducted using keywords relevant to PROMs, cancer, and patient engagement. A manual search of relevant reference lists was also conducted. Inclusion criteria stipulated that publications must describe patient engagement in the item generation stage of development of cancer-specific PROM tools. Results were excluded if they were duplicates or not in English. The initial search yielded 230 publications. After removal of duplicates and review of the publications, 6 were deemed relevant. Fourteen additional publications were retrieved through a manual search of references from relevant publications. A total of 13 unique PROM tools that included patient input in item generation were identified. The most common method of patient engagement was qualitative interviews or focus groups. Despite recommendations from international groups and the emphasized importance of incorporating patient feedback at all stages of PROM development, few unique cancer-specific tools have incorporated patient input in item generation. Moving forward, a framework of best practices on how best to engage patients in developing PROMs is warranted to support high-quality patient-centered care.

  4. Guide to Human Factors Information Sources.

    DTIC Science & Technology

    1984-11-01

    intermediary, a computer search is sometimes unnecessary. A lucid way of presenting a search objective is either by Boolean (and/or) expressions or by Venn... 1965). Human factors evaluation in system development. New York: John Wiley & Sons. 56. Murray, E. J. (1965). Sleep, dreams, and arousal. New York

  5. How Adolescents Search for and Appraise Online Health Information: A Systematic Review.

    PubMed

    Freeman, Jaimie L; Caldwell, Patrina H Y; Bennett, Patricia A; Scott, Karen M

    2018-04-01

    To conduct a systematic review of the evidence concerning whether and how adolescents search for online health information and the extent to which they appraise the credibility of information they retrieve. A systematic search of online databases (MEDLINE, EMBASE, PsycINFO, ERIC) was performed. Reference lists of included papers were searched manually for additional articles. Included were studies on whether and how adolescents searched for and appraised online health information, where adolescent participants were aged 13-18 years. Thematic analysis was used to synthesize the findings. Thirty-four studies met the inclusion criteria. In line with the research questions, 2 key concepts were identified within the papers: whether and how adolescents search for online health information, and the extent to which adolescents appraise online health information. Four themes were identified regarding whether and how adolescents search for online health information: use of search engines, difficulties in selecting appropriate search strings, barriers to searching, and absence of searching. Four themes emerged concerning the extent to which adolescents appraise the credibility of online health information: evaluation based on Web site name and reputation, evaluation based on first impression of Web site, evaluation of Web site content, and absence of a sophisticated appraisal strategy. Adolescents are aware of the varying quality of online health information. Strategies used by individuals for searching and appraising online health information differ in their sophistication. It is important to develop resources to enhance search and appraisal skills and to collaborate with adolescents to ensure that such resources are appropriate for them. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. RNA motif search with data-driven element ordering.

    PubMed

    Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa

    2016-05-18

    In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented it in a new motif search tool, RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements, and the running time of the algorithm is highly dependent on their ordering. By approaching the element ordering problem in a principled way, we demonstrate a more than 100-fold speedup of the search for complex motifs compared to previously published tools. We have developed a new method for RNA motif search that allows a significant speedup of the search for complex motifs, including those with pseudoknots. Such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing power. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo.
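Why element ordering matters can be seen from a back-of-the-envelope cost model: matching the rarest element first prunes candidate positions fastest. The match probabilities below are invented for illustration, not RNArobo's actual data-driven statistics:

```python
# Invented per-element probabilities of matching at a random genomic position.
elements = {"ss-strand": 0.60, "hairpin": 0.20, "pseudoknot": 0.01}

def expected_match_attempts(order, positions=1_000_000):
    """Every position tries the first element; later elements are only
    tried on candidates that survived all previous elements."""
    cost, surviving = 0.0, float(positions)
    for name in order:
        cost += surviving
        surviving *= elements[name]
    return cost

naive = expected_match_attempts(["ss-strand", "hairpin", "pseudoknot"])
data_driven = expected_match_attempts(sorted(elements, key=elements.get))
```

Ordering by estimated match probability (most selective element first) cuts the expected number of match attempts, which is the intuition behind data-driven element ordering.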

  7. Reverse Nearest Neighbor Search on a Protein-Protein Interaction Network to Infer Protein-Disease Associations.

    PubMed

    Suratanee, Apichat; Plaimas, Kitiporn

    2017-01-01

    The associations between proteins and diseases are crucial information for investigating pathological mechanisms. However, the number of known and reliable protein-disease associations is quite small. In this study, an analysis framework to infer associations between proteins and diseases was developed based on a large data set of a human protein-protein interaction network, integrating an effective network search, namely the reverse k-nearest neighbor (RkNN) search. The RkNN search was used to identify the impact of a protein on other proteins. Then, associations between proteins and diseases were inferred statistically. The method using the RkNN search yielded a much higher precision than a random selection, the standard nearest neighbor search, or application of the method to a random protein-protein interaction network. All protein-disease pair candidates were verified by a literature search. Supporting evidence for 596 pairs was identified. In addition, cluster analysis of these candidates revealed 10 promising groups of diseases to be further investigated experimentally. This method can be used to identify novel associations and to better understand complex relationships between proteins and diseases.
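The reverse k-nearest-neighbor idea, finding every node that counts the query among its own k nearest neighbors, can be sketched on a toy distance table. This is a generic RkNN illustration, not the authors' network implementation:

```python
def k_nearest(dists, node, k):
    """The k nodes closest to `node` under a precomputed distance table."""
    return {m for _, m in sorted((d, m) for m, d in dists[node].items())[:k]}

def reverse_knn(dists, query, k):
    """Nodes that have `query` among THEIR k nearest neighbors, i.e. the
    nodes the query exerts an influence on (not vice versa)."""
    return {n for n in dists if n != query and query in k_nearest(dists, n, k)}

# Toy network: four proteins placed on a line at positions 0, 1, 2 and 10.
coords = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 10.0}
dists = {n: {m: abs(x - y) for m, y in coords.items() if m != n}
         for n, x in coords.items()}
influenced = reverse_knn(dists, "B", k=1)  # B is nearest to A and C, not to D
```

Note the asymmetry that makes RkNN useful: D's nearest neighbor is C, so D is outside B's influence set even though B "sees" D in its own neighbor list eventually.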

  8. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    PubMed

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields, including medicine, sports and animation. Key tasks in working with motion data include the analysis of motion states and transitions, and the synthesis of motion vectors by interpolation and combination. In the research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first visualization, users are able to search for interesting sub-sequences of motion using a query-by-example metaphor, and to explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users, researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  9. Search for general relativistic effects in table-top displacement metrology

    NASA Technical Reports Server (NTRS)

    Halverson, Peter G.; Macdonald, Daniel R.; Diaz, Rosemary T.

    2004-01-01

    As displacement metrology accuracy improves, general relativistic effects will become noticeable. Metrology gauges developed for the Space Interferometry Mission were used to search for locally anisotropic space-time, with a null result at the 10⁻¹⁰ level.

  10. Probabilistic consensus scoring improves tandem mass spectrometry peptide identification.

    PubMed

    Nahnsen, Sven; Bertsch, Andreas; Rahnenführer, Jörg; Nordheim, Alfred; Kohlbacher, Oliver

    2011-08-05

    Database search is a standard technique for identifying peptides from their tandem mass spectra. To increase the number of correctly identified peptides, we suggest a probabilistic framework that allows the combination of scores from different search engines into a joint consensus score. Central to the approach is a novel method to estimate scores for peptides not found by an individual search engine. This approach allows the estimation of p-values for each candidate peptide and their combination across all search engines. The consensus approach works better than any single search engine across all different instrument types considered in this study. Improvements vary strongly from platform to platform and from search engine to search engine. Compared to the industry standard MASCOT, our approach can identify up to 60% more peptides. The software for consensus predictions is implemented in C++ as part of OpenMS, a software framework for mass spectrometry. The source code is available in the current development version of OpenMS and can easily be used as a command line application or via a graphical pipeline designer TOPPAS.
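One standard way to combine per-engine p-values into a consensus score is Fisher's method. The sketch below illustrates that general idea only; it is not the paper's exact estimator, and the engine p-values are invented:

```python
import math

def fisher_combine(p_values):
    """Fisher's method: -2 * sum(ln p) is chi-square distributed with 2n
    degrees of freedom under the null; return the combined p-value."""
    statistic = -2.0 * sum(math.log(p) for p in p_values)
    # For even degrees of freedom 2n, the chi-square survival function has
    # the closed form exp(-x/2) * sum_{i<n} (x/2)**i / i!.
    half = statistic / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(len(p_values)))

# The same candidate peptide scored by three hypothetical search engines:
combined = fisher_combine([0.04, 0.10, 0.03])
```

Three individually modest p-values combine into a markedly smaller consensus p-value, which is why agreement across engines strengthens an identification.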

  11. Advances in feature selection methods for hyperspectral image processing in food industry applications: a review.

    PubMed

    Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zeng, Xin-An

    2015-01-01

    There is increased interest in the applications of hyperspectral imaging (HSI) for assessing food quality, safety, and authenticity. HSI provides an abundance of spatial and spectral information from foods by combining spectroscopy and imaging, resulting in hundreds of contiguous wavebands for each spatial position of a food sample; this high dimensionality is also known as the curse of dimensionality. It is desirable to employ feature selection algorithms to decrease the computational burden and increase predictive accuracy, which is especially relevant to the development of online applications. Recently, a variety of feature selection algorithms have been proposed; they can be categorized into three groups based on the search strategy, namely complete search, heuristic search, and random search. This review introduces the fundamentals of each algorithm, illustrates its applications in hyperspectral data analysis in the food field, and discusses the advantages and disadvantages of these algorithms. It is hoped that this review will provide a guideline for feature selection and data processing in the future development of hyperspectral imaging techniques for foods.
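A minimal example of the heuristic-search category is greedy forward selection, which grows the waveband subset one band at a time. The scoring function and wavebands below are invented for illustration:

```python
def forward_selection(wavebands, score, n_keep):
    """Greedy (heuristic) search: at each step add the single waveband
    that most improves the score of the current subset."""
    selected = []
    while len(selected) < n_keep:
        best_band = max((b for b in wavebands if b not in selected),
                        key=lambda b: score(selected + [b]))
        selected.append(best_band)
    return selected

def score(subset):
    # Toy criterion: bands 450 and 970 nm carry the signal; every extra
    # band incurs a small complexity penalty.
    return sum(1.0 for b in subset if b in (450, 970)) - 0.01 * len(subset)

chosen = forward_selection([450, 520, 680, 850, 970], score, n_keep=2)
```

Heuristic search evaluates only a small fraction of the subsets a complete search would, at the cost of possibly missing interactions between bands.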

  12. Development of Infrared Library Search Prefilters for Automotive Clear Coats from Simulated Attenuated Total Reflection (ATR) Spectra.

    PubMed

    Perera, Undugodage Don Nuwan; Nishikida, Koichi; Lavine, Barry K

    2018-06-01

    A previously published study featuring an attenuated total reflection (ATR) simulation algorithm that mitigated distortions in ATR spectra was further investigated to evaluate its efficacy in enhancing searches of infrared (IR) transmission libraries. In the present study, search prefilters were developed from transformed ATR spectra to identify the assembly plant of a vehicle from ATR spectra of the clear coat layer. A total of 456 IR transmission spectra from the Paint Data Query (PDQ) database that spanned 22 General Motors assembly plants and served as a training set cohort were transformed into ATR spectra by the simulation algorithm. These search prefilters were formulated using the fingerprint region (1500 cm⁻¹ to 500 cm⁻¹). Both the transformed ATR spectra (training set) and the experimental ATR spectra (validation set) were preprocessed for pattern recognition analysis using the discrete wavelet transform, which increased the signal-to-noise ratio of the ATR spectra by concentrating the signal in specific wavelet coefficients. ATR spectra of 14 clear coat samples (validation set) measured with a Nicolet iS50 Fourier transform IR spectrometer were correctly classified as to the assembly plant(s) of the vehicle from which the paint sample originated using search prefilters developed from the 456 simulated ATR spectra. The ATR simulation (transformation) algorithm thus successfully facilitated spectral library matching of ATR spectra against IR transmission spectra of automotive clear coats in the PDQ database.
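The signal-concentrating step can be illustrated with one level of the Haar transform, the simplest discrete wavelet (the study's actual wavelet choice is not specified here, and the "spectrum" below is invented):

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: pairwise scaled sums (approximation)
    and differences (detail) of consecutive samples."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

# A smooth toy "spectrum": its energy concentrates in the approximation
# coefficients, while the detail coefficients stay near zero.
spectrum = [10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4]
approx, detail = haar_dwt(spectrum)
```

Because the transform is orthonormal, total energy is preserved while most of it moves into a few approximation coefficients, which is what raises the effective signal-to-noise ratio for the pattern recognition step.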

  13. Searching for evidence or approval? A commentary on database search in systematic reviews and alternative information retrieval methodologies.

    PubMed

    Delaney, Aogán; Tamás, Peter A

    2018-03-01

    Despite recognition that database search alone is inadequate even within the health sciences, it appears that reviewers in fields that have adopted systematic review are choosing to rely primarily, or only, on database search for information retrieval. This commentary reminds readers of factors that call into question the appropriateness of default reliance on database searches particularly as systematic review is adapted for use in new and lower consensus fields. It then discusses alternative methods for information retrieval that require development, formalisation, and evaluation. Our goals are to encourage reviewers to reflect critically and transparently on their choice of information retrieval methods and to encourage investment in research on alternatives. Copyright © 2017 John Wiley & Sons, Ltd.

  14. The pursuit of dark matter at colliders—an overview

    NASA Astrophysics Data System (ADS)

    Penning, Björn

    2018-06-01

    Dark matter is one of the main puzzles in fundamental physics and the goal of a diverse, multi-pronged research programme. Underground and astrophysical searches look for dark matter particles in the cosmos, either by interacting directly or by searching for dark matter annihilation. Particle colliders, in contrast, might produce dark matter in the laboratory and are able to probe most basic dark-matter–matter interactions. They are sensitive to low dark matter masses, provide complementary information at higher masses and are subject to different systematic uncertainties. Collider searches are therefore an important part of an inter-disciplinary dark matter search strategy. This article highlights the experimental and phenomenological development in collider dark matter searches of recent years and their connection with the wider field.

  15. Correlation between National Influenza Surveillance Data and Search Queries from Mobile Devices and Desktops in South Korea

    PubMed Central

    Seo, Dong-Woo; Sohn, Chang Hwan; Kim, Sung-Hoon; Ryoo, Seung Mok; Lee, Yoon-Seon; Lee, Jae Ho; Kim, Won Young; Lim, Kyoung Soo

    2016-01-01

    Background: Digital surveillance using internet search queries can improve both the sensitivity and timeliness of the detection of a health event, such as an influenza outbreak. While it has recently been estimated that the mobile search volume surpasses the desktop search volume and mobile search patterns differ from desktop search patterns, previous digital surveillance systems did not distinguish between mobile and desktop search queries. The purpose of this study was to compare the performance of mobile and desktop search queries in terms of digital influenza surveillance. Methods and Results: The study period was from September 6, 2010 through August 30, 2014, which consisted of four epidemiological years. Influenza-like illness (ILI) and virologic surveillance data from the Korea Centers for Disease Control and Prevention were used. A total of 210 combined queries from our previous survey work were used for this study. Mobile and desktop weekly search data were extracted from Naver, which is the largest search engine in Korea. Spearman’s correlation analysis was used to examine the correlation of the mobile and desktop data with ILI and virologic data in Korea. We also performed lag correlation analysis. We observed that the influenza surveillance performance of mobile search queries matched or exceeded that of desktop search queries over time. The mean correlation coefficients of mobile search queries and the number of queries with an r-value of ≥ 0.7 equaled or became greater than those of desktop searches over the four epidemiological years. A lag correlation analysis of up to two weeks showed similar trends. Conclusion: Our study shows that mobile search queries for influenza surveillance have equaled or even surpassed desktop search queries over time. In the future development of influenza surveillance using search queries, recognizing this shift toward mobile search data will be necessary. PMID:27391028
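The lag analysis pairs search volumes shifted earlier in time with later ILI rates and recomputes Spearman's rho at each lag. A self-contained sketch with invented weekly counts (no ties assumed):

```python
def ranks(values):
    """1-based ranks of the values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for r, i in enumerate(order):
        result[i] = r + 1
    return result

def spearman(x, y):
    """Spearman's rho via the rank-difference formula (no ties)."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def lag_correlations(queries, ili, max_lag):
    """rho between search volume `lag` weeks earlier and current ILI rates."""
    return {lag: spearman(queries[:len(queries) - lag], ili[lag:])
            for lag in range(max_lag + 1)}

queries = [1, 3, 2, 5, 4, 7, 6, 9]   # invented weekly search volumes
ili = [0] + queries[:-1]             # here ILI trails the queries by one week
rhos = lag_correlations(queries, ili, max_lag=2)
```

In this constructed example, rho peaks at a one-week lag, which is the kind of lead relationship that makes search queries useful for early outbreak detection.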

  17. Regression Model Optimization for the Analysis of Experimental Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2009-01-01

    A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
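The search metric, the standard deviation of the PRESS (leave-one-out) residuals, can be sketched for two toy candidate models. This generic example is not the Ames algorithm itself, and the data are invented:

```python
import statistics

def press_std(xs, ys, fit):
    """Std. dev. of PRESS residuals: each response is predicted by a model
    fitted to all the OTHER points (leave-one-out cross-validation)."""
    residuals = []
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        residuals.append(ys[i] - model(xs[i]))
    return statistics.pstdev(residuals)

# Candidate math model 1: constant (mean-only) response.
def mean_model(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

# Candidate math model 2: straight line fitted by least squares.
def linear_model(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 2.0, 3.9, 6.1, 8.0, 9.9]    # responses close to y = 2x
best_fit = min([mean_model, linear_model], key=lambda f: press_std(xs, ys, f))
```

Minimizing the PRESS standard deviation selects the model with the best out-of-sample predictive capability rather than the best in-sample fit, which guards against overfitted term selections.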

  18. Data Mining and Optimization Tools for Developing Engine Parameters Tools

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1998-01-01

    This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an engine condition monitoring system. Tricia Erhardt and I studied the problem domain for developing an engine condition monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset that was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm (GA)-based search programs, which were written in C++ and used to demonstrate the capability of GA-based search in finding optimal solutions in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.

  19. Exploring What's Missing: What Do Target Absent Trials Reveal about Autism Search Superiority?

    ERIC Educational Resources Information Center

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of…

  20. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the genetic algorithm (GA) is discussed. A novel Hierarchical GA scheme was developed to solve complicated engineering problems that require optimization of a large number of parameters with high precision. High-level GAs search over the few parameters that are most sensitive to system performance. Low-level GAs search in more detail and employ a greater number of parameters for further optimization. The complexity of the search is therefore decreased, and computing resources are used more efficiently.
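The coarse-to-fine division of labor can be sketched with a minimal real-coded GA run in two stages. The objective, parameter ranges, and GA settings below are all invented for illustration and are not the authors' implementation:

```python
import random

random.seed(7)

def objective(x, y):
    # Invented objective: x is the sensitive parameter, y only fine-tunes.
    return -10.0 * (x - 3.0) ** 2 - (y - 0.5) ** 2

def ga_stage(evaluate, lo, hi, pop_size=20, generations=30):
    """One GA stage over a single parameter: tournament selection plus
    Gaussian mutation, keeping the best candidate seen so far."""
    sigma = (hi - lo) / 20
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=evaluate)
    for _ in range(generations):
        parents = [max(random.sample(pop, 3), key=evaluate) for _ in pop]
        pop = [min(hi, max(lo, p + random.gauss(0, sigma))) for p in parents]
        best = max(pop + [best], key=evaluate)
    return best

# High-level stage: coarse search over the sensitive parameter x alone.
x_best = ga_stage(lambda x: objective(x, 0.0), lo=-10.0, hi=10.0)
# Low-level stage: x is frozen and y is refined over a narrower range.
y_best = ga_stage(lambda y: objective(x_best, y), lo=-2.0, hi=2.0)
```

Splitting the search this way means neither stage explores the full joint parameter space, which is the source of the efficiency gain the abstract describes.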

  1. A review of the scientific rationale and methods used in the search for other planetary systems

    NASA Technical Reports Server (NTRS)

    Black, D. C.

    1985-01-01

    Planetary systems appear to be one of the crucial links in the chain leading from simple molecules to living systems, particularly complex (intelligent?) living systems. Although there is currently no observational proof of the existence of any planetary system other than our own, techniques are now being developed which will permit a comprehensive search for other planetary systems. The scientific rationale for and methods used in such a search effort are reviewed here.

  2. A natural language based search engine for ICD10 diagnosis encoding.

    PubMed

    Baud, Robert

    2004-01-01

    We have developed a multiple-step process for implementing an ICD10 search engine. The complexity of the task has been shown, and we recommend collecting adequate expertise before starting any implementation. Underestimation of the expert time required and inadequate data resources are probable reasons for failure. We also claim that when all conditions are met in terms of resources and availability of expertise, the benefits of a responsive ICD10 search engine will be realized and the investment will be successful.

  3. A Systematic Review of Physician Leadership and Emotional Intelligence

    PubMed Central

    Mintz, Laura Janine; Stoller, James K.

    2014-01-01

    Objective This review evaluates the current understanding of emotional intelligence (EI) and physician leadership, exploring key themes and areas for future research. Literature Search We searched the literature using PubMed, Google Scholar, and Business Source Complete for articles published between 1990 and 2012. Search terms included physician and leadership, emotional intelligence, organizational behavior, and organizational development. All abstracts were reviewed. Full articles were evaluated if they addressed the connection between EI and physician leadership. Articles were included if they focused on physicians or physicians-in-training and discussed interventions or recommendations. Appraisal and Synthesis We assessed articles for conceptual rigor, study design, and measurement quality. A thematic analysis categorized the main themes and findings of the articles. Results The search produced 3713 abstracts, of which 437 full articles were read and 144 were included in this review. Three themes were identified: (1) EI is broadly endorsed as a leadership development strategy across providers and settings; (2) models of EI and leadership development practices vary widely; and (3) EI is considered relevant throughout medical education and practice. Limitations of the literature were that most reports were expert opinion or observational and studies used several different tools for measuring EI. Conclusions EI is widely endorsed as a component of curricula for developing physician leaders. Research comparing practice models and measurement tools will critically advance understanding about how to develop and nurture EI to enhance leadership skills in physicians throughout their careers. PMID:24701306

  4. Muscle Logic: New Knowledge Resource for Anatomy Enables Comprehensive Searches of the Literature on the Feeding Muscles of Mammals

    PubMed Central

    Druzinsky, Robert E.; Balhoff, James P.; Crompton, Alfred W.; Done, James; German, Rebecca Z.; Haendel, Melissa A.; Herrel, Anthony; Herring, Susan W.; Lapp, Hilmar; Mabee, Paula M.; Muller, Hans-Michael; Mungall, Christopher J.; Sternberg, Paul W.; Van Auken, Kimberly; Vinyard, Christopher J.; Williams, Susan H.; Wall, Christine E.

    2016-01-01

    Background In recent years large bibliographic databases have made much of the published literature of biology available for searches. However, the capabilities of the search engines integrated into these databases for text-based bibliographic searches are limited. To enable searches that deliver the results expected by comparative anatomists, an underlying logical structure known as an ontology is required. Development and Testing of the Ontology Here we present the Mammalian Feeding Muscle Ontology (MFMO), a multi-species ontology focused on anatomical structures that participate in feeding and other oral/pharyngeal behaviors. A unique feature of the MFMO is that a simple, computable, definition of each muscle, which includes its attachments and innervation, is true across mammals. This construction mirrors the logical foundation of comparative anatomy and permits searches using language familiar to biologists. Further, it provides a template for muscles that will be useful in extending any anatomy ontology. The MFMO is developed to support the Feeding Experiments End-User Database Project (FEED, https://feedexp.org/), a publicly-available, online repository for physiological data collected from in vivo studies of feeding (e.g., mastication, biting, swallowing) in mammals. Currently the MFMO is integrated into FEED and also into two literature-specific implementations of Textpresso, a text-mining system that facilitates powerful searches of a corpus of scientific publications. We evaluate the MFMO by asking questions that test the ability of the ontology to return appropriate answers (competency questions). We compare the results of queries of the MFMO to results from similar searches in PubMed and Google Scholar. Results and Significance Our tests demonstrate that the MFMO is competent to answer queries formed in the common language of comparative anatomy, but PubMed and Google Scholar are not. 
Overall, our results show that by incorporating anatomical ontologies into searches, an expanded and anatomically comprehensive set of results can be obtained. The broader scientific and publishing communities should consider taking up the challenge of semantically enabled search capabilities. PMID:26870952

  5. Use of PL/1 in a Bibliographic Information Retrieval System.

    ERIC Educational Resources Information Center

    Schipma, Peter B.; And Others

The Information Sciences section of IIT Research Institute (IITRI) has developed a Computer Search Center and is currently conducting a research project to explore computer searching of a variety of machine-readable data bases. The Center provides Selective Dissemination of Information services to academic, industrial and research organizations…

  6. Advanced Image Search: A Strategy for Creating Presentation Boards

    ERIC Educational Resources Information Center

    Frey, Diane K.; Hines, Jean D.; Swinker, Mary E.

    2008-01-01

    Finding relevant digital images to create presentation boards requires advanced search skills. This article describes a course assignment involving a technique designed to develop students' literacy skills with respect to locating images of desired quality and content from Internet databases. The assignment was applied in a collegiate apparel…

  7. Early detection network design and search strategy issues

    EPA Science Inventory

    We conducted a series of field and related modeling studies (2005-2012) to evaluate search strategies for Great Lakes coastal ecosystems that are at risk of invasion by non-native aquatic species. In developing a network, we should design to achieve an acceptable limit of detect...

  8. New Tobacco and Tobacco-Related Products: Early Detection of Product Development, Marketing Strategies, and Consumer Interest

    PubMed Central

    Staal, Yvonne CM; van de Nobelen, Suzanne; Havermans, Anne

    2018-01-01

Background A wide variety of new tobacco and tobacco-related products have emerged on the market in recent years. Objective To understand their potential implications for public health and to guide tobacco control efforts, we have used an infoveillance approach to identify new tobacco and tobacco-related products. Methods Our search for tobacco(-related) products consists of several tailored search profiles using combinations of keywords such as “e-cigarette” and “new” to extract information from almost 9000 preselected sources such as websites of online shops, tobacco manufacturers, and news sites. Results Developments in e-cigarette design characteristics show a trend toward customization through adjustable temperature and airflow and through the large variety of e-liquid flavors. Additionally, more e-cigarettes are equipped with personalized accessories, such as mobile phones, applications, and Bluetooth. Waterpipe products follow the trend toward electronic vaping. Various heat-not-burn products were reintroduced to the market. Conclusions Our search for tobacco(-related) products was specific and timely, though advances in product development require ongoing optimization of the search strategy. Our results show a trend toward products resembling tobacco cigarettes and toward vaporizers that can be adapted to consumers’ needs. Our search could aid in assessing the likelihood that new products will gain market share, whether they pose a possible health risk, or whether there is a need for independent and reliable information about the product for the general public. PMID:29807884
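The tailored search profiles described in the abstract can be pictured as keyword combinations applied to source texts. The sketch below is a hypothetical illustration of that idea, assuming a simple AND-of-OR-groups keyword model; the profile contents and matching logic are invented, not taken from the study.

```python
# Hypothetical sketch of a tailored search profile: a profile is a list
# of keyword groups; a text matches when every group contributes at
# least one hit (AND across groups, OR within a group).

def matches_profile(text, profile):
    """Return True if the text satisfies all keyword groups in the profile."""
    lowered = text.lower()
    return all(any(kw in lowered for kw in group) for group in profile)

# Example profile inspired by the abstract: ("e-cigarette" OR "vaporizer")
# AND ("new" OR "launch"). Keywords are assumptions for illustration.
profile = [["e-cigarette", "vaporizer"], ["new", "launch"]]

print(matches_profile("New e-cigarette with adjustable airflow", profile))  # True
print(matches_profile("Classic pipe tobacco review", profile))              # False
```

In a monitoring setting such profiles would be run periodically over the preselected sources, with hits forwarded for manual review.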

  9. Electronic Biomedical Literature Search for Budding Researcher

    PubMed Central

    Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.

    2013-01-01

A search for specific and well-defined literature related to the subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question. An appropriate research question is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of medical literature include Google, Google Scholar, Scirus, Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web-libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine/database while searching for a particular type of data. Knowledge about types of literature, levels of evidence, and details of search engine features such as availability, user interface, ease of access, reputable content, and period of time covered allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937

  10. Electronic biomedical literature search for budding researcher.

    PubMed

    Thakre, Subhash B; Thakre S, Sushama S; Thakre, Amol D

    2013-09-01

A search for specific and well-defined literature related to the subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question. An appropriate research question is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of medical literature include Google, Google Scholar, Scirus, Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web-libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine/database while searching for a particular type of data. Knowledge about types of literature, levels of evidence, and details of search engine features such as availability, user interface, ease of access, reputable content, and period of time covered allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research.

  11. Choosing and using methodological search filters: searchers' views.

    PubMed

    Beale, Sophie; Duffy, Steven; Glanville, Julie; Lefebvre, Carol; Wright, Dianne; McCool, Rachael; Varley, Danielle; Boachie, Charles; Fraser, Cynthia; Harbour, Jenny; Smith, Lynne

    2014-06-01

    Search filters or hedges are search strategies developed to assist information specialists and librarians to retrieve different types of evidence from bibliographic databases. The objectives of this project were to learn about searchers' filter use, how searchers choose search filters and what information they would like to receive to inform their choices. Interviews with information specialists working in, or for, the National Institute for Health and Care Excellence (NICE) were conducted. An online questionnaire survey was also conducted and advertised via a range of email lists. Sixteen interviews were undertaken and 90 completed questionnaires were received. The use of search filters tends to be linked to reducing a large amount of literature, introducing focus and assisting with searches that are based on a single study type. Respondents use numerous ways to identify search filters and can find choosing between different filters problematic because of knowledge gaps and lack of time. Search filters are used mainly for reducing large result sets (introducing focus) and assisting with searches focused on a single study type. Features that would help with choosing filters include making information about filters less technical, offering ratings and providing more detail about filter validation strategies and filter provenance. © 2014 The authors. Health Information and Libraries Journal © 2014 Health Libraries Group.
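Filter validation, one of the details respondents above wanted more information about, usually boils down to comparing a filter's retrieved set against a gold-standard set of known relevant records. A minimal sketch of that comparison, with invented record IDs and counts:

```python
# Hedged illustration of how a search filter's performance is typically
# reported: sensitivity (recall against a gold standard) and precision.
# The record identifiers below are invented for the example.

def filter_performance(retrieved, gold_standard):
    """Compute (sensitivity, precision) of a filter against a gold set."""
    retrieved, gold_standard = set(retrieved), set(gold_standard)
    true_pos = retrieved & gold_standard
    sensitivity = len(true_pos) / len(gold_standard)
    precision = len(true_pos) / len(retrieved)
    return sensitivity, precision

gold = {"pmid1", "pmid2", "pmid3", "pmid4"}          # known relevant records
hits = {"pmid1", "pmid2", "pmid3", "pmid9", "pmid10"}  # filter's retrieved set

sens, prec = filter_performance(hits, gold)
print(f"sensitivity={sens:.2f} precision={prec:.2f}")  # sensitivity=0.75 precision=0.60
```

Published filters generally report exactly these figures, which is why provenance and validation details matter when choosing between filters.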

  12. Linking the EarthScope Data Virtual Catalog to the GEON Portal

    NASA Astrophysics Data System (ADS)

    Lin, K.; Memon, A.; Baru, C.

    2008-12-01

The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from the USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities that allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station-specific) search conditions. The portal search module is the user-interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web Service can be integrated into the GEONsearch application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.
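The fan-out pattern of a virtual catalog, in which one search request invokes several back-end services and merges their results, can be sketched as follows. The back-end functions here are stand-ins for the real IRIS, UNAVCO, and GFZ web services, whose actual interfaces are not described in the abstract:

```python
# Illustrative sketch of a "virtual catalog": one query fans out to
# several back-end search services and the matches are concatenated.
# These functions are invented stand-ins, not the real web services.

def search_iris(query):
    return [{"source": "IRIS", "id": "seis-001"}]

def search_unavco(query):
    return [{"source": "UNAVCO", "id": "gps-042"}]

def search_gfz(query):
    return [{"source": "GFZ", "id": "safod-007"}]

BACKENDS = [search_iris, search_unavco, search_gfz]

def virtual_catalog_search(query):
    """Invoke every back-end service and concatenate matching records."""
    results = []
    for backend in BACKENDS:
        results.extend(backend(query))
    return results

records = virtual_catalog_search({"network": "USArray", "start": "2008-01-01"})
print(len(records))  # 3
```

The portal user never sees the fan-out; results from all archives arrive through the single common interface.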

  13. Male Diaphorina citri searching responses to vibrational communication signals

    USDA-ARS?s Scientific Manuscript database

    Prototype devices have been developed that mimic D. citri female replies to male communication signals and lure males to a trap. The objective is to trap a high proportion of males that have landed on a host tree and have begun searching for females. This presentation describes the construction and ...

  14. The Day Search Stood Still

    ERIC Educational Resources Information Center

    Sexton, Will

    2010-01-01

    That little rectangle with a button next to it? (Those things called search boxes but might just as well be called "resource drains.") Imagine it disappearing from a library's webpages. The intricate works behind these design elements make up a major portion of what library staff spends time and money developing, populating, supporting,…

  15. Assessment of OmpATb as a Novel Antigen for the Diagnosis of Bovine Tuberculosis

    USDA-ARS?s Scientific Manuscript database

    In search for better tools to control bovine tuberculosis, the development of diagnostic tests with improved specificity and sensitivity has a high priority. We chose to search for novel immunodiagnostic reagents. In this study, Rv0899 (Outer membrane protein A of Mycobacterium tuberculosis, OmpATb)...

  16. Research Trends with Cross Tabulation Search Engine

    ERIC Educational Resources Information Center

    Yin, Chengjiu; Hirokawa, Sachio; Yau, Jane Yin-Kim; Hashimoto, Kiyota; Tabata, Yoshiyuki; Nakatoh, Tetsuya

    2013-01-01

    To help researchers in building a knowledge foundation of their research fields which could be a time-consuming process, the authors have developed a Cross Tabulation Search Engine (CTSE). Its purpose is to assist researchers in 1) conducting research surveys, 2) efficiently and effectively retrieving information (such as important researchers,…

  17. Search strategies for top partners in composite Higgs models

    NASA Astrophysics Data System (ADS)

    Gripaios, Ben; Müller, Thibaut; Parker, M. A.; Sutherland, Dave

    2014-08-01

We consider how best to search for top partners in generic composite Higgs models. We begin by classifying the possible group representations carried by top partners in models with and without a custodial SU(2) × SU(2) ⋊ Z2 symmetry protecting the rate for Z → bb̄ decays. We identify a number of minimal models whose top partners only have electric charges of 5/3, 2/3, or -1/3 and thus decay to top or bottom quarks via a single Higgs or electroweak gauge boson. We develop an inclusive search for these based on a top veto, which we find to be more effective than existing searches. Less minimal models feature light states that can be sought in final states with like-sign leptons, and so we find that two straightforward LHC searches give a reasonable coverage of the gamut of composite Higgs models.

  18. Transition From Clinical to Educator Roles in Nursing: An Integrative Review.

    PubMed

    Fritz, Elizabeth

    This review identified barriers to and facilitators of nurses' transition from clinical positions into nursing professional development and other nurse educator roles. The author conducted literature searches using multiple databases. Twenty-one articles met search criteria, representing a variety of practice settings. The findings, both barriers and facilitators, were remarkably consistent across practice settings. Four practice recommendations were drawn from the literature to promote nurses' successful transition to nursing professional development roles.

  19. Nonstandard working schedules and health: the systematic search for a comprehensive model.

    PubMed

    Merkus, Suzanne L; Holte, Kari Anne; Huysmans, Maaike A; van Mechelen, Willem; van der Beek, Allard J

    2015-10-23

    Theoretical models on shift work fall short of describing relevant health-related pathways associated with the broader concept of nonstandard working schedules. Shift work models neither combine relevant working time characteristics applicable to nonstandard schedules nor include the role of rest periods and recovery in the development of health complaints. Therefore, this paper aimed to develop a comprehensive model on nonstandard working schedules to address these shortcomings. A literature review was conducted using a systematic search and selection process. Two searches were performed: one associating the working time characteristics time-of-day and working time duration with health and one associating recovery after work with health. Data extracted from the models were used to develop a comprehensive model on nonstandard working schedules and health. For models on the working time characteristics, the search strategy yielded 3044 references, of which 26 met the inclusion criteria that contained 22 distinctive models. For models on recovery after work, the search strategy yielded 896 references, of which seven met the inclusion criteria containing seven distinctive models. Of the models on the working time characteristics, three combined time-of-day with working time duration, 18 were on time-of-day (i.e. shift work), and one was on working time duration. The model developed in the paper has a comprehensive approach to working hours and other work-related risk factors and proposes that they should be balanced by positive non-work factors to maintain health. Physiological processes leading to health complaints are circadian disruption, sleep deprivation, and activation that should be counterbalanced by (re-)entrainment, restorative sleep, and recovery, respectively, to maintain health. A comprehensive model on nonstandard working schedules and health was developed. 
The model proposes that work and non-work as well as their associated physiological processes need to be balanced to maintain good health. The model gives researchers a useful overview over the various risk factors and pathways associated with health that should be considered when studying any form of nonstandard working schedule.

  20. Search for general relativistic effects in table-top displacement metrology

    NASA Technical Reports Server (NTRS)

    Halverson, Peter G.; Diaz, Rosemary T.; Macdonald, Daniel R.

    2004-01-01

As displacement metrology accuracy improves, general relativistic effects will become noticeable. Metrology gauges developed for the Space Interferometry Mission were used to search for locally anisotropic space-time, with a null result at the 10⁻¹⁰ level.

  1. 20 CFR 617.20 - Responsibilities for the delivery of reemployment services.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... services; (6) Providing or procuring self-directed job search training, when necessary; (7) Providing training, job search and relocation assistance; (8) Developing a training plan with the individual; (9... reemployment services. 617.20 Section 617.20 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION...

  2. In Search of New Ideas, Research Findings, and Emerging Technologies? Here's Where To Find Them.

    ERIC Educational Resources Information Center

    Powell, Gary C.

    There are many avenues available to computer-assisted instruction (CAI) practitioners and developers in search of access to new ideas, research findings, and emerging technologies that will assist them in developing CAI products. Seven such avenues are described in detail: (1) graduate student interns, who bring unique insights, theory, and…

  3. Emerging Developments in the Study of Organizations. ASHE Annual Meeting 1982 Paper.

    ERIC Educational Resources Information Center

    March, James G.

    Development in the study of organizations and needs for additional research are addressed. It is suggested that when goals are not achieved, an organization searches for new alternatives and new information. When aspirations are achieved, the search for new alternatives is assumed to be modest, slack accumulates, and aspirations rise. It has been…

  4. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
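The hybrid scheme described in the abstract, genetic operators followed by a local improvement step on each offspring, can be sketched on a toy objective (maximizing the number of 1-bits). This is an illustrative sketch, not the presentation's implementation; on this toy problem the local-search step alone already reaches the optimum, which is precisely why hybrids converge faster than a plain GA:

```python
# Minimal sketch of a hybrid genetic algorithm: selection, one-point
# crossover, and mutation, with greedy bit-flip local search applied to
# every offspring. The objective (count of 1-bits) is a toy stand-in.
import random

random.seed(0)  # for reproducibility of the sketch

def fitness(bits):
    return sum(bits)

def local_search(bits):
    """Greedy bit-flip improvement: the 'local search' half of the hybrid."""
    bits = bits[:]
    for i in range(len(bits)):
        flipped = bits[:]
        flipped[i] ^= 1
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return bits

def hybrid_ga(n_bits=20, pop_size=10, generations=5):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        for _ in range(pop_size):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.1:            # mutation
                child[random.randrange(n_bits)] ^= 1
            children.append(local_search(child))  # hybrid step
        pop = children
    return max(pop, key=fitness)

best = hybrid_ga()
print(fitness(best))  # 20: the optimum for this toy problem
```

In a realistic application such as geometric model matching, the local-search step would be a domain-specific refinement rather than bit flipping.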

  5. Phylogenetic search through partial tree mixing

    PubMed Central

    2012-01-01

Background Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application, available at http://dna.cs.byu.edu/psoda. Conclusions The use of Partial Tree Mixing in a partition-based tree space allows the algorithm to quickly converge on near-optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449

  6. A Dark Matter Search with MALBEK

    NASA Astrophysics Data System (ADS)

    Giovanetti, G. K.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.

    The Majorana Demonstrator is an array of natural and enriched high purity germanium detectors that will search for the neutrinoless double-beta decay of 76Ge and perform a search for weakly interacting massive particles (WIMPs) with masses below 10 GeV. As part of the Majorana research and development efforts, we have deployed a modified, low-background broad energy germanium detector at the Kimballton Underground Research Facility. With its sub-keV energy threshold, this detector is sensitive to potential non-Standard Model physics, including interactions with WIMPs. We discuss the backgrounds present in the WIMP region of interest and explore the impact of slow surface event contamination when searching for a WIMP signal.

  7. A dark matter search with MALBEK

    DOE PAGES

    Giovanetti, G. K.; Abgrall, N.; Aguayo, E.; ...

    2015-01-01

The Majorana Demonstrator is an array of natural and enriched high purity germanium detectors that will search for the neutrinoless double-beta decay of ⁷⁶Ge and perform a search for weakly interacting massive particles (WIMPs) with masses below 10 GeV. As part of the Majorana research and development efforts, we have deployed a modified, low-background broad energy germanium detector at the Kimballton Underground Research Facility. With its sub-keV energy threshold, this detector is sensitive to potential non-Standard Model physics, including interactions with WIMPs. We discuss the backgrounds present in the WIMP region of interest and explore the impact of slow surface event contamination when searching for a WIMP signal.

  8. The Human Transcript Database: A Catalogue of Full Length cDNA Inserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Bouck, John; McLeod, Michael; Worley, Kim

    1999-09-10

The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that produced reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.

  9. A search map for organic additives and solvents applicable in high-voltage rechargeable batteries.

    PubMed

    Park, Min Sik; Park, Insun; Kang, Yoon-Sok; Im, Dongmin; Doo, Seok-Gwang

    2016-09-29

Chemical databases store information such as molecular formulas, chemical structures, and the physical and chemical properties of compounds. Although massive databases of organic compounds exist, the search for target materials is constrained by a lack of the physical and chemical properties necessary for specific applications. With increasing interest in the development of energy storage systems such as high-voltage rechargeable batteries, it is critical to find new electrolytes efficiently. Here we build a search map to screen organic additives and solvents with novel core and functional groups, and thus establish a database of electrolytes to identify the most promising electrolyte for high-voltage rechargeable batteries. This search map is generated from MAssive Molecular Map BUilder (MAMMBU) by combining a high-throughput quantum chemical simulation with an artificial neural network algorithm. MAMMBU is designed for predicting the oxidation and reduction potentials of organic compounds existing in the massive organic compound database, PubChem. We develop a search map composed of ∼1 000 000 redox potentials and elucidate the quantitative relationship between the redox potentials and functional groups. Finally, we screen a quinoxaline compound for an anode additive, apply it to electrolytes, and improve the capacity retention from 64.3% to 80.8% near 200 cycles for a lithium ion battery in experiments.
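The screening idea, filtering candidate molecules by predicted electrochemical stability, can be illustrated with a small sketch. All compound names and potentials below are invented for the example; the actual search map holds roughly a million machine-predicted redox potentials:

```python
# Hedged sketch of the screening step: given (predicted) oxidation and
# reduction potentials for candidates, keep those whose stability window
# suits a high-voltage cell. All values below are invented.

def screen(candidates, min_oxidation, max_reduction):
    """Keep candidates whose oxidation potential is high enough and
    whose reduction potential is low enough (a wide stability window)."""
    return [
        name
        for name, (e_ox, e_red) in candidates.items()
        if e_ox >= min_oxidation and e_red <= max_reduction
    ]

candidates = {  # name: (oxidation potential, reduction potential), in V
    "quinoxaline-like A": (5.1, 1.2),
    "solvent B": (4.2, 0.9),
    "additive C": (5.4, 0.7),
}

print(screen(candidates, min_oxidation=5.0, max_reduction=1.5))
# ['quinoxaline-like A', 'additive C']
```

The real workflow couples this kind of threshold query to the neural-network-predicted potentials rather than hand-entered values.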

  10. MASCOT HTML and XML parser: an implementation of a novel object model for protein identification data.

    PubMed

    Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L

    2006-11-01

    Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.
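The general pattern the parser implements, reading an XML search report and materializing plain objects from it, can be sketched as below. The tag and attribute names are invented for illustration and do not match MASCOT's real report schema:

```python
# Illustrative sketch of the parse-to-object-model pattern: read an XML
# search report and build plain Python objects. Tag/attribute names are
# invented and do not reflect MASCOT's actual schema.
import xml.etree.ElementTree as ET

XML = """<report>
  <protein accession="P12345" score="87.5"/>
  <protein accession="Q67890" score="42.0"/>
</report>"""

class ProteinHit:
    """A minimal stand-in for one node of the protein identification
    data object model (PDOM)."""
    def __init__(self, accession, score):
        self.accession = accession
        self.score = float(score)

def parse_report(xml_text):
    root = ET.fromstring(xml_text)
    return [ProteinHit(p.get("accession"), p.get("score"))
            for p in root.findall("protein")]

hits = parse_report(XML)
print([h.accession for h in hits])  # ['P12345', 'Q67890']
```

Once results live in objects rather than markup, deduplication and export to a relational database become straightforward follow-on steps, which is the workflow the abstract describes.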

  11. Conceptual development of a transportable/deployable x-ray inspection system for cars and vans

    NASA Astrophysics Data System (ADS)

    Swift, Roderick D.

    1997-02-01

The technology of transmission and backscatter imaging by flying-spot x-ray beams was extended to 450 kV beam energies with the installation of a prototype CargoSearchTM system at Otay Mesa, California in the summer of 1994. CargoSearchTM is a fixed-site system designed for the inspection of large over-the-road vehicles at border crossings. A self-contained, mobile implementation of the same technology has also been developed to scan objects ranging in size from a small car up to a full-scale tractor-trailer rig. MobileSearchTM can be moved over ordinary roadways to its intended operating site and set up easily by two or three people, but it is currently limited to backscatter imaging only. It also lacks the ability to effectively image the vehicle's undercarriage, which is important for the detection of contraband concealed in the vehicle itself rather than its cargo. There is a need for a transportable, deployable scanning system that combines the self-contained mobility of MobileSearchTM with the combined transmission and backscatter imaging characteristics of CargoSearchTM, including its good geometry for backscatter imaging of the undercarriage of inspected vehicles. Concepts for two approaches that meet these needs are presented.

  12. Library Instruction from Scratch at a Career College

    ERIC Educational Resources Information Center

    Ward, Randall; Harrison, Tiffany; Pace, Sean

    2010-01-01

    Librarians at the Stevens-Henager Career College Salt Lake City Campus have developed a library-instruction program over the last year. The basic section consists of 40-45 minutes on primary, secondary, and tertiary literature, search techniques, and live online searching using student-contributed examples, and finishes with short sections on…

  13. A Validation Study of the Existential Anxiety Scale.

    ERIC Educational Resources Information Center

    Hullett, Michael A.

    Logotherapy is a meaning-centered psychotherapy which focuses on both the meaning of human existence and the personal search for meaning. If the will to search for meaning is frustrated, "existential frustration" may result. This study validates the Existential Anxiety Scale (EAS) developed by Good and Good (1974). Basic principles of…

  14. Developing a feasible neighbourhood search for solving hub location problem in a communication network

    NASA Astrophysics Data System (ADS)

    Rakhmawati, Fibri; Mawengkang, Herman; Buulolo, F.; Mardiningsih

    2018-01-01

    The hub location problem with single assignment is the problem of locating hubs and assigning the terminal nodes to hubs in order to minimize the cost of hub installation and the cost of routing traffic in the network. There may also be capacity restrictions on the amount of traffic that can transit through hubs. This paper discusses how to model the polyhedral properties of the problem and develops a feasible neighbourhood search method to solve the model.
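The assign-and-improve loop behind a feasible neighbourhood search can be sketched in a few lines. Everything below (installation costs, distances, the single "reassign one terminal" move) is an illustrative toy, not the paper's formulation; moves that worsen the objective are simply reverted so the search stays on feasible improving assignments.

```python
import random

# Toy single-assignment hub location: every node may host a hub; each terminal
# is assigned to exactly one hub; cost = installation of open hubs + routing.
N = 6                                    # nodes 0..5
INSTALL = [10, 8, 12, 9, 11, 10]         # cost of opening a hub at each node
DIST = [[abs(i - j) for j in range(N)] for i in range(N)]  # toy distances

def cost(assign):
    hubs = set(assign)                   # a hub is "open" if anything is assigned to it
    routing = sum(DIST[i][assign[i]] for i in range(N))
    return sum(INSTALL[h] for h in hubs) + routing

def neighbourhood_search(iters=200, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(N) for _ in range(N)]   # feasible starting assignment
    best = cost(assign)
    for _ in range(iters):
        i, h = rng.randrange(N), rng.randrange(N)   # move: reassign terminal i to hub h
        old = assign[i]
        assign[i] = h
        c = cost(assign)
        if c <= best:
            best = c                                 # keep the (feasible) move
        else:
            assign[i] = old                          # revert a worsening move
    return assign, best
```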

  15. Simulation to Support Local Search in Trajectory Optimization Planning

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Venable, K. Brent; Lindsey, James

    2012-01-01

    NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is to exploit powerful search techniques from artificial intelligence to design low-noise flight profiles, which can then be tested in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints that address passenger safety and comfort directly into the problem formulation.
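A minimal sketch of constraint-aware local search over a trajectory follows. The noise proxy and the descent-rate comfort limit are invented stand-ins for the predictive physical models and constraints the paper uses; the point is only the loop structure: perturb one waypoint, keep the change if it is feasible and quieter.

```python
import random

# Toy approach trajectory: altitudes at 8 equally spaced waypoints, ending at
# the landing pad (altitude 0). Noise model and comfort constraint are illustrative.
START_ALT, END_ALT, WP = 10.0, 0.0, 8
MAX_DESCENT = 2.5                  # comfort/safety: max drop between waypoints

def noise(traj):
    # flying lower near the facility (later waypoints) -> more community noise
    return sum((i + 1) / (alt + 1.0) for i, alt in enumerate(traj))

def feasible(traj):
    pts = [START_ALT] + traj + [END_ALT]
    return all(0 <= a <= START_ALT for a in traj) and \
           all(pts[i] - pts[i + 1] <= MAX_DESCENT for i in range(len(pts) - 1))

def local_search(iters=500, seed=1):
    rng = random.Random(seed)
    traj = [START_ALT - (START_ALT - END_ALT) * (i + 1) / (WP + 1) for i in range(WP)]
    best = noise(traj)
    for _ in range(iters):
        cand = list(traj)
        cand[rng.randrange(WP)] += rng.uniform(-1, 1)   # perturb one waypoint
        if feasible(cand) and noise(cand) < best:        # constraints in the formulation
            traj, best = cand, noise(cand)
    return traj, best
```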

  16. Chemical-text hybrid search engines.

    PubMed

    Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J

    2010-01-01

    As the amount of chemical literature increases, it is critical that researchers be able to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search; the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but was also applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions than were previously attainable and to obtain answers at much improved speed and accuracy.
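The core idea of canonicalizing entities before indexing can be sketched as follows. The synonym table, the `CHEM:` key format, and the tiny inverted index are invented for illustration; ECKI's real canonicalization and its SharePoint integration are far richer.

```python
# Every surface form of a chemical entity is rewritten to one canonical token
# before it reaches the text index, so a keyword engine matches documents
# regardless of which synonym they use.
CANONICAL = {
    "aspirin": "CHEM:acetylsalicylic-acid",
    "acetylsalicylic acid": "CHEM:acetylsalicylic-acid",
    "asa": "CHEM:acetylsalicylic-acid",
}

def canonicalize(text):
    out = text.lower()
    # replace longer synonyms first so multi-word forms are not split
    for form, key in sorted(CANONICAL.items(), key=lambda kv: -len(kv[0])):
        out = out.replace(form, key)
    return out

def build_index(docs):
    index = {}                                   # token -> set of doc ids
    for doc_id, text in docs.items():
        for tok in canonicalize(text).split():
            index.setdefault(tok, set()).add(doc_id)
    return index

docs = {1: "Aspirin reduces fever", 2: "acetylsalicylic acid synthesis route"}
index = build_index(docs)
hits = index["CHEM:acetylsalicylic-acid"]        # both synonyms resolve here
```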

  17. Faster sequence homology searches by clustering subsequences.

    PubMed

    Suzuki, Shuji; Kakuta, Masanori; Ishida, Takashi; Akiyama, Yutaka

    2015-04-15

    Sequence homology searches are used in various fields. New sequencing technologies produce huge amounts of sequence data, which continuously increase the size of sequence databases. As a result, homology searches require large amounts of computational time, especially for metagenomic analysis. We developed a fast homology search method based on database subsequence clustering and implemented it as GHOSTZ. This method clusters similar subsequences from a database to perform an efficient seed search and ungapped extension by reducing alignment candidates based on the triangle inequality. The database subsequence clustering technique achieved an ∼2-fold increase in speed without a large decrease in search sensitivity. When measured on metagenomic data, GHOSTZ is ∼2.2-2.8 times faster than RAPSearch and ∼185-261 times faster than BLASTX. The source code is freely available for download at http://www.bi.cs.titech.ac.jp/ghostz/. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
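The triangle-inequality pruning behind the clustering speed-up can be sketched with Hamming distance standing in for a real alignment measure (an assumption for illustration only). Since d(q,x) ≥ |d(q,c) − d(c,x)| for a cluster representative c, any member whose lower bound exceeds the cutoff can be skipped without ever comparing it to the query.

```python
def d(a, b):
    """Hamming distance on equal-length strings; a toy stand-in for alignment distance."""
    return sum(x != y for x, y in zip(a, b))

def search(query, clusters, cutoff):
    """clusters: {representative: [members]}; return members with d(query, m) <= cutoff."""
    hits, comparisons = [], 0
    for rep, members in clusters.items():
        dqc = d(query, rep)                     # one distance per cluster
        for m in members:
            if abs(dqc - d(rep, m)) > cutoff:   # triangle-inequality lower bound
                continue                        # pruned without touching the query
            comparisons += 1
            if d(query, m) <= cutoff:
                hits.append(m)
    return hits, comparisons

# Two clusters of toy "subsequences"; the whole TTTT cluster is pruned.
clusters = {"AAAA": ["AAAT", "AAAA"], "TTTT": ["TTTG", "TTTT"]}
hits, comparisons = search("AAAC", clusters, cutoff=1)
```

In a real system the d(rep, m) values would be precomputed at database-build time, so pruning a member costs only a subtraction.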

  18. Integrating unified medical language system and association mining techniques into relevance feedback for biomedical literature search.

    PubMed

    Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Massanari, R Michael

    2016-07-19

    Finding highly relevant articles from biomedical databases is challenging not only because it is often difficult to accurately express a user's underlying intention through keywords but also because a keyword-based query normally returns a long list of hits with many citations being unwanted by the user. This paper proposes a novel biomedical literature search system, called BiomedSearch, which supports complex queries and relevance feedback. The system employed association mining techniques to build a k-profile representing a user's relevance feedback. More specifically, we developed a weighted interest measure and an association mining algorithm to find the strength of association between a query and each concept in the article(s) selected by the user as feedback. The top concepts were utilized to form a k-profile used for the next-round search. BiomedSearch relies on Unified Medical Language System (UMLS) knowledge sources to map text files to standard biomedical concepts. It was designed to support queries at any level of complexity. A prototype of the BiomedSearch software was built and preliminarily evaluated using the Genomics data from the TREC (Text Retrieval Conference) 2006 Genomics Track. Initial experimental results indicated that BiomedSearch increased the mean average precision (MAP) for a set of queries. With UMLS and association mining techniques, BiomedSearch can effectively utilize users' relevance feedback to improve the performance of biomedical literature search.
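A toy version of building a k-profile from feedback articles might look like the following. The interest measure here (per-document concept frequency damped by a background commonness score) is an invented stand-in for the paper's weighted interest measure, and the concept sets are made up.

```python
from collections import Counter

def k_profile(feedback_docs, background, k=2):
    """Rank concepts from user-selected articles and keep the top k as the profile."""
    counts = Counter(c for doc in feedback_docs for c in set(doc))

    def interest(c):
        # frequent in the feedback, rare overall -> high interest (illustrative measure)
        return counts[c] / len(feedback_docs) / (1.0 + background.get(c, 0.0))

    return sorted(counts, key=interest, reverse=True)[:k]

feedback = [
    {"asthma", "bronchodilator", "child"},     # concepts in article 1
    {"asthma", "bronchodilator", "steroid"},   # concepts in article 2
]
background = {"child": 0.9, "asthma": 0.5}     # how common each concept is overall
profile = k_profile(feedback, background, k=2)  # expanded into the next-round query
```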

  19. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    NASA Astrophysics Data System (ADS)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution with multiple advantages over a single agent in efficiency and exploration time. This paper addresses the challenge of identifying a target in a given environment using a team of mobile agents by proposing a novel method for mapping and moving agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments partitioned into hexagons, mobile agents follow an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has recently developed many applications that would benefit from the approach presented in this work, including search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random search algorithm in combination with hexagonal partitioning is simulated and analyzed, and the advantages of this approach are presented and discussed.
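A single-agent sketch of Tabu Random movement on a hexagonal partition follows, using axial hex coordinates so every cell has six equidistant neighbours. The bounded region, step budget, and escape rule are illustrative choices, not the paper's exact algorithm (which coordinates multiple agents).

```python
import random

# Axial-coordinate neighbour offsets: six equidistant moves on a hex grid.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
RADIUS = 2   # search region: hexagon of radius 2 around the origin (19 cells)

def in_region(cell):
    q, r = cell
    return abs(q) <= RADIUS and abs(r) <= RADIUS and abs(q + r) <= RADIUS

def tabu_random_search(start, target, max_steps=5000, seed=0):
    """Move to a random non-tabu neighbour; tabu visited cells to keep exploring."""
    rng = random.Random(seed)
    pos, tabu, path = start, {start}, [start]
    for _ in range(max_steps):
        if pos == target:
            return path
        moves = [(pos[0] + dq, pos[1] + dr) for dq, dr in HEX_DIRS]
        moves = [m for m in moves if in_region(m)]
        free = [m for m in moves if m not in tabu]
        pos = rng.choice(free) if free else rng.choice(moves)  # escape when boxed in
        tabu.add(pos)
        path.append(pos)
    return None

path = tabu_random_search((0, 0), (2, -1))
```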

  20. Software Applications to Access Earth Science Data: Building an ECHO Client

    NASA Astrophysics Data System (ADS)

    Cohen, A.; Cechini, M.; Pilone, D.

    2010-12-01

    Historically, developing an ECHO (NASA’s Earth Observing System (EOS) ClearingHOuse) client required interaction with its SOAP API. SOAP, as a framework for web service communication, has numerous advantages for enterprise applications and Java/C# type programming languages. However, as interest has grown in quick development cycles and more intriguing “mashups,” ECHO has seen the SOAP API lose its appeal. To address these changing needs, ECHO has introduced two new interfaces facilitating simple access to its metadata holdings. The first interface is built upon the OpenSearch format and the ESIP Federated Search framework. The second is built upon the Representational State Transfer (REST) architecture. Using the REST and OpenSearch APIs to access ECHO makes development with modern languages much simpler and more feasible. Client developers can leverage the simple interaction with ECHO to focus more of their time on the advanced functionality they present to users. To demonstrate the simplicity of developing with the REST API, participants will be led through a hands-on experience in which they develop an ECHO client that performs the following actions: login, provider discovery, provider-based dataset discovery, granule discovery with dataset, temporal, and spatial constraints, and online data access.
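To convey why REST/OpenSearch clients are quick to write, here is a sketch of assembling an OpenSearch-style granule query as a plain URL. The base URL and every parameter name below are placeholders invented for illustration, not ECHO's documented API; a real client would read the service's OpenSearch description document for the actual endpoint and parameter names.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- NOT a real ECHO URL; see the service's OpenSearch
# description document for the genuine one.
BASE = "https://echo.example.org/opensearch/granules.atom"

def granule_query(dataset, temporal=None, bbox=None, page=1, per_page=10):
    """Build a granule-discovery URL with dataset, temporal and spatial constraints."""
    params = {"datasetId": dataset, "page": page, "numberOfResults": per_page}
    if temporal:
        params["temporal"] = ",".join(temporal)           # (start, end), ISO 8601
    if bbox:
        params["boundingBox"] = ",".join(map(str, bbox))  # W,S,E,N
    return BASE + "?" + urlencode(params)

url = granule_query("MOD021KM",
                    temporal=("2010-01-01T00:00:00Z", "2010-01-31T23:59:59Z"),
                    bbox=(-180, -90, 180, 90))
```

The whole client reduces to string building plus one HTTP GET, which is the contrast with SOAP the abstract is drawing.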

  1. [Near-infrared spectroscopy as an auxiliary tool in the study of child development].

    PubMed

    Oliveira, Suelen Rosa de; Machado, Ana Carolina Cabral de Paula; Miranda, Débora Marques de; Campos, Flávio Dos Santos; Ribeiro, Cristina Oliveira; Magalhães, Lívia de Castro; Bouzada, Maria Cândida Ferrarez

    2015-01-01

    To investigate the applicability of Near-Infrared Spectroscopy (NIRS), a cortical hemodynamic assessment tool, as an aid in the study of child development. The search was conducted in the PubMed and Lilacs databases using the following keywords: "psychomotor performance/child development/growth and development/neurodevelopment/spectroscopy/near-infrared" and their equivalents in Portuguese and Spanish. The review was performed according to criteria established by Cochrane, and the search was limited to 2003 to 2013. Articles in English, Portuguese and Spanish were included. Of the 484 articles, 19 were selected: 17 cross-sectional and two longitudinal studies, published in non-Brazilian journals. The analyzed articles were grouped into functional and non-functional studies of child development. Functional studies addressed object processing, social skills development, language and cognitive development. Non-functional studies discussed the relationship between cerebral oxygen saturation and neurological outcomes, and the comparison between the cortical hemodynamic responses of preterm and term newborns. NIRS has become an increasingly feasible alternative and a potentially useful technique for studying functional activity of the infant brain. Copyright © 2015 Associação de Pediatria de São Paulo. Published by Elsevier Editora Ltda. All rights reserved.

  2. NEWSdm: Nuclear Emulsions for WIMP Search with directional measurement

    NASA Astrophysics Data System (ADS)

    Di Crescenzo, A.

    2017-12-01

    Direct dark matter searches are nowadays one of the most exciting research topics. Several experimental efforts are concentrated on the development, construction, and operation of detectors looking for the scattering of target nuclei with Weakly Interacting Massive Particles (WIMPs). The measurement of the direction of WIMP-induced nuclear recoils is a challenging strategy to extend dark matter searches beyond the neutrino floor and provide an unambiguous signature of the detection of Galactic dark matter. Current directional experiments are based on gas TPCs, whose sensitivity is strongly limited by the small achievable detector mass. We present an innovative directional experiment based on a solid target made of newly developed nuclear emulsions and read-out systems reaching a position resolution of the order of 10 nm.

  3. Puppet to the Stars: Using Drama and Puppetry to Teach Astrobiology

    NASA Astrophysics Data System (ADS)

    Berkowitz, J.

    2010-04-01

    Based on my non-fiction book Out of This World: The Amazing Search for an Alien Earth (Kids Can Press, 2009), I've developed a successful puppet show that introduces students in grades two to six to the current search for living exoplanets.

  4. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation.
By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained. Its performance is compared to other existing methods for financial option valuation. In the third and final problem, the TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new-generation passenger vehicles. These uncertainties are analyzed in order to determine the conditions under which new-generation vehicles can compete with established technologies.
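The tabu-search-plus-Monte-Carlo coupling can be sketched with a deliberately simple example: candidate solutions (here a single exercise-threshold parameter for a toy option-like payoff) are scored by Monte Carlo simulation, while a tabu list keeps the search from cycling through recently visited candidates. The payoff model, parameters, and neighbourhood are invented for illustration and are not the thesis's valuation model.

```python
import random

def mc_value(threshold, n=4000, seed=7):
    """Monte Carlo estimate of expected payoff under an exercise-threshold policy."""
    rng = random.Random(seed)          # fixed seed: common random numbers across candidates
    total = 0.0
    for _ in range(n):
        price = 100 * (1 + rng.gauss(0, 0.2))          # toy terminal-price scenario
        total += max(price - 100, 0) if price >= threshold else 0.0
    return total / n

def tsmc(candidates, iters=20, tabu_len=3):
    """Tabu search whose objective is itself a Monte Carlo estimate."""
    current = candidates[0]
    best, best_val = current, mc_value(current)
    tabu = [current]
    for _ in range(iters):
        moves = [c for c in candidates if c not in tabu]
        if not moves:
            break
        current = max(moves, key=mc_value)             # best non-tabu neighbour
        tabu = (tabu + [current])[-tabu_len:]          # forget old tabu entries
        val = mc_value(current)
        if val > best_val:
            best, best_val = current, val
    return best, best_val

best, best_val = tsmc(candidates=[90, 95, 100, 105, 110, 115])
```

Fixing the simulation seed (common random numbers) makes candidate comparisons consistent, which matters when the objective is itself a noisy estimate.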

  5. On the use of higher order wave forms in the search for gravitational waves emitted by compact binary coalescences

    NASA Astrophysics Data System (ADS)

    McKechan, David J. A.

    2010-11-01

    This thesis concerns the use, in gravitational wave data analysis, of higher order waveform models of the gravitational radiation emitted by compact binary coalescences. We begin with an introductory chapter that includes an overview of the theory of general relativity, gravitational radiation and ground-based interferometric gravitational wave detectors. We then discuss, in Chapter 2, the gravitational waves emitted by compact binary coalescences, with an explanation of higher order waveforms and how they differ from leading order waveforms; we also introduce the post-Newtonian formalism. In Chapter 3 the method and results of a gravitational wave search for low mass compact binary coalescences using a subset of LIGO's 5th science run data are presented, and in the subsequent chapter we examine how one could use higher order waveforms in such analyses. We follow the development of a new search algorithm that incorporates higher order waveforms with promising results for detection efficiency and parameter estimation. In Chapter 5, a new method of windowing time-domain waveforms that offers benefit to gravitational wave searches is presented. The final chapter covers the development of a game designed as an outreach project to raise public awareness and understanding of the search for gravitational waves.

  6. Using a Simple Knowledge Organization System to facilitate Catalogue and Search for the ESA CCI Open Data Portal

    NASA Astrophysics Data System (ADS)

    Wilson, Antony; Bennett, Victoria; Donegan, Steve; Juckes, Martin; Kershaw, Philip; Petrie, Ruth; Stephens, Ag; Waterfall, Alison

    2016-04-01

    The ESA Climate Change Initiative (CCI) is a €75m programme that runs from 2009-2016, with a goal to provide stable, long-term, satellite-based essential climate variable (ECV) data products for climate modellers and researchers. As part of the CCI, ESA have funded the Open Data Portal project to establish a central repository that brings together the data from these multiple sources and makes it available in a consistent way, in order to maximise its dissemination amongst the international user community. Search capabilities are a critical component to attaining this goal. To this end, the project is providing dataset-level metadata in the form of ISO 19115 records served via a standard OGC CSW interface. In addition, the Open Data Portal is re-using the search system from the Earth System Grid Federation (ESGF), successfully applied to support CMIP5 (5th Coupled Model Intercomparison Project) and obs4MIPs. This uses a tightly defined controlled vocabulary of metadata terms, the DRS (Data Reference Syntax), which encompasses different aspects of the data. This system has facilitated the construction of a powerful faceted search interface enabling users to discover data at the individual file level of granularity through ESGF's web portal frontend. The use of a consistent set of model experiments for CMIP5 allowed the definition of a uniform DRS for all model data served from ESGF. For CCI, however, there are thirteen ECVs, each of which is derived from multiple sources and different science communities, resulting in highly heterogeneous metadata. An analysis of the concepts in use has been undertaken, with the aim of producing a CCI DRS which could provide a single authoritative source for cataloguing and searching the CCI data for the Open Data Portal.
The use of SKOS (Simple Knowledge Organization System) and OWL (Web Ontology Language) to represent the DRS is a natural fit, providing controlled vocabularies as well as a way to represent relationships between similar terms used in different ECVs. An iterative approach has been adopted for the model development, working closely with domain experts and drawing on practical experience with content in the input datasets. Tooling has been developed to enable the definition of vocabulary terms via a simple spreadsheet format, which can then be automatically converted into Turtle notation and uploaded to the CCI DRS vocabulary service. With a baseline model established, work is underway to develop an ingestion pipeline to import validated search metadata into the ESGF and OGC CSW search services. In addition to the search terms indexed into the ESGF search system, ISO 19115 records will also be tagged during this process with search terms from the data model. In this way it will be possible to construct a faceted search user interface for the Portal which can yield linked search results for data at both the file and dataset levels of granularity. It is hoped that this will also provide a rich range of content for third-party organisations wishing to incorporate access to CCI data in their own applications and services.

  7. A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem

    NASA Astrophysics Data System (ADS)

    Pourrahimian, Parinaz

    2017-11-01

    Automated Guided Vehicle Systems (AGVS) provide the flexibility and automation demanded by Flexible Manufacturing Systems (FMS). However, with the growing concern over responsible management of resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum point. A new Tabu Search (TS) has been developed and combined with a GA to refine the newly generated individuals. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compares the objective function of the proposed MA with that of the GA. The results showed that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.
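The GA-plus-tabu structure of a memetic algorithm can be sketched on a toy version of the partitioning problem: assign zones with given workloads to a fixed number of tandem partitions so that the maximum partition workload is minimized. The workloads, operator rates, and tabu design below are illustrative choices, not the paper's.

```python
import random

WORK = [4, 7, 2, 9, 5, 3, 6, 1]          # toy traffic workload of each zone
K = 3                                     # number of tandem partitions

def max_load(assign):
    """Objective to minimize: the workload of the busiest partition."""
    loads = [0.0] * K
    for z, p in enumerate(assign):
        loads[p] += WORK[z]
    return max(loads)

def tabu_refine(assign, rng, iters=30, tabu_len=5):
    """Small tabu search: move one zone to a better partition, tabu the reverse move."""
    assign, tabu = list(assign), []
    for _ in range(iters):
        z = rng.randrange(len(WORK))
        best_p, best_c = assign[z], max_load(assign)
        for p in range(K):
            if p == assign[z] or (z, p) in tabu:
                continue
            trial = list(assign); trial[z] = p
            if max_load(trial) < best_c:
                best_p, best_c = p, max_load(trial)
        tabu = (tabu + [(z, assign[z])])[-tabu_len:]
        assign[z] = best_p
    return assign

def memetic(pop_size=10, gens=15, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randrange(K) for _ in WORK] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=max_load)
        parents = pop[: pop_size // 2]               # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(WORK))
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(len(WORK))] = rng.randrange(K)
            children.append(tabu_refine(child, rng)) # memetic local-search step
        pop = parents + children
    return min(pop, key=max_load)

best = memetic()
```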

  8. Developing a Domain Ontology: the Case of Water Cycle and Hydrology

    NASA Astrophysics Data System (ADS)

    Gupta, H.; Pozzi, W.; Piasecki, M.; Imam, B.; Houser, P.; Raskin, R.; Ramachandran, R.; Martinez Baquero, G.

    2008-12-01

    A semantic web ontology enables semantic data integration and semantic smart searching. Several organizations have attempted to implement smart registration, integration or searching using ontologies. These are the NOESIS (NSF project: LEAD) and HydroSeek (NSF project: CUAHSI HIS) data discovery engines and the NSF project GEON. All three applications use ontologies to discover data from multiple sources and projects. The NASA WaterNet project was established to identify creative, innovative ways to bridge NASA research results to real-world applications, linking decision support needs to available data, observations, and modeling capability. WaterNet utilized the smart query tool Noesis as a testbed to test whether different ontologies (and different catalog searches) could be combined to match resources with user needs. NOESIS contains the upper-level SWEET ontology, which accepts plug-in domain ontologies to refine user search queries, reducing the burden of multiple keyword searches. Another smart search interface is HydroSeek, developed for CUAHSI, which uses a multi-layered concept search ontology, tagging variable names from any number of data sources to specific leaf and higher-level concepts on which the search is executed. This approach has proven to be quite successful in mitigating semantic heterogeneity, as the user does not need to know the semantic specifics of each data source system but just uses a set of common keywords to discover the data for a specific temporal and geospatial domain. This presentation will show that tests with Noesis and HydroSeek lead to the conclusion that the construction of a complex and highly heterogeneous water cycle ontology requires multiple ontology modules. To illustrate the complexity and heterogeneity of a water cycle ontology, HydroSeek successfully utilizes WaterOneFlow to integrate data across multiple different data collections, such as USGS NWIS.
However, different methodologies are employed by the Earth Science, Hydrological, and Hydraulic Engineering communities, and each community employs models that require different input data. If a sub-domain ontology is created for each of these, describing water balance calculations, then the resulting structure of the semantic network describing these various terms can be rather complex, heterogeneous, and overlapping, and will require "mapping" between equivalent terms in the ontologies, along with the development of an upper-level conceptual or domain ontology to utilize and link to those already in existence.

  9. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    NASA Astrophysics Data System (ADS)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence to maximize flexural stiffness and minimize the springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
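The swap-repair idea can be sketched on a toy single-objective version of the problem. Because the bending contribution of ply k scales with (z_k³ − z_{k−1}³), moving stiff plies outward raises the flexural stiffness D11; the per-angle stiffness values below are illustrative stand-ins for the transformed reduced stiffnesses Q̄11(θ), and the hill-climb over pairwise swaps is a simplification of the paper's PSA/LSA.

```python
from itertools import combinations

Q11 = {0: 150.0, 45: 60.0, 90: 10.0}     # illustrative per-angle stiffness values
PLY_T = 0.125                             # ply thickness, mm

def d11(layup):
    """Bending stiffness D11 = sum_k Q11(theta_k) * (z_k^3 - z_{k-1}^3) / 3."""
    n = len(layup)
    z = [(-n / 2 + k) * PLY_T for k in range(n + 1)]   # ply interface coordinates
    return sum(Q11[a] * (z[k + 1] ** 3 - z[k] ** 3) / 3
               for k, a in enumerate(layup))

def swap_search(layup):
    """Repeatedly apply the best-improving pairwise swap until no swap raises D11."""
    layup, improved = list(layup), True
    while improved:
        improved = False
        for i, j in combinations(range(len(layup)), 2):
            trial = list(layup)
            trial[i], trial[j] = trial[j], trial[i]
            if d11(trial) > d11(layup) + 1e-12:
                layup, improved = trial, True
    return layup

best_layup = swap_search([90, 45, 0, 0, 45, 90])   # stiff 0-deg plies migrate outward
```

Since the objective is a sum of per-position terms, any 2-swap local optimum here matches stiff plies to the outermost (largest-weight) positions, which is the classical result the swap repair exploits.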

  10. PubMed and beyond: a survey of web tools for searching biomedical literature

    PubMed Central

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076

  11. Age-related changes in conjunctive visual search in children with and without ASD.

    PubMed

    Iarocci, Grace; Armstrong, Kimberly

    2014-04-01

    Visual-spatial strengths observed among people with autism spectrum disorder (ASD) may be associated with increased efficiency of selective attention mechanisms such as visual search. In a series of studies, researchers examined the visual search of targets that share features with distractors in a visual array and concluded that people with ASD showed enhanced performance on visual search tasks. However, methodological limitations, the small sample sizes, and the lack of developmental analysis have tempered the interpretations of these results. In this study, we specifically addressed age-related changes in visual search. We examined conjunctive visual search in groups of children with (n = 34) and without ASD (n = 35) at 7-9 years of age when visual search performance is beginning to improve, and later, at 10-12 years, when performance has improved. The results were consistent with previous developmental findings; 10- to 12-year-old children were significantly faster visual searchers than their 7- to 9-year-old counterparts. However, we found no evidence of enhanced search performance among the children with ASD at either the younger or older ages. More research is needed to understand the development of visual search in both children with and without ASD. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.

  12. Protein structural similarity search by Ramachandran codes

    PubMed Central

    Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang

    2007-01-01

    Background Protein structural data have increased exponentially, such that fast and accurate tools are necessary for structural similarity search. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed still cannot match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE), and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
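The linear-encoding step can be sketched as follows: each residue's (φ, ψ) dihedral pair is assigned the letter of its nearest cluster centre on the Ramachandran map, turning a 3-D structure into a text string a sequence aligner can handle. The three centres below are rough illustrative regions, not SARST's actual nearest-neighbor clustering.

```python
import math

# Illustrative Ramachandran cluster centres (degrees); SARST derives its own.
CENTRES = {"H": (-60.0, -45.0),    # roughly the alpha-helical region
           "E": (-120.0, 130.0),   # roughly the beta-sheet region
           "L": (60.0, 45.0)}      # roughly the left-handed/loop region

def angdiff(a, b):
    """Angular separation, handling wrap-around at +/-180 degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def encode(dihedrals):
    """Map each (phi, psi) pair to the letter of the nearest cluster centre."""
    code = []
    for phi, psi in dihedrals:
        letter = min(CENTRES, key=lambda c: math.hypot(
            angdiff(phi, CENTRES[c][0]), angdiff(psi, CENTRES[c][1])))
        code.append(letter)
    return "".join(code)

helixish = [(-57, -47), (-63, -41), (-60, -45)]   # toy backbone dihedrals
sheetish = [(-119, 127), (-130, 140)]
s = encode(helixish + sheetish)
```

Once structures are strings, off-the-shelf sequence search (with a suitable substitution matrix) does the heavy lifting, which is where the speed comes from.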

  13. Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?

    PubMed Central

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114

  14. Association between Search Behaviors and Disease Prevalence Rates at 18 U.S. Children's Hospitals.

    PubMed

    Daniel, Dennis; Wolbrink, Traci; Logvinenko, Tanya; Harper, Marvin; Burns, Jeffrey

    2017-10-01

    Background Usage of online resources by clinicians in training and practice can provide insight into knowledge gaps and inform development of decision support tools. Although online information seeking is often driven by encountered patient problems, the relationship between disease prevalence and search rate has not been previously characterized. Objective This article aimed to (1) identify topics frequently searched by pediatric clinicians using UpToDate (http://www.uptodate.com) and (2) explore the association between disease prevalence rate and search rate using data from the Pediatric Health Information System. Methods We identified the most common search queries and resources most frequently accessed on UpToDate for a cohort of 18 children's hospitals during calendar year 2012. We selected 64 of the most frequently searched diseases and matched ICD-9 data from the PHIS database during the same time period. Using linear regression, we explored the relationship between clinician query rate and disease prevalence rate. Results The hospital cohort submitted 1,228,138 search queries across 592,454 sessions. The majority of search sessions focused on a single search topic. We identified no consistent overall association between disease prevalence and search rates. Diseases where search rate was substantially higher than prevalence rate were often infectious or immune/rheumatologic conditions, involved potentially complex diagnosis or management, and carried risk of significant morbidity or mortality. None of the examined diseases showed a decrease in search rate associated with increased disease prevalence rates. Conclusion This is one of the first medical learning needs assessments to use large-scale, multisite data to identify topics of interest to pediatric clinicians, and to examine the relationship between disease prevalence and search rate for a set of pediatric diseases. 
Overall, disease search rate did not appear to be associated with hospital disease prevalence rates based on ICD-9 codes. However, some diseases were consistently searched at a higher rate than their prevalence rate; many of these diseases shared common features.
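
    The study's central analysis, a linear regression of clinician search rate on disease prevalence rate, can be sketched with invented numbers. The rates below are placeholders, not PHIS data; the point of the sketch is that a near-zero fitted slope corresponds to the paper's finding of no consistent association.

```python
# Hypothetical per-disease rates (per 1,000 encounters); the study fit a
# linear regression of clinician search rate on disease prevalence rate.
prevalence = [1.0, 2.0, 4.0, 8.0, 16.0]
search_rate = [5.0, 4.0, 6.0, 5.5, 5.0]

# Ordinary least squares for a single predictor.
n = len(prevalence)
mx = sum(prevalence) / n
my = sum(search_rate) / n
slope = sum((x - mx) * (y - my) for x, y in zip(prevalence, search_rate)) \
        / sum((x - mx) ** 2 for x in prevalence)
intercept = my - slope * mx

# A slope near zero mirrors the paper's finding: search rate was largely
# unrelated to prevalence, and never decreased as prevalence rose.
print(round(slope, 3), round(intercept, 2))  # → 0.019 4.98
```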

  15. Characterizing the phylogenetic tree-search problem.

    PubMed

    Money, Daniel; Whelan, Simon

    2012-03-01

    Phylogenetic trees are important in many areas of biological research, ranging from systematic studies to the methods used for genome annotation. Finding the best scoring tree under any optimality criterion is an NP-hard problem, which necessitates the use of heuristics for tree-search. Although tree-search plays a major role in obtaining a tree estimate, there remains a limited understanding of its characteristics and how the elements of the statistical inferential procedure interact with the algorithms used. This study begins to answer some of these questions through a detailed examination of maximum likelihood tree-search on a wide range of real genome-scale data sets. We examine all 10,395 trees for each of the 106 genes of an eight-taxa yeast phylogenomic data set, then apply different tree-search algorithms to investigate their performance. We extend our findings by examining two larger genome-scale data sets and a large disparate data set that has been previously used to benchmark the performance of tree-search programs. We identify several broad trends occurring during tree-search that provide an insight into the performance of heuristics and may, in the future, aid their development. These trends include a tendency for the true maximum likelihood (best) tree to also be the shortest tree in terms of branch lengths, a weak tendency for tree-search to recover the best tree, and a tendency for tree-search to encounter fewer local optima in genes that have a high information content. When examining current heuristics for tree-search, we find that nearest-neighbor-interchange performs poorly, and frequently finds trees that are significantly different from the best tree. In contrast, subtree-pruning-and-regrafting tends to perform well, nearly always finding trees that are not significantly different to the best tree. 
Finally, we demonstrate that the precise implementation of a tree-search strategy, including when and where parameters are optimized, can change the character of tree-search, and that good strategies for tree-search may combine existing tree-search programs.
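
    The figure of 10,395 trees for eight taxa follows from the standard count of unrooted binary tree topologies, (2n - 5)!! for n taxa, which is why exhaustive enumeration as performed in this study is feasible only for very small n and heuristic tree-search is needed beyond it:

```python
def num_unrooted_trees(n_taxa):
    """Number of distinct unrooted binary trees: (2n - 5)!! for n >= 3."""
    count = 1
    for k in range(3, 2 * n_taxa - 4, 2):  # 3 * 5 * ... * (2n - 5)
        count *= k
    return count

# Eight taxa give the 10,395 topologies the study enumerated exhaustively;
# the count grows super-exponentially with the number of taxa.
for n in (8, 10, 20):
    print(n, num_unrooted_trees(n))
```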

  16. Full Text and Figure Display Improves Bioscience Literature Search

    PubMed Central

    Divoli, Anna; Wooldridge, Michael A.; Hearst, Marti A.

    2010-01-01

    When reading bioscience journal articles, many researchers focus attention on the figures and their captions. This observation led to the development of the BioText literature search engine [1], a freely available Web-based application that allows biologists to search over the contents of Open Access Journals, and see figures from the articles displayed directly in the search results. This article presents a qualitative assessment of this system in the form of a usability study with 20 biologist participants using and commenting on the system. 19 out of 20 participants expressed a desire to use a bioscience literature search engine that displays articles' figures alongside the full text search results. 15 out of 20 participants said they would use a caption search and figure display interface either frequently or sometimes, while 4 said rarely and 1 said undecided. 10 out of 20 participants said they would use a tool for searching the text of tables and their captions either frequently or sometimes, while 7 said they would use it rarely if at all, 2 said they would never use it, and 1 was undecided. This study found evidence, supporting results of an earlier study, that bioscience literature search systems such as PubMed should show figures from articles alongside search results. It also found evidence that full text and captions should be searched along with the article title, metadata, and abstract. Finally, for a subset of users and information needs, allowing for explicit search within captions for figures and tables is a useful function, but it is not entirely clear how to cleanly integrate this within a more general literature search interface. Such a facility supports Open Access publishing efforts, as it requires access to full text of documents and the lifting of restrictions in order to show figures in the search interface. PMID:20418942

  17. The role of extra-foveal processing in 3D imaging

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.

    2017-03-01

    The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (the visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more detectable than a larger mass-like signal in 2D search, but its detectability decreases markedly (relative to the larger signal) in the 3D search task. Using measurements of observer detectability as a function of retinal eccentricity together with observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).

  18. Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Beaumont, Bruce; Duerr, Ruth; Hua, Hook

    2009-01-01

    This slide presentation reviews a Space-time query system that has been developed to assist the user in finding Earth science data that fulfills the researchers needs. It reviews the reasons why finding Earth science data can be so difficult, and explains the workings of the Space-Time Query with OpenSearch and how this system can assist researchers in finding the required data, It also reviews the developments with client server systems.
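
    A space-time query under OpenSearch conventions boils down to filling a URL template with a bounding box and a time window. The endpoint below is an invented placeholder; the `geo:box`, `time:start`, and `time:end` parameter names follow the OpenSearch Geo and Time extensions, though any given server's template should be consulted rather than assumed.

```python
from urllib.parse import urlencode

# Hypothetical OpenSearch endpoint for an Earth science granule search.
endpoint = "https://example.gov/opensearch/granules"

def space_time_query(keyword, west, south, east, north, start, end):
    """Build a query URL from a keyword, a lon/lat box, and a time window."""
    params = {
        "searchTerms": keyword,
        "geo:box": f"{west},{south},{east},{north}",  # west,south,east,north
        "time:start": start,                          # ISO 8601 timestamps
        "time:end": end,
    }
    return endpoint + "?" + urlencode(params)

url = space_time_query("sea surface temperature",
                       -80, 25, -60, 45,
                       "2009-01-01T00:00:00Z", "2009-01-31T23:59:59Z")
print(url)
```

    Federation then consists of sending the same filled-in template to each participating data center and merging the results.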

  19. Visualization of ocean forecast in BYTHOS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center has constantly searched for new ideas for developing and implementing innovative methods concerning the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online data management and visualization systems have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System (CYCOFOS) provides a variety of operational predictions, such as ultra-high, high, and medium resolution ocean forecasts in the Levantine Basin, offshore and coastal sea state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize, and download oceanographic data online and in real time. A recent improvement of the BYTHOS system is its extension with access to, and visualization of, CYCOFOS data, overlaying forecast fields and observational data. The CYCOFOS data are stored on an OPENDAP server in netCDF format; PHP and Python scripts were developed to search, process, and visualize them. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to search for the required forecast field by type, parameter, region, level, and time. It also provides the ability to overlay different forecast and observational data, which can be used for complex analysis of sea basin conditions.
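
    The forecast-field lookup described (search by type, parameter, region, level, and time) amounts to filtering a catalog of netCDF fields on their metadata. The catalog entries and OPeNDAP paths below are invented placeholders for illustration:

```python
from datetime import datetime

# Toy catalog of forecast fields; real BYTHOS reads such metadata from an
# OPENDAP server holding netCDF files. URLs and entries are invented.
CATALOG = [
    {"type": "ocean", "parameter": "temperature", "region": "Levantine",
     "level": 0, "time": datetime(2016, 8, 1),
     "url": "http://example.org/dap/t0.nc"},
    {"type": "wave", "parameter": "height", "region": "Levantine",
     "level": 0, "time": datetime(2016, 8, 1),
     "url": "http://example.org/dap/h0.nc"},
]

def find_field(**criteria):
    """Return URLs of catalog entries matching every given criterion."""
    return [e["url"] for e in CATALOG
            if all(e[k] == v for k, v in criteria.items())]

print(find_field(type="ocean", parameter="temperature"))
```

    Overlaying forecast and observational layers is then a rendering concern (handled by MapServer in BYTHOS), separate from this metadata search step.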

  20. Astronomical Software Directory Service

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Payne, Harry; Hayes, Jeffrey

    1997-01-01

    With the support of NASA's Astrophysics Data Program (NRA 92-OSSA-15), we have developed the Astronomical Software Directory Service (ASDS): a distributed, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching. Users are performing about 400 searches per month. A new aspect of our service is the inclusion of telescope and instrumentation manuals, which prompted us to change the name to the Astronomical Software and Documentation Service. ASDS was originally conceived to serve two purposes: to provide a useful Internet service in an area of expertise of the investigators (astronomical software), and as a research project to investigate various architectures for searching through a set of documents distributed across the Internet. Two of the co-investigators were then installing and maintaining astronomical software as their primary job responsibility. We felt that a service which incorporated our experience in this area would be more useful than a straightforward listing of software packages. The original concept was for a service based on the client/server model, which would function as a directory/referral service rather than as an archive. For performing the searches, we began our investigation with a decision to evaluate the Isite software from the Center for Networked Information Discovery and Retrieval (CNIDR). This software was intended as a replacement for Wide-Area Information Service (WAIS), a client/server technology for performing full-text searches through a set of documents. Isite had some additional features that we considered attractive, and we enjoyed the cooperation of the Isite developers, who were happy to have ASDS as a demonstration project. 
We ended up staying with the software throughout the project, making modifications to take advantage of new features as they came along, as well as influencing the software development. The Web interface to the search engine is provided by a gateway program written in C++ by a consultant to the project (A. Warnock).

  1. Using a terminology server and consumer search phrases to help patients find physicians with particular expertise.

    PubMed

    Cole, Curtis L; Kanter, Andrew S; Cummens, Michael; Vostinar, Sean; Naeymi-Rad, Frank

    2004-01-01

    To design and implement a real-world application using a terminology server to assist patients and physicians who use common-language search terms to find specialist physicians with a particular clinical expertise. Terminology servers have been developed to help users encode information using complicated structured vocabularies during data-entry tasks, such as recording clinical information. We describe a methodology using Personal Health Terminology(TM) and a SNOMED CT-based hierarchical concept server, and the construction of a pilot mediated-search engine to assist users who query technical data in vernacular language. This approach, which combines theoretical and practical requirements, provides a useful example of concept-based searching for physician referrals.
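
    The mediated-search idea can be sketched in two lookups: map a consumer phrase to a controlled concept, then find physicians indexed by that concept rather than by free text. Everything below is a toy: the physician names are invented, and although the two codes shown are modeled on SNOMED CT identifiers, treat them as illustrative placeholders rather than a vetted mapping.

```python
# Consumer vernacular -> controlled concept (illustrative entries only).
vernacular_to_concept = {
    "heart attack": "22298006",         # myocardial infarction
    "high blood pressure": "38341003",  # hypertension
}

# Physicians indexed by the concepts they have expertise in (invented).
physicians_by_concept = {
    "22298006": ["Dr. Adams (cardiology)"],
    "38341003": ["Dr. Baker (internal medicine)", "Dr. Chen (nephrology)"],
}

def find_specialists(consumer_phrase):
    """Translate a lay phrase to a concept, then look up specialists."""
    concept = vernacular_to_concept.get(consumer_phrase.lower())
    return physicians_by_concept.get(concept, [])

print(find_specialists("Heart attack"))
```

    A real terminology server adds what this sketch lacks: synonym coverage, hierarchy traversal (so a search for a parent concept finds subspecialists), and maintenance of the vocabulary over time.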

  2. Guidelines for Conducting an Ethnic Heritage Search.

    ERIC Educational Resources Information Center

    Williams, Maxine Patrick

    Based on the work of a 22-member research team in the San Diego Community College District, this booklet offers guidelines for developing cultural awareness and presents instruments for conducting an ethnic heritage search, i.e., a systematic examination of a culture to, for example, reveal reasons for customs or practices or clarify the modes of…

  3. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    PubMed

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms that perform splitting and scheduling simultaneously, with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid local search factors are implemented. We developed algorithms that adapt the results of local search back into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
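
    The random-key representation mentioned above has a standard decoding step that is worth seeing concretely: each gene is a real number, and sorting the genes yields a permutation (here, a job sequence). This is a generic sketch of the representation, not the paper's exact GAspLA scheme, which additionally handles job splitting and machine assignment.

```python
import random

def decode(chromosome):
    """Return the job sequence implied by a random-key chromosome:
    jobs are ordered by their (ascending) key values."""
    return sorted(range(len(chromosome)), key=lambda j: chromosome[j])

random.seed(1)  # fixed seed so the example is reproducible
keys = [random.random() for _ in range(5)]
print([round(k, 2) for k in keys], decode(keys))
```

    The appeal of random keys is that standard crossover and mutation on real vectors always decode to a valid permutation; the difficulty the paper addresses is writing a local-search improvement *back* into the keys with minimal disturbance to the other genes.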

  4. Sagace: A web-based search engine for biomedical databases in Japan

    PubMed Central

    2012-01-01

    Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816

  5. Development of a Library Search-Based Screening System for 3,4-Methylenedioxymethamphetamine in Ecstasy Tablets Using a Portable Near-Infrared Spectrometer.

    PubMed

    Tsujikawa, Kenji; Yamamuro, Tadashi; Kuwayama, Kenji; Kanamori, Tatsuyuki; Iwata, Yuko T; Miyamoto, Kazuna; Kasuya, Fumiyo; Inoue, Hiroyuki

    2016-09-01

    This is the first report on the development of a library search-based screening system for 3,4-methylenedioxymethamphetamine (MDMA) in ecstasy tablets using a portable near-infrared (NIR) spectrometer. The spectrum library consisted of spectra originating from standard substances as well as mixtures of MDMA hydrochloride (MDMA-HCl) and diluents. The raw NIR spectra were mathematically pretreated, and a library search was then performed using the correlation coefficient. To enhance discrimination ability, the wavelength range used for the library search was limited. Mixtures of MDMA-HCl and diluents were used to establish criteria for judging a sample MDMA-positive or MDMA-negative, and confiscated MDMA tablets and medicinal tablets were used to check the performance of these criteria. Twenty-two of 27 MDMA tablets were correctly judged as MDMA-positive; the five false-negative results may have been caused by compounds not included in the library. No false-positive results were obtained for medicinal tablets. This system will be a useful tool for on-site screening of MDMA tablets. © 2016 American Academy of Forensic Sciences.
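
    The matching step, ranking library entries by the Pearson correlation of their spectra with the query over a restricted wavelength window, is simple to sketch. The five-point "spectra" below are invented numbers standing in for pretreated NIR absorbances; the real system's pretreatment and decision thresholds are not reproduced here.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / sqrt(sum((x - ma) ** 2 for x in a) *
                      sum((y - mb) ** 2 for y in b))

# Toy spectra restricted to a discriminating wavelength window, mirroring
# the paper's limited search region. All intensity values are invented.
library = {
    "MDMA-HCl + lactose": [0.10, 0.40, 0.80, 0.35, 0.15],
    "caffeine":           [0.60, 0.20, 0.10, 0.50, 0.70],
}
query = [0.12, 0.38, 0.78, 0.36, 0.14]

# Best library match = highest correlation with the query spectrum.
best = max(library, key=lambda name: pearson(query, library[name]))
print(best)
```

    A screening decision would then compare the best correlation against a validated cutoff; tablets whose constituents are absent from the library can fall below it, which is the false-negative mode the abstract reports.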

  6. Blade frequency program for nonuniform helicopter rotors, with automated frequency search

    NASA Technical Reports Server (NTRS)

    Sadler, S. G.

    1972-01-01

    A computer program was developed for determining the natural frequencies and normal modes of a lumped-parameter model of a rotating, twisted beam with nonuniform mass and elastic properties. The program is used to solve the conditions existing in a helicopter rotor, where the outboard end of the rotor carries zero forces and moments. Three frequency search methods were implemented, including an automatic search technique that allows the program to find up to the fifteen lowest natural frequencies without requiring input estimates of these frequencies.
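
    An automatic frequency search of this kind typically scans a characteristic function of the trial frequency for sign changes (each bracketing a natural frequency) and then refines each bracket by bisection. The sketch below uses a toy characteristic function with known roots at 1, 2, and 3 rad/s in place of a rotor model:

```python
# Toy stand-in for the boundary-condition determinant of a rotor model:
# its zeros play the role of the natural frequencies.
def det(omega):
    return (omega - 1.0) * (omega - 2.0) * (omega - 3.0)

def find_frequencies(f, lo, hi, step=0.1, tol=1e-9):
    """Scan [lo, hi) in increments of `step` for sign changes of f,
    then bisect each bracket down to `tol` to locate the root."""
    roots = []
    w = lo
    while w < hi:
        if f(w) * f(w + step) < 0:          # sign change brackets a root
            a, b = w, w + step
            while b - a > tol:              # bisection refinement
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
        w += step
    return roots

print([round(r, 6) for r in find_frequencies(det, 0.0, 4.0)])
```

    No initial frequency estimates are needed, which is the practical advantage the abstract highlights; the scan step must only be fine enough that no two roots fall inside one bracket.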

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmieman, E.; Johns, W.E.

    This document was compiled by a group of about 12 graduate students in the Department of Mechanical Engineering and Material Science at Washington State University and was funded by the U.S. Department of Energy. The literature search resulting in the compilation of this bibliography was designed to be an exhaustive search for research and development work involving the vitrification of mixed wastes, published by domestic and foreign researchers, primarily during 1989-1994. The search techniques were dominated by electronic methods, and this bibliography is also available in electronic format (Windows Reference Manager).

  8. Computer algorithms in the search for unrelated stem cell donors.

    PubMed

    Steiner, David

    2012-01-01

    Hematopoietic stem cell transplantation (HSCT) is a medical procedure in the field of hematology and oncology, most often performed for patients with certain cancers of the blood or bone marrow. Many patients have no suitable HLA-matched donor within their family, so physicians must activate a "donor search process" by interacting with national and international donor registries, which search their databases for adult unrelated donors or cord blood units (CBUs). Information and communication technologies play a key role in the donor search process in donor registries, both nationally and internationally. One of the major challenges for donor registry computer systems is the development of a reliable search algorithm. This work discusses the top-down design of such algorithms and current practice. Based on our experience with systems used by several stem cell donor registries, we highlight typical pitfalls in the implementation of an algorithm and its underlying data structure.
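
    At its simplest, the matching core of a donor search algorithm counts shared alleles across HLA loci and ranks registry donors by the resulting grade. The sketch below is a deliberately minimal toy (three loci, allele strings compared for exact equality); real registry algorithms handle allele- vs. antigen-level resolution, ambiguity codes, and haplotype frequencies.

```python
# Invented patient and donor typings over three HLA loci (two alleles each).
PATIENT = {"A": ("A*01", "A*02"), "B": ("B*07", "B*08"),
           "DRB1": ("DRB1*03", "DRB1*15")}

DONORS = {
    "donor-1": {"A": ("A*01", "A*02"), "B": ("B*07", "B*08"),
                "DRB1": ("DRB1*03", "DRB1*15")},
    "donor-2": {"A": ("A*01", "A*03"), "B": ("B*07", "B*44"),
                "DRB1": ("DRB1*03", "DRB1*15")},
}

def match_score(patient, donor):
    """Count matched alleles locus by locus (max 6 for three loci)."""
    score = 0
    for locus, alleles in patient.items():
        donor_alleles = list(donor[locus])
        for allele in alleles:
            if allele in donor_alleles:
                donor_alleles.remove(allele)  # respect allele multiplicity
                score += 1
    return score

ranked = sorted(DONORS, key=lambda d: match_score(PATIENT, DONORS[d]),
                reverse=True)
print(ranked, [match_score(PATIENT, DONORS[d]) for d in ranked])
```

    Ranking candidates this way lets a registry surface a shortlist (here a 6/6 match ahead of a 4/6 match) before the costly confirmatory typing stage.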

  9. Methods and means used in programming intelligent searches of technical documents

    NASA Technical Reports Server (NTRS)

    Gross, David L.

    1993-01-01

    In order to meet the data research requirements of the Safety, Reliability & Quality Assurance activities at Kennedy Space Center (KSC), a new computer search method for technical data documents was developed. By their very nature, technical documents are partially encrypted because of the author's use of acronyms, abbreviations, and shortcut notations. This problem of computerized searching is compounded at KSC by the volume of documentation produced during normal Space Shuttle operations. The Centralized Document Database (CDD) is designed to solve this problem. It provides a common interface to an unlimited number of files of various sizes, with the capability to perform diverse types and levels of data searches. The heart of the CDD is the nature and capability of its search algorithms. The most complex form of search the program performs uses a domain-specific database of acronyms, abbreviations, synonyms, and word-frequency tables. This database, along with basic sentence parsing, is used to convert a request for information into a relational network, which is used as a filter on the original document file to determine the most likely locations of the requested data. This type of search will locate information that traditional techniques (i.e., Boolean structured keyword searching) would not find.
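
    The key advantage over plain keyword matching, expanding query terms through a domain table of acronyms and synonyms before filtering documents, can be sketched directly. The table entries and documents below are invented placeholders, and the sketch omits the CDD's sentence parsing and relational-network scoring:

```python
# Invented domain table: each query word maps to its expansions.
EXPANSIONS = {
    "srb": ["solid rocket booster"],
    "lox": ["liquid oxygen"],
    "leak": ["leakage", "seepage"],
}

DOCUMENTS = [
    "Solid rocket booster joint inspection complete.",
    "Liquid oxygen seepage noted at umbilical interface.",
    "Crawler transporter maintenance scheduled.",
]

def expanded_search(query):
    """Expand each query word via the domain table, then keep any
    document containing at least one original or expanded term."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms.update(EXPANSIONS.get(word, []))
    return [d for d in DOCUMENTS if any(t in d.lower() for t in terms)]

# "lox leak" finds the seepage report even though neither literal
# query word occurs in it.
print(expanded_search("lox leak"))
```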

  10. Construction of web-based nutrition education contents and searching engine for usage of healthy menu of children

    PubMed Central

    Lee, Tae-Kyong; Chung, Hea-Jung; Park, Hye-Kyung; Lee, Eun-Ju; Nam, Hye-Seon; Jung, Soon-Im; Cho, Jee-Ye; Lee, Jin-Hee; Kim, Gon; Kim, Min-Chan

    2008-01-01

    Dietary habits developed in childhood last a lifetime; in this sense, nutrition education and early exposure to healthy menus in childhood are important. Children these days have easy access to the internet, so a web-based nutrition education program is an effective tool for the nutrition education of children. This site provides nutrition education material for children featuring characters that are personified nutrients. The site stores 151 menus together with video scripts of the cooking process; the menus are classified by criteria based on age, menu type, and the ethnic origin of the menu. The site provides a search function with three kinds of search conditions: keywords, menu type, and a "between" (range) expression over nutrients such as calories. The site was developed with the Windows 2003 Server operating system, the ZEUS 5 web server, the development language JSP, and the Oracle 10g database management system. PMID:20126375
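
    The three search conditions (keyword, menu type, and a "between" range on a nutrient) combine naturally as successive filters. The menu records below are invented stand-ins for the site's 151 menus; the site itself implements this in JSP against Oracle:

```python
# Invented menu records; the real site stores 151 of these in a database.
MENUS = [
    {"name": "vegetable bibimbap", "type": "rice", "kcal": 420},
    {"name": "fruit salad", "type": "dessert", "kcal": 150},
    {"name": "bulgogi rice bowl", "type": "rice", "kcal": 610},
]

def search_menus(keyword=None, menu_type=None, kcal_between=None):
    """Apply whichever of the three search conditions are given."""
    results = MENUS
    if keyword:
        results = [m for m in results if keyword in m["name"]]
    if menu_type:
        results = [m for m in results if m["type"] == menu_type]
    if kcal_between:
        lo, hi = kcal_between
        results = [m for m in results if lo <= m["kcal"] <= hi]
    return [m["name"] for m in results]

print(search_menus(menu_type="rice", kcal_between=(300, 500)))
```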

  11. The feasibility and appropriateness of introducing nursing curricula from developed countries into developing countries: a comprehensive systematic review.

    PubMed

    Jayasekara, Rasika; Schultz, Tim

    The objective of this review was to appraise and synthesise the best available evidence on the feasibility and appropriateness of introducing nursing curricula from developed countries into developing countries. This review considered quantitative and qualitative research papers that addressed the feasibility and appropriateness of introducing developed countries' nursing curricula into developing countries. Papers of the highest level of evidence rating were given priority. Participants of interest were all levels of nursing staff, nursing students, healthcare consumers and healthcare administrators. Outcomes of interest that are relevant to the evaluation of undergraduate nursing curricula were considered in the review including cost-effectiveness, cultural relevancy, adaptability, consumer satisfaction and student satisfaction. The search strategy sought to find both published and unpublished studies and papers, limited to the English language. An initial limited search of MEDLINE and CINAHL was undertaken followed by an analysis of the text words contained in the title and abstract, and of the index terms used to describe the article. A second extensive search was then undertaken using all identified key words and index terms. Finally, the reference list of all identified reports and articles was searched, the contents pages of a few relevant journals were hand searched and experts in the field were contacted to find any relevant studies missed from the first two searches. Each paper was assessed by two independent reviewers for methodological quality before inclusion in the review using an appropriate critical appraisal instrument from the System for the Unified Management, Assessment and Review of Information (SUMARI) package. A total of four papers, including one descriptive study and three textual papers, were included in the review. Because of the diverse nature of these papers, meta-synthesis of the results was not possible. 
    For this reason, this section of the review is presented in narrative form. In this review, a descriptive study and a textual opinion paper examined the cultural relevancy of borrowed curriculum models and the global influence of American nursing; another two opinion papers evaluated the adaptability of other countries' curriculum models in their own countries. The evidence regarding the feasibility and appropriateness of introducing developed countries' nursing curricula into developing countries is weak because of the paucity of high-quality studies. However, some lower-level evidence suggests, first, that direct transfer of a curriculum model from one country to another is not appropriate without first assessing the cultural context of both countries, and second, that an approach considering international, regional, and local experiences is more feasible and presumably a more effective strategy for adapting one country's curriculum to a culturally or economically different country.

  12. The Search for Subsurface Life on Mars: Results from the MARTE Analog Drill Experiment in Rio Tinto, Spain

    NASA Astrophysics Data System (ADS)

    Stoker, C. R.; Lemke, L. G.; Cannon, H.; Glass, B.; Dunagan, S.; Zavaleta, J.; Miller, D.; Gomez-Elvira, J.

    2006-03-01

    The Mars Analog Research and Technology (MARTE) experiment has developed an automated drilling system on a simulated Mars lander platform including drilling, sample handling, core analysis and down-hole instruments relevant to searching for life in the Martian subsurface.

  13. Talent Search: Purposes, Rationale, and Role in Gifted Education.

    ERIC Educational Resources Information Center

    Olszewski-Kubilius, Paula

    1998-01-01

    This paper describes the purpose and rationale of a "talent search" effort to identify gifted students through use of off-level testing. Three components are stressed: diagnosis and evaluation of domains and levels of talent; educational placement and guidance; and talent development opportunities. Research supporting the talent-search…

  14. The Search for Meaning in Factor Analytically Derived Dimensions.

    ERIC Educational Resources Information Center

    Hagekull, Berit

    This paper discusses the development of instruments to measure individual differences in behavior during infancy. The Infant Temperament Questionnaire (ITQ), which was designed to measure the temperament dimensions identified by the New York Longitudinal Study (NYLS), constituted the methodological starting point in the search for a dimensional…

  15. Teaching Geosciences in Mississippi

    ERIC Educational Resources Information Center

    Dewey, Christopher; Beasley, Rodney W.

    2007-01-01

    Historically, two paths have developed in an individual and communal search for understanding and meaning: The study of science and the search for a higher spirituality. Although they should not necessarily be mutually exclusive, the history of science is littered with the collision of these two pathways, for rarely have they met without…

  16. Searching for extra-terrestrial civilizations

    NASA Technical Reports Server (NTRS)

    Gindilis, L. M.

    1974-01-01

    The probability of radio interchange with extraterrestrial civilizations is discussed. Difficulties include absorption, scattering, and dispersion of signals by the rarefied interstellar medium, as well as the deciphering of received signals and convergence on shared semantic concepts. A cybernetic approach considers searching for signals arising from the astroengineering activities of extraterrestrial civilizations.

  17. Use of Information Technology in Optometric Education.

    ERIC Educational Resources Information Center

    Elam, Jimmy H.

    1999-01-01

    To enhance the information technology literacy of optometry students, the Southern College of Optometry (Tennessee) developed an academic assignment, the Electronic Media Paper, in which second-year students must search two different electronic media for information. Results suggest Internet use for searching may be a useful tool for specific…

  18. blastjs: a BLAST+ wrapper for Node.js.

    PubMed

    Page, Martin; MacLean, Dan; Schudoma, Christian

    2016-02-27

    To cope with the ever-increasing amount of sequence data generated in the field of genomics, the demand for efficient and fast database searches that drive functional and structural annotation in both large- and small-scale genome projects is on the rise. The tools of the BLAST+ suite are the most widely employed bioinformatic method for these database searches. Recent trends in bioinformatics application development show an increasing number of JavaScript apps based on modern frameworks such as Node.js. Until now, there has been no way of using database searches with the BLAST+ suite from a Node.js codebase. We developed blastjs, a Node.js library that wraps the search tools of the BLAST+ suite and thus makes it easy to add significant functionality to any Node.js-based application. The library was designed to be as user-friendly as possible and therefore requires only a minimal amount of code in the client application. It is freely available under the MIT license at https://github.com/teammaclean/blastjs.

  19. It takes longer than you think: librarian time spent on systematic review tasks*

    PubMed Central

    Bullers, Krystal; Howard, Allison M.; Hanson, Ardis; Kearns, William D.; Orriola, John J.; Polo, Randall L.; Sakmar, Kristen A.

    2018-01-01

    Introduction: The authors examined the time that medical librarians spent on specific tasks for systematic reviews (SRs): interview process, search strategy development, search strategy translation, documentation, deliverables, search methodology writing, and instruction. We also investigated relationships among the time spent on SR tasks, years of experience, and number of completed SRs to gain a better understanding of the time spent on SR tasks from time, staffing, and project management perspectives. Methods: A confidential survey and study description were sent to medical library directors who were members of the Association of Academic Health Sciences Libraries as well as librarians serving members of the Association of American Medical Colleges or American Osteopathic Association. Results: Of the 185 participants, 143 (77%) had worked on an SR within the last 5 years. The number of SRs conducted by participants during their careers ranged from 1 to 500, with a median of 5. The major component of time spent was on search strategy development and translation. Average aggregated time for standard tasks was 26.9 hours, with a median of 18.5 hours. Task time was unrelated to the number of SRs but was positively correlated with years of SR experience. Conclusion: The time required to conduct the librarian’s discrete tasks in an SR varies substantially, and there are no standard time frames. Librarians with more SR experience spent more time on instruction and interviews; time spent on all other tasks varied widely. Librarians also can expect to spend a significant amount of their time on search strategy development, translation, and writing. PMID:29632442

  20. Computational efficiency of parallel combinatorial OR-tree searches

    NASA Technical Reports Server (NTRS)

    Li, Guo-Jie; Wah, Benjamin W.

    1990-01-01

    The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.
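
As a toy illustration of the best-first strategy analyzed above (the tree, costs, and bounding rule below are invented for the sketch, not taken from the paper), a minimal branch-and-bound OR-tree search:

```python
# A toy best-first OR-tree search with lower-bound pruning, in the
# branch-and-bound spirit of the strategies the paper analyzes.
import heapq

def best_first_search(children, cost, root):
    """Return the minimum root-to-leaf cost.

    children[n] lists n's child nodes (empty for a leaf); cost[n] is the
    cost of entering n. Accumulated cost is a valid lower bound because
    all costs are non-negative.
    """
    incumbent = float("inf")
    heap = [(cost[root], root)]
    while heap:
        g, node = heapq.heappop(heap)
        if g >= incumbent:          # bound: cannot improve the incumbent
            continue
        if not children[node]:      # leaf: candidate solution
            incumbent = min(incumbent, g)
            continue
        for c in children[node]:
            heapq.heappush(heap, (g + cost[c], c))
    return incumbent
```

A parallel variant would pop several nodes per step, which is where the anomalies the authors study (speedups above or below the processor count) arise.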

  1. Predicting hospital visits from geo-tagged Internet search logs.

    PubMed

    Agarwal, Vibhu; Han, Lichy; Madan, Isaac; Saluja, Shaurya; Shidham, Aaditya; Shah, Nigam H

    2016-01-01

    The steady rise in healthcare costs has deprived over 45 million Americans of healthcare services (1, 2) and has encouraged healthcare providers to look for opportunities to improve their operational efficiency. Prior studies have shown that evidence of healthcare seeking intent in Internet searches correlates well with healthcare resource utilization. Given the ubiquitous nature of mobile Internet search, we hypothesized that analyzing geo-tagged mobile search logs could enable us to machine-learn predictors of future patient visits. Using a de-identified dataset of geo-tagged mobile Internet search logs, we mined text and location patterns that are predictors of healthcare resource utilization and built statistical models that predict the probability of a user's future visit to a medical facility. Our efforts will enable the development of innovative methods for modeling and optimizing the use of healthcare resources-a crucial prerequisite for securing healthcare access for everyone in the days to come.

  2. A user-friendly tool for medical-related patent retrieval.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different ways to name the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules such as chemical query normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search produced fairly mixed results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts, although this result should be weighed against the limited evaluation sample. We can also assume that the tool can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.

  3. Photometric Detection of Extra-Solar Planets

    NASA Technical Reports Server (NTRS)

    Hatzes, Artie P.; Cochran, William D.

    2004-01-01

    This NASA Origins Program grant supported the TEMPEST (Texas-McDonald Photometric Extrasolar Search for Transits) program at McDonald Observatory, which searches for transits of extrasolar planets across the disks of their parent stars. The basic approach is to use a wide-field ground-based telescope (in our case the McDonald Observatory 0.76m telescope and its Prime Focus Corrector) to search for transits of short-period (1-15 day orbits) close-in hot-Jupiter planets orbiting a large sample of field stars. The next task is to search these data streams for possible transit events. We collected our first set of test data for this program using the 0.76m PFC in the summer of 1998. From those data, we developed the optimal observing procedures, including tailoring the stellar density, exposure times, and filters to best suit the instrument and project. In the summer of 1999, we obtained the first partial season of data on a dedicated field in the constellation Cygnus. These data were used to develop and refine the reduction and analysis procedures to produce high-precision photometry and search for transits in the resulting light curves. The TeMPEST project subsequently obtained three full seasons of data on six different fields using the McDonald Observatory 0.76m PFC.

  4. Searching for Life with Rovers: Exploration Methods & Science Results from the 2004 Field Campaign of the "Life in the Atacama" Project and Applications to Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Cabrol, N. A.a; Wettergreen, D. S.; Whittaker, R.; Grin, E. A.; Moersch, J.; Diaz, G. Chong; Cockell, C.; Coppin, P.; Dohm, J. M.; Fisher, G.

    2005-01-01

    The Life In The Atacama (LITA) project develops and field-tests a long-range, solar-powered, automated rover platform (Zoë) and a science payload assembled to search for microbial life in the Atacama desert. Life is barely detectable over most of the driest desert on Earth. Its unique geological, climatic, and biological evolution has created a unique training site for designing and testing exploration strategies and life-detection methods for the robotic search for life on Mars.

  5. Evaluation of DNA mixtures from database search.

    PubMed

    Chung, Yuk-Ka; Hu, Yue-Qing; Fung, Wing K

    2010-03-01

    With the aim of bridging the gap between DNA mixture analysis and DNA database search, a novel approach is proposed to evaluate the forensic evidence of DNA mixtures when the suspect is identified by the search of a database of DNA profiles. General formulae are developed for the calculation of the likelihood ratio for a two-person mixture under general situations including multiple matches and imperfect evidence. The influence of the prior probabilities on the weight of evidence under the scenario of multiple matches is demonstrated by a numerical example based on Hong Kong data. Our approach is shown to be capable of presenting the forensic evidence of DNA mixtures in a comprehensive way when the suspect is identified through database search.
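
The influence of the prior probabilities on the weight of evidence can be sketched with Bayes' theorem in odds form; the numbers below are illustrative and do not reproduce the paper's two-person-mixture formulae:

```python
# A hedged sketch of combining a likelihood ratio (LR) with prior odds,
# the mechanism behind the paper's discussion of priors under multiple
# database matches. LR values here are illustrative only.

def posterior_probability(prior, likelihood_ratio):
    """Bayes in odds form: posterior odds = LR * prior odds."""
    prior_odds = prior / (1.0 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1.0 + post_odds)
```

With a small prior (as when a suspect is found only through a database trawl), even a large LR yields a noticeably tempered posterior, which is why the choice of prior matters under multiple matches.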

  6. SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines

    NASA Astrophysics Data System (ADS)

    Yumusak, Semih; Dogdu, Erdogan; Kodaz, Halife; Kamilaris, Andreas; Vandenbussche, Pierre-Yves

    In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this method, most of the currently listed SPARQL endpoints in existing endpoint repositories, as well as a significant number of new SPARQL endpoints, have been discovered. Finally, we have developed a new SPARQL endpoint crawler (SpEC) for crawling and link analysis.
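
A common way to verify a candidate SPARQL endpoint, sketched below in Python, is to send it a trivial ASK query over HTTP; the probe format is illustrative, and SpEnD's actual verification logic may differ:

```python
# Illustrative endpoint probe: a URL that answers a trivial SPARQL ASK
# query is treated as a live endpoint. Not SpEnD's actual implementation.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

PROBE_QUERY = "ASK { ?s ?p ?o }"

def probe_url(endpoint):
    """Build the GET URL that sends the endpoint a trivial ASK query."""
    params = urlencode({
        "query": PROBE_QUERY,
        "format": "application/sparql-results+json",
    })
    return f"{endpoint}?{params}"

def is_sparql_endpoint(endpoint, timeout=5):
    """Return True if the endpoint answers the probe (network required)."""
    try:
        req = Request(probe_url(endpoint),
                      headers={"Accept": "application/sparql-results+json"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A crawler like SpEC would apply such a check to every candidate URL returned by the search engines before admitting it to the endpoint list.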

  7. LIVIVO - the Vertical Search Engine for Life Sciences.

    PubMed

    Müller, Bernd; Poley, Christoph; Pössel, Jana; Hagelstein, Alexandra; Gübitz, Thomas

    2017-01-01

    The explosive growth of literature and data in the life sciences challenges researchers to keep track of current advancements in their disciplines. Novel approaches in the life sciences, such as the One Health paradigm, require integrated methodologies in order to link and connect heterogeneous information from databases and literature resources. Current publications in the life sciences are increasingly characterized by the employment of trans-disciplinary methodologies comprising molecular and cell biology, genetics, and genomic, epigenomic, transcriptional, and proteomic high-throughput technologies with data from humans, plants, and animals. The literature search engine LIVIVO empowers retrieval functionality by incorporating various literature resources from medicine, health, environment, agriculture, and nutrition. LIVIVO is developed in-house by ZB MED - Information Centre for Life Sciences. It provides a user-friendly and usability-tested search interface with a corpus of 55 million citations derived from 50 databases. Standardized application programming interfaces are available for data export and high-throughput retrieval. The search functions allow for semantic retrieval with filtering options based on life science entities. The service-oriented architecture of LIVIVO uses four different implementation layers to deliver search services. A Knowledge Environment is being developed by ZB MED to deal with the heterogeneity of data, as an integrative approach to model, store, and link semantic concepts within literature resources and databases. Future work will focus on the exploitation of life science ontologies and on the employment of NLP technologies in order to improve query expansion, filters in faceted search, and concept-based relevancy rankings in LIVIVO.

  8. Content based information retrieval in forensic image databases.

    PubMed

    Geradts, Zeno; Bijhold, Jurrien

    2002-03-01

    This paper gives an overview of the various available image databases and of ways of searching these databases by image content. Developments in image-database searching by research groups are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, which standardizes searching in image databases. In the future, combining these databases (including DNA databases) may result in stronger forensic evidence.

  9. ‘Sciencenet’—towards a global search and share engine for all scientific knowledge

    PubMed Central

    Lütjohann, Dominic S.; Shah, Asmi H.; Christen, Michael P.; Richter, Florian; Knese, Karsten; Liebel, Urban

    2011-01-01

    Summary: Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, ‘Sciencenet’, which facilitates rapid searching over this large data space. By ‘bringing the search engine to the data’, we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. Availability and Implementation: The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the ‘AskMe’ experiment publisher is written in Python 2.7, and the backend ‘YaCy’ search engine is based on Java 1.6. Contact: urban.liebel@kit.edu Supplementary Material: Detailed instructions and descriptions can be found on the project homepage: http://sciencenet.kit.edu. PMID:21493657

  10. Curious Consequences of a Miscopied Quadratic

    ERIC Educational Resources Information Center

    Poet, Jeffrey L.; Vestal, Donald L., Jr.

    2005-01-01

    The starting point of this article is a search for pairs of quadratic polynomials x^2 + bx ± c with the property that they both factor over the integers. The search leads quickly to some number theory in the form of primitive Pythagorean triples, and this paper develops the connection between these two topics.
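
The search is easy to reproduce: x^2 + bx + c and x^2 + bx - c both factor over the integers exactly when b^2 - 4c and b^2 + 4c are both perfect squares. A short sketch:

```python
# Find (b, c) such that x^2 + bx + c and x^2 + bx - c both factor over
# the integers, i.e. both discriminants b^2 - 4c and b^2 + 4c are
# perfect squares.
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

def both_factor(limit):
    pairs = []
    for b in range(1, limit + 1):
        for c in range(1, b * b // 4 + 1):   # keep b^2 - 4c >= 0
            if is_square(b * b - 4 * c) and is_square(b * b + 4 * c):
                pairs.append((b, c))
    return pairs
```

The first pair found is (b, c) = (5, 6): x^2 + 5x + 6 = (x + 2)(x + 3) and x^2 + 5x - 6 = (x + 6)(x - 1). The square roots 1 and 7 of the two discriminants satisfy 1^2 + 7^2 = 2·5^2, which maps to the primitive Pythagorean triple (3, 4, 5) via ((7 - 1)/2, (7 + 1)/2, 5), the connection the article develops.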

  11. The Development of Comprehensive Search Skills.

    ERIC Educational Resources Information Center

    Wellman, Henry M.; And Others

    1984-01-01

    One experiment examined two-and-one-half-, three-and-one-half-, and four-and-one-half-year-olds' ability to conduct nonredundant comprehensive searches of small lidded trash cans. A second experiment examined three-, four-, and five-year-olds' ability to find Easter eggs hidden on a large playground. Results were discussed in relation to developmental…

  12. Systematically Identifying Relevant Research: Case Study on Child Protection Social Workers' Resilience

    ERIC Educational Resources Information Center

    McFadden, Paula; Taylor, Brian J.; Campbell, Anne; McQuilkin, Janice

    2012-01-01

    Context: The development of a consolidated knowledge base for social work requires rigorous approaches to identifying relevant research. Method: The quality of 10 databases and a web search engine were appraised by systematically searching for research articles on resilience and burnout in child protection social workers. Results: Applied Social…

  13. Locomotor Status and the Development of Spatial Search Skills.

    ERIC Educational Resources Information Center

    Bai, Dina L.; Bertenthal, Bennett I.

    1992-01-01

    Investigated the possibility that previous reports of a relation between locomotor status and stage-4 object permanence performance could be generalized to performance on an object localization task. Findings suggest that the effects of locomotor experience on infants' search performance are quite specific and mediated by a variety of factors that…

  14. Program Design for Retrospective Searches on Large Data Bases

    ERIC Educational Resources Information Center

    Thiel, L. H.; Heaps, H. S.

    1972-01-01

    Retrospective search of large data bases requires development of special techniques for automatic compression of data and minimization of the number of input-output operations to the computer files. The computer program should require a relatively small amount of internal memory. This paper describes the structure of such a program. (9 references)…

  15. The Search for Extraterrestrial Intelligence

    NASA Technical Reports Server (NTRS)

    Tucher, A.

    1985-01-01

    The development of NASA's SETI project and strategies for searching radio signals are reviewed. A computer program was written in FORTRAN to set up data from observations taken at Jodrell Bank. These data are to be used with a larger program to find the average radio signal strength at each of the approximately 63,000 channels.

  16. Spatial memory in foraging games.

    PubMed

    Kerster, Bryan E; Rhodes, Theo; Kello, Christopher T

    2016-03-01

    Foraging and foraging-like processes are found in spatial navigation, memory, visual search, and many other search functions in human cognition and behavior. Foraging is commonly theorized using either random or correlated movements based on Lévy walks, or a series of decisions to remain or leave proximal areas known as "patches". Neither class of model makes use of spatial memory, but search performance may be enhanced when information about searched and unsearched locations is encoded. A video game was developed to test the role of human spatial memory in a canonical foraging task. Analyses of search trajectories from over 2000 human players yielded evidence that foraging movements were inherently clustered, and that clustering was facilitated by spatial memory cues and influenced by memory for spatial locations of targets found. A simple foraging model is presented in which spatial memory is used to integrate aspects of Lévy-based and patch-based foraging theories to perform a kind of area-restricted search, and thereby enhance performance as search unfolds. Using only two free parameters, the model accounts for a variety of findings that individually support competing theories, but together they argue for the integration of spatial memory into theories of foraging. Copyright © 2015 Elsevier B.V. All rights reserved.
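
The Lévy-walk ingredient of such models is simple to sketch: step lengths follow a power-law tail P(l) ∝ l^(-μ), sampled by inverting the CDF. The parameter values below are illustrative, not fitted to the players' data:

```python
# A sketch of the Levy-walk component of foraging models: step lengths
# drawn from a power-law density ~ l^(-mu) for l >= l_min via inverse-CDF
# (Pareto) sampling. mu and l_min here are illustrative choices.
import random

def levy_steps(n, mu=2.0, l_min=1.0, rng=None):
    """Draw n step lengths with P(l) proportional to l^(-mu), l >= l_min (mu > 1)."""
    rng = rng or random.Random(0)
    # Inverse CDF of the Pareto distribution: l = l_min * u^(-1/(mu - 1)),
    # with u drawn uniformly from (0, 1].
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]
```

A spatial-memory extension, as the paper proposes, would then bias the direction of each step away from already-searched locations rather than drawing directions uniformly.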

  17. Frontal-parietal synchrony in elderly EEG for visual search.

    PubMed

    Phillips, Steven; Takeda, Yuji

    2010-01-01

    Aging involves selective changes in attentional control. However, its precise effect on visual attention is difficult to discern from behavioural studies alone. In this paper, we employ a recently developed phase-locking measure of synchrony as an indicator of top-down/bottom-up control of attention to assess attentional control in the elderly. Fourteen participants (63-74 years) searched for a target item (coloured, oriented rectangular bar) among a display set of distractors. For the feature search condition, where none of the distractors shared a feature with the target, search time did not increase with display set size (two, or four items). For the conjunctive search condition, where each distractor shared either a colour or orientation feature with the target, search time increased with display size. Phase-locking analysis revealed a significant increase in high gamma-band (36-56 Hz) synchrony indicating greater bottom-up control for feature than conjunctive search. In view of our earlier study on younger (21-32 years) adults (Phillips and Takeda, 2009), these results suggest that older participants are more likely to use bottom-up control of attention, possibly triggered by their greater susceptibility to attentional capture, than younger participants. Copyright (c) 2009 Elsevier B.V. All rights reserved.
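
The phase-locking measure referred to above is commonly computed as the phase-locking value, PLV = |mean over trials of exp(i·Δφ)|; a minimal sketch (illustrative, not the authors' exact pipeline):

```python
# Phase-locking value (PLV) between two phase series: the magnitude of
# the mean unit phasor of the phase differences, ranging from 0 (no
# locking) to 1 (perfect locking).
import cmath

def phase_locking_value(phases_a, phases_b):
    """PLV over paired phase samples (radians)."""
    assert len(phases_a) == len(phases_b) and phases_a
    total = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(total) / len(phases_a)
```

In an EEG setting the phases would first be extracted per frequency band (e.g. via a wavelet or Hilbert transform) before PLV is computed between frontal and parietal channels.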

  18. FindZebra: a search engine for rare diseases.

    PubMed

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

    The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. PlantGI: a database for searching gene indices in agricultural plants developed at NIAB, Korea

    PubMed Central

    Kim, Chang Kug; Choi, Ji Weon; Park, DongSuk; Kang, Man Jung; Seol, Young-Joo; Hyun, Do Yoon; Hahn, Jang Ho

    2008-01-01

    The Plant Gene Index (PlantGI) database was developed as a web-based search system with keyword search capabilities to provide information on gene indices specifically for agricultural plants. The database contains specific Gene Index information for ten agricultural species, namely, rice, Chinese cabbage, wheat, maize, soybean, barley, mushroom, Arabidopsis, hot pepper and tomato. PlantGI differs from other Gene Index databases in being specific to agricultural plant species and thus complements similar services. The database includes options for interactive mining of EST contigs and assembled EST data for user-specific keyword queries. The current version of PlantGI contains a total of 34,000 EST contig records for rice (8488 records), wheat (8560 records), maize (4570 records), soybean (3726 records), barley (3417 records), Chinese cabbage (3602 records), tomato (1236 records), hot pepper (998 records), mushroom (130 records) and Arabidopsis (8 records). Availability: The database is available for free at http://www.niab.go.kr/nabic/. PMID:18685722

  20. In Silico PCR Tools for a Fast Primer, Probe, and Advanced Searching.

    PubMed

    Kalendar, Ruslan; Muterko, Alexandr; Shamekova, Malika; Zhambakin, Kabyl

    2017-01-01

    The polymerase chain reaction (PCR) is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. The principle of this technique has been further applied in many other simple or complex nucleic acid amplification technologies (NAAT). In parallel to laboratory "wet bench" experiments for nucleic acid amplification technologies, in silico or virtual (bioinformatics) approaches have been developed, among which is in silico PCR analysis. In silico NAAT analysis is a useful and efficient complementary method for ensuring the specificity of primers or probes across an extensive range of PCR applications, from homology gene discovery and molecular diagnosis to DNA fingerprinting and repeat searching. Predicting the sensitivity and specificity of primers and probes requires a search to determine whether they match a database with an optimal number of mismatches, similarity, and stability. In developing in silico bioinformatics tools for nucleic acid amplification technologies, the prospects for the development of new NAAT or similar approaches should be taken into account, including forward-looking and comprehensive analysis that is not limited to only one PCR technique variant. The software FastPCR and the online Java web tool are integrated tools for in silico PCR of linear and circular DNA, multiple primer or probe searches in large or small databases, and advanced searches. These tools are suitable for processing batch files, which is essential for automation when working with large amounts of data. The FastPCR software is available for download at http://primerdigital.com/fastpcr.html and the online Java version at http://primerdigital.com/tools/pcr.html.
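
The core primer-matching step of in silico PCR can be sketched as a sliding-window mismatch count; real tools such as FastPCR additionally model 3'-end stability, melting temperature, and secondary structure. The function below is a hypothetical toy, not FastPCR's algorithm:

```python
# Toy primer-matching step of in silico PCR: slide the primer along the
# template and report the binding site with the fewest mismatches.
# Real tools also weight 3'-end mismatches, Tm, and stability.

def best_binding_site(primer, template):
    """Return (position, mismatch_count) of the best primer match."""
    best = (None, len(primer) + 1)
    for i in range(len(template) - len(primer) + 1):
        window = template[i:i + len(primer)]
        mismatches = sum(p != t for p, t in zip(primer, window))
        if mismatches < best[1]:
            best = (i, mismatches)
    return best
```

A full in silico PCR run would apply this to both a forward and a reverse-complemented primer and then check that the two best sites face each other within an amplifiable distance.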
