An Analysis of Web Image Queries for Search.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2003-01-01
Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)
Personalization of Rule-based Web Services.
Choi, Okkyung; Han, Sang Yong
2008-04-04
Nowadays Web users have clearly expressed their wish to receive personalized services. Personalization is the tailoring of services directly to the immediate requirements of the user. However, the current Web Services system does not support this: it offers neither personalization of services nor intelligent matchmaking. This research proposes a flexible, personalized Rule-based Web Services System that addresses these problems and enables efficient search, discovery, and construction across both general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers', and users' preferences using a Rule-based Search Method, and then ranks the search results. A prototype of efficient Web Services search and construction for the suggested system has been developed.
World Wide Web Metaphors for Search Mission Data
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Wallick, Michael N.; Joswig, Joseph C.; Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Abramyan, Lucy; Crockett, Thomas M.; Shams, Khawaja S.; Fox, Jason M.;
2010-01-01
A software program that searches and browses mission data emulates a Web browser, containing standard metaphors for Web browsing. By taking advantage of back-end URLs, users may save and share search states. Also, since a Web interface is familiar to users, training time is reduced. Familiar back and forward buttons move through a local search history. A refresh/reload button regenerates a query and loads in any new data. URLs can be constructed to save search results. Adding context to the current search is also handled through a familiar Web metaphor: the query is constructed by clicking on hyperlinks that represent new components of the search query. The selection of a link appears to the user as a page change; the choice of links changes to represent the updated search, and the results are filtered by the new criteria. Selecting a navigation link changes the current query and also the URL that is associated with it. The back button can be used to return to the previous search state. This software is part of the MSLICE release, which was written in Java. It will run on any current Windows, Macintosh, or Linux system.
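The URL-as-search-state pattern described above can be sketched in a few lines of Python. This is only an illustration of the idea, not MSLICE code: the base URL and parameter names are hypothetical, and the point is that a query serialized into a URL can be bookmarked, shared, and replayed, so "back" is just a step through a list of URLs.

```python
from urllib.parse import urlencode, urlparse, parse_qsl

# Hypothetical base URL; the actual MSLICE URL scheme is not shown in the text.
BASE = "https://mslice.example/search"

def query_to_url(criteria):
    """Serialize the current search state into a shareable URL."""
    return BASE + "?" + urlencode(sorted(criteria.items()))

def url_to_query(url):
    """Restore a search state from a saved URL."""
    return dict(parse_qsl(urlparse(url).query))

# A browsing session keeps a history of URLs, so "back" is just a step back.
history = []
state = {"target": "rover", "sol": "120"}
history.append(query_to_url(state))
state = dict(state, instrument="cam")   # user clicks a link, refining the query
history.append(query_to_url(state))

previous = url_to_query(history[-2])    # "back" button restores the prior state
```

Because the URL fully encodes the query, sharing a search with a colleague is just sharing a string.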
NASA Astrophysics Data System (ADS)
Hepp, Martin
E-Commerce on the basis of current Web technology has created fierce competition with a strong focus on price. Despite a huge variety of offerings and diversity in the individual preferences of consumers, current Web search fosters a very early reduction of the search space to just a few commodity makes and models. As soon as this reduction has taken place, search is reduced to flat price comparison. This is unfortunate for manufacturers and vendors, because their individual value proposition for a particular customer may get lost in the course of communication over the Web, and it is unfortunate for customers, who may not get the most utility for their money given their preference functions. A key limitation is that consumers cannot search using a consolidated view of all alternative offers across the Web. In this talk, I will (1) analyze the technical effects of products and services search on the Web that cause this mismatch between supply and demand, (2) evaluate how the GoodRelations vocabulary and the current Web of Data movement can improve the situation, (3) give a brief hands-on demonstration, and (4) sketch business models for the various market participants.
Information Retrieval for Education: Making Search Engines Language Aware
ERIC Educational Resources Information Center
Ott, Niels; Meurers, Detmar
2010-01-01
Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…
Indexing and Retrieval for the Web.
ERIC Educational Resources Information Center
Rasmussen, Edie M.
2003-01-01
Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…
GoWeb: a semantic search engine for the life science web.
Dietze, Heiko; Schroeder, Michael
2009-10-01
Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large results sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves 77% success rate improving an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.
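The combination GoWeb describes — keyword hits navigated via ontology annotations — can be sketched roughly as faceted filtering. The data below is a toy stand-in; GoWeb's real pipeline applies text mining to live web results and full ontologies such as GO:

```python
# Toy search results, each annotated with ontology terms found by text mining.
results = [
    {"title": "BRCA1 and DNA repair",   "terms": {"GO:DNA repair"}},
    {"title": "BRCA1 in breast cancer", "terms": {"Disease:breast cancer"}},
    {"title": "Cell cycle control",     "terms": {"GO:cell cycle"}},
]

def facet_counts(results):
    """Group a result set by ontology term so users can navigate it."""
    counts = {}
    for r in results:
        for t in r["terms"]:
            counts[t] = counts.get(t, 0) + 1
    return counts

def filter_by_term(results, term):
    """Narrow the result set to hits annotated with a chosen term."""
    return [r for r in results if term in r["terms"]]
```

Counting terms gives the navigation facets; filtering by a term narrows a large result set toward an answer, which is the question-answering step the abstract evaluates.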
ERIC Educational Resources Information Center
Fast, Karl V.; Campbell, D. Grant
2001-01-01
Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…
What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?
ERIC Educational Resources Information Center
Seyedarabi, Faezeh
2014-01-01
This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…
Semantic Search of Web Services
ERIC Educational Resources Information Center
Hao, Ke
2013-01-01
This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…
"Just the Answers, Please": Choosing a Web Search Service.
ERIC Educational Resources Information Center
Feldman, Susan
1997-01-01
Presents guidelines for selecting World Wide Web search engines. Real-life questions were used to test six search engines. Queries sought company information, product reviews, medical information, foreign information, technical reports, and current events. Compares performance and features of AltaVista, Excite, HotBot, Infoseek, Lycos, and Open…
ERIC Educational Resources Information Center
Gupta, Amardeep
2005-01-01
Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…
Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.
ERIC Educational Resources Information Center
Mardis, Marcia
Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden,""invisible," or…
Web sites for postpartum depression: convenient, frustrating, incomplete, and misleading.
Summers, Audra L; Logsdon, M Cynthia
2005-01-01
To evaluate the content and the technology of Web sites providing information on postpartum depression. Eleven search engines were queried using the words "Postpartum Depression." The top 10 sites in each search engine were evaluated for correct content and technology using the Web Depression Tool, based on the Technology Assessment Model. Of the 36 unique Web sites located, 34 were available to review. Only five Web sites provided >75% correct responses to questions that summarized the current state of the science for postpartum depression. Eleven of the Web sites contained little or no useful information about postpartum depression, despite being among the first 10 Web sites listed by the search engine. Some Web sites contained possibly harmful suggestions for treatment of postpartum depression. In addition, there are many problems with the technology of Web sites providing information on postpartum depression. A better Web site for postpartum depression is necessary if we are to meet the needs of consumers for accurate and current information using technology that enhances learning. Since patient education is a core competency for nurses, it is essential that nurses understand how their patients are using the World Wide Web for learning and how we can assist our patients to find appropriate sites containing correct information.
An open-source, mobile-friendly search engine for public medical knowledge.
Samwald, Matthias; Hanbury, Allan
2014-01-01
The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML pages is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
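The crawl-index-search loop described at the start of that abstract can be sketched with an in-memory inverted index. Toy pages stand in for live HTTP fetches; a real crawler would download each URL and parse its HTML:

```python
# Toy "web": page URL -> (text, outgoing links). A real crawler fetches over HTTP.
PAGES = {
    "a.html": ("deep web search engines", ["b.html"]),
    "b.html": ("invisible web databases", []),
}

def crawl_and_index(seed):
    """Follow links from a seed page, adding each word to an inverted index."""
    index, frontier, seen = {}, [seed], set()
    while frontier:
        url = frontier.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        text, links = PAGES[url]
        for word in text.split():
            index.setdefault(word, set()).add(url)   # word -> pages containing it
        frontier.extend(links)                        # newly discovered links
    return index

index = crawl_and_index("a.html")
hits = index.get("web", set())   # keyword lookup returns the matching pages
```

Pages reachable only through forms or dynamic queries never enter `PAGES` here, which is exactly how Deep Web content escapes a conventional crawler.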
Adding a Visualization Feature to Web Search Engines: It’s Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.
Since the first World Wide Web (WWW) search engine quietly entered our lives in 1994, the "information need" behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today's search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as "win the web" or "take back the web," I can't help but ask why we are still using the very same "text-only" interface that was used 13 years ago to browse our search engine results pages (SERPs). Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow's SERP interface.
Choi, Okkyung; Han, SangYong
2007-01-01
Ubiquitous computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of customers' personal preferences based on the semantic web and on rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based search as well as semantics-based search, supporting the merging of the electronic space and the physical space into one and thereby making possible real-time search for web services and the construction of efficient web services.
ERIC Educational Resources Information Center
McDermott, Irene E.
1999-01-01
Describes the development and current status of WebRing, a service that links related Web sites into a central hub. Discusses it as a viable alternative to other search engines and examines issues of free speech, use by the business sector, and implications for WebRing after its purchase by Yahoo! (LRW)
An introduction to web scale discovery systems.
Hoy, Matthew B
2012-01-01
This article explores the basic principles of web-scale discovery systems and how they are being implemented in libraries. "Web scale discovery" refers to a class of products that index a vast number of resources in a wide variety of formats and allow users to search for content in the physical collection, print and electronic journal collections, and other resources from a single search box. Search results are displayed in a manner similar to Internet searches, in a relevance-ranked list with links to online content. The advantages and disadvantages of these systems are discussed, and a list of popular discovery products is provided. A list of library websites with discovery systems currently implemented is also provided.
UnCover on the Web: search hints and applications in library environments.
Galpern, N F; Albert, K M
1997-01-01
Among the huge maze of resources available on the Internet, UnCoverWeb stands out as a valuable tool for medical libraries. This up-to-date, free-access, multidisciplinary database of periodical references is searched through an easy-to-learn graphical user interface that is a welcome improvement over the telnet version. This article reviews the basic and advanced search techniques for UnCoverWeb, as well as providing information on the document delivery functions and table of contents alerting service called Reveal. UnCover's currency is evaluated and compared with other current awareness resources. System deficiencies are discussed, with the conclusion that although UnCoverWeb lacks the sophisticated features of many commercial database search services, it is nonetheless a useful addition to the repertoire of information sources available in a library.
Result Merging Strategies for a Current News Metasearcher.
ERIC Educational Resources Information Center
Rasolofo, Yves; Hawking, David; Savoy, Jacques
2003-01-01
Metasearching of online current news services is a potentially useful Web application of distributed information retrieval techniques. Reports experiences in building a metasearcher designed to provide up-to-date searching over a significant number of rapidly changing current news sites, focusing on how to merge results from the search engines at…
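Result merging of the kind this metasearcher needs can be sketched as score normalization followed by a combined ranking. Min-max normalization is one standard strategy from the distributed-IR literature, not necessarily the authors' exact scheme, and the engine names and scores below are invented:

```python
def merge_results(result_lists):
    """Merge ranked lists from several engines into one ranking.

    Each list is [(doc_id, raw_score), ...]. Engines report scores on
    incompatible scales, so each list is min-max normalized to 0..1
    before merging; a document found by several engines keeps its best
    normalized score.
    """
    merged = {}
    for results in result_lists:
        scores = [s for _, s in results]
        lo, hi = min(scores), max(scores)
        for doc, s in results:
            norm = (s - lo) / (hi - lo) if hi > lo else 1.0
            merged[doc] = max(merged.get(doc, 0.0), norm)
    return sorted(merged, key=merged.get, reverse=True)

ranking = merge_results([
    [("news/a", 120.0), ("news/b", 80.0)],   # engine 1: arbitrary score scale
    [("news/b", 0.9), ("news/c", 0.3)],      # engine 2: 0..1 scale
])
```

For rapidly changing news sites, merging on normalized scores rather than raw ones keeps one engine's inflated scale from dominating the merged list.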
Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.
Brandt, C; Nadkarni, P
2001-01-01
The Web is increasingly the medium of choice for multi-user application program delivery. Yet selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience with the conversion of a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that provision of easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages.
Progress developing the JAXA next generation satellite data repository (G-Portal).
NASA Astrophysics Data System (ADS)
Ikehata, Y.
2016-12-01
JAXA has been operating the "G-Portal" since February 2013 as a repository for searching and accessing data from JAXA-related Earth observation satellites. The G-Portal handles data from ten satellites: GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1, and JERS-1, and plans to add data from the future satellites GCOM-C and EarthCARE. Except for ALOS and ALOS-2, all of these data are open and free. For data access, the G-Portal supports web search, catalogue search (CSW and OpenSearch), and direct download by SFTP. However, the G-Portal has problems with performance and usability. Regarding performance, for example, the G-Portal is based on a 10 Gbps network and uses a scale-out architecture. (The conceptual design was reported at the AGU Fall Meeting 2015, IN23D-1748.) To address these problems, JAXA has been developing the next generation repository since February 2016. This paper describes the usability improvements and the challenges for the next generation system, which include the following points. The current web interface uses a "step by step" design and URLs are generated randomly, so users must view many web pages and click many times to reach the desired satellite data. The web design will therefore be changed completely from "step by step" to "one page", and URLs will be based on REST (REpresentational State Transfer). Regarding direct download, the current method (SFTP) is hard to use because of its non-standard port assignment and key-based authentication, so FTP will also be supported. Additionally, the next G-Portal improves the catalogue service. Catalogue search is currently available only to limited users, including NASA, ESA, and CEOS, due to performance and reliability issues, but this limitation will be removed. Furthermore, a catalogue search client function will be implemented to take in other agencies' satellite catalogues, so users will be able to search satellite data across agencies.
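The payoff of the REST-style redesign is that a dataset's address can be constructed deterministically instead of being handed out as a random session URL. A sketch with a hypothetical path layout (the real G-Portal URL scheme is not given in the abstract and may differ):

```python
from urllib.parse import urlencode

# Hypothetical REST layout: /<satellite>/<product>?<search params>.
# Placeholder host; this only illustrates deterministic, bookmarkable URLs.
BASE = "https://gportal.example.jaxa.jp"

def granule_search_url(satellite, product, **params):
    """Build the same URL from the same parameters, every time."""
    path = "/{}/{}".format(satellite, product)
    qs = urlencode(sorted(params.items()))  # sort keys so the URL is stable
    return BASE + path + ("?" + qs if qs else "")

url = granule_search_url("GPM", "L2", start="2016-01-01", end="2016-01-31")
```

Because the URL is a pure function of the query, users can bookmark, script, and share searches, which the random "step by step" URLs of the current interface prevent.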
Hot Topics on the Web: Strategies for Research.
ERIC Educational Resources Information Center
Diaz, Karen R.; O'Hanlon, Nancy
2001-01-01
Presents strategies for researching topics on the Web that are controversial or current in nature. Discusses topic selection and overviews, including the use of online encyclopedias; search engines; finding laws and pending legislation; advocacy groups; proprietary databases; Web site evaluation; and the continuing usefulness of print materials.…
A Web-Based Learning System for Software Test Professionals
ERIC Educational Resources Information Center
Wang, Minhong; Jia, Haiyang; Sugumaran, V.; Ran, Weijia; Liao, Jian
2011-01-01
Fierce competition, globalization, and technology innovation have forced software companies to search for new ways to improve competitive advantage. Web-based learning is increasingly being used by software companies as an emergent approach for enhancing the skills of knowledge workers. However, the current practice of Web-based learning is…
Web OPAC Interfaces: An Overview.
ERIC Educational Resources Information Center
Babu, B. Ramesh; O'Brien, Ann
2000-01-01
Discussion of Web-based online public access catalogs (OPACs) focuses on a review of six Web OPAC interfaces in use in academic libraries in the United Kingdom. Presents a checklist and guidelines of important features and functions that are currently available, including search strategies, access points, display, links, and layout. (Author/LRW)
US Geoscience Information Network, Web Services for Geoscience Information Discovery and Access
NASA Astrophysics Data System (ADS)
Richard, S.; Allison, L.; Clark, R.; Coleman, C.; Chen, G.
2012-04-01
The US Geoscience Information Network has developed metadata profiles for interoperable catalog services based on ISO 19139 and OGC CSW 2.0.2. Currently, data services are being deployed for the US Department of Energy-funded National Geothermal Data System. These services use OGC Web Map Services, Web Feature Services, and THREDDS-served NetCDF for gridded datasets. Services and underlying datasets (along with a wide variety of other information and non-information resources) are registered in the catalog system. Metadata for registration is produced by various workflows, including harvest from OGC capabilities documents, Drupal-based web applications, and transformation from tabular compilations. Catalog search is implemented using the ESRI Geoportal open-source server. We are pursuing various client applications to demonstrate discovery and utilization of the data services. Currently operational applications include an ESRI ArcMap extension for catalog search and data acquisition from map services, and a catalog browse and search application built on OpenLayers and Django. We are developing use cases and requirements for other applications to utilize geothermal data services for resource exploration and evaluation.
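A client searching such a catalog issues a CSW GetRecords request. The sketch below builds one as a KVP GET URL; the parameter names follow the OGC CSW 2.0.2 KVP encoding, but the endpoint is a placeholder, not the actual USGIN catalog address:

```python
from urllib.parse import urlencode

def csw_getrecords_url(endpoint, cql_filter, max_records=10):
    """Build a CSW 2.0.2 GetRecords request as a KVP GET URL.

    The query constraint is expressed in CQL text; the server returns
    matching ISO metadata records up to maxRecords.
    """
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecords",
        "typeNames": "csw:Record",
        "resultType": "results",
        "elementSetName": "summary",
        "constraintLanguage": "CQL_TEXT",
        "constraint": cql_filter,
        "maxRecords": str(max_records),
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint; a real client would point at the USGIN CSW service.
url = csw_getrecords_url("https://catalog.example.org/csw",
                         "AnyText like '%geothermal%'")
```

The returned records carry the service URLs (WMS, WFS, THREDDS), so discovery and data access stay decoupled: the catalog answers "what exists", the services deliver the data.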
Web Use for Symptom Appraisal of Physical Health Conditions: A Systematic Review
Mueller, Julia; Jay, Caroline; Harper, Simon; Davies, Alan; Vega, Julio; Todd, Chris
2017-01-01
Background: The Web has become an important information source for appraising symptoms. We need to understand the role it currently plays in help seeking and symptom evaluation to leverage its potential to support health care delivery. Objective: The aim was to systematically review the literature currently available on Web use for symptom appraisal. Methods: We searched PubMed, EMBASE, PsycINFO, ACM Digital Library, SCOPUS, and Web of Science for any empirical studies that addressed the use of the Web by lay people to evaluate symptoms for physical conditions. Articles were excluded if they did not meet minimum quality criteria. Study findings were synthesized using a thematic approach. Results: A total of 32 studies were included. Study designs included cross-sectional surveys, qualitative studies, experimental studies, and studies involving website/search engine usage data. Approximately 35% of adults engage in Web use for symptom appraisal, but this proportion varies between 23% and 75% depending on sociodemographic and disease-related factors. Most searches were symptom-based rather than condition-based. Users viewed only the top search results and interacted more with results that mentioned serious conditions. Web use for symptom appraisal appears to impact on the decision to present to health services, communication with health professionals, and anxiety. Conclusions: Web use for symptom appraisal has the potential to influence the timing of help seeking for symptoms and the communication between patients and health care professionals during consultations. However, studies lack suitable comparison groups as well as follow-up of participants over time to determine whether Web use results in health care utilization and diagnosis. Future research should involve longitudinal follow-up so that we can weigh the benefits of Web use for symptom appraisal (eg, reductions in delays to diagnosis) against the disadvantages (eg, unnecessary anxiety and health care use) and relate these to health care costs. PMID:28611017
A novel architecture for information retrieval system based on semantic web
NASA Astrophysics Data System (ADS)
Zhang, Hui
2011-12-01
Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats: they are suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, providing new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when a retrieval system lacks sufficient knowledge, it returns a large number of meaningless results to users because of the huge volume of candidate information. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
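The routing step that abstract describes — an inference engine deciding whether a query goes to the keyword engine or the semantic engine — might look like this in miniature. The vocabulary check below is a toy heuristic standing in for the paper's actual inference rules:

```python
# Toy ontology vocabulary; a real system would consult its knowledge base.
ONTOLOGY_TERMS = {"protein", "gene", "disease"}

def route_query(query):
    """Send queries containing known ontology concepts to the semantic
    engine; fall back to keyword search otherwise."""
    words = set(query.lower().split())
    if words & ONTOLOGY_TERMS:
        return "semantic"
    return "keyword"
```

Routing away queries the ontology cannot cover is what prevents the failure mode the abstract warns about: a semantic engine with too little knowledge flooding the user with meaningless results.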
FindZebra: a search engine for rare diseases.
Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole
2013-06-01
The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search in both default set-up and customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve the diagnostic quality without compromising the ease of use of the currently widely popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
SA-Search: a web tool for protein structure mining based on a Structural Alphabet
Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre
2004-01-01
SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search. PMID:15215446
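The two search modes SA-Search offers over 1D structural strings can be sketched as plain string operations. In this toy version, `in`/substring search stands in for the suffix-tree index, and `difflib.SequenceMatcher` stands in for a real sequence alignment; the SA strings and protein ids are invented:

```python
import difflib

# Toy database: protein id -> its 3D structure compressed to a 1D SA string.
SA_DB = {"1abc": "AWDDQR", "2xyz": "AWQQRS"}

def exact_word_hits(word):
    """Exact structural-word search. SA-Search uses a suffix tree for this;
    plain substring search gives the same answers on small data."""
    return [pid for pid, s in SA_DB.items() if word in s]

def fuzzy_score(word, pid):
    """Fuzzy search as simple 1D sequence comparison: best similarity of the
    query word against every window of the target SA string."""
    target = SA_DB[pid]
    return max(
        difflib.SequenceMatcher(None, word, target[i:i + len(word)]).ratio()
        for i in range(max(1, len(target) - len(word) + 1))
    )
```

Compressing 3D conformations to letters is what makes this possible at all: once structures are strings, decades of fast sequence-search machinery applies to them directly.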
SA-Search: a web tool for protein structure mining based on a Structural Alphabet.
Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre
2004-07-01
SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search.
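The SA approach above reduces 3D structure search to string matching over a 1D alphabet. A minimal sketch of the exact-word lookup idea, using a hypothetical three-letter alphabet and toy encodings (not the actual SA prototype conformations, and a plain scan rather than SA-Search's suffix tree):

```python
# Toy illustration of exact-word search over 1D structural-alphabet
# strings. Alphabet letters and database entries are hypothetical.

def find_exact_words(database, word):
    """Return (entry_id, offset) pairs where `word` occurs exactly."""
    hits = []
    for entry_id, sa_string in database.items():
        start = sa_string.find(word)
        while start != -1:
            hits.append((entry_id, start))
            start = sa_string.find(word, start + 1)
    return hits

db = {
    "1abc": "AAABBBCABCA",  # hypothetical SA encoding of a structure
    "2xyz": "CABCABBBAAA",
}
print(find_exact_words(db, "ABC"))  # → [('1abc', 7), ('2xyz', 1)]
```

A suffix tree, as used by SA-Search, makes such lookups run in time proportional to the query length rather than the database size; fuzzy words would instead be handled by standard 1D sequence alignment.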
Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.
Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios
2008-02-01
The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.
Galbusera, Fabio; Brayda-Bruno, Marco; Freutel, Maren; Seitz, Andreas; Steiner, Malte; Wehrle, Esther; Wilke, Hans-Joachim
2012-01-01
Previous surveys showed a poor quality of the web sites providing health information about low back pain. However, the rapid and continuous evolution of the Internet content may question the current validity of those investigations. The present study aims to quantitatively assess the quality of the Internet information about low back pain retrieved with the most commonly employed search engines. An Internet search with the keywords "low back pain" was performed with Google, Yahoo!® and Bing™ in the English language. The top 30 hits obtained with each search engine were evaluated by five independent raters, following criteria derived from previous works, and the ratings averaged. All search results were categorized as declaring compliance with a quality standard for health information (e.g. HONCode) or not, and by web site type (Institutional, Free informative, Commercial, News, Social Network, Unknown). The quality of the hits retrieved by the three search engines was extremely similar. The web sites had a clear purpose and were easy to navigate, but mostly lacked validity and quality of the provided links. Conformity to a quality standard was correlated with markedly greater quality of the web sites in all respects. Institutional web sites had the best validity and ease of use. Free informative web sites had good quality but markedly lower validity compared to Institutional websites. Commercial web sites provided more biased information. News web sites were well designed and easy to use, but lacked validity. The average quality of the hits retrieved by the most commonly employed search engines could be defined as satisfactory and compares favorably with previous investigations. User awareness of the need to check the quality of the information remains a concern.
VisSearch: A Collaborative Web Searching Environment
ERIC Educational Resources Information Center
Lee, Young-Jin
2005-01-01
VisSearch is a collaborative Web searching environment intended for sharing Web search results among people with similar interests, such as college students taking the same course. It facilitates students' Web searches by visualizing various Web searching processes. It also collects the visualized Web search results and applies an association rule…
Our Commitment to Reliable Health and Medical Information
... 000 visitors world-wide per day. HONcode Toolbar: search engine and checker of the certification status Automatically checks ... HONcode status when browsing health web sites. The search engine indexes only HONcode-certified sites. HONcodeHunt currently includes ...
A survey of the current status of web-based databases indexing Iranian journals.
Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza
2009-05-01
The scientific output of Iran has been increasing rapidly in recent years. Unfortunately, most papers are published in journals which are not indexed by popular indexing systems and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which indexed scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared with each other to identify the strengths and shortcomings of each site. Five web sites were identified. None had complete coverage of scientific Iranian journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Some sites did not cover abstracts. Numerous typing errors make searches ineffective and citation indexing unreliable. None of the currently available indexing sites is capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites. Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.
New Architectures for Presenting Search Results Based on Web Search Engines Users Experience
ERIC Educational Resources Information Center
Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.
2011-01-01
Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…
Islamic Extremists Love the Internet
2009-04-03
down on the West. Terrorists’ Use of Search Engines In order to find a particular blog, extremists use search engines such as Bloglines...BlogScope, and Technorati to search blog contents. Technorati, which is among the most popular blog search engines, provides current information on...of mid-January 2009 is tracking over 31.78 million blogs with 579.86 million posts.49 Other ways the terrorists use Web search engines are to
Search Techniques for the Web of Things: A Taxonomy and Survey.
Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus
2016-04-27
The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things while challenging by nature in this context, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, efficient indexing for historical and real time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented.
Search Techniques for the Web of Things: A Taxonomy and Survey
Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus
2016-01-01
The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things while challenging by nature in this context, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, efficient indexing for historical and real time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented. PMID:27128918
Finding Web-Based Anxiety Interventions on the World Wide Web: A Scoping Review
Olander, Ellinor K; Ayers, Susan
2016-01-01
Background One relatively new and increasingly popular approach of increasing access to treatment is Web-based intervention programs. The advantage of Web-based approaches is the accessibility, affordability, and anonymity of potentially evidence-based treatment. Despite much research evidence on the effectiveness of Web-based interventions for anxiety found in the literature, little is known about what is publicly available for potential consumers on the Web. Objective Our aim was to explore what a consumer searching the Web for Web-based intervention options for anxiety-related issues might find. The objectives were to identify currently publicly available Web-based intervention programs for anxiety and to synthesize and review these in terms of (1) website characteristics such as credibility and accessibility; (2) intervention program characteristics such as intervention focus, design, and presentation modes; (3) therapeutic elements employed; and (4) published evidence of efficacy. Methods Web keyword searches were carried out on three major search engines (Google, Bing, and Yahoo; UK platforms). For each search, the first 25 hyperlinks were screened for eligible programs. Included were programs that were designed for anxiety symptoms, currently publicly accessible on the Web, had an online component, a structured treatment plan, and were available in English. Data were extracted for website characteristics, program characteristics, therapeutic characteristics, as well as empirical evidence. Programs were also evaluated using a 16-point rating tool. Results The search resulted in 34 programs that were eligible for review. A wide variety of programs for anxiety, including specific anxiety disorders, and anxiety in combination with stress, depression, or anger were identified and based predominantly on cognitive behavioral therapy techniques. The majority of websites were rated as credible, secure, and free of advertisement.
The majority required users to register and/or to pay a program access fee. Half of the programs offered some form of paid therapist or professional support. Programs varied in treatment length and number of modules and employed a variety of presentation modes. Relatively few programs had published research evidence of the intervention’s efficacy. Conclusions This review represents a snapshot of available Web-based intervention programs for anxiety that could be found by consumers in March 2015. The consumer is confronted with a diversity of programs, which makes it difficult to identify an appropriate program. Limited reports and existence of empirical evidence for efficacy make it even more challenging to identify credible and reliable programs. This highlights the need for consistent guidelines and standards on developing, providing, and evaluating Web-based interventions and platforms with reliable up-to-date information for professionals and consumers about the characteristics, quality, and accessibility of Web-based interventions. PMID:27251763
Finding Web-Based Anxiety Interventions on the World Wide Web: A Scoping Review.
Ashford, Miriam Thiel; Olander, Ellinor K; Ayers, Susan
2016-06-01
One relatively new and increasingly popular approach of increasing access to treatment is Web-based intervention programs. The advantage of Web-based approaches is the accessibility, affordability, and anonymity of potentially evidence-based treatment. Despite much research evidence on the effectiveness of Web-based interventions for anxiety found in the literature, little is known about what is publicly available for potential consumers on the Web. Our aim was to explore what a consumer searching the Web for Web-based intervention options for anxiety-related issues might find. The objectives were to identify currently publicly available Web-based intervention programs for anxiety and to synthesize and review these in terms of (1) website characteristics such as credibility and accessibility; (2) intervention program characteristics such as intervention focus, design, and presentation modes; (3) therapeutic elements employed; and (4) published evidence of efficacy. Web keyword searches were carried out on three major search engines (Google, Bing, and Yahoo; UK platforms). For each search, the first 25 hyperlinks were screened for eligible programs. Included were programs that were designed for anxiety symptoms, currently publicly accessible on the Web, had an online component, a structured treatment plan, and were available in English. Data were extracted for website characteristics, program characteristics, therapeutic characteristics, as well as empirical evidence. Programs were also evaluated using a 16-point rating tool. The search resulted in 34 programs that were eligible for review. A wide variety of programs for anxiety, including specific anxiety disorders, and anxiety in combination with stress, depression, or anger were identified and based predominantly on cognitive behavioral therapy techniques. The majority of websites were rated as credible, secure, and free of advertisement.
The majority required users to register and/or to pay a program access fee. Half of the programs offered some form of paid therapist or professional support. Programs varied in treatment length and number of modules and employed a variety of presentation modes. Relatively few programs had published research evidence of the intervention's efficacy. This review represents a snapshot of available Web-based intervention programs for anxiety that could be found by consumers in March 2015. The consumer is confronted with a diversity of programs, which makes it difficult to identify an appropriate program. Limited reports and existence of empirical evidence for efficacy make it even more challenging to identify credible and reliable programs. This highlights the need for consistent guidelines and standards on developing, providing, and evaluating Web-based interventions and platforms with reliable up-to-date information for professionals and consumers about the characteristics, quality, and accessibility of Web-based interventions.
NASA Technical Reports Server (NTRS)
Hegde, Mahabaleshwara; Strub, Richard F.; Lynnes, Christopher S.; Fang, Hongliang; Teng, William
2008-01-01
Mirador is a web interface for searching Earth Science data archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Mirador provides keyword-based search and guided navigation for efficient search and access to Earth Science data. Mirador employs the power of Google's universal search technology for fast metadata keyword searches, augmented by additional capabilities such as event searches (e.g., hurricanes), searches based on a location gazetteer, and data services like format converters and data sub-setters. The objective of guided data navigation is to present users with multiple navigation paths through the data. Guided navigation in Mirador is based on an ontology derived from the Global Change Master Directory (GCMD) Directory Interchange Format (DIF). The current implementation includes the project ontology covering various instruments and model data. Additional capabilities in the pipeline include Earth Science parameter and applications ontologies.
ERIC Educational Resources Information Center
Darrah, Brenda
Researchers for small businesses, which may have no access to expensive databases or market research reports, must often rely on information found on the Internet, which can be difficult to find. Although current conventional Internet search engines are now able to index over one billion documents, there are many more documents existing in…
Changiz, Tahereh; Haghani, Fariba; Masoomi, Rasoul
2012-01-01
Access to the medical resources on the web is one of the current challenges for researchers and medical science educators. The purpose of the current project was to design and implement a comprehensive and specific subject/web directory of medical education. First, the categories to be incorporated in the directory were defined through reviewing related directories and obtaining medical education experts' opinions in a focus group. Then, a number of sources, such as (meta) search engines, subject directories, databases, and library catalogs, were searched/browsed to select and collect high-quality resources. Finally, the website was designed and the resources were entered into the directory. The main categories incorporating WDME resources are: Journals, Organizations, Best Evidence in Medical Education, and Textbooks. Each category is divided into sub-categories, and the related resources of each category are briefly described within it. The resources in this directory can be accessed both by browsing and by keyword searching. WDME is accessible at http://medirectory.org. The innovative Web Directory for Medical Education (WDME) presented in this paper is more comprehensive than other existing directories, and is expandable through user suggestions. It may help medical educators find their desired resources more quickly and easily, and hence make more informed decisions in education.
Incorporating Web 2.0 Technologies from an Organizational Perspective
NASA Astrophysics Data System (ADS)
Owens, R.
2009-12-01
The Arctic Research Consortium of the United States (ARCUS) provides support for the organization, facilitation, and dissemination of online educational and scientific materials and information to a wide range of stakeholders. ARCUS is currently weaving the fabric of Web 2.0 technologies—web development featuring interactive information sharing and user-centered design—into its structure, both as a tool for information management and for educational outreach. The importance of planning, developing, and maintaining a cohesive online platform in order to integrate data storage and dissemination will be discussed in this presentation, as well as some specific open source technologies and tools currently available, including:
○ Content Management: Any system set up to manage the content of web sites and services. Drupal is a content management system, built in a modular fashion allowing for a powerful set of features including, but not limited to, weblogs, forums, event calendars, polling, and more.
○ Faceted Search: Combined with full-text indexing, faceted searching allows site visitors to locate information quickly and then provides a set of 'filters' with which to narrow the search results. Apache Solr is a search server with a web-services-like API (Application Programming Interface) that has built-in support for faceted searching.
○ Semantic Web: The semantic web refers to the ongoing evolution of the World Wide Web as it begins to incorporate semantic components, which aid in processing requests. OpenCalais is a web service that uses natural language processing, along with other methods, to extract meaningful 'tags' from your content. This metadata can then be used to connect people, places, and things throughout your website, enriching the surfing experience for the end user.
○ Web Widgets: A web widget is a portable 'piece of code' that can be embedded easily into web pages by an end user. Timeline is a widget developed as part of the SIMILE project at MIT (Massachusetts Institute of Technology) for displaying time-based events in a clean, horizontal timeline display.
Numerous standards, applications, and 3rd-party integration services are also available for use in today's Web 2.0 environment. In addition to a cohesive online platform, the following tools can improve networking, information sharing, and scientific and educational collaboration:
○ Facebook (fan pages, social networking, etc.)
○ Twitter/Twitterfeed (automatic updates in 3 steps)
○ Mobify.me (mobile web)
○ Wimba, Adobe Connect, etc. (real-time conferencing)
Increasingly, the scientific community is being asked to share data and information within and outside disciplines, with K-12 students, and with members of the public and policy-makers. Web 2.0 technologies can easily be set up and utilized to share data and other information with specific audiences in real time, and their simplicity ensures their increasing use by the science community in years to come.
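The faceted-search pattern described above (full-text matching combined with per-field value counts offered as filters) can be sketched in a few lines. The document fields and facet names here are illustrative only, not Apache Solr's actual schema or API:

```python
# Minimal faceted-search sketch: match documents against a text
# query, then count the remaining values in each facet field.
# Field names ("type", "year") and the sample docs are hypothetical.

def faceted_search(docs, text_query, filters):
    results = [d for d in docs
               if text_query.lower() in d["title"].lower()
               and all(d.get(k) == v for k, v in filters.items())]
    facets = {}
    for field in ("type", "year"):
        counts = {}
        for d in results:
            counts[d[field]] = counts.get(d[field], 0) + 1
        facets[field] = counts
    return results, facets

docs = [
    {"title": "Arctic data portal", "type": "dataset", "year": 2009},
    {"title": "Arctic outreach blog", "type": "weblog", "year": 2009},
    {"title": "Sea ice dataset", "type": "dataset", "year": 2008},
]
hits, facets = faceted_search(docs, "arctic", {})
print(facets)  # facet counts the UI would offer as narrowing filters
```

Selecting a facet value simply re-runs the search with that value added to `filters`, e.g. `faceted_search(docs, "arctic", {"type": "dataset"})`; a production server such as Solr computes the same counts against a full-text index rather than by scanning.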
The Use of Web Search Engines in Information Science Research.
ERIC Educational Resources Information Center
Bar-Ilan, Judit
2004-01-01
Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…
Alternatives to animal testing: information resources via the Internet and World Wide Web.
Hakkinen, P J Bert; Green, Dianne K
2002-04-25
Many countries, including the United States, Canada, European Union member states, and others, require that a comprehensive search for possible alternatives be completed before beginning some or all research involving animals. Completing comprehensive alternatives searches and keeping current with information associated with alternatives to animal testing is a challenge that will be made easier as people throughout the world gain access to the Internet and World Wide Web. Numerous Internet and World Wide Web resources are available to provide guidance and other information on in vitro and other alternatives to animal testing. A comprehensive Web site is Alternatives to Animal Testing on the Web (Altweb), which serves as an online clearinghouse for resources, information, and news about alternatives to animal testing. Examples of other important Web sites include the joint one for the (US) Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) and the Norwegian Reference Centre for Laboratory Animal Science and Alternatives (The NORINA database). Internet mailing lists and online access to bulletin boards, discussion areas, newsletters, and journals are other ways to access and share information to stay current with alternatives to animal testing.
Concept Mapping Your Web Searches: A Design Rationale and Web-Enabled Application
ERIC Educational Resources Information Center
Lee, Y.-J.
2004-01-01
Although it has become very common to use World Wide Web-based information in many educational settings, there has been little research on how to better search and organize Web-based information. This paper discusses the shortcomings of Web search engines and Web browsers as learning environments and describes an alternative Web search environment…
The Evolution of Web Searching.
ERIC Educational Resources Information Center
Green, David
2000-01-01
Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…
Adding a visualization feature to web search engines: it's time.
Wong, Pak Chung
2008-01-01
It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.
From Science to e-Science to Semantic e-Science: A Heliophysics Case Study
NASA Technical Reports Server (NTRS)
Narock, Thomas; Fox, Peter
2011-01-01
The past few years have witnessed unparalleled efforts to make scientific data web accessible. The Semantic Web has proven invaluable in this effort; however, much of the literature is devoted to system design, ontology creation, and trials and tribulations of current technologies. In order to fully develop the nascent field of Semantic e-Science we must also evaluate systems in real-world settings. We describe a case study within the field of Heliophysics and provide a comparison of the evolutionary stages of data discovery, from manual to semantically enabled. We describe the socio-technical implications of moving toward automated and intelligent data discovery. In doing so, we highlight how this process enhances what is currently being done manually in various scientific disciplines. Our case study illustrates that Semantic e-Science is more than just semantic search. The integration of search with web services, relational databases, and other cyberinfrastructure is a central tenet of our case study and one that we believe has applicability as a generalized research area within Semantic e-Science. This case study illustrates a specific example of the benefits, and limitations, of semantically enabled data discovery. We show examples of significant reductions in time and effort enabled by Semantic e-Science; yet, we argue that a "complete" solution requires integrating semantic search with other research areas such as data provenance and web services.
A review of the reporting of web searching to identify studies for Cochrane systematic reviews.
Briscoe, Simon
2018-03-01
The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method. Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reviews reported using web searching using a search engine or website as a literature search method. In the majority of reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
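The reporting details the review recommends (source name, URL, date searched, search strategy, number of results) map naturally onto a simple structured record. A minimal sketch of how a review team might log each web search; the field names and example values are our own illustration, not a Cochrane standard:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class WebSearchRecord:
    # Fields mirror the details recommended for transparent,
    # reproducible reporting; the structure itself is hypothetical.
    source_name: str      # search engine or organisation website
    url: str
    date_searched: date
    search_terms: str
    num_results: int

record = WebSearchRecord(
    source_name="Google",
    url="https://www.google.com",
    date_searched=date(2017, 1, 15),
    search_terms='"low back pain" exercise trial',  # example query
    num_results=50,
)
print(asdict(record))
```

Logging one such record per search engine or website, at the time the search is run, would capture exactly the details the study found missing from most reviews.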
Multimedia Web Searching Trends.
ERIC Educational Resources Information Center
Ozmutlu, Seda; Spink, Amanda; Ozmutlu, H. Cenk
2002-01-01
Examines and compares multimedia Web searching by Excite and FAST search engine users in 2001. Highlights include audio and video queries; time spent on searches; terms per query; ranking of the most frequently used terms; and differences in Web search behaviors of U.S. and European Web users. (Author/LRW)
Searching Online Chemical Data Repositories via the ChemAgora Portal.
Zanzi, Antonella; Wittwehr, Clemens
2017-12-26
ChemAgora, a web application designed and developed in the context of the "Data Infrastructure for Chemical Safety Assessment" (diXa) project, provides search capabilities for chemical data from resources available online, enabling users to cross-reference their search results with both regulatory chemical information and public chemical databases. ChemAgora, through an on-the-fly search, informs whether a chemical is known or not in each of the external data sources and provides clickable links leading to the third-party web site pages containing the information. The original purpose of the ChemAgora application was to correlate studies stored in the diXa data warehouse with available chemical data. Since the end of the diXa project, ChemAgora has evolved into an independent portal, currently accessible directly through the ChemAgora home page, with improved search capabilities of online data sources.
Surfing for mouth guards: assessing quality of online information.
Magunacelaya, Macarena B; Glendor, Ulf
2011-10-01
The Internet is an easily accessible and commonly used source of health-related information, but evaluations of the quality of this information within the dental trauma field are still lacking. The aims of this study are (i) to present the most current scientific knowledge regarding mouth guards used in sport activities, (ii) to suggest a scoring system for evaluating the quality of information about mouth guard protection on World Wide Web sites and (iii) to employ this scoring system when seeking reliable mouth guard-related websites. First, an Internet search using the keywords 'athletic injuries/prevention and control' and 'mouth protector' or 'mouth guards' in English was performed on PubMed, Cochrane, SvedMed+ and Web of Science to identify scientific knowledge about mouth guards. Second, an Internet search using the keywords 'consumer health information Internet', 'Internet information public health' and 'web usage-seeking behaviour' was performed on PubMed and Web of Science to obtain scientific articles seeking to evaluate the quality of health information on the Web. Based on the articles found in the second search, two scoring systems were selected. Then, an Internet search using the keywords 'mouth protector', 'mouth guards' and 'gum shields' in English was performed on the search engines Google, MSN and Yahoo. The websites selected were evaluated for reliability and accuracy. Of the 223 websites retrieved, 39 were designated valid and evaluated. Nine sites scored 22 or higher. The mean total score of the 39 websites was 14.2. Fourteen websites scored higher than the mean total score, and 25 websites scored lower. The highest total score, achieved by a public institution Web site (Health Canada), was 31 of a maximum possible score of 34, and the lowest score was 0. This study shows that there is a large amount of information about mouth guards on the Internet but that the quality of this information varies.
It should be the responsibility of health care professionals to suggest and provide reliable Internet URL addresses to patients. In addition, an appropriate search terminology and search strategy should be made available to persons who want to search beyond the recommended sites. © 2011 John Wiley & Sons A/S.
Sexual information seeking on web search engines.
Spink, Amanda; Koricich, Andrew; Jansen, B J; Cole, Charles
2004-02-01
Sexual information seeking is an important element within human information behavior. Seeking sexually related information on the Internet takes many forms and channels, including chat room discussions, accessing Websites, or searching Web search engines for sexual materials. The study of sexual Web queries provides insight into sexually related information-seeking behavior, of value to Web users and providers alike. We qualitatively analyzed queries from logs of 1,025,910 Alta Vista and AlltheWeb.com Web user queries from 2001. We compared the differences in sexually related Web searching between Alta Vista and AlltheWeb.com users. Differences were found in session duration, query outcomes, and search term choices. Implications of the findings for sexual information seeking are discussed.
Information System through ANIS at CeSAM
NASA Astrophysics Data System (ADS)
Moreau, C.; Agneray, F.; Gimenez, S.
2015-09-01
ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the publication of astronomical data of various kinds through private and/or public dedicated Information Systems. The architecture of ANIS is composed of a database server that contains the project data; a web user interface template that provides high-level services (searching, extracting and displaying imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module or a cone-search interface); a framework composed of several packages; and a metadata database managed by a web administration entity. The process of implementing a new ANIS instance at CeSAM is easy and fast: the scientific project submits its data or secure access to the data, the CeSAM team installs the new instance (the web interface template and the metadata database), and the project administrator configures the instance with the ANIS web administration entity. Currently, CeSAM offers through ANIS web access to VO-compliant Information Systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT, ...).
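The cone-search interface mentioned above follows a standard Virtual Observatory pattern: a GET request carrying a sky position and a search radius, answered with a VOTable of matching sources. As a minimal sketch (the endpoint URL below is hypothetical; the real ANIS service URLs are project-specific), an IVOA Simple Cone Search request can be built like this:

```python
from urllib.parse import urlencode

def cone_search_url(base_url, ra_deg, dec_deg, radius_deg):
    """Build an IVOA Simple Cone Search request URL.

    The protocol takes a sky position (RA, DEC, in decimal degrees)
    and a search radius (SR, in degrees); the service returns the
    matching sources as a VOTable.
    """
    params = {"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg}
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint for illustration only.
url = cone_search_url("https://cesam.lam.fr/anis/conesearch", 150.1, 2.2, 0.05)
print(url)  # https://cesam.lam.fr/anis/conesearch?RA=150.1&DEC=2.2&SR=0.05
```

Any VO-aware client can then fetch the URL and parse the returned VOTable.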
A Semantic Web-based System for Managing Clinical Archetypes.
Fernandez-Breis, Jesualdo Tomas; Menarguez-Tortosa, Marcos; Martinez-Costa, Catalina; Fernandez-Breis, Eneko; Herrero-Sempere, Jose; Moner, David; Sanchez, Jesus; Valencia-Garcia, Rafael; Robles, Montserrat
2008-01-01
Archetypes facilitate the sharing of clinical knowledge and therefore are a basic tool for achieving interoperability between healthcare information systems. In this paper, a Semantic Web System for Managing Archetypes is presented. This system allows for the semantic annotation of archetypes, as well for performing semantic searches. The current system is capable of working with both ISO13606 and OpenEHR archetypes.
Web Search Studies: Multidisciplinary Perspectives on Web Search Engines
NASA Astrophysics Data System (ADS)
Zimmer, Michael
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
On Building a Search Interface Discovery System
NASA Astrophysics Data System (ADS)
Shestakov, Denis
A huge portion of the Web, known as the deep Web, is accessible via search interfaces to myriads of databases on the Web. While relatively good approaches for querying the contents of web databases have been proposed recently, one cannot fully exploit them while most search interfaces remain unlocated. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep-web characterization surveys and for constructing directories of deep-web resources.
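The paper does not give the I-Crawler's classification rules, but the general idea of recognizing candidate search interfaces can be sketched with a simple heuristic: flag any HTML form that contains a free-text input, since such forms are likely gateways to a back-end database. The following is an illustrative sketch, not the I-Crawler's actual algorithm:

```python
from html.parser import HTMLParser

class SearchFormDetector(HTMLParser):
    """Crude heuristic in the spirit of search-interface discovery:
    count <form> elements containing a free-text input, which are
    candidate search interfaces to a deep-web database."""

    def __init__(self):
        super().__init__()
        self.in_form = False
        self.form_has_text_input = False
        self.search_forms = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
            self.form_has_text_input = False
        elif self.in_form and tag == "input":
            # An <input> without a type attribute defaults to type="text".
            if attrs.get("type", "text").lower() in ("text", "search"):
                self.form_has_text_input = True

    def handle_endtag(self, tag):
        if tag == "form":
            if self.form_has_text_input:
                self.search_forms += 1
            self.in_form = False

page = """
<form action="/login"><input type="password" name="pw"></form>
<form action="/find"><input type="text" name="q"><input type="submit"></form>
"""
detector = SearchFormDetector()
detector.feed(page)
print(detector.search_forms)  # 1
```

A real system would add many more signals (button labels, field names, result-page structure) before classifying a form as a search interface.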
DOE Office of Scientific and Technical Information (OSTI.GOV)
None Available
2018-02-06
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Web-Based Training Methods for Behavioral Health Providers: A Systematic Review.
Jackson, Carrie B; Quetsch, Lauren B; Brabson, Laurel A; Herschell, Amy D
2018-07-01
There has been an increase in the use of web-based training methods to train behavioral health providers in evidence-based practices. This systematic review focuses solely on the efficacy of web-based training methods for training behavioral health providers. A literature search yielded 45 articles meeting inclusion criteria. Results indicated that the serial instruction training method was the most commonly studied web-based training method. While the current review has several notable limitations, findings indicate that participating in a web-based training may result in greater post-training knowledge and skill, in comparison to baseline scores. Implications and recommendations for future research on web-based training methods are discussed.
Multitasking Web Searching and Implications for Design.
ERIC Educational Resources Information Center
Ozmutlu, Seda; Ozmutlu, H. C.; Spink, Amanda
2003-01-01
Findings from a study of users' multitasking searches on Web search engines include: multitasking searches are a noticeable user behavior; multitasking search sessions are longer than regular search sessions in terms of queries per session and duration; both Excite and AlltheWeb.com users search for about three topics per multitasking session and…
Children's Search Engines from an Information Search Process Perspective.
ERIC Educational Resources Information Center
Broch, Elana
2000-01-01
Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…
Searching the Web: The Public and Their Queries.
ERIC Educational Resources Information Center
Spink, Amanda; Wolfram, Dietmar; Jansen, Major B. J.; Saracevic, Tefko
2001-01-01
Reports findings from a study of searching behavior by over 200,000 users of the Excite search engine. Analysis of over one million queries revealed most people use few search terms, few modified queries, view few Web pages, and rarely use advanced search features. Concludes that Web searching by the public differs significantly from searching of…
Improving Web Searches: Case Study of Quit-Smoking Web Sites for Teenagers
Skinner, Harvey
2003-01-01
Background The Web has become an important and influential source of health information. With the vast number of Web sites on the Internet, users often resort to popular search sites when searching for information. However, little is known about the characteristics of Web sites returned by simple Web searches for information about smoking cessation for teenagers. Objective To determine the characteristics of Web sites retrieved by search engines about smoking cessation for teenagers and how information quality correlates with search ranking. Methods The top 30 sites returned by 4 popular search sites in response to the search terms "teen quit smoking" were examined. The information relevance and quality characteristics of these sites were evaluated by 2 raters. Objective site characteristics were obtained using a page-analysis Web site. Results Only 14 of the 30 Web sites are of direct relevance to smoking cessation for teenagers. The readability of about two-thirds of the 14 sites is below an eighth-grade reading level, and these sites ranked significantly higher (Kendall rank correlation, tau = -0.39, P = .05) in search-site results than sites with readability at or above that grade level. Sites that ranked higher were significantly associated with the presence of an e-mail address for contact (tau = -0.46, P = .01), annotated hyperlinks to external sites (tau = -0.39, P = .04), and the presence of a meta description tag (tau = -0.48, P = .002). The median link density (the number of external sites that link to a site) of the Web pages was 6 and the maximum was 735. A higher link density was significantly associated with a higher rank (tau = -0.58, P = .02). Conclusions Using simple search terms on popular search sites to look for information on smoking cessation for teenagers resulted in less than half of the sites being of direct relevance.
To improve search efficiency, users could supplement results obtained from simple Web searches with human-maintained Web directories and learn to refine their searches with more advanced search syntax. PMID:14713656
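The Kendall rank correlations reported above measure agreement between two orderings, here search rank versus a site feature. A negative tau against rank (where 1 is the top result) means the feature is more common among higher-ranked sites. A minimal tau-a computation, on made-up illustrative data rather than the study's, looks like this:

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs divided by
    the total number of pairs. Values run from -1 (perfectly reversed
    orderings) to +1 (identical orderings)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Illustrative data only (not from the study): search ranks 1..5 and
# a link-density-like score that grows as rank improves.
ranks = [1, 2, 3, 4, 5]
link_density = [700, 300, 150, 40, 6]
print(kendall_tau(ranks, link_density))  # -1.0
```

Libraries such as SciPy (`scipy.stats.kendalltau`) provide tie-corrected variants and significance tests for real analyses.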
Cognitive and Task Influences on Web Searching Behavior.
ERIC Educational Resources Information Center
Kim, Kyung-Sun; Allen, Bryce
2002-01-01
Describes results from two independent investigations of college students that were conducted to study the impact of differences in users' cognition and search tasks on Web search activities and outcomes. Topics include cognitive style; problem-solving; and implications for the design and use of the Web and Web search engines. (Author/LRW)
Evaluation of Web Accessibility of Consumer Health Information Websites
Zeng, Xiaoming; Parmanto, Bambang
2003-01-01
The objectives of the study are to construct a comprehensive framework for web accessibility evaluation, to evaluate the current status of web accessibility of consumer health information websites and to investigate the relationship between web accessibility and property of the websites. We selected 108 consumer health information websites from the directory service of a Web search engine. We used Web accessibility specifications to construct a framework for the measurement of Web Accessibility Barriers (WAB) of website. We found that none of the websites is completely accessible to people with disabilities, but governmental and educational health information websites exhibit better performance on web accessibility than other categories of websites. We also found that the correlation between the WAB score and the popularity of a website is statistically significant. PMID:14728272
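The abstract above describes a Web Accessibility Barriers (WAB) score without giving its formula. As a hedged sketch of the underlying idea only (the published metric differs in its details), one can weight each page's rate of actual violations per potential violation by the priority of the guideline violated, then average over the site's pages:

```python
def wab_score(pages):
    """Sketch of a WAB-style accessibility metric (illustrative; not
    the exact published formula). `pages` is a list of pages, each a
    list of (violations, potential_violations, priority_weight)
    tuples, one per accessibility guideline checked. Higher scores
    mean more barriers for users with disabilities."""
    per_page = []
    for checks in pages:
        per_page.append(sum(
            (violations / potential) * weight
            for violations, potential, weight in checks
            if potential  # skip guidelines with no applicable elements
        ))
    return sum(per_page) / len(per_page)

# Two hypothetical pages checked against two weighted guidelines each.
site = [
    [(2, 10, 1.0), (1, 4, 0.5)],   # page 1: 0.2*1.0 + 0.25*0.5 = 0.325
    [(0, 10, 1.0), (4, 4, 0.5)],   # page 2: 0.0     + 1.0*0.5  = 0.5
]
print(wab_score(site))  # ≈ 0.4125
```

A score of 0 would indicate no detected barriers; the study's finding that popularity correlates with the WAB score was computed from scores of this general kind across 108 sites.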
Creating a Classroom Kaleidoscope with the World Wide Web.
ERIC Educational Resources Information Center
Quinlan, Laurie A.
1997-01-01
Discusses the elements of classroom Web presentations: planning; construction, including design tips; classroom use; and assessment. Lists 14 World Wide Web resources for K-12 teachers; Internet search tools (directories, search engines and meta-search engines); a Web glossary; and an example of HTML for a simple Web page. (PEN)
Search strategies on the Internet: general and specific.
Bottrill, Krys
2004-06-01
Some of the most up-to-date information on scientific activity is to be found on the Internet; for example, on the websites of academic and other research institutions and in databases of currently funded research studies provided on the websites of funding bodies. Such information can be valuable in suggesting new approaches and techniques that could be applicable in a Three Rs context. However, the Internet is a chaotic medium, not subject to the meticulous classification and organisation of classical information resources. At the same time, Internet search engines do not match the sophistication of search systems used by database hosts. Also, although some offer relatively advanced features, user awareness of these tends to be low. Furthermore, much of the information on the Internet is not accessible to conventional search engines, giving rise to the concept of the "Invisible Web". General strategies and techniques for Internet searching are presented, together with a comparative survey of selected search engines. The question of how the Invisible Web can be accessed is discussed, as well as how to keep up-to-date with Internet content and improve searching skills.
Migration to Earth Observation Satellite Product Dissemination System at JAXA
NASA Astrophysics Data System (ADS)
Ikehata, Y.; Matsunaga, M.
2017-12-01
JAXA released "G-Portal", a portal web site for searching and delivering Earth observation satellite data, in February 2013. G-Portal handles data from ten satellites, GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1 and JERS-1, and archives 5.17 million products and 14 million catalogues in total. Users can search these products/catalogues through a GUI web search and a catalogue interface (CSW/OpenSearch). In this fiscal year, we will replace this system with "Next G-Portal" and have been carrying out integration, testing, and migration. Next G-Portal will handle data from satellites planned for future launch in addition to those handled by G-Portal. From a system-architecture perspective, G-Portal adopted a "cluster system" for redundancy, so improving its performance required replacing the servers with higher-specification machines (a "scale-up" approach), which incurred a large cost at every upgrade. To avoid this, Next G-Portal adopts a "scale-out" architecture: load-balancing interfaces, a distributed file system, and distributed databases. (We reported this at the AGU Fall Meeting 2015 (IN23D-1748).) From a usability perspective, G-Portal provided a complicated interface: a "step by step" web design, randomly generated URLs, and sftp (which requires a non-standard TCP port). Customers complained about these interfaces, and the support team grew weary of answering them. To solve this problem, Next G-Portal adopts simple interfaces: a "1 page" web design, RESTful URLs, and normal FTP. (We reported this at the AGU Fall Meeting 2016 (IN23B-1778).) Furthermore, Next G-Portal must absorb the GCOM-W data dissemination system, which is to be terminated next March, as well as the current G-Portal. This may give rise to some difficulties, since the current G-Portal and the GCOM-W data dissemination system differ considerably from Next G-Portal. The presentation reports the knowledge obtained from the process of merging these systems.
PubMed and beyond: a survey of web tools for searching biomedical literature
Lu, Zhiyong
2011-01-01
The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076
Information about liver transplantation on the World Wide Web.
Hanif, F; Sivaprakasam, R; Butler, A; Huguet, E; Pettigrew, G J; Michael, E D A; Praseedom, R K; Jamieson, N V; Bradley, J A; Gibbs, P
2006-09-01
Orthotopic liver transplant (OLTx) has evolved into a successful surgical treatment for end-stage liver disease. Awareness and information about OLTx are important tools in assisting OLTx recipients and the people supporting them, including non-transplant clinicians. The study aimed to investigate the nature and quality of liver transplant-related patient information on the World Wide Web. Four common search engines were used to explore the Internet using the key words 'Liver transplant'. The URLs (uniform resource locators) of the top 50 returns were chosen, as it was judged unlikely that the average user would search beyond the first 50 sites returned by a given search. Each Web site was assessed on the following categories: origin, language, accessibility and extent of the information. A weighted Information Score (IS) was created to assess the quality of the clinical and educational value of each Web site and was scored independently by three transplant clinicians. The Internet search performed with the aid of the four search engines yielded a total of 2,255,244 Web sites. Of the 200 possible sites, only 58 Web sites were assessed, because of repetition of the same Web sites and non-accessible links. The overall median weighted IS was 22 (IQR 1 - 42). Of the 58 Web sites analysed, 45 (77%) belonged to the USA, six (10%) were European, and seven (12%) were from the rest of the world. The median weighted IS of publications originating from Europe and the USA was 40 (IQR = 22 - 60) and 23 (IQR = 6 - 38), respectively. Although European Web sites achieved a higher weighted IS [40 (IQR = 22 - 60)] than the USA publications [23 (IQR = 6 - 38)], the difference was not statistically significant (p = 0.07). Web sites belonging to academic institutions and professional organizations scored significantly higher, with median weighted ISs of 28 (IQR = 16 - 44) and 24 (12 - 35), respectively, than commercial Web sites (median = 6, IQR = 0 - 14; p = .001). 
There was an intraclass correlation coefficient (ICC) of 0.89, with an associated 95% CI (0.83, 0.93), for the three observers on the 58 Web sites. The study highlights the need for a significant improvement in the information available on the World Wide Web about OLTx. It concludes that the educational material currently available on the World Wide Web about liver transplantation is of poor quality and requires rigorous input from health care professionals. The authors suggest that clinicians should take the necessary steps to improve the standard of information available on their Web sites and take an active role in helping their patients find Web sites that provide the best and most accurate information applicable to local and regional circumstances.
The Effect of Individual Differences on Searching the Web.
ERIC Educational Resources Information Center
Ihadjadene, Madjid; Chaudiron, Stephanne; Martins, Daniel
2003-01-01
Reports results from a project that investigated the influence of two types of expertise--knowledge of the search domain and experience of the Web search engines--on students' use of a Web search engine. Results showed participants with good knowledge in the domain and participants with high experience of the Web had the best performances. (AEF)
The Next Linear Collider Program
Searching for American Indian Resources on the Internet.
ERIC Educational Resources Information Center
Pollack, Ira; Derby, Amy
This paper provides basic information on searching the Internet and lists World Wide Web sites containing resources for American Indian education. Comprehensive and topical Web directories, search engines, and meta-search engines are briefly described. Search strategies are discussed, and seven Web sites are listed that provide more advanced…
ERIC Educational Resources Information Center
She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua
2012-01-01
This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…
Using the internet to understand angler behavior in the information age
Martin, Dustin R.; Pracheil, Brenda M.; DeBoer, Jason A.; Wilde, Gene R.; Pope, Kevin L.
2012-01-01
Declining participation in recreational angling is of great concern to fishery managers because fishing license sales are an important revenue source for protection of aquatic resources. This decline is frequently attributed, in part, to increased societal reliance on electronics. Internet use by anglers is increasing and fishery managers may use the Internet as a unique means to increase angler participation. We examined Internet search behavior using Google Insights for Search, a free online tool that summarizes Google searches from 2004 to 2011 to determine (1) trends in Internet search volume for general fishing related terms and (2) the relative usefulness of terms related to angler recruitment programs across the United States. Though search volume declined for general fishing terms (e.g., fishing, fishing guide), search volume increased for social media and recruitment terms (e.g., fishing forum, family fishing) over the 7-year period. We encourage coordinators of recruitment programs to capitalize on anglers’ Internet usage by considering Internet search patterns when creating web-based information. Careful selection of terms used in web-based information to match those currently searched by potential anglers may help to direct traffic to state agency websites that support recruitment efforts.
NASA Astrophysics Data System (ADS)
Ahlers, Dirk; Boll, Susanne
In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
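The location extractor described above must turn unstructured page text into geo-context. The chapter's actual method is not given here, but the simplest family of approaches matches page text against a gazetteer of place names; the sketch below uses a two-entry toy gazetteer (the coordinates are approximate and the helper names are my own):

```python
import re

# Toy gazetteer; a real location extractor would use a full place-name
# database plus disambiguation heuristics (e.g. postal codes, context).
GAZETTEER = {
    "oldenburg": (53.14, 8.21),
    "bremen": (53.08, 8.80),
}

def extract_locations(text):
    """Match page text against a gazetteer and return (place, lat, lon)
    hits that a spatial indexer could attach to the page as geo-context."""
    hits = []
    for token in re.findall(r"[A-Za-z]+", text):
        key = token.lower()
        if key in GAZETTEER:
            lat, lon = GAZETTEER[key]
            hits.append((token, lat, lon))
    return hits

page_text = "Our office in Oldenburg is a short train ride from Bremen."
print(extract_locations(page_text))
# [('Oldenburg', 53.14, 8.21), ('Bremen', 53.08, 8.8)]
```

With a geo-context attached, the indexer can serve spatial queries such as "pages about places within 10 km of here."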
Nicholson, Scott
2005-01-01
The paper explores the current state of generalist search education in library schools and considers that foundation in respect to the Medical Library Association's statement on expert searching. Syllabi from courses with significant searching components were examined from ten of the top library schools, as determined by the U.S. News & World Report rankings. Mixed methods were used, but primarily quantitative bibliometric methods were used. The educational focus in these searching components was on understanding the generalist searching resources and typical users and on performing a reflective search through application of search strategies, controlled vocabulary, and logic appropriate to the search tool. There is a growing emphasis on Web-based search tools and a movement away from traditional set-based searching and toward free-text search strategies. While a core set of authors is used in these courses, no core set of readings is used. While library schools provide a strong foundation, future medical librarians still need to take courses that introduce them to the resources, settings, and users associated with medical libraries. In addition, as more emphasis is placed on Web-based search tools and free-text searching, instructors of the specialist medical informatics courses will need to focus on teaching traditional search methods appropriate for common tools in the medical domain.
Systematic Review of Quality of Patient Information on Liposuction in the Internet
Zuk, Grzegorz; Eylert, Gertraud; Raptis, Dimitri Aristotle; Guggenheim, Merlin; Shafighi, Maziar
2016-01-01
Background: A large number of patients who are interested in esthetic surgery actively search the Internet, which nowadays represents the first source of information. However, the quality of information available in the Internet on liposuction is currently unknown. The aim of this study was to assess the quality of patient information on liposuction available in the Internet. Methods: The quantitative and qualitative assessment of Web sites was based on a modified Ensuring Quality Information for Patients tool (36 items). Five hundred Web sites were identified by the most popular web search engines. Results: Two hundred forty-five Web sites were assessed after duplicates and irrelevant sources were excluded. Only 72 (29%) Web sites addressed >16 items, and scores tended to be higher for Web sites of professional societies, portals, patient groups, health departments, and academic centers than for Web sites developed by physicians. The Ensuring Quality Information for Patients score achieved by Web sites ranged between 8 and 29 of a total 36 points, with a median value of 16 points (interquartile range, 14–18). The top 10 Web sites with the highest scores were identified. Conclusions: The quality of patient information on liposuction available in the Internet is poor, and existing Web sites show substantial shortcomings. There is an urgent need for improvement in offering superior quality information on liposuction for patients intending to undergo this procedure. PMID:27482498
Using the Turning Research Into Practice (TRIP) database: how do clinicians really search?*
Meats, Emma; Brassey, Jon; Heneghan, Carl; Glasziou, Paul
2007-01-01
Objectives: Clinicians and patients are increasingly accessing information through Internet searches. This study aimed to examine clinicians' current search behavior when using the Turning Research Into Practice (TRIP) database, in order to understand search engine use and the ways it might be improved. Methods: A Web log analysis was undertaken of the TRIP database, a meta-search engine covering 150 health resources including MEDLINE, The Cochrane Library, and a variety of guidelines. The connectors for terms used in searches were studied, and observations were made of 9 users' search behavior when working with the TRIP database. Results: Of 620,735 searches, most used a single term, and 12% (n = 75,947) used a Boolean operator: 11% (n = 69,006) used "AND" and 0.8% (n = 4,941) used "OR." Of the elements of a well-structured clinical question (population, intervention, comparator, and outcome), the population was most commonly used, while fewer searches included the intervention; the comparator and outcome were rarely used. Participants in the observational study were interested in learning how to formulate better searches. Conclusions: Web log analysis showed that most searches used a single term and no Boolean operators. The observational study revealed that users were interested in conducting efficient searches but did not always know how. Therefore, either better training or better search interfaces are required to assist users and enable more effective searching. PMID:17443248
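The connector analysis described in this abstract can be sketched in a few lines; the log format (one raw query string per entry) and the category names below are illustrative assumptions, not the study's actual tooling.

```python
# Illustrative sketch of the Boolean-connector analysis described above.
# The log format and category names are assumptions, not the TRIP
# study's actual tooling.
def classify_queries(queries):
    counts = {"single_term": 0, "AND": 0, "OR": 0, "multi_term": 0}
    for q in queries:
        tokens = q.strip().split()
        upper = [t.upper() for t in tokens]
        if "AND" in upper:
            counts["AND"] += 1
        elif "OR" in upper:
            counts["OR"] += 1
        elif len(tokens) == 1:
            counts["single_term"] += 1
        else:
            counts["multi_term"] += 1
    return counts

log = ["asthma", "aspirin AND stroke", "statins OR fibrates", "deep vein thrombosis"]
print(classify_queries(log))
# → {'single_term': 1, 'AND': 1, 'OR': 1, 'multi_term': 1}
```

Run over a full query log, the `AND`/`OR` tallies would yield the kind of operator proportions reported in the Results.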
Financial outcomes of transoral robotic surgery: A narrative review.
Othman, Sammy; McKinnon, Brian J
2018-04-03
To determine the current cost impact and financial outcomes of transoral robotic surgery (TORS) in otolaryngology. A narrative review of the literature was conducted with a defined search strategy using the keywords ENT or otolaryngology, cost or economic, and transoral robotic surgery or TORS; searches were performed in PubMed, MEDLINE, CINAHL, and Web of Science and reviewed by the authors for inclusion and analysis. Six papers were deemed appropriate for analysis. All addressed the cost impact of TORS compared with open surgical methods in treating oropharyngeal cancer and/or identifying the primary tumor in unknown primary squamous cell carcinoma. Results showed TORS to be cost-effective. Transoral robotic surgery is currently largely cost-effective for both treatment and diagnostic procedures; however, further studies are needed to establish long-term data. Copyright © 2018. Published by Elsevier Inc.
POPcorn: An Online Resource Providing Access to Distributed and Diverse Maize Project Data.
Cannon, Ethalinda K S; Birkett, Scott M; Braun, Bremen L; Kodavali, Sateesh; Jennewein, Douglas M; Yilmaz, Alper; Antonescu, Valentin; Antonescu, Corina; Harper, Lisa C; Gardiner, Jack M; Schaeffer, Mary L; Campbell, Darwin A; Andorf, Carson M; Andorf, Destri; Lisch, Damon; Koch, Karen E; McCarty, Donald R; Quackenbush, John; Grotewold, Erich; Lushbough, Carol M; Sen, Taner Z; Lawrence, Carolyn J
2011-01-01
The purpose of the online resource presented here, POPcorn (Project Portal for corn), is to enhance the accessibility of maize genetic and genomic resources for plant biologists. Currently, many online locations are difficult to find, some are best searched independently, and individual project websites often degrade over time, sometimes disappearing entirely. The POPcorn site makes available (1) a centralized, web-accessible resource to search and browse descriptions of ongoing maize genomics projects, (2) a single, stand-alone tool that uses Web Services and minimal data warehousing to search for sequence matches in online resources of diverse offsite projects, and (3) a set of tools that enables researchers to migrate their data to the long-term model organism database for maize genetic and genomic information: MaizeGDB. Examples demonstrating POPcorn's utility are provided herein. PMID:22253616
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
....gov , enter ``NOAA-NMFS-2011-0025'' in the search field and click on ``search''. After you locate the proposed rule, click the ``Submit a Comment'' link in that row. This will display the comment web form. You...), however, the current burden estimate (20 minutes per applicant) to complete the application form would not...
Research on Web Search Behavior: How Online Query Data Inform Social Psychology.
Lai, Kaisheng; Lee, Yan Xin; Chen, Hao; Yu, Rongjun
2017-10-01
The widespread use of web searches in daily life has allowed researchers to study people's online social and psychological behavior. Using web search data has advantages in terms of data objectivity, ecological validity, temporal resolution, and unique application value. This review integrates existing studies on web search data that have explored topics including sexual behavior, suicidal behavior, mental health, social prejudice, social inequality, public responses to policies, and other psychosocial issues. These studies are categorized as descriptive, correlational, inferential, predictive, and policy evaluation research. The integration of theory-based hypothesis testing in future web search research will result in even stronger contributions to social psychology.
Incorporating the Internet into Traditional Library Instruction.
ERIC Educational Resources Information Center
Fonseca, Tony; King, Monica
2000-01-01
Presents a template for teaching traditional library research and one for incorporating the Web. Highlights include the differences between directories and search engines; devising search strategies; creating search terms; how to choose search engines; evaluating online resources; helpful Web sites; and how to read URLs to evaluate a Web site's…
An insight into the deep web; why it matters for addiction psychiatry?
Orsolini, Laura; Papanti, Duccio; Corkery, John; Schifano, Fabrizio
2017-05-01
Nowadays, the web is spreading rapidly and plays a significant role in the marketing, sale, and distribution of "quasi-legal" drugs, hence facilitating continuous changes in drug scenarios. The easily renewable and anarchic online drug market is gradually transforming the drug market itself from a "street" market into a "virtual" one, with customers able to shop with relative anonymity in a 24-hr marketplace. The hidden "deep web" is facilitating this phenomenon. The paper aims to provide mental health and addiction professionals with an overview of current knowledge about pro-drug activities on the deep web. A nonparticipant netnographic qualitative study of a list of pro-drug websites (blogs, fora, and drug marketplaces) located on the surface web was carried out. A systematic Internet search was conducted on Duckduckgo® and Google® using the following keywords: "drugs" or "legal highs" or "Novel Psychoactive Substances" or "NPS" combined with the words "deep web". Four themes (e.g., "How to access the deep web"; "Darknet and the online drug trading sites"; "Grams-search engine for the deep web"; and "Cryptocurrencies") and 14 categories were generated and discussed. This paper represents a comprehensive guideline to the deep web, focusing specifically on practical information about online drug marketplaces useful to addiction professionals. Copyright © 2017 John Wiley & Sons, Ltd.
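The search strategy quoted in this abstract (each drug-related keyword combined with "deep web") is simple enough to generate programmatically; the exact quoting and query formatting below are assumptions, not the authors' actual protocol.

```python
# Generate the keyword combinations described in the abstract: each
# drug-related term is paired with "deep web". Quoting style is an assumption.
base_terms = ["drugs", "legal highs", "Novel Psychoactive Substances", "NPS"]
queries = [f'"{term}" "deep web"' for term in base_terms]
for q in queries:
    print(q)
```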
CliniWeb: managing clinical information on the World Wide Web.
Hersh, W R; Brown, K E; Donohoe, L C; Campbell, E M; Horacek, A E
1996-01-01
The World Wide Web is a powerful new way to deliver on-line clinical information, but several problems limit its value to health care professionals: content is highly distributed and difficult to find, clinical information is not separated from non-clinical information, and the current Web technology is unable to support some advanced retrieval capabilities. A system called CliniWeb has been developed to address these problems. CliniWeb is an index to clinical information on the World Wide Web, providing a browsing and searching interface to clinical content at the level of the health care student or provider. Its database contains a list of clinical information resources on the Web that are indexed by terms from the Medical Subject Headings disease tree and retrieved with the assistance of SAPHIRE. Limitations of the processes used to build the database are discussed, together with directions for future research.
A study of medical and health queries to web search engines.
Spink, Amanda; Yang, Yin; Jansen, Jim; Nykanen, Pirrko; Lorence, Daniel P; Ozmutlu, Seda; Ozmutlu, H Cenk
2004-03-01
This paper reports findings from an analysis of medical or health queries to different web search engines. We report results: (i) comparing samples of 10,000 web queries taken randomly from 1.2 million query logs from the AlltheWeb.com and Excite.com commercial web search engines in 2001 for medical or health queries; (ii) comparing the 2001 findings from Excite and AlltheWeb.com users with results from a previous analysis of medical and health related queries from the Excite Web search engine for 1997 and 1999; and (iii) examining medical or health advice-seeking queries beginning with the word 'should'. Findings suggest that: (i) a small percentage of web queries are medical or health related; (ii) the top five categories of medical or health queries were general health, weight issues, reproductive health and puberty, pregnancy/obstetrics, and human relationships; and (iii) over time, medical and health queries may have declined as a proportion of all web queries, as the use of specialized medical/health websites and e-commerce-related queries has increased. Findings provide insights into medical and health-related web querying and suggest some implications for the use of general web search engines when seeking medical/health information.
Abbott, Kevin C; Oliver, David K; Boal, Thomas R; Gadiyak, Grigorii; Boocks, Carl; Yuan, Christina M; Welch, Paul G; Poropatich, Ronald K
2002-04-01
Studies of the use of the World Wide Web to obtain medical knowledge have largely focused on patients. In particular, neither the international use of academic nephrology World Wide Web sites (websites) as primary information sources nor the use of search engines (and search strategies) to obtain medical information have been described. Visits ("hits") to the Walter Reed Army Medical Center (WRAMC) Nephrology Service website from April 30, 2000, to March 14, 2001, were analyzed for the location of originating source using Webtrends, and search engines (Google, Lycos, etc.) were analyzed manually for search strategies used. From April 30, 2000 to March 14, 2001, the WRAMC Nephrology Service website received 1,007,103 hits and 12,175 visits. These visits were from 33 different countries, and the most frequent regions were Western Europe, Asia, Australia, the Middle East, Pacific Islands, and South America. The most frequent organization using the site was the military Internet system, followed by America Online and automated search programs of online search engines, most commonly Google. The online lecture series was the most frequently visited section of the website. Search strategies used in search engines were extremely technical. The use of "robots" by standard Internet search engines to locate websites, which may be blocked by mandatory registration, has allowed users worldwide to access the WRAMC Nephrology Service website to answer very technical questions. This suggests that it is being used as an alternative to other primary sources of medical information and that the use of mandatory registration may hinder users from finding valuable sites. With current Internet technology, even a single service can become a worldwide information resource without sacrificing its primary customers.
Use of Web Search Engines and Personalisation in Information Searching for Educational Purposes
ERIC Educational Resources Information Center
Salehi, Sara; Du, Jia Tina; Ashman, Helen
2018-01-01
Introduction: Students increasingly depend on Web search for educational purposes. This causes concerns among education providers as some evidence indicates that in higher education, the disadvantages of Web search and personalised information are not justified by the benefits. Method: One hundred and twenty university students were surveyed about…
Modeling Rich Interactions for Web Search Intent Inference, Ranking and Evaluation
ERIC Educational Resources Information Center
Guo, Qi
2012-01-01
Billions of people interact with Web search engines daily, and their interactions provide valuable clues about their interests and preferences. While modeling search behavior, such as queries and clicks on results, has been found to be effective for various Web search applications, the effectiveness of the existing approaches is limited by…
Discovering How Students Search a Library Web Site: A Usability Case Study.
ERIC Educational Resources Information Center
Augustine, Susan; Greene, Courtney
2002-01-01
Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…
77 FR 36583 - NRC Form 5, Occupational Dose Record for a Monitoring Period
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-19
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2012... following methods: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search for Docket ID... begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search...
OpenSearch technology for geospatial resources discovery
NASA Astrophysics Data System (ADS)
Papeschi, Fabrizio; Enrico, Boldrini; Mazzetti, Paolo
2010-05-01
In 2005, the term Web 2.0 was coined by Tim O'Reilly to describe a quickly growing set of Web-based applications that share a common philosophy of "mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing". Around the same period a new Web 2.0 technology, OpenSearch, was developed. More precisely, OpenSearch is a collection of technologies that allow the publishing of search results in a format suitable for syndication and aggregation; it is a way for websites and search engines to publish search results in a standard and accessible format. Due to its strong impact on the way the Web is perceived by users, and also due to its relevance for businesses, Web 2.0 has attracted the attention of both the mass media and the scientific community. This explosive growth in popularity of Web 2.0 technologies like OpenSearch, together with practical applications of Service Oriented Architecture (SOA), has resulted in an increased interest in the similarities, convergence, and potential synergy of these two concepts. SOA can be seen as the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. Service consumers may then retrieve these services, compose them, and use them according to their current needs. The great degree of similarity between SOA and Web 2.0 may be leading to a convergence of the two paradigms, although they also expose divergent elements, such as Web 2.0's support for human interaction as opposed to SOA's typically machine-to-machine interaction. Against this background, the Geospatial Information (GI) domain is also taking its first steps towards a new approach to data publishing and discovery, in particular by taking advantage of OpenSearch technology.
A specific GI niche is represented by the OGC Catalog Service for the Web (CSW), part of the OGC Web Services (OWS) specifications suite, which provides a set of services for discovery, access, and processing of geospatial resources in a SOA framework. GI-cat is a distributed CSW framework implementation developed by the ESSI Lab of the Italian National Research Council (CNR-IMAA) and the University of Florence. It provides brokering and mediation functionalities towards heterogeneous resources and inventories, exposing several standard interfaces for query distribution. This work focuses on a new GI-cat interface which allows the catalog to be queried according to the OpenSearch syntax specification, thus filling the gap between the SOA architectural design of the CSW and Web 2.0. At the moment there is no OGC standard specification on this topic, but an official change request has been proposed in order to enable OGC catalogues to support OpenSearch queries. In this change request, an OpenSearch extension is proposed that provides a standard mechanism to query a resource based on temporal and geographic extents. Two new catalog operations are also proposed, in order to publish a suitable OpenSearch interface. This extended interface is implemented by the modular GI-cat architecture through a new profiling module called "OpenSearch profiler". Since GI-cat also acts as a clearinghouse catalog, another component called "OpenSearch accessor" is added in order to access OpenSearch-compliant services. An important role in the GI-cat extension is played by the adopted mapping strategy. Two different kinds of mapping are required: query mapping and response-element mapping. Query mapping is provided in order to fit the simple OpenSearch query syntax to the complex CSW query expressed in the OGC Filter syntax.
GI-cat's internal data model is based on the ISO 19115 profile, which is more complex than the simple XML syndication formats, such as RSS 2.0 and Atom 1.0, suggested by OpenSearch. Once response elements are available, in order to be presented they need to be translated from the GI-cat internal data model to the above-mentioned syndication formats; the mapping process is bidirectional. When GI-cat is used to access OpenSearch-compliant services, the CSW query must be mapped to the OpenSearch query, and the response elements must be translated into the GI-cat internal data model. As a result of these extensions, GI-cat provides a user-friendly facade to the complex CSW interface, enabling it to be queried, for example, using a browser toolbar.
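As a rough illustration of the query-mapping idea described above, the sketch below fills an OpenSearch URL template (using Geo and Time extension parameter names) from a dict of search criteria and drops unused optional parameters, as the OpenSearch template rules require. The endpoint URL and parameter subset are hypothetical, not GI-cat's actual interface.

```python
import re

# Hypothetical OpenSearch URL template using Geo and Time extension
# parameters; the endpoint and parameter subset are illustrative only.
TEMPLATE = ("http://example.org/csw/opensearch?"
            "q={searchTerms}&bbox={geo:box?}&start={time:start?}&end={time:end?}")

def fill_template(template, params):
    url = template
    for name, value in params.items():
        # Substitute both required {name} and optional {name?} slots.
        url = url.replace("{%s?}" % name, value).replace("{%s}" % name, value)
    # Unused optional parameters ({name?}) are replaced with the empty
    # string, per the OpenSearch template processing rules.
    return re.sub(r"\{[^}]+\?\}", "", url)

url = fill_template(TEMPLATE, {
    "searchTerms": "temperature",
    "geo:box": "-10,35,20,60",      # west,south,east,north (geo:box syntax)
    "time:start": "2009-01-01",
})
print(url)
```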
E-Referencer: Transforming Boolean OPACs to Web Search Engines.
ERIC Educational Resources Information Center
Khoo, Christopher S. G.; Poo, Danny C. C.; Toh, Teck-Kang; Hong, Glenn
E-Referencer is an expert intermediary system for searching library online public access catalogs (OPACs) on the World Wide Web. It is implemented as a proxy server that mediates the interaction between the user and Boolean OPACs. It transforms a Boolean OPAC into a retrieval system with many of the search capabilities of Web search engines.…
ERIC Educational Resources Information Center
Tillotson, Joy
2003-01-01
Describes a survey that was conducted involving participants in the library instruction program at two Canadian universities in order to describe the characteristics of students receiving instruction in Web searching. Examines criteria for evaluating Web sites, search strategies, use of search engines, and frequency of use. Questionnaire is…
Web Spam, Social Propaganda and the Evolution of Search Engine Rankings
NASA Astrophysics Data System (ADS)
Metaxas, Panagiotis Takis
Search engines have greatly influenced the way we experience the web. Since the early days of the web, users have relied on them to get informed and make decisions. When the web was relatively small, web directories were built and maintained by human experts who screened and categorized pages according to their characteristics. By the mid-1990s, however, it was apparent that the human-expert model of categorizing web pages did not scale. The first search engines appeared, and they have been evolving ever since, taking over the role that web directories used to play.
Eysenbach, G.; Kohler, Ch.
2003-01-01
While health information is often said to be the most sought after information on the web, empirical data on the actual frequency of health-related searches on the web are missing. In the present study we aimed to determine the prevalence of health-related searches on the web by analyzing search terms entered by people into popular search engines. We also made some preliminary attempts at qualitatively describing and classifying these searches. Occasional difficulties in determining what constitutes a "health-related" search led us to propose and validate a simple method to automatically classify a search string as "health-related". This method is based on determining the proportion of pages on the web containing the search string and the word "health", as a proportion of the total number of pages with the search string alone. Using human codings as a gold standard, we plotted an ROC curve and determined empirically that if this "co-occurrence rate" is larger than 35%, the search string can be said to be health-related (sensitivity 85.2%, specificity 80.4%). Our human coding of search queries determined that about 4.5% of all searches are health-related. We estimate that globally a minimum of 6.75 million health-related searches are conducted on the web every day, roughly the same number of searches as were conducted on the NLM Medlars system in the whole of 1996. PMID:14728167
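The 35% co-occurrence heuristic described above lends itself to a direct sketch. The hit-count lookup below is a stand-in for a real search engine's result-count API, and the toy numbers are invented for illustration.

```python
# Sketch of the co-occurrence-rate classifier described above: a query is
# health-related when pages containing both the query and "health", as a
# fraction of pages containing the query alone, exceed 35%.
THRESHOLD = 0.35

def is_health_related(query, page_count):
    # page_count is a stand-in for a search engine's result-count API.
    with_health = page_count(query + " health")
    alone = page_count(query)
    if not alone:
        return False
    return (with_health / alone) > THRESHOLD

# Invented hit counts, for illustration only.
fake_counts = {"diabetes": 1000, "diabetes health": 600,
               "football": 1000, "football health": 50}
print(is_health_related("diabetes", fake_counts.get))   # True  (rate 0.60)
print(is_health_related("football", fake_counts.get))   # False (rate 0.05)
```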
Web Feet Guide to Search Engines: Finding It on the Net.
ERIC Educational Resources Information Center
Web Feet, 2001
2001-01-01
This guide to search engines for the World Wide Web discusses selecting the right search engine; interpreting search results; major search engines; online tutorials and guides; search engines for kids; specialized search tools for various subjects; and other specialized engines and gateways. (LRW)
The utilization of oncology web-based resources in Spanish-speaking Internet users.
Simone, Charles B; Hampshire, Margaret K; Vachani, Carolyn; Metz, James M
2012-12-01
There currently are few web-based resources written in Spanish providing oncology-specific information. This study examines the utilization of Spanish-language oncology web-based resources and evaluates the oncology-related Internet browsing practices of Spanish-speaking patients. OncoLink (http://www.oncolink.org) is the oldest and among the largest Internet-based cancer information resources. In September 2005, OncoLink pioneered OncoLink en español (OEE) (http://es.oncolink.org), a Spanish translation of OncoLink. Internet utilization data on these sites for 2006 to 2007 were compared. Visits to OncoLink rose from 4,440,843 in 2006 to 5,125,952 in 2007. OEE had 204,578 unique visitors and 240,442 visits in 2006, and 351,228 visitors and 412,153 visits in 2007. Although there was no time predilection for viewing OncoLink, relatively less browsing on OEE was conducted during weekends and early morning hours. Although OncoLink readers searched for information on the most common cancers in the United States, OEE readers most often searched for gastric, vaginal, osteosarcoma, leukemia, penile, cervical, and testicular malignancies. The average visit duration on OEE was shorter, and fewer readers browsed OEE for more than 15 minutes (4.5% vs. 14.9%, P < 0.001). Spanish-speaking users of web-based oncology resources are increasingly using the Internet to supplement their cancer knowledge. The limited resources available in Spanish contribute to disparities in information access and disease outcomes. Spanish-speaking oncology readers differ from English-speaking readers in the day and time of Internet browsing, visit duration, Internet search patterns, and types of cancers searched. By acknowledging these differences, the content of web-based oncology resources can be developed to best target the needs of Spanish-speaking viewers.
Googling DNA sequences on the World Wide Web.
Hajibabaei, Mehrdad; Singer, Gregory A C
2009-11-10
New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm, and implemented it for searching species-specific genomic sequences (DNA barcodes), using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm based on dividing a sequence library (DNA barcodes) and a query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages, developing pre- and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data, providing a robust and efficient solution for sequence search on the web. The integration of our search method with large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
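A minimal sketch of the word-based idea described above, assuming non-overlapping fixed-length words (the abstract does not specify the exact word scheme, so both the word length and the non-overlapping choice are assumptions):

```python
# Break a DNA barcode into fixed-length "words" so a conventional text
# search engine (e.g., Google Desktop Search) could index and match them.
# Word length and the non-overlapping choice are assumptions.
def sequence_to_words(seq, word_size=10):
    seq = seq.upper().replace(" ", "")
    return [seq[i:i + word_size]
            for i in range(0, len(seq) - word_size + 1, word_size)]

barcode = "ACCTTGATGCTTAGCCGGATACCTTGATGC"
print(" ".join(sequence_to_words(barcode)))
# → ACCTTGATGC TTAGCCGGAT ACCTTGATGC
```

Treating each word as a text token lets an ordinary keyword index stand in for sequence alignment: a query barcode processed the same way will share words with matching library entries.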
Utility of Web search query data in testing theoretical assumptions about mephedrone.
Kapitány-Fövény, Máté; Demetrovics, Zsolt
2017-05-01
With growing access to the Internet, people who use drugs, and traffickers, have started to obtain information about novel psychoactive substances (NPS) via online platforms. This paper aims to analyze whether decreasing Web interest in the formerly banned substances cocaine, heroin, and MDMA, together with the legislative status of mephedrone, predicts Web interest in this NPS. Google Trends was used to measure changes in Web interest in cocaine, heroin, MDMA, and mephedrone. Google search results for mephedrone within the same time frame were analyzed and categorized. Web interest in the classic drugs was found to be more persistent. Regarding geographical distribution, the location of Web searches for heroin and cocaine was less centralized. The illicit status of mephedrone was a negative predictor of its Web search query rates. The connection between mephedrone-related Web search rates and the legislative status of this substance was significantly mediated by ecstasy-related Web search queries, the number of documentaries, and forum/blog entries about mephedrone. The results might support the hypothesis that mephedrone's popularity was highly correlated with its legal status and that it functioned as a potential substitute for MDMA. Google Trends was found to be a useful tool for testing theoretical assumptions about NPS. Copyright © 2017 John Wiley & Sons, Ltd.
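The substitution hypothesis above rests on co-movement between mephedrone- and MDMA-related search interest. The sketch below computes a Pearson correlation over two synthetic weekly interest series on the 0-100 scale Google Trends reports; all numbers are invented for illustration, and real data would come from Google Trends exports.

```python
# Pearson correlation between two synthetic search-interest series.
# The series are invented; real data would come from Google Trends exports.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

mephedrone = [12, 18, 25, 40, 55, 70, 62, 48, 30, 20]   # invented weekly values
mdma       = [15, 20, 28, 42, 58, 73, 65, 50, 33, 22]   # invented weekly values
print(round(pearson(mephedrone, mdma), 3))
```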
NASA Astrophysics Data System (ADS)
Yu, Bailang; Wu, Jianping
2006-10-01
Spatial Information Grid (SIG) is an infrastructure that provides services for spatial information according to users' needs by collecting, sharing, organizing and processing massive distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, a SIG-based platform that serves the administration, planning, construction and development of the city. The System contains ten categories of spatial information resources: city planning, land use, real estate, river system, transportation, municipal facility construction, environmental protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are distributed across different web-based nodes. A single database stores the metadata of all the spatial information. A portal site is published as the main user interface of the System, with three main functions. First, users can search the metadata and then acquire the distributed data from the search results. Second, spatial processing web applications developed with GIS Web Services, such as file format conversion, coordinate transformation, cartographic generalization and spatial analysis, are available for use. Third, the GIS Web Services currently available in the System can be searched, and new ones can be registered. The System has been operating efficiently in the Shanghai Government Network since 2005.
78 FR 5838 - NRC Enforcement Policy
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-28
... submit comments by any of the following methods: Federal Rulemaking Web site: Go to http://www... of the following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search... the search, select ``ADAMS Public Documents'' and then select ``Begin Web-based ADAMS Search.'' For...
New and Improved Version of the ASDC MOPITT Search and Subset Web Application
Atmospheric Science Data Center
2016-07-06
... and Improved Version of the ASDC MOPITT Search and Subset Web Application Friday, June 24, 2016 A new and improved version of the ASDC MOPITT Search and Subset Web Application has been released. New features include: Versions 5 and 6 ...
Columba: an integrated database of proteins, structures, and annotations.
Trissl, Silke; Rother, Kristian; Müller, Heiko; Steinke, Thomas; Koch, Ina; Preissner, Robert; Frömmel, Cornelius; Leser, Ulf
2005-03-31
Structural and functional research often requires the computation of sets of protein structures based on certain properties of the proteins, such as sequence features, fold classification, or functional annotation. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, we have created COLUMBA, an integrated database of annotations of protein structures. COLUMBA currently integrates twelve different databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The database can be searched using either keyword search or data source-specific web forms. Users can thus quickly select and download PDB entries that, for instance, participate in a particular pathway, are classified as containing a certain CATH architecture, are annotated as having a certain molecular function in the Gene Ontology, and whose structures have a resolution under a defined threshold. The results of queries are provided in both machine-readable XML and human-readable formats. The structures themselves can be viewed interactively on the web. The COLUMBA database facilitates the creation of protein structure data sets for many structure-based studies. It allows users to combine queries across a number of structure-related databases not covered by other projects at present. Thus, information on both large and small sets of protein structures can be used efficiently. The web interface for COLUMBA is available at http://www.columba-db.de.
Criteria for Comparing Children's Web Search Tools.
ERIC Educational Resources Information Center
Kuntz, Jerry
1999-01-01
Presents criteria for evaluating and comparing Web search tools designed for children. Highlights include database size; accountability; categorization; search access methods; help files; spell check; URL searching; links to alternative search services; advertising; privacy policy; and layout and design. (LRW)
Improving Concept-Based Web Image Retrieval by Mixing Semantically Similar Greek Queries
ERIC Educational Resources Information Center
Lazarinis, Fotis
2008-01-01
Purpose: Image searching is a common activity for web users. Search engines offer image retrieval services based on textual queries. Previous studies have shown that web searching is more demanding when the search is not in English and does not use a Latin-based language. The aim of this paper is to explore the behaviour of the major search…
Discovering Authorities and Hubs in Different Topological Web Graph Structures.
ERIC Educational Resources Information Center
Meghabghab, George
2002-01-01
Discussion of citation analysis on the Web considers Web hyperlinks as a source to analyze citations. Topics include basic graph theory applied to Web pages, including matrices, linear algebra, and Web topology; and hubs and authorities, including a search technique called HITS (Hyperlink Induced Topic Search). (Author/LRW)
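The HITS technique mentioned above can be sketched as a short power iteration, following Kleinberg's commonly described formulation (a minimal illustration, not the article's own code; the tiny example graph is made up):

```python
import numpy as np

def hits(adj, iters=50):
    """HITS (Hyperlink Induced Topic Search): compute hub and authority
    scores by power iteration over the adjacency matrix of a link graph."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iters):
        auths = adj.T @ hubs   # good authorities are pointed to by good hubs
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths     # good hubs point to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Tiny web graph: pages 0 and 1 both link to page 2.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
h, a = hits(A)
# page 2 receives the top authority score; pages 0 and 1 act as hubs
```

The mutual reinforcement between the two score vectors is exactly the matrix-based view of Web topology the abstract alludes to.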
NASA Astrophysics Data System (ADS)
Morton, J. J.; Ferrini, V. L.
2015-12-01
The Marine Geoscience Data System (MGDS, www.marine-geo.org) operates an interactive digital data repository and metadata catalog that provides access to a variety of marine geology and geophysical data from throughout the global oceans. Its Marine-Geo Digital Library includes common marine geophysical data types and supporting data and metadata, as well as complementary long-tail data. The Digital Library also includes community data collections and custom data portals for the GeoPRISMS, MARGINS and Ridge2000 programs, for active source reflection data (Academic Seismic Portal), and for marine data acquired by the US Antarctic Program (Antarctic and Southern Ocean Data Portal). Ensuring that these data are discoverable not only through our own interfaces but also through standards-compliant web services is critical for enabling investigators to find data of interest. Over the past two years, MGDS has developed several new RESTful web services that enable programmatic access to metadata and data holdings. These web services are compliant with the EarthCube GeoWS Building Blocks specifications and are currently used to drive our own user interfaces. New web applications have also been deployed to provide a more intuitive user experience for searching, accessing and browsing metadata and data. Our new map-based search interface combines components of the Google Maps API with our web services for dynamic searching and exploration of geospatially constrained data sets. Direct introspection of nearly all data formats for hundreds of thousands of data files curated in the Marine-Geo Digital Library has yielded precise geographic bounds, enabling geographic searches to an extent not previously possible. All MGDS map interfaces utilize the web services of the Global Multi-Resolution Topography (GMRT) synthesis for displaying global basemap imagery and for dynamically providing depth values at the cursor location.
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L.; Sanders, Brian; Grethe, Jeffrey S.; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W.; Martone, Maryann E.
2009-01-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop shop for Neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user provides only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find records containing synonyms of that term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard), constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources, including relational databases, web sites, XML documents and full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov. PMID:18958629
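The synonym-aware keyword search described above can be sketched in a few lines. The ontology fragment and record texts here are invented for illustration and are not drawn from NIFSTD:

```python
# Hypothetical ontology fragment: each concept maps to its known synonyms.
ONTOLOGY = {
    "neuron": {"neuron", "nerve cell", "neurone"},
    "cortex": {"cortex", "cerebral cortex"},
}

def expand_query(term):
    """Treat a keyword as a concept: expand it to all known synonyms."""
    for synonyms in ONTOLOGY.values():
        if term.lower() in synonyms:
            return synonyms
    return {term.lower()}

def search(records, term):
    """Return records matching the term or any synonym of its concept."""
    terms = expand_query(term)
    return [r for r in records if any(t in r.lower() for t in terms)]

docs = ["The nerve cell fires.", "Cortex thickness varies."]
# a search for "neuron" also retrieves the "nerve cell" record
```

A real ontology-backed system would resolve terms against a curated vocabulary and federate the expanded query across databases, web sites, and full-text sources; the expansion step itself is the key idea.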
Dynamics of a macroscopic model characterizing mutualism of search engines and web sites
NASA Astrophysics Data System (ADS)
Wang, Yuanshi; Wu, Hong
2006-05-01
We present a model to describe the mutualism relationship between search engines and web sites. In the model, search engines and web sites benefit from each other, while the search engines are derived products of the web sites and cannot survive independently. Our goal is to show strategies for the search engines to survive in the internet market. From mathematical analysis of the model, we show that mutualism does not always result in survival. We show various conditions under which the search engines would tend to extinction, persist, or grow explosively, and from these conditions we deduce a series of strategies for the search engines to survive in the internet market. We present conditions under which the initial number of consumers of the search engines contributes little to their persistence, which is in agreement with the results in previous works. Furthermore, we show novel conditions under which the initial value plays an important role in the persistence of the search engines and deduce new strategies. We also give suggestions for the web sites to cooperate with the search engines in order to form a win-win situation.
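The abstract does not reproduce the paper's equations; a generic mutualism model of this kind, in which the engine population $y$ is a derived product that cannot survive without the web-site population $x$, might take the following Lotka-Volterra-type form (an illustrative sketch, not the authors' model):

```latex
\begin{aligned}
\frac{dx}{dt} &= x\,(r_1 - a_{11}x + a_{12}y), \\
\frac{dy}{dt} &= y\,(-r_2 + a_{21}x - a_{22}y),
\end{aligned}
```

where all parameters are positive. The negative intrinsic rate $-r_2$ captures the dependence described in the abstract: with $x$ too small, $dy/dt < 0$ and the engines tend to extinction, while sufficiently large $a_{21}x$ permits persistence or growth.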
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
ERIC Educational Resources Information Center
Williams, Sarah C.
2010-01-01
The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…
2016-07-21
Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't
The quality of mental health information commonly searched for on the Internet.
Grohol, John M; Slimowicz, Joseph; Granda, Rebecca
2014-04-01
Previous research has reviewed the quality of online information related to specific mental disorders. Yet, no comprehensive study has been conducted on the overall quality of mental health information searched for online. This study examined the first 20 search results of two popular search engines-Google and Bing-for 11 common mental health terms. They were analyzed using the DISCERN instrument, an adaptation of the Depression Website Content Checklist (ADWCC), Flesch Reading Ease and Flesch-Kincaid Grade Level readability measures, HONCode badge display, and commercial status, resulting in an analysis of 440 web pages. Quality of Web site results varied based on type of disorder examined, with higher quality Web sites found for schizophrenia, bipolar disorder, and dysthymia, and lower quality ratings for phobia, anxiety, and panic disorder Web sites. Of the total Web sites analyzed, 67.5% had good or better quality content. Nearly one-third of the search results produced Web sites from three entities: WebMD, Wikipedia, and the Mayo Clinic. The mean Flesch Reading Ease score was 41.21, and the mean Flesch-Kincaid Grade Level score was 11.68. The presence of the HONCode badge and noncommercial status was found to have a small correlation with Web site quality, and Web sites displaying the HONCode badge and commercial sites had lower readability scores. Popular search engines appear to offer generally reliable results pointing to mostly good or better quality mental health Web sites. However, additional work is needed to make these sites more readable.
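The readability measures used above follow standard published formulas; a minimal sketch is shown below. Syllables are estimated with a crude vowel-group heuristic, whereas real analyses typically use dictionary-based counters:

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; production tools use dictionaries."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text):
    """Flesch Reading Ease and Flesch-Kincaid Grade Level,
    using the standard published formulas."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade

easy_e, easy_g = flesch_scores("The cat sat on the mat.")
hard_e, hard_g = flesch_scores(
    "Comprehensive epidemiological investigations necessitate multidimensional methodology.")
```

Lower Reading Ease and higher Grade Level both indicate harder text, which is why a mean grade level near 12, as reported above, suggests many pages exceed recommended patient-reading levels.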
IntegromeDB: an integrated system and biological search engine.
Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia
2012-01-19
With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.
Quality analysis of patient information about knee arthroscopy on the World Wide Web.
Sambandam, Senthil Nathan; Ramasamy, Vijayaraj; Priyanka, Priyanka; Ilango, Balakrishnan
2007-05-01
This study was designed to ascertain the quality of patient information available on the World Wide Web on the topic of knee arthroscopy. For the purpose of quality analysis, we used a pool of 232 search results obtained from 7 different search engines. We used a modified assessment questionnaire to assess the quality of these Web sites. This questionnaire was developed based on similar studies evaluating Web site quality and includes items on illustrations, accessibility, availability, accountability, and content of the Web site. We also compared results obtained with different search engines and tried to establish the best possible search strategy to attain the most relevant, authentic, and adequate information with minimum time consumption. For this purpose, we first compared 100 search results from the single most commonly used search engine (AltaVista) with the pooled sample containing 20 search results from each of the 7 different search engines. The search engines used were metasearch (Copernic and Mamma), general search (Google, AltaVista, and Yahoo), and health topic-related search engines (MedHunt and Healthfinder). The phrase "knee arthroscopy" was used as the search terminology. Excluding the repetitions, there were 117 Web sites available for quality analysis. These sites were analyzed for accessibility, relevance, authenticity, adequacy, and accountability by use of a specially designed questionnaire. Our analysis showed that most of the sites providing patient information on knee arthroscopy contained outdated information, were inadequate, and were not accountable. Only 16 sites were found to be providing reasonably good patient information and hence can be recommended to patients. Understandably, most of these sites were from nonprofit organizations and educational institutions. 
Furthermore, our study revealed that using multiple search engines increases patients' chances of obtaining relevant information compared with relying on a single search engine. It also shows the difficulties patients encounter in obtaining information regarding knee arthroscopy, and highlights the role of orthopaedic surgeons in helping their patients identify relevant, authentic information on the World Wide Web in the most efficient manner.
Surfing the World Wide Web to Education Hot-Spots.
ERIC Educational Resources Information Center
Dyrli, Odvard Egil
1995-01-01
Provides a brief explanation of Web browsers and their use, as well as technical information for those considering access to the WWW (World Wide Web). Curriculum resources and addresses to useful Web sites are included. Sidebars show sample searches using Yahoo and Lycos search engines, and a list of recommended Web resources. (JKP)
Web-based surveillance of public information needs for informing preconception interventions.
D'Ambrosio, Angelo; Agricola, Eleonora; Russo, Luisa; Gesualdo, Francesco; Pandolfi, Elisabetta; Bortolus, Renata; Castellani, Carlo; Lalatta, Faustina; Mastroiacovo, Pierpaolo; Tozzi, Alberto Eugenio
2015-01-01
The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks regarding preconception health. Based on the American College of Obstetricians and Gynecologists (ACOG) guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks, and analyzed discrepancies between searched and published information and the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.
Science in Afterschool Literature Review
ERIC Educational Resources Information Center
Falkenberg, Karen; McClure, Patricia; McComb, Errin M.
2006-01-01
In considering science in afterschool, research was reviewed and is presented in this document on how students learn science; how science is assessed, particularly inquiry science; recommended practices for afterschool science; and current afterschool science programs. Databases such as ERIC, Wilson Web, and PsychINFO were searched using…
Using the TIGR gene index databases for biological discovery.
Lee, Yuandan; Quackenbush, John
2003-11-01
The TIGR Gene Index web pages provide access to analyses of ESTs and gene sequences for nearly 60 species, as well as a number of resources derived from these. Each species-specific database is presented using a common format with a homepage. A variety of methods allow users to search each species-specific database. Currently implemented methods include nucleotide or protein sequence queries using WU-BLAST, text-based searches using various sequence identifiers, searches by gene, tissue and library name, and searches using functional classes through Gene Ontology assignments. This protocol provides guidance for using the Gene Index Databases to extract information.
Factors Associated With Suicidal Attempts in Iran: A Systematic Review.
Hakim Shooshtari, Mitra; Malakouti, Seyyed Kazem; Panaghi, Leili; Mohseni, Shohreh; Mansouri, Naghmeh; Rahimi Movaghar, Afarin
2016-03-01
Suicide prevention is a health service priority. Some surveys have assessed suicidal behaviors and potential risk factors. The current paper aimed to gather information about the etiology of suicide attempts in Iran. PubMed, ISI Web of Science, PsycINFO, IranPsych, IranMedex and IranDoc, as well as the gray literature, were searched for studies on etiologic factors of suicide attempts in Iran. The electronic and gray literature search yielded 128 articles. After reading the abstracts, 84 studies were excluded and the full texts of 44 articles were reviewed critically. Depressive disorder was the most common diagnosis in suicide attempters: 45% of the evaluated cases had depression. One study that used the Minnesota Multiphasic Personality Inventory (MMPI) found that histrionic features in females and schizophrenia and paranoia in males were significantly influential. Family conflicts (50.7%) and conflict with parents (44%) were two influential psychosocial factors in suicide attempts. In around one fourth (28.7%) of the cases, conflict with a spouse was the main etiologic factor. Given the methodological limitations, the outcomes should be generalized cautiously. Further studies will help to plan preventive strategies for suicide attempts; therefore, continued research should be conducted to fill the data gaps.
NDBC Tropical Atmosphere Ocean (TAO)
Millennial Undergraduate Research Strategies in Web and Library Information Retrieval Systems
ERIC Educational Resources Information Center
Porter, Brandi
2011-01-01
This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…
Communication Webagogy 2.0: More Click, Less Drag.
ERIC Educational Resources Information Center
Radford, Marie L.; Wagner, Kurt W.
2000-01-01
Argues that, because of the chaotic nature of the Web and the competing searching software, no single search tool will suffice. Lists and discusses meta search engines; communication meta-sites and subject directories, all indexed by humans; teaching resources for communication courses that utilize the unique features of the Web; and web sites…
ERIC Educational Resources Information Center
Tunender, Heather; Ervin, Jane
1998-01-01
Character strings were planted in a World Wide Web site (Project Whistlestop) to test indexing and retrieval rates of five Web search tools (Lycos, Infoseek, AltaVista, Yahoo, Excite). It was found that the search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…
MIRASS: medical informatics research activity support system using information mashup network.
Kiah, M L M; Zaidan, B B; Zaidan, A A; Nabi, Mohamed; Ibraheem, Rabiu
2014-04-01
The advancement of information technology has facilitated the automation and feasibility of online information sharing. The second generation of the World Wide Web (Web 2.0) enables the collaboration and sharing of online information through Web-serving applications. Data mashup, which is considered a Web 2.0 platform, plays an important role in information and communication technology applications. However, few of these ideas have been transferred into the education and research domains, particularly in medical informatics. The creation of a friendly environment for medical informatics research requires the removal of certain obstacles in terms of search time, resource credibility, and search result accuracy. This paper considers three glitches that researchers encounter in medical informatics research: the quality of papers obtained from scientific search engines (particularly Web of Science and ScienceDirect), the quality of articles from the indices of these search engines, and the customizability and flexibility of these search engines. A customizable search engine for trusted resources of medical informatics was developed and implemented through data mashup. Results show that the proposed search engine improves the usability of scientific search engines for medical informatics. The pipe-based search engine was found to be more efficient than the other engines.
Graph-Based Semantic Web Service Composition for Healthcare Data Integration.
Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena
2017-01-01
Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
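The dependency-graph search described above can be sketched as simple forward chaining over services whose inputs are satisfied. The service registry below is invented for illustration and is not taken from the paper:

```python
# Hypothetical service registry: each service consumes and produces
# named data items (inputs, outputs). The names are illustrative only.
SERVICES = {
    "lookup_patient": ({"patient_id"}, {"record"}),
    "extract_labs":   ({"record"}, {"lab_results"}),
    "summarize":      ({"lab_results"}, {"summary"}),
}

def compose(available, goal):
    """Forward-chain over the service dependency graph: repeatedly apply
    any service whose inputs are already satisfied, until the goal output
    is produced. Returns the ordered composition, or None if impossible."""
    have = set(available)
    plan = []
    progress = True
    while progress and goal not in have:
        progress = False
        for name, (ins, outs) in SERVICES.items():
            if name not in plan and ins <= have:
                plan.append(name)   # service is invocable: add to the plan
                have |= outs        # its outputs become available data
                progress = True
    return plan if goal in have else None
```

This greedy sketch may include services the goal does not strictly need; the paper's approach additionally prunes redundant services and matches parameters semantically rather than by exact name.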
ERIC Educational Resources Information Center
Liu, Chen-Chung; Don, Ping-Hsing; Chung, Chen-Wei; Lin, Shao-Jun; Chen, Gwo-Dong; Liu, Baw-Jhiune
2010-01-01
While Web discovery is usually undertaken as a solitary activity, Web co-discovery may transform Web learning activities from the isolated individual search process into interactive and collaborative knowledge exploration. Recent studies have proposed Web co-search environments on a single computer, supported by multiple one-to-one technologies.…
How Public Is the Web?: Robots, Access, and Scholarly Communication.
ERIC Educational Resources Information Center
Snyder, Herbert; Rosenbaum, Howard
1998-01-01
Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
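Python's standard library includes a parser for the Robot Exclusion Protocol; the sketch below (robots.txt content and agent names are invented) shows how a compliant search engine robot decides whether it may crawl a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt such as a university web server might publish.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: GoodBot
Disallow:
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The catch-all rule blocks /private/ for unnamed robots...
print(rp.can_fetch("AnyBot", "http://example.edu/private/page.html"))   # False
# ...while the empty Disallow line grants GoodBot full access.
print(rp.can_fetch("GoodBot", "http://example.edu/private/page.html"))  # True
```

As the study notes, honoring these rules is voluntary: the protocol only works because well-behaved robots choose to check it.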
Triggers and monitoring in intelligent personal health record.
Luo, Gang
2012-10-01
Although Web-based personal health records (PHRs) have been widely deployed, the existing ones have limited intelligence. Previously, we introduced expert system technology and Web search technology into the PHR domain and proposed the concept of an intelligent PHR (iPHR). iPHR provides personalized healthcare information to facilitate users' daily activities of living. The current iPHR is passive and follows the pull model of information distribution. This paper introduces triggers and monitoring into iPHR to make iPHR become active. Our idea is to let medical professionals pre-compile triggers and store them in iPHR's knowledge base. Each trigger corresponds to an abnormal event that may have potential medical impact. iPHR keeps collecting, processing, and analyzing the user's medical data from various sources such as wearable sensors. Whenever an abnormal event is detected from the user's medical data, the corresponding trigger fires and the related personalized healthcare information is pushed to the user using natural language generation technology, expert system technology, and Web search technology.
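The push model described above can be illustrated with a minimal trigger table (trigger names, thresholds, and advice strings are hypothetical, not from the paper): pre-compiled triggers sit in a knowledge base, and each incoming sensor reading is checked against every trigger's condition.

```python
# Pre-compiled triggers, as a medical professional might author them.
TRIGGERS = [
    {"event": "high heart rate",
     "condition": lambda r: r["heart_rate"] > 120,
     "advice": "Resting heart rate is unusually high; consider contacting your physician."},
    {"event": "low blood oxygen",
     "condition": lambda r: r["spo2"] < 90,
     "advice": "Blood oxygen saturation is low."},
]

def process_reading(reading):
    """Return the advice pushed to the user for every trigger that fires."""
    return [t["advice"] for t in TRIGGERS if t["condition"](reading)]

print(process_reading({"heart_rate": 135, "spo2": 97}))
```

In the actual iPHR, the pushed message would then be composed with natural language generation and enriched via expert system and Web search technology.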
Produce and Consume Linked Data with Drupal!
NASA Astrophysics Data System (ADS)
Corlosquet, Stéphane; Delbru, Renaud; Clark, Tim; Polleres, Axel; Decker, Stefan
Currently a large number of Web sites are driven by Content Management Systems (CMS) which manage textual and multimedia content but also - inherently - carry valuable information about a site's structure and content model. Exposing this structured information to the Web of Data has so far required considerable expertise in RDF and OWL modelling and additional programming effort. In this paper we tackle one of the most popular CMS: Drupal. We enable site administrators to export their site content model and data to the Web of Data without requiring extensive knowledge on Semantic Web technologies. Our modules create RDFa annotations and - optionally - a SPARQL endpoint for any Drupal site out of the box. Likewise, we add the means to map the site data to existing ontologies on the Web with a search interface to find commonly used ontology terms. We also allow a Drupal site administrator to include existing RDF data from remote SPARQL endpoints on the Web in the site. When brought together, these features allow networked RDF Drupal sites that reuse and enrich Linked Data. We finally discuss the adoption of our modules and report on a use case in the biomedical field and the current status of its deployment.
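To make the idea of exposing CMS content concrete, the sketch below emits an RDFa-annotated HTML fragment for one content item, mapping site fields to Dublin Core terms (a common vocabulary choice; the field names and function are our own illustration, not the Drupal modules' actual API):

```python
# Hypothetical CMS node -> RDFa-annotated HTML fragment.
def node_to_rdfa(node):
    return (
        '<div prefix="dc: http://purl.org/dc/terms/">\n'
        f'  <h2 property="dc:title">{node["title"]}</h2>\n'
        f'  <span property="dc:creator">{node["author"]}</span>\n'
        f'  <div property="dc:description">{node["body"]}</div>\n'
        '</div>'
    )

html = node_to_rdfa({"title": "Hello", "author": "alice", "body": "First post."})
print(html)
```

An RDFa-aware crawler can extract machine-readable triples from such markup without any separate RDF export, which is what lets a site join the Web of Data "out of the box."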
IntegromeDB: an integrated system and biological search engine
2012-01-01
Background With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Description Here, we present an approach at developing a biological web search engine based on the Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. Conclusions The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback. PMID:22260095
Social Search: A Taxonomy of, and a User-Centred Approach to, Social Web Search
ERIC Educational Resources Information Center
McDonnell, Michael; Shiri, Ali
2011-01-01
Purpose: The purpose of this paper is to introduce the notion of social search as a new concept, drawing upon the patterns of web search behaviour. It aims to: define social search; present a taxonomy of social search; and propose a user-centred social search method. Design/methodology/approach: A mixed method approach was adopted to investigate…
Search Engines on the World Wide Web.
ERIC Educational Resources Information Center
Walster, Dian
1997-01-01
Discusses search engines and provides methods for determining what resources are searched, the quality of the information, and the algorithms used that will improve the use of search engines on the World Wide Web, online public access catalogs, and electronic encyclopedias. Lists strategies for conducting searches and for learning about the latest…
Helping Students Choose Tools To Search the Web.
ERIC Educational Resources Information Center
Cohen, Laura B.; Jacobson, Trudi E.
2000-01-01
Describes areas where faculty members can aid students in making intelligent use of the Web in their research. Differentiates between subject directories and search engines. Describes an engine's three components: spider, index, and search engine. Outlines two misconceptions: that Yahoo! is a search engine and that search engines contain all the…
Sagace: A web-based search engine for biomedical databases in Japan
2012-01-01
Background In the big data era, biomedical research continues to generate large amounts of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large for researchers to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
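Faceted navigation of the kind Sagace offers reduces to two operations: counting the values of a metadata field across the current result set, and filtering by a clicked value. A minimal sketch (records and field names invented; Sagace's real metadata differs):

```python
from collections import Counter

# Hypothetical search results with per-entry metadata.
RESULTS = [
    {"title": "Liver expression atlas", "type": "gene expression", "organism": "mouse"},
    {"title": "HeLa proteome",          "type": "proteomics",      "organism": "human"},
    {"title": "Diabetic mouse line",    "type": "resource bank",   "organism": "mouse"},
]

def facet_counts(results, field):
    """Counts shown next to each facet value in the navigation sidebar."""
    return Counter(r[field] for r in results)

def refine(results, field, value):
    """Clicking a facet value keeps only the matching results."""
    return [r for r in results if r[field] == value]

print(facet_counts(RESULTS, "organism"))        # mouse appears twice, human once
print(refine(RESULTS, "organism", "mouse"))
```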
Virtual Global Magnetic Observatory - Concept and Implementation
NASA Astrophysics Data System (ADS)
Papitashvili, V.; Clauer, R.; Petrov, V.; Saxena, A.
2002-12-01
The existing World Data Centers (WDC) continue to serve the worldwide scientific community excellently, providing free access to a huge number of global geophysical databases. Various institutions at different geographic locations house these Centers, which are organized mainly by scientific discipline. However, populating the Centers requires mandatory or voluntary submission of locally collected data. Recently, many digital geomagnetic datasets have been placed on the World Wide Web, and some of these sets have never been submitted to any data center. This has created an urgent need for more sophisticated search engines capable of identifying geomagnetic data on the Web and then retrieving a certain amount of data for scientific analysis. In this study, we formulate a concept of the virtual global magnetic observatory (VGMO), which currently uses a pre-set list of Web-based geomagnetic data holders (including WDC) when retrieving a requested case-study interval. By saving the retrieved data locally over multiple requests, a VGMO user begins to build his/her own data sub-center, which does not need to search the Web if a newly requested interval falls within the span of earlier retrieved data. At the same time, this self-populated sub-center becomes available to other VGMO users down the request chain. Some aspects of Web "crawling" that help identify newly "webbed" digital geomagnetic data are also considered.
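The self-population idea amounts to an interval cache in front of the Web retrieval step. The sketch below (class and interfaces are our own illustration of the concept, not the VGMO's actual design) serves a request locally whenever it falls within the span of previously retrieved data:

```python
class IntervalCache:
    """Serve data locally when a request is covered by earlier retrievals."""

    def __init__(self, fetch_from_web):
        self.fetch_from_web = fetch_from_web
        self.spans = []  # list of (start, end, data) already held locally

    def get(self, start, end):
        for s, e, data in self.spans:
            if s <= start and end <= e:          # fully covered: no Web search needed
                return data[start - s:end - s]
        data = self.fetch_from_web(start, end)   # otherwise retrieve and keep it
        self.spans.append((start, end, data))
        return data

calls = []
def fake_fetch(s, e):
    calls.append((s, e))
    return list(range(s, e))

cache = IntervalCache(fake_fetch)
cache.get(0, 100)       # first request goes to the Web
cache.get(10, 20)       # sub-interval is served from the local sub-center
print(len(calls))       # 1 -- only one Web retrieval was needed
```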
Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.
ERIC Educational Resources Information Center
Eagan, Ann; Bender, Laura
Searching on the World Wide Web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…
Teaching Web Search Skills: Techniques and Strategies of Top Trainers
ERIC Educational Resources Information Center
Notess, Greg R.
2006-01-01
Here is a unique and practical reference for anyone who teaches Web searching. Greg Notess shares his own techniques and strategies along with expert tips and advice from a virtual "who's who" of Web search training: Joe Barker, Paul Barron, Phil Bradley, John Ferguson, Alice Fulbright, Ran Hock, Jeff Humphrey, Diane Kovacs, Gary Price, Danny…
Intelligent personal health record: experience and open issues.
Luo, Gang; Tang, Chunqiang; Thomas, Selena B
2012-08-01
Web-based personal health records (PHRs) are under massive deployment. To improve PHR's capability and usability, we previously proposed the concept of intelligent PHR (iPHR). By introducing and extending expert system technology and Web search technology into the PHR domain, iPHR can automatically provide users with personalized healthcare information to facilitate their daily activities of living. Our iPHR system currently provides three functions: guided search for disease information, recommendation of home nursing activities, and recommendation of home medical products. This paper discusses our experience with iPHR as well as the open issues, including both enhancements to the existing functions and potential new functions. We outline some preliminary solutions, whereas a main purpose of this paper is to stimulate future research work in the area of consumer health informatics.
Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G
2016-01-01
Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine had a lower overall quality score than those with a noncritical bias (p < .01) and covered fewer important HPV-related topics (p < .001). Critical Web pages required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They also were more likely to raise unsubstantiated concerns about vaccination. Web pages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
The Human Transcript Database: A Catalogue of Full Length cDNA Inserts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouck, John; McLeod, Michael; Worley, Kim
1999-09-10
The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that produced reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications such as the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command-line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.
Opinions in Federated Search: University of Lugano at TREC 2014 Federated Web Search Track
2014-11-01
Opinions in Federated Search: University of Lugano at TREC 2014 Federated Web Search Track. Anastasia Giachanou, Ilya Markov, and Fabio Crestani. … ranking based on sentiment using the retrieval-interpolated diversification method. Keywords: federated search, resource selection, vertical selection. … Federated search, also known as Distributed Information Retrieval (DIR), offers the means of simultaneously searching multiple information…
MetaSpider: Meta-Searching and Categorization on the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel
2001-01-01
Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…
Waack, Katherine E; Ernst, Michael E; Graber, Mark A
2004-12-01
In the last 5 years, several treatments have become available for erectile dysfunction (ED). During this same period, consumer use of the Internet for health information has increased rapidly. In traditional direct-to-consumer advertisements, viewers are often referred to a pharmaceutical company Web site for further information. To evaluate the accessibility and informational content of 5 pharmaceutical company Web sites about ED treatments. Using 10 popular search engines and 1 specialized search engine, the accessibility of the official pharmaceutical company-sponsored Web site was determined by searching under brand and generic names. One company also manufactures an ED device; this site was also included. A structured, explicit review of information found on these sites was conducted. Of 110 searches (1 for each treatment, including corresponding generic drug name, using each search engine), 68 yielded the official pharmaceutical company Web site within the first 10 links. Removal of outliers (for both brand and generic name searches) resulted in 68 of 77 searches producing the pharmaceutical company Web site for the brand-name drug in the top 10 links. Although all pharmaceutical company Web sites contained general information on adverse effects and contraindications to use, only 2 sites gave actual percentages. Three sites provided references for their materials or discussed other treatment or drug options, while 4 of the sites contained profound advertising or emotive content. None mentioned cost of the therapy. The information contained on pharmaceutical company Web sites for ED treatments is superficial and aimed primarily at consumers. It is largely promotional and provides only limited information needed to effectively compare treatment options.
Till, Benedikt; Niederkrotenthaler, Thomas
2014-08-01
The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P < .001, Austria: P < .05) and fewer protective characteristics (United States: P < .001, Austria: P < .001) compared to the term suicide. Help-related search terms (e.g., suicidal thoughts) yielded more websites with protective characteristics (United States: P = .07, Austria: P < .01). Websites retrieved with U.S. search engines generally had more protective characteristics (P < .001) than searches with Austrian search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.
OntologyWidget – a reusable, embeddable widget for easily locating ontology terms
Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin
2007-01-01
Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. 
Conclusion We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from . PMID:17854506
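The server side of an auto-complete search box like OntologyWidget's reduces to a prefix lookup over the term list. A minimal sketch (term list invented; the real widget queries SMD's ontology web services remotely):

```python
import bisect

# A tiny stand-in for an ontology's term list, kept sorted for prefix search.
TERMS = sorted(["apoptosis", "apoptotic process", "axon", "axon guidance",
                "cell cycle", "cell division"])

def autocomplete(prefix, limit=10):
    """Return up to `limit` ontology terms starting with the typed-in text."""
    i = bisect.bisect_left(TERMS, prefix)   # jump to the first candidate
    out = []
    while i < len(TERMS) and len(out) < limit and TERMS[i].startswith(prefix):
        out.append(TERMS[i])
        i += 1
    return out

print(autocomplete("apo"))    # ['apoptosis', 'apoptotic process']
print(autocomplete("axon"))   # ['axon', 'axon guidance']
```

In the widget this lookup runs on each keystroke via an AJAX request, and the matches populate the drop-down list.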
How To Do Field Searching in Web Search Engines: A Field Trip.
ERIC Educational Resources Information Center
Hock, Ran
1998-01-01
Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…
[Oncologic gynecology and the Internet].
Gizler, Robert; Bielanów, Tomasz; Kulikiewicz, Krzysztof
2002-11-01
The strategy of searching the World Wide Web for medical sites is presented in this article. Both "deep web" and "surface web" resources were searched. The 10 best sites related to gynecologic oncology, in the authors' opinion, are presented.
Linking the EarthScope Data Virtual Catalog to the GEON Portal
NASA Astrophysics Data System (ADS)
Lin, K.; Memon, A.; Baru, C.
2008-12-01
The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from the USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station-specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web Service can be integrated into the GEON search application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.
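The virtual-catalog pattern is a fan-out: one search request is forwarded to several back-end web services and the matching records are merged into a single result list. A sketch (backend stubs and record fields are made up; the real services speak SOAP/REST over the network):

```python
# Stand-ins for the back-end archive search services.
def iris_search(t0, t1):
    return [{"net": "IRIS", "station": "ANMO", "start": 5}]

def unavco_search(t0, t1):
    return [{"net": "UNAVCO", "station": "P123", "start": 12}]

BACKENDS = [iris_search, unavco_search]

def virtual_catalog_search(t0, t1):
    """Fan the query out to every backend and merge the matches."""
    results = []
    for backend in BACKENDS:          # in practice: parallel web-service calls
        results.extend(backend(t0, t1))
    return sorted(results, key=lambda r: r["start"])

print(virtual_catalog_search(0, 100))
```

The virtual catalog itself holds no data; it only knows how to query each archive and present the union, which is exactly what makes a portal-to-portal integration like GEON's possible.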
A novel visualization model for web search results.
Nguyen, Tien N; Zhang, Jin
2006-01-01
This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system, with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concerns. Users' subjects of interest can be dynamically changed, redefined, added, or deleted from the visual space.
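One way to realize the solar-system metaphor is to map a result's relevance score to its orbit radius and angular speed. The formulas below are our own assumption for illustration, not the paper's actual mapping:

```python
import math

def place_result(relevance, max_radius=500.0):
    """Map relevance in (0, 1] to (orbit radius in px, angular speed in rad/s)."""
    radius = max_radius * (1.0 - relevance)   # high relevance -> closer to the "sun"
    speed = 0.2 + relevance                   # high relevance -> faster revolution
    return radius, speed

def position(radius, speed, t):
    """Cartesian position of a revolving result at time t."""
    return radius * math.cos(speed * t), radius * math.sin(speed * t)

r, w = place_result(0.5)
print(r, w)
print(position(r, w, 0.0))   # starts on the positive x-axis
```

Under this mapping, distance and motion both encode relevance, so a user can read off the ranking at a glance while objects animate.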
VisGets: coordinated visualizations for web-based information exploration and discovery.
Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey
2008-01-01
In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.
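The coordinated-filter idea behind VisGets is that each visualization contributes one filter and the dynamic query is their conjunction. A minimal sketch (news records and field names invented for illustration):

```python
from datetime import date

# Hypothetical news items from an RSS feed.
NEWS = [
    {"title": "Flood warning",  "when": date(2008, 5, 1), "where": "Calgary", "topic": "weather"},
    {"title": "Election recap", "when": date(2008, 5, 2), "where": "Ottawa",  "topic": "politics"},
    {"title": "Heat wave",      "when": date(2008, 7, 9), "where": "Calgary", "topic": "weather"},
]

def dynamic_query(items, start=None, end=None, place=None, topic=None):
    """Apply temporal, spatial, and topical filters conjunctively."""
    out = items
    if start: out = [i for i in out if i["when"] >= start]
    if end:   out = [i for i in out if i["when"] <= end]
    if place: out = [i for i in out if i["where"] == place]
    if topic: out = [i for i in out if i["topic"] == topic]
    return out

hits = dynamic_query(NEWS, end=date(2008, 6, 1), place="Calgary", topic="weather")
print([h["title"] for h in hits])   # ['Flood warning']
```

In the actual system each filter is set visually (a timeline brush, a map region, a tag cloud selection), and all three linked views update as the result set narrows.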
ERIC Educational Resources Information Center
Kao, Chia-Pin; Chien, Hui-Min
2017-01-01
This study was conducted to explore the relationships between pre-school educators' conceptions of and approaches to learning by web-searching through Internet Self-efficacy. Based on data from 242 pre-school educators who had prior experience of participating in web-searching in Taiwan for path analyses, it was found in this study that…
Finding and Evaluating Adult ESL Resources on the World Wide Web. ERIC Q & A.
ERIC Educational Resources Information Center
Florez, MaryAnn Cunningham
One of the challenges often mentioned by users of the World Wide Web is creating and implementing successful searches on topics of interest. This article provides background information about adult English-as-a-Second-Language (ESL) resources available on the Web. It describes various search tools, explains how to create search strategies and how…
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
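The extrapolation approach described above can be illustrated in a few lines. This is a minimal sketch of the general idea (inferring total index size from a word's relative document frequency in a static reference corpus), not the authors' actual estimator; the function names and all numbers are invented for the example.

```python
def estimate_index_size(corpus_size, corpus_df, engine_df):
    """Extrapolate a search engine's index size from one word.

    corpus_size: number of documents in the static reference corpus
    corpus_df:   documents in that corpus containing the word
    engine_df:   documents the engine reports as containing the same word
    """
    relative_df = corpus_df / corpus_size  # fraction of documents with the word
    return engine_df / relative_df         # implied size of the engine's index

def estimate_from_words(corpus_size, observations):
    """Average per-word estimates over many (corpus_df, engine_df) pairs
    to smooth out noise in any single word's frequency."""
    estimates = [estimate_index_size(corpus_size, c, e) for c, e in observations]
    return sum(estimates) / len(estimates)

# A word appearing in 1% of a 1M-document corpus, reported by the engine
# in 50M documents, implies an index of roughly 5 billion documents.
print(estimate_index_size(1_000_000, 10_000, 50_000_000))
```

In practice (as the paper's nine-year methodology suggests), such estimates are sensitive to the engine's reported hit counts, which is one reason repeated longitudinal measurement matters more than any single snapshot.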
Information Diversity in Web Search
ERIC Educational Resources Information Center
Liu, Jiahui
2009-01-01
The web is a rich and diverse information source with incredible amounts of information about all kinds of subjects in various forms. This information source affords great opportunity to build systems that support users in their work and everyday lives. To help users explore information on the web, web search systems should find information that…
20 CFR 656.17 - Basic labor certification process.
Code of Federal Regulations, 2010 CFR
2010-04-01
... participant in the job fair. (B) Employer's Web site. The use of the employer's Web site as a recruitment... involved in the application. (C) Job search Web site other than the employer's. The use of a job search Web...) The Department of Labor may issue or require the use of certain identifying information, including...
LigSearch: a knowledge-based web server to identify likely ligands for a protein target
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene
Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. LigSearch is a web server aimed at predicting ligands that might bind to and stabilize a given protein. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.
Quality of consumer-targeted internet guidance on home firearm and ammunition storage.
Freundlich, Katherine L; Skoczylas, Maria Shakour; Schmidt, John P; Keshavarzi, Nahid R; Mohr, Bethany Anne
2016-10-01
Four storage practices protect against unintentional and/or self-inflicted firearm injury among children and adolescents: keeping guns locked (1) and unloaded (2) and keeping ammunition locked up (3) and in a separate location from the guns (4). Our aim was to mimic common Google search strategies on firearm/ammunition storage and assess whether the resulting web pages provided recommendations consistent with those supported by the literature. We identified 87 web pages by Google search of the 10 most commonly used search terms in the USA related to firearm/ammunition storage. Two non-blinded independent reviewers analysed web page technical quality according to a 17-item checklist derived from previous studies. A single reviewer analysed readability by US grade level assigned by Flesch-Kincaid Grade Level Index. Two separate, blinded, independent reviewers analysed deidentified web page content for accuracy and completeness describing the four accepted storage practices. Reviewers resolved disagreements by consensus. The web pages described, on average, less than one of four accepted storage practices (mean 0.2 (95% CL 0.1 to 0.4)). Only two web pages (2%) identified all four practices. Two web pages (2%) made assertions inconsistent with recommendations; both implied that loaded firearms could be stored safely. Flesch-Kincaid Grade Level Index averaged 8.0 (95% CL 7.3 to 8.7). The average technical quality score was 7.1 (95% CL 6.8 to 7.4) out of an available score of 17. There was a high degree of agreement between reviewers regarding completeness (weighted κ 0.78 (95% CL 0.61 to 0.97)). The internet currently provides incomplete information about safe firearm storage. Understanding existing deficiencies may inform future strategies for improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
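The readability measure used in the study above, the Flesch-Kincaid Grade Level, is a published formula over word, sentence, and syllable counts. A minimal sketch, assuming the syllable counting (the hard part in practice) is done elsewhere:

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level from raw text counts (standard formula)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a 100-word passage in 8 sentences with 140 syllables scores
# at roughly a US 6th-grade reading level.
grade = flesch_kincaid_grade(words=100, sentences=8, syllables=140)
```

An average of 8.0, as reported above, means the sampled web pages read at a US 8th-grade level, above the 6th-grade level often recommended for consumer health material.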
Data Access and Web Services at the EarthScope Plate Boundary Observatory
NASA Astrophysics Data System (ADS)
Matykiewicz, J.; Anderson, G.; Henderson, D.; Hodgkinson, K.; Hoyt, B.; Lee, E.; Persson, E.; Torrez, D.; Smith, J.; Wright, J.; Jackson, M.
2007-12-01
The EarthScope Plate Boundary Observatory (PBO) at UNAVCO, Inc., part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 880 continuous GPS stations, 103 borehole strainmeter stations, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations and one previously existing laser strainmeter. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of access methods, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.
The Plate Boundary Observatory: Community Focused Web Services
NASA Astrophysics Data System (ADS)
Matykiewicz, J.; Anderson, G.; Lee, E.; Hoyt, B.; Hodgkinson, K.; Persson, E.; Wright, J.; Torrez, D.; Jackson, M.
2006-12-01
The Plate Boundary Observatory (PBO), part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 852 continuous GPS stations, 103 borehole strainmeter stations, 28 tiltmeters, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of channels, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.
What do web-use skill differences imply for online health information searches?
Feufel, Markus A; Stahl, S Frederica
2012-06-13
Online health information is of variable and often low scientific quality. In particular, elderly less-educated populations are said to struggle in accessing quality online information (digital divide). Little is known about (1) how their online behavior differs from that of younger, more-educated, and more-frequent Web users, and (2) how the older population may be supported in accessing good-quality online health information. To specify the digital divide between skilled and less-skilled Web users, we assessed qualitative differences in technical skills, cognitive strategies, and attitudes toward online health information. Based on these findings, we identified educational and technological interventions to help Web users find and access good-quality online health information. We asked 22 native German-speaking adults to search for health information online. The skilled cohort consisted of 10 participants who were younger than 30 years of age, had a higher level of education, and were more experienced using the Web than 12 participants in the less-skilled cohort, who were at least 50 years of age. We observed online health information searches to specify differences in technical skills and analyzed concurrent verbal protocols to identify health information seekers' cognitive strategies and attitudes. 
Our main findings relate to (1) attitudes: health information seekers in both cohorts doubted the quality of information retrieved online; among poorly skilled seekers, this was mainly because they doubted their skills to navigate vast amounts of information; once a website was accessed, quality concerns disappeared in both cohorts; (2) technical skills: skilled Web users effectively filtered information according to search intentions and data sources; less-skilled users were easily distracted by unrelated information; and (3) cognitive strategies: skilled Web users searched to inform themselves; less-skilled users searched to confirm their health-related opinions such as "vaccinations are harmful." Independent of Web-use skills, most participants stopped a search once they had found the first piece of evidence satisfying search intentions, rather than according to quality criteria. Findings related to Web-use skills differences suggest two classes of interventions to facilitate access to good-quality online health information. Challenges related to findings (1) and (2) should be remedied by improving people's basic Web-use skills. In particular, Web users should be taught how to avoid information overload by generating specific search terms and to avoid low-quality information by requesting results from trusted websites only. Problems related to finding (3) may be remedied by visually labeling search engine results according to quality criteria.
Design and Empirical Evaluation of Search Software for Legal Professionals on the WWW.
ERIC Educational Resources Information Center
Dempsey, Bert J.; Vreeland, Robert C.; Sumner, Robert G., Jr.; Yang, Kiduk
2000-01-01
Discussion of effective search aids for legal researchers on the World Wide Web focuses on the design and evaluation of two software systems developed to explore models for browsing and searching across a user-selected set of Web sites. Describes crawler-enhanced search engines, filters, distributed full-text searching, and natural language…
Accessing Biomedical Literature in the Current Information Landscape
Khare, Ritu; Leaman, Robert; Lu, Zhiyong
2015-01-01
Biomedical and life sciences literature is unique because of its exponentially increasing volume and interdisciplinary nature. Biomedical literature access is essential for several types of users, including biomedical researchers, clinicians, database curators, and bibliometricians. In the past few decades, several online search tools and literature archives, generic as well as biomedicine-specific, have been developed. We present this chapter in the light of three consecutive steps of literature access: searching for citations, retrieving full text, and viewing the article. The first section presents the current state of practice of biomedical literature access, including an analysis of the search tools most frequently used, such as PubMed, Google Scholar, Web of Science, Scopus, and Embase, and a study on biomedical literature archives such as PubMed Central. The next section describes current research and the state-of-the-art systems motivated by the challenges a user faces during query formulation and interpretation of search results. The research solutions are classified into key areas related to text and data mining, text similarity search, semantic search, query support, relevance ranking, and clustering results. Finally, the last section describes some predicted future trends for improving biomedical literature access, such as searching and reading articles on portable devices, and adoption of the open access policy. PMID:24788259
Robotic Prostatectomy on the Web: A Cross-Sectional Qualitative Assessment.
Borgmann, Hendrik; Mager, René; Salem, Johannes; Bründl, Johannes; Kunath, Frank; Thomas, Christian; Haferkamp, Axel; Tsaur, Igor
2016-08-01
Many patients diagnosed with prostate cancer search for information on robotic prostatectomy (RobP) on the Web. We aimed to evaluate the qualitative characteristics of the most frequently visited Web sites on RobP with a particular emphasis on provider-dependent issues. Google was searched for the term "robotic prostatectomy" in Europe and North America. The most frequently visited Web sites were selected and classified as physician-provided and publicly provided. Quality was measured using Journal of the American Medical Association (JAMA) benchmark criteria, DISCERN score, and addressing of Trifecta surgical outcomes. Popularity was analyzed using Google PageRank and the Alexa tool. Accessibility, usability, and reliability were investigated using the LIDA tool, and readability was assessed using readability indices. Twenty-eight Web sites were physician-provided and 15 publicly provided. For all Web sites, 88% of JAMA benchmark criteria were fulfilled, the DISCERN quality score was high, and 81% of Trifecta outcome measurements were addressed. Popularity was average according to Google PageRank (mean 2.9 ± 1.5) and Alexa Traffic Rank (median, 49,109; minimum, 7; maximum, 8,582,295). Accessibility (85 ± 7%), usability (92 ± 3%), and reliability scores (88 ± 8%) were moderate to high. The Automated Readability Index was 7.2 ± 2.1 and the Flesch-Kincaid Grade Level was 9 ± 2, rating the Web sites as difficult to read. Physician-provided Web sites had higher quality scores and lower readability compared with publicly provided Web sites. Web sites providing information on RobP obtained medium to high ratings in all domains of quality in the current assessment. In contrast, readability needs to be significantly improved so that this content becomes accessible to the general public. Copyright © 2015 Elsevier Inc. All rights reserved.
Factors Associated With Suicidal Attempts in Iran: A Systematic Review
Hakim Shooshtari, Mitra; Malakouti, Seyyed Kazem; Panaghi, Leili; Mohseni, Shohreh; Mansouri, Naghmeh; Rahimi Movaghar, Afarin
2016-01-01
Context: Suicide prevention is a health service priority. Some surveys have assessed suicidal behaviors and potential risk factors. Objectives: The current paper aimed to gather information about the etiology of suicide attempts in Iran. Data Sources: Pubmed, ISI Web of Science, PsychInfo, IranPsych, IranMedex, and IranDoc, as well as gray literature, were searched. Study Selection: Through the electronic and gray literature search, 128 articles were enrolled in this paper. After reading the abstracts, 84 studies were excluded and the full texts of 44 articles were reviewed critically. Data Extraction: The above databases and the gray literature were searched to find any study on etiologic factors of suicide attempts in Iran. Results: Depressive disorder was the most common diagnosis in suicide attempters; 45% of the evaluated cases had depression. One study that had used the Minnesota Multiphasic Personality Inventory (MMPI) found that Histrionics in females and Schizophrenia and Paranoia in males were significantly influential. Family conflict (50.7%) and conflict with parents (44%) were two major psychosocial factors in suicide attempts. In around one fourth (28.7%) of the cases, conflict with the spouse was the main etiologic factor. Conclusions: Given the methodological limitations, the outcomes should be generalized cautiously. Further studies will help to plan preventive strategies for suicide attempts; therefore, continued research should be conducted to fill the data gaps. PMID:27284284
World Wide Web Search Engines: AltaVista and Yahoo.
ERIC Educational Resources Information Center
Machovec, George S., Ed.
1996-01-01
Examines the history, structure, and search capabilities of Internet search tools AltaVista and Yahoo. AltaVista provides relevance-ranked feedback on full-text searches. Yahoo indexes Web "citations" only but does organize information hierarchically into predefined categories. Yahoo has recently become a publicly held company and…
Hybrid Filtering in Semantic Query Processing
ERIC Educational Resources Information Center
Jeong, Hanjo
2011-01-01
This dissertation presents a hybrid filtering method and a case-based reasoning framework for enhancing the effectiveness of Web search. Web search may not reflect user needs, intent, context, and preferences, because today's keyword-based search is lacking semantic information to capture the user's context and intent in posing the search query.…
ERIC Educational Resources Information Center
Vine, Rita
2001-01-01
Explains how to train users in effective Web searching. Discusses challenges of teaching Web information retrieval; a framework for information searching; choosing the right search tools for users; the seven-step lesson planning process; tips for delivering group Internet training; and things that help people work faster and smarter on the Web.…
Designing learning management system interoperability in semantic web
NASA Astrophysics Data System (ADS)
Anistyasari, Y.; Sarno, R.; Rochmawati, N.
2018-01-01
The extensive adoption of learning management system (LMS) has set the focus on the interoperability requirement. Interoperability is the ability of different computer systems, applications or services to communicate, share and exchange data, information, and knowledge in a precise, effective and consistent way. Semantic web technology and the use of ontologies are able to provide the required computational semantics and interoperability for the automation of tasks in LMS. The purpose of this study is to design learning management system interoperability in the semantic web which currently has not been investigated deeply. Moodle is utilized to design the interoperability. Several database tables of Moodle are enhanced and some features are added. The semantic web interoperability is provided by exploited ontology in content materials. The ontology is further utilized as a searching tool to match user’s queries and available courses. It is concluded that LMS interoperability in Semantic Web is possible to be performed.
Semantic e-Learning: Next Generation of e-Learning?
NASA Astrophysics Data System (ADS)
Konstantinos, Markellos; Penelope, Markellou; Giannis, Koutsonikos; Aglaia, Liopa-Tsakalidi
Semantic e-learning aspires to be the next generation of e-learning, since the understanding of learning materials and knowledge semantics allows their advanced representation, manipulation, sharing, exchange and reuse and ultimately promote efficient online experiences for users. In this context, the paper firstly explores some fundamental Semantic Web technologies and then discusses current and potential applications of these technologies in e-learning domain, namely, Semantic portals, Semantic search, personalization, recommendation systems, social software and Web 2.0 tools. Finally, it highlights future research directions and open issues of the field.
Drexel at TREC 2014 Federated Web Search Track
2014-11-01
of its input RS results. 1. INTRODUCTION Federated Web Search is the task of searching multiple search engines simultaneously and combining their... or distributed properly [5]. The goal of RS is then, for a given query, to select only the most promising search engines from all those available. Most... result pages of 149 search engines. 4000 queries are used in building the sample set. As a part of the Vertical Selection task, search engines are
Supporting Reflective Activities in Information Seeking on the Web
NASA Astrophysics Data System (ADS)
Saito, Hitomi; Miwa, Kazuhisa
Recently, many opportunities have emerged to use the Internet in daily life and classrooms. However, with the growth of the World Wide Web (Web), it is becoming increasingly difficult to find target information on the Internet. In this study, we explore a method for developing users' information-seeking abilities on the Web and construct a search process feedback system supporting reflective activities in information seeking on the Web. Reflection is defined as a cognitive activity for monitoring, evaluating, and modifying one's thinking and process. In the field of learning science, many researchers have investigated reflective activities that facilitate learners' problem solving and deep understanding. The characteristics of this system are: (1) it shows learners' search processes on the Web, described based on a cognitive schema, and (2) it prompts learners to reflect on their search processes. We expect that users of this system can reflect on their search processes by receiving information on their own search processes provided by the system, and that these types of reflective activity help them to deepen their understanding of information seeking activities. We have conducted an experiment to investigate the effects of our system. The experimental results confirmed that (1) the system actually facilitated the learners' reflective activities by providing process visualization and prompts, and (2) the learners who reflected on their search processes more actively understood their own search processes more deeply.
A web search on environmental topics: what is the role of ranking?
Covolo, Loredana; Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto
2013-12-01
Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word "health": "nuclear energy," "electromagnetic waves," "air pollution," "waste," and "radon." For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that, when searching for radon and air pollution topics, it is more likely to find institutional Web sites in the first 10 positions compared with nuclear power (odds ratio=3.4, 95% confidence interval 2.1-5.4 and odds ratio=2.9, 95% confidence interval 1.8-4.7, respectively) and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9-5.1). The increasing use of online information could play an important role in forming opinions. 
Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible.
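The odds ratios with 95% confidence intervals reported in the study above are a standard way to summarize such comparisons. As a hedged sketch (not the authors' code, and with invented counts), an odds ratio and its Wald-type confidence interval can be computed from a 2x2 contingency table:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = outcome present/absent in the exposed group,
    c, d = outcome present/absent in the comparison group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Illustrative counts only (not the study's data): 40 of 100 top-10 results
# were institutional for one topic vs. 20 of 100 for another.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

An interval that excludes 1.0, as in the study's reported 2.1-5.4, indicates a statistically detectable difference between the two conditions.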
Dy, Christopher J; Taylor, Samuel A; Patel, Ronak M; Kitay, Alison; Roberts, Timothy R; Daluiski, Aaron
2012-09-01
Recent emphasis on shared decision making and patient-centered research has increased the importance of patient education and health literacy. The internet is rapidly growing as a source of self-education for patients. However, concern exists over the quality, accuracy, and readability of the information. Our objective was to determine whether the quality, accuracy, and readability of information online about distal radius fractures vary with the search term. This was a prospective evaluation of 3 search engines using 3 different search terms of varying sophistication ("distal radius fracture," "wrist fracture," and "broken wrist"). We evaluated 70 unique Web sites for quality, accuracy, and readability. We used comparative statistics to determine whether the search term affected the quality, accuracy, and readability of the Web sites found. Three orthopedic surgeons independently gauged quality and accuracy of information using a set of predetermined scoring criteria. We evaluated the readability of the Web sites using the Flesch-Kincaid score for reading grade level. There were significant differences in the quality, accuracy, and readability of information found, depending on the search term. We found higher quality and accuracy resulted from the search term "distal radius fracture," particularly compared with Web sites resulting from the term "broken wrist." The reading level was higher than recommended in 65 of the 70 Web sites and was significantly higher when searching with "distal radius fracture" than "wrist fracture" or "broken wrist." There was no correlation between Web site reading level and quality or accuracy. The readability of information about distal radius fractures in most Web sites was higher than the recommended reading level for the general public. The quality and accuracy of the information found significantly varied with the sophistication of the search term used. 
Physicians, professional societies, and search engines should consider efforts to improve internet access to high-quality information at an understandable level. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
The Arctic Observing Viewer: A Web-mapping Application for U.S. Arctic Observing Activities
NASA Astrophysics Data System (ADS)
Cody, R. P.; Manley, W. F.; Gaylord, A. G.; Kassin, A.; Villarreal, S.; Barba, M.; Dover, M.; Escarzaga, S. M.; Habermann, T.; Kozimor, J.; Score, R.; Tweedie, C. E.
2015-12-01
Although a great deal of progress has been made with various arctic observing efforts, it can be difficult to assess such progress when so many agencies, organizations, research groups and others are making such rapid progress over such a large expanse of the Arctic. To help meet the strategic needs of the U.S. SEARCH-AON program and facilitate the development of SAON and other related initiatives, the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been developed. This web mapping application compiles detailed information pertaining to U.S. Arctic Observing efforts. Contributing partners include the U.S. NSF, USGS, ACADIS, ADIwg, AOOS, a2dc, AON, ARMAP, BAID, IASOA, INTERACT, and others. Over 7700 observation sites are currently in the AOV database, and the application allows users to visualize, navigate, select, run advanced searches, draw, print, and more. During 2015, the web mapping application has been enhanced by the addition of a query builder that allows users to create rich and complex queries. AOV is founded on principles of software and data interoperability and includes an emerging "Project" metadata standard, which uses ISO 19115-1 and compatible web services. Substantial efforts have focused on maintaining and centralizing all database information. In order to keep up with emerging technologies, the AOV data set has been structured and centralized within a relational database, and the application front-end has been ported to HTML5 to enable mobile access. Other application enhancements include an embedded Apache Solr search platform, which provides users with the capability to perform advanced searches, and a Web-based administrative data management system that allows administrators to add, update, and delete information in real time. We encourage all collaborators to use AOV tools and services for their own purposes and to help us extend the impact of our efforts and ensure AOV complements other cyber-resources. 
Reinforcing dispersed but interoperable resources in this way will help to ensure improved capacities for conducting activities such as assessing the status of arctic observing efforts, optimizing logistic operations, and for quickly accessing external and project-focused web resources for more detailed information and access to scientific data and derived products.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
...'' field when using either the Web-based search (advanced search) engine or the ADAMS FIND tool in Citrix... should enter ``05200011'' in the ``Docket Number'' field in the web-based search (advanced search) engine... ML100740441. To search for documents in ADAMS using Vogtle Units 3 and 4 COL application docket numbers, 52...
BacDive--The Bacterial Diversity Metadatabase in 2016.
Söhngen, Carola; Podstawka, Adam; Bunk, Boyke; Gleim, Dorothea; Vetcininova, Anna; Reimer, Lorenz Christian; Ebeling, Christian; Pendarovski, Cezar; Overmann, Jörg
2016-01-04
BacDive-the Bacterial Diversity Metadatabase (http://bacdive.dsmz.de) provides strain-linked information about bacterial and archaeal biodiversity. The range of data encompasses taxonomy, morphology, physiology, sampling and concomitant environmental conditions as well as molecular biology. The majority of data is manually annotated and curated. Currently (with release 9/2015), BacDive covers 53 978 strains. Newly implemented RESTful web services provide instant access to the content in machine-readable XML and JSON format. Besides an overall increase of data content, BacDive offers new data fields and features, e.g. the search for gene names, plasmids or 16S rRNA in the advanced search, as well as improved linkage of entries to external life science web resources. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
The Web: Can We Make It Easier To Find Information?
ERIC Educational Resources Information Center
Maddux, Cleborne D.
1999-01-01
Reviews problems with the World Wide Web that can be attributed to human error or ineptitude, and provides suggestions for improvement. Discusses poor Web design, poor use of search engines, and poor quality control by search engines and directories. (AEF)
Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches
ERIC Educational Resources Information Center
Adam, Anna; Mowers, Helen
2008-01-01
There are a lot of useful tools on the Web, all those social applications, and the like. Still most people go online for one thing--to perform a basic search. For most fact-finding missions, the Web is there. But--as media specialists well know--the sheer wealth of online information can hamper efforts to focus on a few reliable references.…
The Effectiveness of Web Search Engines to Index New Sites from Different Countries
ERIC Educational Resources Information Center
Pirkola, Ari
2009-01-01
Introduction: Investigates how effectively Web search engines index new sites from different countries. The primary interest is whether new sites are indexed equally or whether search engines are biased towards certain countries. If major search engines show biased coverage it can be considered a significant economic and political problem because…
Manually Classifying User Search Queries on an Academic Library Web Site
ERIC Educational Resources Information Center
Chapman, Suzanne; Desai, Shevon; Hagedorn, Kat; Varnum, Ken; Mishra, Sonali; Piacentine, Julie
2013-01-01
The University of Michigan Library wanted to learn more about the kinds of searches its users were conducting through the "one search" search box on the Library Web site. Library staff conducted two investigations. A preliminary investigation in 2011 involved the manual review of the 100 most frequently occurring queries conducted…
Ethnography of Novices' First Use of Web Search Engines: Affective Control in Cognitive Processing.
ERIC Educational Resources Information Center
Nahl, Diane
1998-01-01
This study of 18 novice Internet users employed a structured self-report method to investigate affective and cognitive operations in the following phases of World Wide Web searching: presearch formulation, search statement formulation, search strategy, and evaluation of results. Users also rated their self-confidence as searchers and satisfaction…
Just-in-Time Web Searches for Trainers & Adult Educators.
ERIC Educational Resources Information Center
Kirk, James J.
Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…
Use of an Academic Library Web Site Search Engine.
ERIC Educational Resources Information Center
Fagan, Jody Condit
2002-01-01
Describes an analysis of the search engine logs of Southern Illinois University, Carbondale's library to determine how patrons used the site search. Discusses results that showed patrons did not understand the function of the search and explains improvements that were made in the Web site and in online reference services. (Author/LRW)
GeoSearcher: Location-Based Ranking of Search Engine Results.
ERIC Educational Resources Information Center
Watters, Carolyn; Amoudi, Ghada
2003-01-01
Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
[Improving vaccination social marketing by monitoring the web].
Ferro, A; Bonanni, P; Castiglia, P; Montante, A; Colucci, M; Miotto, S; Siddu, A; Murrone, L; Baldo, V
2014-01-01
Immunisation is one of the most important and cost-effective interventions in Public Health because of its significant positive impact on population health. However, since Jenner's discovery there has always been a lively debate between supporters and opponents of vaccination; today the anti-vaccination movement spreads its message mostly on the web, disseminating inaccurate data through blogs and forums and increasing vaccine rejection. In this context, the Società Italiana di Igiene (SItI) created a web project to fight misinformation about vaccination on the web through a series of information tools, including scientific articles, educational information, videos and multimedia presentations. The web portal (http://www.vaccinarsi.org) was published in May 2013, and over one hundred web pages related to vaccination are already available. Recently a forum, a periodic newsletter and a Twitter page have been created. There has been an average of 10,000 hits per month; currently our users are mostly healthcare professionals. The visibility of the site is very good: it currently ranks first in Google's search engine when typing the word "vaccinarsi". The results of the first four months of activity are extremely encouraging and show the importance of this project; furthermore, an application for quality certification by independent international organizations has been submitted.
How good is Google? The quality of otolaryngology information on the internet.
Pusz, Max D; Brietzke, Scott E
2012-09-01
To assess the quality of the information a patient (parent) may encounter using a Google search for typical otolaryngology ailments. Cross-sectional study. Tertiary care center. A Google keyword search was performed for 10 common otolaryngology problems including ear infection, hearing loss, tonsillitis, and so on. The top 10 search results for each were critically examined using the 16-item (1-5 scale) standardized DISCERN instrument. The DISCERN instrument was developed to assess the quality and comprehensiveness of patient treatment choice literature. A total of 100 Web sites were assessed. Of these, 19 (19%) were primarily advertisements for products and were excluded from DISCERN scoring. Searches for more typically chronic otolaryngologic problems (eg, tinnitus, sleep apnea, etc) resulted in more biased, advertisement-type results than those for typically acute problems (eg, ear infection, sinus infection, P = .03). The search for "sleep apnea treatment" produced the highest scoring results (mean overall DISCERN score = 3.49, range = 1.81-4.56), and the search for "hoarseness treatment" produced the lowest scores (mean = 2.49, range = 1.56-3.56). Results from major comprehensive Web sites (WebMD, EMedicinehealth.com, Wikipedia, etc.) scored higher than other Web sites (mean DISCERN score = 3.46 vs 2.48, P < .001). There is marked variability in the quality of Web site information for the treatment of common otolaryngologic problems. Searches on more chronic problems resulted in a higher proportion of biased advertisement Web sites. Larger, comprehensive Web sites generally provided better information but were less than perfect in presenting complete information on treatment options.
The Librarian's Internet Survival Guide: Strategies for the High-Tech Reference Desk.
ERIC Educational Resources Information Center
McDermott, Irene E.; Quint, Barbara, Ed.
This guide discusses the use of the World Wide Web for library reference service. Part 1, "Ready Reference on the Web: Resources for Patrons," contains chapters on searching and meta-searching the Internet, using the Web to find people, news on the Internet, quality reference resources on the Web, Internet sites for kids, free full-text…
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-04-01
How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added online services. Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was adopted more frequently than manual customization by users. Those who did customize used My E-Resources and Stay Current more often than Quick Search and File Cabinet, and most of those who customized did so only once. Users did not always take advantage of the services designed to aid their library research experiences. When personalization was available at registration, users readily accepted it. Customization tools were used less frequently; however, more research is needed to determine why this was the case.
Analysis of governmental Web sites on food safety issues: a global perspective.
Namkung, Young; Almanza, Barbara A
2006-10-01
Despite a growing concern over food safety issues, as well as a growing dependence on the Internet as a source of information, little research has been done to examine the presence and relevance of food safety-related information on Web sites. The study reported here conducted Web site analysis in order to examine the current operational status of governmental Web sites on food safety issues. The study also evaluated Web site usability, especially information dimensions such as utility, currency, and relevance of content, from the perspective of the English-speaking consumer. Results showed that out of 192 World Health Organization members, 111 countries operated governmental Web sites that provide information about food safety issues. Among 171 searchable Web sites from the 111 countries, 123 Web sites (71.9 percent) were accessible, and 81 of those 123 (65.9 percent) were available in English. The majority of Web sites offered search engine tools and related links for more information, but their availability and utility were limited. In terms of content, 69.9 percent of Web sites offered information on foodborne-disease outbreaks, compared with 31.5 percent that had travel- and health-related information.
Autonomous Mission Operations for Sensor Webs
NASA Astrophysics Data System (ADS)
Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.
2008-12-01
We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents is intended to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. Ontological conceptualizations of the agents are needed to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests.
The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.
Electronic Biomedical Literature Search for Budding Researcher
Thakre, Subhash B.; Thakre S, Sushama S.; Thakre, Amol D.
2013-01-01
A search for specific and well-defined literature related to the subject of interest is the foremost step in research. When we are familiar with a topic or subject, we can frame an appropriate research question, which is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of medical literature include Google, Google Scholar, Scirus, the Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge of the types of literature, the levels of evidence, and the features of each search engine (availability, user interface, ease of access, reputable content, and period of time covered) allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search, and many variables are involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937
United States National Library of Medicine Drug Information Portal.
Hochstein, Colette; Goshorn, Jeanne; Chang, Florence
2009-01-01
The Drug Information Portal is a free Web resource from the National Library of Medicine (NLM) that provides a user-friendly gateway to current information for more than 15,000 drugs. The site guides users to related resources of NLM, the National Institutes of Health (NIH), and other government agencies. Current drug-related information regarding consumer health, clinical trials, AIDS, MeSH pharmacological actions, MEDLINE/PubMed biomedical literature, and physical properties and structure is easily retrieved by searching on a drug name. A varied selection of focused topics in medicine and drugs is also available from displayed subject headings. This column provides background information about the Drug Information Portal, as well as search basics.
Teen smoking cessation help via the Internet: a survey of search engines.
Edwards, Christine C; Elliott, Sean P; Conway, Terry L; Woodruff, Susan I
2003-07-01
The objective of this study was to assess Web sites related to teen smoking cessation on the Internet. Seven Internet search engines were searched using the keywords teen quit smoking. The top 20 hits from each search engine were reviewed and categorized. The keywords teen quit smoking produced between 35 and 400,000 hits depending on the search engine. Of 140 potential hits, 62% were active, unique sites; 85% were listed by only one search engine; and 40% focused on cessation. Findings suggest that legitimate online smoking cessation help for teens is constrained by search engine choice and by the amount of time teens spend looking through potential sites. Resource listings should be updated regularly, and smoking cessation Web sites need to appear in the results of multiple search engines. Further evaluation of smoking cessation Web sites needs to be conducted to identify the most effective help for teens.
Text categorization models for identifying unproven cancer treatments on the web.
Aphinyanaphongs, Yin; Aliferis, Constantin
2007-01-01
The nature of the internet as a non-peer-reviewed (and largely unregulated) publication medium has allowed widespread promotion of inaccurate and unproven medical claims on an unprecedented scale. Patients with conditions that are not currently fully treatable are particularly susceptible to unproven and dangerous promises about miracle treatments. In extreme cases, fatal adverse outcomes have been documented. Most commonly, the costs are financial and psychological, along with delayed application of imperfect but proven scientific modalities. To help protect patients, who may be desperately ill and thus prone to exploitation, we explored the use of machine learning techniques to identify web pages that make unproven claims. This feasibility study shows that the resulting models can identify web pages that make unproven claims in a fully automatic manner, and substantially better than previous web tools and state-of-the-art search engine technology.
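The classification task in the record above can be illustrated with a toy sketch. This hand-weighted cue-term scorer is emphatically NOT the machine-learning models the study used; the cue terms and weights are invented for illustration. It only shows the general shape of the task: turn page text into features and threshold a score.

```python
# Toy stand-in for text categorization of pages making unproven treatment
# claims. Cue terms and weights are assumptions chosen for illustration;
# a real system would learn weights from labeled training pages.
CUE_WEIGHTS = {
    "miracle": 2.0,
    "cure": 1.5,
    "guaranteed": 2.0,
    "clinical trial": -1.5,   # evidence-oriented language lowers the score
    "peer-reviewed": -2.0,
}

def claim_score(text):
    """Sum the weights of cue terms present in the page text."""
    text = text.lower()
    return sum(w for term, w in CUE_WEIGHTS.items() if term in text)

def flags_unproven(text, threshold=1.5):
    """Flag a page when its cue score reaches the threshold."""
    return claim_score(text) >= threshold

print(flags_unproven("A miracle cure, results guaranteed!"))          # True
print(flags_unproven("Results from a peer-reviewed clinical trial"))  # False
```

A learned model (e.g. a linear classifier over n-gram features) follows the same score-and-threshold structure, with weights fit to data rather than chosen by hand.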
WebCSD: the online portal to the Cambridge Structural Database
Thomas, Ian R.; Bruno, Ian J.; Cole, Jason C.; Macrae, Clare F.; Pidcock, Elna; Wood, Peter A.
2010-01-01
WebCSD, a new web-based application developed by the Cambridge Crystallographic Data Centre, offers fast searching of the Cambridge Structural Database using only a standard internet browser. Search facilities include two-dimensional substructure, molecular similarity, text/numeric and reduced cell searching. Text, chemical diagrams and three-dimensional structural information can all be studied in the results browser using the efficient entry summaries and embedded three-dimensional viewer. PMID:22477776
TREC Microblog 2012 Track: Real-Time Algorithm for Microblog Ranking Systems
2012-11-01
such as information about the tweet and the user profile. We collected those tweets by means of a web crawler and extracted several features from the raw... Mining Text Data. 2012. [5] D. Feltoni. Twittersa: un sistema per l'analisi del sentimento nelle reti sociali [a system for sentiment analysis in social networks]. Master's thesis, Roma Tre University... Morris. Twittersearch: a comparison of microblog search and web search. Proceedings of the fourth ACM international conference on Web search, 2011
The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval
ERIC Educational Resources Information Center
Schymik, Gregory
2012-01-01
Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the…
A Systematic Understanding of Successful Web Searches in Information-Based Tasks
ERIC Educational Resources Information Center
Zhou, Mingming
2013-01-01
The purpose of this study is to research how Chinese university students solve information-based problems. With the Search Performance Index as the measure of search success, participants were divided into high, medium and low-performing groups. Based on their web search logs, these three groups were compared along five dimensions of the search…
Practical Tips and Strategies for Finding Information on the Internet.
ERIC Educational Resources Information Center
Armstrong, Rhonda; Flanagan, Lynn
This paper presents the most important concepts and techniques to use in successfully searching the major World Wide Web search engines and directories, explains the basics of how search engines work, and describes what is included in their indexes. Following an introduction that gives an overview of Web directories and search engines, the first…
Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D
2013-06-19
The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. 
Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.
SpEnD: Linked Data SPARQL Endpoints Discovery Using Search Engines
NASA Astrophysics Data System (ADS)
Yumusak, Semih; Dogdu, Erdogan; Kodaz, Halife; Kamilaris, Andreas; Vandenbussche, Pierre-Yves
In this study, a novel metacrawling method is proposed for discovering and monitoring linked data sources on the Web. We implemented the method in a prototype system, named SPARQL Endpoints Discovery (SpEnD). SpEnD starts with a "search keyword" discovery process for finding relevant keywords for the linked data domain and specifically SPARQL endpoints. Then, these search keywords are utilized to find linked data sources via popular search engines (Google, Bing, Yahoo, Yandex). By using this method, most of the currently listed SPARQL endpoints in existing endpoint repositories, as well as a significant number of new SPARQL endpoints, have been discovered. Finally, we have developed a new SPARQL endpoint crawler (SpEC) for crawling and link analysis.
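A discovery pipeline like the one in the SpEnD record above must validate each candidate URL it harvests from search engines. A minimal sketch of that probe step, under stated assumptions: the GET request with a `query` parameter follows the SPARQL 1.1 Protocol; the `format=json` parameter is a common implementation convention rather than part of the standard, and the candidate URL is hypothetical.

```python
from urllib.parse import urlencode, urlparse

def build_ask_probe(candidate_url):
    """Build the GET URL that sends a trivial SPARQL ASK query to a
    candidate endpoint; a well-formed SPARQL results response would
    confirm the URL really is an endpoint."""
    if urlparse(candidate_url).scheme not in ("http", "https"):
        raise ValueError("candidate must be an http(s) URL")
    params = urlencode({"query": "ASK {}", "format": "json"})
    return f"{candidate_url}?{params}"

# Hypothetical candidate harvested from a search engine results page.
print(build_ask_probe("http://example.org/sparql"))
```

A crawler would issue this request with a short timeout and keep only candidates whose responses parse as SPARQL results.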
78 FR 69710 - Luminant Generation Company, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-20
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2008... . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public [[Page 69711
Agility: Agent - Ility Architecture
2002-10-01
existing and emerging standards (e.g., distributed objects, email, web, search engines, XML, Java, Jini). Three agent system components resulted from... agents and other Internet resources and operate over the web (AgentGram), a yellow pages service that uses Internet search engines to locate XML ads for agents and other Internet resources (WebTrader).
Searchers Net Treasure in Monterey.
ERIC Educational Resources Information Center
McDermott, Irene E.
1999-01-01
Reports on Web keyword searching, metadata, Dublin Core, Extensible Markup Language (XML), metasearch engines (metasearch engines search several Web indexes and/or directories and/or Usenet and/or specific Web sites), and the Year 2000 (Y2K) dilemma, all topics discussed at the second annual Internet Librarian Conference sponsored by Information…
Setting the public agenda for online health search: a white paper and action agenda.
Greenberg, Liza; D'Andrea, Guy; Lorence, Dan
2004-06-08
Searches for health information are among the most common reasons that consumers use the Internet. Both consumers and quality experts have raised concerns about the quality of information on the Web and the ability of consumers to find accurate information that meets their needs. To produce a national stakeholder-driven agenda for research, technical improvements, and education that will improve the results of consumer searches for health information on the Internet. URAC, a national accreditation organization, and Consumer WebWatch (CWW), a project of Consumers Union (a consumer advocacy organization), conducted a review of factors influencing the results of online health searches. The organizations convened two stakeholder groups of consumers, quality experts, search engine experts, researchers, health-care providers, informatics specialists, and others. Meeting participants reviewed existing information and developed recommendations for improving the results of online consumer searches for health information. Participants were not asked to vote on or endorse the recommendations. Our working definition of a quality Web site was one that contained accurate, reliable, and complete information. The Internet has greatly improved access to health information for consumers. There is great variation in how consumers seek information via the Internet, and in how successful they are in searching for health information. Further, there is variation among Web sites, both in quality and accessibility. Many Web site features affect the capability of search engines to find and index them. Research is needed to define quality elements of Web sites that could be retrieved by search engines and understand how to meet the needs of different types of searchers. Technological research should seek to develop more sophisticated approaches for tagging information, and to develop searches that "learn" from consumer behavior. 
Finally, education initiatives are needed to help consumers search more effectively and to help them critically evaluate the information they find.
Yi, Fengyun; Yang, Pin; Sheng, Huifeng
2016-04-15
Ebola virus disease (hereafter EVD or Ebola) has a high fatality rate. The devastating effects of the current epidemic of Ebola in West Africa have put the global health response in acute focus. In response, the World Health Organization (WHO) has declared the Ebola outbreak in West Africa as a "Public Health Emergency of International Concern". A small proportion of scientific literature is dedicated to Ebola research. To identify global research trends in Ebola research, the Institute for Scientific Information (ISI) Web of Science™ database was used to search for data, which encompassed original articles published from 1900 to 2013. The keyword "Ebola" was used to identify articles for the purposes of this review. In order to include all published items, the database was searched using the Basic Search method. The earliest record of literature about Ebola indexed in the Web of Science is from 1977. A total of 2477 publications on Ebola, published between 1977 and 2014 (with the number of publications increasing annually), were retrieved from the database. Original research articles (n = 1623, 65.5%) were the most common type of publication. Almost all (96.5%) of the literature in this field was in English. The USA had the highest scientific output and greatest number of funding agencies. Journal of Virology published 239 papers on Ebola, followed by Journal of Infectious Diseases and Virology, which published 113 and 99 papers, respectively. A total of 1911 papers on Ebola were cited 61,477 times. This analysis identified the current state of research and trends in studies about Ebola between 1977 and 2014. Our bibliometric analysis provides a historical perspective on the progress in Ebola research.
Do sign language videos improve Web navigation for Deaf Signer users?
Fajardo, Inmaculada; Parra, Elena; Cañas, José J
2010-01-01
The efficacy of video-based sign language (SL) navigation aids to improve Web search for Deaf Signers was tested by two experiments. Experiment 1 compared 2 navigation aids based on text hyperlinks linked to embedded SL videos, which differed in the spatial contiguity between the text hyperlink and SL video (contiguous vs. distant). Deaf Signers' performance was similar in Web search using both aids, but a positive correlation between their word categorization abilities and search efficiency appeared in the distant condition. In Experiment 2, the contiguous condition was compared with a text-only hyperlink condition. Deaf Signers became less disorientated (used shorter paths to find the target) in the text plus SL condition than in the text-only condition. In addition, the positive correlation between word categorization abilities and search only appeared in the text-only condition. These findings suggest that SL videos added to text hyperlinks improve Web search efficiency for Deaf Signers.
2006-06-01
Horizontal Fusion, the JCDX team developed two web services, a Classification Policy Decision Service (cPDS), and a Federated Search Provider (FSP...The cPDS web service primarily provides other systems with methods for handling labeled data such as label comparison. The federated search provider...level domains. To provide defense-in-depth, cPDS and the Federated Search Provider are implemented on a separate server known as the JCDX Web
[Biomedical information on the internet using search engines. A one-year trial].
Corrao, Salvatore; Leone, Francesco; Arnone, Sabrina
2004-01-01
The internet is a communication medium and content distributor that provides information in a general sense, but it can also be of great utility for the search and retrieval of biomedical information. Search engines are a quick way to find information on the net. However, we do not know whether general search engines and meta-search engines are reliable for finding useful and validated biomedical information. The aim of our study was to verify the reproducibility of a keyword search (pediatric or evidence) using 9 international search engines and 1 meta-search engine at baseline and after a one-year period. We analysed the first 20 citations in the output of each search. We evaluated the formal quality of Web sites and their domain extensions. Moreover, we compared the output of each search at the start of the study and after one year, taking the number of Web sites cited again as a criterion of reliability. We found some interesting results that are reported throughout the text. Our findings point to the extreme dynamicity of information on the Web, and for this reason we advise great caution when using search and meta-search engines to find and retrieve reliable biomedical information. On the other hand, some search and meta-search engines can be very useful as a first step for better defining a search and for finding institutional Web sites. This paper supports a more conscious approach to the universe of biomedical information on the internet.
78 FR 68100 - Luminant Generation Company, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID.../adams.html . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room (PDR) reference...
The effects of link format and screen location on visual search of web pages.
Ling, Jonathan; Van Schaik, Paul
2004-06-22
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.
ERIC Educational Resources Information Center
Hock, Randolph
This book aims to facilitate more effective and efficient use of World Wide Web search engines by helping the reader: know the basic structure of the major search engines; become acquainted with those attributes (features, benefits, options, content, etc.) that search engines have in common and where they differ; know the main strengths and…
Qureshi, Sheeraz A; Koehler, Steven M; Lin, James D; Bird, Justin; Garcia, Ryan M; Hecht, Andrew C
2012-05-01
Cross-sectional survey. The objective of this study was to investigate the authorship, content, and quality of information available to the public on the Internet pertaining to the cervical artificial disc replacement device. The Internet is widely used by patients as an educational tool for health care information. In addition, the Internet is used as a medium for direct-to-consumer marketing. Increasing interest in cervical artificial disc replacement has led to the emergence of numerous Web sites offering information about this procedure. It is thought that patients can be influenced by information found on the Internet. A cross section of Web sites accessible to the general public was surveyed. Three commonly used search engines were used to locate 150 (50 per search engine) Web sites providing information about the cervical artificial disc replacement. Each Web site was evaluated with regard to authorship and content. Fifty-three percent of the Web sites reviewed were authored by a private physician group, 4% by an academic physician group, and 13% by industry; 16% were news reports, and 14% were not otherwise categorized. Sixty-five percent of Web sites offered a mechanism for direct contact and 19% provided clear patient eligibility criteria. Benefits were expressed in 80% of Web sites, whereas associated risks were described in 35% or less. European experiences were noted in 17% of Web sites, whereas only 9% of Web sites detailed the current US experience. Conclusion: The results of this study demonstrate that much of the Internet-derived information pertaining to the cervical artificial disc replacement is for marketing purposes and may not represent unbiased information. Until the content on a Web site can be confirmed to be accurate, patients should be cautioned when using the Internet as a source for health care information related to cervical disc replacement.
A Web Search on Environmental Topics: What Is the Role of Ranking?
Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto
2013-01-01
Background: Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Materials and Methods: Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word “health”: “nuclear energy,” “electromagnetic waves,” “air pollution,” “waste,” and “radon.” For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Results: Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that institutional Web sites were more likely to appear in the first 10 positions when searching for the radon and air pollution topics than for nuclear power (odds ratio=3.4, 95% confidence interval 2.1–5.4 and odds ratio=2.9, 95% confidence interval 1.8–4.7, respectively), and also when using Google rather than Bing (odds ratio=3.1, 95% confidence interval 1.9–5.1). Conclusions: The increasing use of online information could play an important role in forming opinions. 
Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible. PMID:24083368
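The odds ratios reported above follow the standard 2x2-table calculation. A minimal sketch of how an odds ratio and its Wald 95% confidence interval are computed; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = exposed group with/without the outcome;
    c, d = unexposed group with/without the outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: institutional sites in/out of the top 10
# for a "radon" query versus a "nuclear power" query.
or_, lo, hi = odds_ratio_ci(60, 90, 30, 150)
```

With these invented counts the point estimate is 10/3 ≈ 3.33 with a CI of roughly (2.0, 5.6), the same order of magnitude as the ratios quoted in the abstract.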
ERIC Educational Resources Information Center
Villasenor, Romana F.; Smith, Sarah L.; Jewell, Vanessa D.
2018-01-01
This systematic review evaluates current evidence for using sound-based interventions (SBIs) to improve educational participation for children with challenges in sensory processing and integration. Databases searched included CINAHL, MEDLINE Complete, PsychINFO, ERIC, Web of Science, and Cochrane. No studies explicitly measured participation-level…
ERIC Educational Resources Information Center
Peng, Wei; Crouse, Julia C.; Lin, Jih-Hsuan
2013-01-01
This systematic review evaluates interventions using active video games (AVGs) to increase physical activity and summarizes laboratory studies quantifying intensity of AVG play among children and adults. Databases (Cochrane Library, PsychInfo, PubMed, SPORTDiscus, Web of Science) and forward citation and reference list searches were used to…
Automated Data Tagging in the HLA
NASA Astrophysics Data System (ADS)
Gaffney, N. I.; Miller, W. W.
2008-08-01
One of the more powerful and popular forms of data organization in information-sharing web applications is data tagging. With a rich user base from which to gather and digest tags, many interesting and often unanticipated yet very useful associations are revealed. The astronomical community has a richer pool of digitally stored and searchable data than any of the currently popular web communities, such as YouTube or MySpace, had when they started. In initial experiments with the search engine for the Hubble Legacy Archive, we have created a simple yet powerful scheme by which the information from a footprint service, the NED and SIMBAD catalog services, and the ADS abstracts and keywords can be used to initially tag data with standard keywords. By ingesting this into a publicly available information search engine, such as Apache Lucene, one can create a simple and powerful data-tag search engine and association system. By then augmenting this with user-provided keys and usage-pattern analysis, one can produce a powerful modern data-mining system for any astronomical data warehouse.
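The tag-then-search scheme described above can be sketched with a plain inverted index standing in for a full-text engine such as Apache Lucene. The dataset identifiers and tags below are hypothetical, for illustration only:

```python
from collections import defaultdict

class TagIndex:
    """Minimal inverted index from tags to dataset identifiers,
    a toy stand-in for a full-text engine such as Apache Lucene."""

    def __init__(self):
        self.index = defaultdict(set)

    def tag(self, dataset_id, tags):
        """Attach tags (e.g. merged from catalog lookups) to a dataset."""
        for t in tags:
            self.index[t.lower()].add(dataset_id)

    def search(self, *tags):
        """Datasets carrying every requested tag (case-insensitive AND)."""
        sets = [self.index.get(t.lower(), set()) for t in tags]
        return set.intersection(*sets) if sets else set()

idx = TagIndex()
# Hypothetical tags as they might be merged from a footprint service,
# NED/SIMBAD object lookups, and ADS keywords.
idx.tag("hst_10188_01", ["M31", "galaxy", "ACS"])
idx.tag("hst_09042_77", ["M31", "globular cluster"])
hits = idx.search("m31", "galaxy")
```

The real system would layer ranking, usage-pattern weighting, and user-supplied tags on top of this basic tag-to-dataset association.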
RTECS database (on the internet). Online data
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Registry of Toxic Effects of Chemical Substances (RTECS (trademark)) is a database of toxicological information compiled, maintained, and updated by the National Institute for Occupational Safety and Health. The program is mandated by the Occupational Safety and Health Act of 1970. The original edition, known as the "Toxic Substances List," was published on June 28, 1971, and included toxicologic data for approximately 5,000 chemicals. Since that time, the list has continuously grown and been updated, and its name changed to the current title, "Registry of Toxic Effects of Chemical Substances." RTECS (trademark) now contains over 133,000 chemicals as NIOSH strives to fulfill the mandate to list "all known toxic substances...and the concentrations at which...toxicity is known to occur." This database is now available for searching through the Gov. Research-Center (GRC) service. GRC is a single online web-based search service to well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.
Interactive Information Organization: Techniques and Evaluation
2001-05-01
information search and access. Locating interesting information on the World Wide Web is the main task of on-line search engines. Such engines accept a...likelihood of being relevant to the user's request. The majority of today's Web search engines follow this scenario. The ordering of documents in the
77 FR 26321 - Virginia Electric and Power Company
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-03
... NUCLEAR REGULATORY COMMISSION [Docket Nos. 50-338 and 50-339; NRC-2012-0051; License Nos. NPF-4...: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search for Docket ID NRC-2012-0051... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems...
The Choice of Initial Web Search Strategies: A Comparison between Finnish and American Searchers.
ERIC Educational Resources Information Center
Iivonen, Mirja; White, Marilyn Domas
2001-01-01
Describes a study that used qualitative and quantitative methodologies to analyze differences between Finnish and American Web searchers in their choice of initial search strategies (direct address, subject directory, and search engines) and their reasoning underlying their choices. Considers implications for considering cultural differences in…
A Search Engine Features Comparison.
ERIC Educational Resources Information Center
Vorndran, Gerald
Until recently, the World Wide Web (WWW) public access search engines have not included many of the advanced commands, options, and features commonly available with the for-profit online database user interfaces, such as DIALOG. This study evaluates the features and characteristics common to both types of search interfaces, examines the Web search…
Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers
Alsaleh, Mansour; Alarifi, Abdulrahman
2016-01-01
Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents. PMID:27855179
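Content features of the kind such classifiers rely on can be illustrated with a minimal sketch. The features, thresholds, and example pages below are hypothetical and far simpler than the paper's actual Arabic-specific feature set:

```python
import re

def spam_features(title, body, query_terms):
    """Hypothetical content-based features of the kind web spam
    classifiers use; not the paper's actual feature set."""
    words = re.findall(r"\w+", body.lower())
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    top = max(counts.values()) if counts else 0
    n = len(words) or 1
    return {
        "title_len": len(title.split()),
        "word_count": len(words),
        "max_term_ratio": top / n,   # high value suggests keyword stuffing
        "query_density": sum(counts.get(q.lower(), 0) for q in query_terms) / n,
    }

def looks_spammy(features):
    # Naive illustrative thresholds, not a trained model.
    return features["max_term_ratio"] > 0.2 or features["query_density"] > 0.15

spam = spam_features("Buy cheap pills", "cheap pills " * 30, ["cheap", "pills"])
benign = spam_features("Web spam research",
                       "this page surveys detection features for non english spam content",
                       ["cheap"])
```

A real classifier would learn per-language weights over many such features from a labeled corpus, which is precisely why the paper's best feature set varies with page language.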
Publicizing Your Web Resources for Maximum Exposure.
ERIC Educational Resources Information Center
Smith, Kerry J.
2001-01-01
Offers advice to librarians for marketing their Web sites on Internet search engines. Advises against relying solely on spiders and recommends adding metadata to the source code and delivering that information directly to the search engines. Gives an overview of metadata and typical coding for meta tags. Includes Web addresses for a number of…
Climate Prediction Center - Outlooks: UV Alert Forecast
Ocean Drilling Program: Publication Services: Online Manuscript Submission
Designing Search: Effective Search Interfaces for Academic Library Web Sites
ERIC Educational Resources Information Center
Teague-Rector, Susan; Ghaphery, Jimmy
2008-01-01
Academic libraries customize, support, and provide access to myriad information systems, each with complex graphical user interfaces. The number of possible information entry points on an academic library Web site is both daunting to the end-user and consistently challenging to library Web site designers. Faced with the challenges inherent in…
Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.
ERIC Educational Resources Information Center
Carlson, Randal D.; Repman, Judi
2002-01-01
Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)
Climate Prediction Center - Monitoring Atlantic Hurricane Potential
A Semiotic Analysis of Icons on the World Wide Web.
ERIC Educational Resources Information Center
Ma, Yan
The World Wide Web allows users to interact with a graphic interface to search information in a hypermedia and multimedia environment. Graphics serve as reference points on the World Wide Web for searching and retrieving information. This study analyzed the culturally constructed syntax patterns, or codes, embedded in the icons of library…
ERIC Educational Resources Information Center
Gerjets, Peter; Kammerer, Yvonne; Werner, Benita
2011-01-01
Web searching for complex information requires appropriately evaluating diverse sources of information. Information science studies have identified different criteria applied by searchers to evaluate Web information. However, the explicit evaluation instructions used in these studies might have resulted in a distortion of spontaneous evaluation…
Staleness Among Web Search Engines.
ERIC Educational Resources Information Center
Koehler, Wallace
1998-01-01
Describes a study of four major Web search engines that tested for staleness, a condition when a significant number of the hits it returns point to Web pages or server-level domains (SLD) that are no longer viable. Results of tests of URLs with AltaVista, HotBot, InfoSeek, and Open Text are discussed. (Author/LRW)
ERIC Educational Resources Information Center
Turner, Laura
2001-01-01
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
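The waveform best-fit step can be sketched as a brute-force normalized cross-correlation search over a stored database; this is the exact-search baseline that the authors' method accelerates by several thousand times. The signals below are synthetic:

```python
import math

def normalized(wave):
    """Center a waveform and scale it to unit norm, so a dot product
    between two normalized waveforms equals their correlation."""
    n = len(wave)
    mean = sum(wave) / n
    var = sum((x - mean) ** 2 for x in wave)
    return [(x - mean) / math.sqrt(var) for x in wave]

def best_match(query, database):
    """Index and score of the stored seismogram whose normalized
    cross-correlation with the query waveform is highest (brute force)."""
    q = normalized(query)
    scores = [sum(a * b for a, b in zip(q, normalized(wave)))
              for wave in database]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

# Synthetic "seismograms": the query is a rescaled, offset copy of entry 1,
# so correlation (which ignores amplitude and offset) should pick it out.
t = [i / 200 for i in range(200)]
db = [[math.sin(3 * x) for x in t],
      [math.sin(7 * x) for x in t],
      [math.cos(5 * x) for x in t]]
query = [2.0 * math.sin(7 * x) + 0.5 for x in t]
idx, score = best_match(query, db)
```

In the paper's setting each database entry carries precomputed source parameters (location, magnitude, focal mechanism), so the best-matching waveform's labels become the estimate for the incoming event.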
Search, Read and Write: An Inquiry into Web Accessibility for People with Dyslexia.
Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika
2016-01-01
Universal design in the context of digitalisation has become an integrated part of international conventions and national legislation. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how this is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia.
ERIC Educational Resources Information Center
Gallo, Gail; Wichowski, Chester P.
This second of two guides on Netscape Communicator 4.5 contains six lessons on advanced searches, multimedia, and composing a World Wide Web page. Lesson 1 is a review of the Navigator window, toolbars, and menus. Lesson 2 covers AltaVista's advanced search tips, searching for information excluding certain text, and advanced and nested Boolean…
Edelstein, Michael; Wallensten, Anders; Zetterqvist, Inga; Hulth, Anette
2014-01-01
Norovirus outbreaks severely disrupt healthcare systems. We evaluated whether Websök, an internet-based surveillance system using search engine data, improved norovirus surveillance and response in Sweden. We compared Websök users' characteristics with the general population, cross-correlated weekly Websök searches with laboratory notifications between 2006 and 2013, compared the time Websök and laboratory data crossed the epidemic threshold and surveyed infection control teams about their perception and use of Websök. Users of Websök were not representative of the general population. Websök correlated with laboratory data (b = 0.88-0.89) and gave an earlier signal to the onset of the norovirus season compared with laboratory-based surveillance. 17/21 (81%) infection control teams answered the survey, of which 11 (65%) believed Websök could help with infection control plans. Websök is a low-resource, easily replicable system that detects the norovirus season as reliably as laboratory data, but earlier. Using Websök in routine surveillance can help infection control teams prepare for the yearly norovirus season. PMID:24955857
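The two analyses described, correlating weekly search counts with laboratory notifications and comparing when each series crosses the epidemic threshold, can be sketched with toy weekly data. The counts below are invented, not Websök's:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def first_week_above(series, threshold):
    """Index of the first week at or above an epidemic threshold."""
    for week, value in enumerate(series):
        if value >= threshold:
            return week
    return None

# Invented weekly counts where searches lead lab notifications by one week.
searches = [2, 3, 8, 15, 20, 18, 10, 5]
lab      = [1, 2, 3, 9, 16, 21, 17, 9]
r = pearson(searches[:-1], lab[1:])       # one-week lagged cross-correlation
lead = first_week_above(lab, 10) - first_week_above(searches, 10)
```

A high lagged correlation together with a positive lead, as in this toy example, is the pattern the evaluation looked for: the search-based signal tracks the laboratory data but crosses the threshold earlier.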
Ocean Drilling Program: TAMU Staff Directory
New Quality Metrics for Web Search Results
NASA Astrophysics Data System (ADS)
Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni
Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.
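The two proposed metrics can be illustrated with simple proxies. The definitions below (distinct sites among the returned results for coverage, and Jaccard overlap between two engines' result sets as an inverse signal of independence) are illustrative stand-ins, not necessarily the paper's exact formulas, and the URLs are invented:

```python
def coverage(results):
    """Fraction of distinct sites (domains) among the returned results:
    low coverage means many results point back to the same few sites."""
    domains = {url.split("/")[2] for url in results}   # host part of the URL
    return len(domains) / len(results)

def overlap(results_a, results_b):
    """Jaccard overlap between two engines' result sets; lower overlap
    suggests the engines selected results more independently."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

engine_a = ["https://a.org/1", "https://a.org/2", "https://b.com/x"]
engine_b = ["https://a.org/1", "https://c.net/y", "https://d.io/z"]
c = coverage(engine_a)        # 2 distinct domains over 3 results
o = overlap(engine_a, engine_b)  # 1 shared URL out of 5 distinct URLs
```

On a manipulated query one would expect the opposite pattern: few distinct sites dominating both engines' top results.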
NASA Astrophysics Data System (ADS)
Aloisio, Giovanni; Fiore, Sandro; Negro, A.
2010-05-01
The CMCC Data Distribution Centre (DDC) is the primary entry point (web gateway) to the CMCC. It is a Data Grid Portal providing a ubiquitous and pervasive way to ease data publishing, climate metadata search, dataset discovery, metadata annotation, data access, data aggregation, sub-setting, etc. The grid portal security model includes the use of the HTTPS protocol for secure communication with the client (based on X509v3 certificates that must be loaded into the browser) and secure cookies to establish and maintain user sessions. The CMCC DDC is now in a pre-production phase and is currently used only by internal users (CMCC researchers and climate scientists). The most important component already available in the CMCC DDC is the Search Engine, which allows users to perform, through web interfaces, distributed search and discovery activities by introducing one or more of the following search criteria: horizontal extent (which can be specified by interacting with a geographic map), vertical extent, temporal extent, keywords, topics, creation date, etc. By means of this page the user submits the first step of the query process to the metadata DB; then the user can choose one or more datasets, retrieving and displaying the complete XML metadata description in the browser. This way, the second step of the query process is carried out by accessing a specific XML document of the metadata DB. Finally, through the web interface, the user can access and download (partially or totally) the data stored on the storage device via OPeNDAP servers and other available grid storage interfaces. Requests concerning datasets stored in deep storage will be served asynchronously.
Modeling Traffic on the Web Graph
NASA Astrophysics Data System (ADS)
Meiss, Mark R.; Gonçalves, Bruno; Ramasco, José J.; Flammini, Alessandro; Menczer, Filippo
Analysis of aggregate and individual Web requests shows that PageRank is a poor predictor of traffic. We use empirical data to characterize properties of Web traffic not reproduced by Markovian models, including both aggregate statistics such as page and link traffic, and individual statistics such as entropy and session size. As no current model reconciles all of these observations, we present an agent-based model that explains them through realistic browsing behaviors: (1) revisiting bookmarked pages; (2) backtracking; and (3) seeking out novel pages of topical interest. The resulting model can reproduce the behaviors we observe in empirical data, especially heterogeneous session lengths, reconciling the narrowly focused browsing patterns of individual users with the extreme variance in aggregate traffic measurements. We can thereby identify a few salient features that are necessary and sufficient to interpret Web traffic data. Beyond the descriptive and explanatory power of our model, these results may lead to improvements in Web applications such as search and crawling.
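The three browsing behaviors named above can be sketched as a minimal agent-based simulation. The probabilities and the bookmarking rule below are illustrative assumptions, not the paper's calibrated parameters.

```python
# Toy agent-based browsing model: backtracking, revisiting bookmarks,
# and seeking novel pages, with a per-step stopping probability.
import random

def browse_session(p_back=0.2, p_bookmark=0.3, p_stop=0.1, rng=None):
    rng = rng or random.Random()
    history, bookmarks = ["start"], ["start"]
    next_new = 0
    while rng.random() > p_stop:
        r = rng.random()
        if r < p_back and len(history) > 1:
            history.pop()                          # (2) backtrack
        elif r < p_back + p_bookmark:
            history.append(rng.choice(bookmarks))  # (1) revisit a bookmark
        else:
            page = f"page-{next_new}"              # (3) seek a novel page
            next_new += 1
            history.append(page)
            if rng.random() < 0.1:                 # occasionally bookmark it
                bookmarks.append(page)
    return history

rng = random.Random(42)
lengths = [len(browse_session(rng=rng)) for _ in range(1000)]
# session lengths come out broadly distributed rather than tightly clustered
```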
ERIC Educational Resources Information Center
Dysart, Joe
2008-01-01
Given Google's growing market share--69% of all searches by the close of 2007--it's absolutely critical for any school on the Web to ensure its site is Google-friendly. A Google-optimized site ensures that students and parents can quickly find one's district on the Web even if they don't know the address. Plus, good search optimization simply…
Speaking the Same Language: Information College Seekers Look for on a College Web Site
ERIC Educational Resources Information Center
Tucciarone, Krista M.
2009-01-01
The purpose of this qualitative study is to analyze and understand what information students seek from a college's Web site during their college search. Often, college Web sites fail either to offer students an interactive dialogue or to involve them in the communicative process, negatively affecting students' college search. Undergraduate…
Talking Trash on the Internet: Working Real Data into Your Classroom.
ERIC Educational Resources Information Center
Lynch, Maurice P.; Walton, Susan A.
1998-01-01
Describes how a middle school teacher used the Chesapeake Bay National Estuarine Research Reserve in Virginia (CBNERRVA) Web site to provide scientific data for a unit on recycling. Includes sample data sheets and tables, charts results of a Web search for marine debris using different search engines, and lists selected marine data Web sites. (PEN)
In Search of a Better Search Engine
ERIC Educational Resources Information Center
Kolowich, Steve
2009-01-01
Early this decade, the number of Web-based documents stored on the servers of the University of Florida hovered near 300,000. By the end of 2006, that number had leapt to four million. Two years later, the university hosts close to eight million Web documents. Web sites for colleges and universities everywhere have become repositories for data…
ERIC Educational Resources Information Center
Griffin, Teresa; Cohen, Deb
2012-01-01
The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…
Climate Prediction Center - Monitoring East Pacific Hurricane Potential
Users' Perceptions of the Web As Revealed by Transaction Log Analysis.
ERIC Educational Resources Information Center
Moukdad, Haidar; Large, Andrew
2001-01-01
Describes the results of a transaction log analysis of a Web search engine, WebCrawler, to analyze users' queries for information retrieval. Results suggest most users do not employ advanced search features, and that the linguistic structure of queries often resembles a human-human communication model that is not always successful in human-computer communication.…
Web Search Engines: Key To Locating Information for All Users or Only the Cognoscenti?
ERIC Educational Resources Information Center
Tomaiuolo, Nicholas G.; Packer, Joan G.
This paper describes a study that attempted to ascertain the degree of success that undergraduates and graduate students, with varying levels of experience using the World Wide Web and Web search engines, and without librarian instruction or intervention, had in locating relevant material on specific topics furnished by the investigators. Because…
77 FR 33786 - NRC Enforcement Policy Revision
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2011... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems... either 2.3.2.a. or b. must be met for the disposition of a violation as an NCV.'' The following new...
ERIC Educational Resources Information Center
Ture, Ferhan
2013-01-01
With the adoption of web services in daily life, people have access to tremendous amounts of information, beyond any human's reading and comprehension capabilities. As a result, search technologies have become a fundamental tool for accessing information. Furthermore, the web contains information in multiple languages, introducing another barrier…
78 FR 7818 - Duane Arnold Energy Center; Application for Amendment to Facility Operating License
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2013... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems... INFORMATION CONTACT: Karl D. Feintuch, Project Manager, Office of Nuclear Reactor Regulation, U.S. Nuclear...
77 FR 67837 - Callaway Plant, Unit 1; Application for Amendment to Facility Operating License
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-14
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2012... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems... INFORMATION CONTACT: Carl F. Lyon, Project Manager, Office of Nuclear Reactor Regulation, U.S. Nuclear...
Exploration of Web Users' Search Interests through Automatic Subject Categorization of Query Terms.
ERIC Educational Resources Information Center
Pu, Hsiao-tieh; Yang, Chyan; Chuang, Shui-Lung
2001-01-01
Proposes a mechanism that carefully integrates human and machine efforts to explore Web users' search interests. The approach consists of a four-step process: extraction of core terms; construction of subject taxonomy; automatic subject categorization of query terms; and observation of users' search interests. Research findings are proved valuable…
Exploring Library 2.0 on the Social Web
ERIC Educational Resources Information Center
Brantley, John S.
2010-01-01
Library 2.0 literature has described many of the possibilities Web 2.0 technologies offer to libraries. Case studies have assessed local use, but no studies have measured the Library 2.0 phenomenon by searching public social networking sites. This study used library-specific terms to search public social networking sites, blog search engines, and…
Features: Real-Time Adaptive Feature and Document Learning for Web Search.
ERIC Educational Resources Information Center
Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai
2001-01-01
Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…
Search Engines: A Primer on Finding Information on the World Wide Web.
ERIC Educational Resources Information Center
Maddux, Cleborne
1996-01-01
Presents an annotated list of several World Wide Web search engines, including Yahoo, Infoseek, Alta Vista, Magellan, Lycos, Webcrawler, Excite, Deja News, and the LISZT Directory of discussion groups. Uniform Resource Locators (URLs) are included. Discussion assesses performance and describes rules and syntax for refining or limiting a search.…
ERIC Educational Resources Information Center
Gunn, Holly
2004-01-01
In this article, the author urges readers not to give up on a site when a URL returns an error message. Many web sites can be found by using strategies such as URL trimming, searching cached sites, site searching, and searching the WayBack Machine. Methods and tips for finding web sites are contained within this article.
ERIC Educational Resources Information Center
Woodruff, Allison; Rosenholtz, Ruth; Morrison, Julie B.; Faulring, Andrew; Pirolli, Peter
2002-01-01
Discussion of Web search strategies focuses on a comparative study of textual and graphical summarization mechanisms applied to search engine results. Suggests that thumbnail images (graphical summaries) can increase efficiency in processing results, and that enhanced thumbnails (augmented with readable textual elements) had more consistent…
Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.
ERIC Educational Resources Information Center
Pathak, Praveen; Gordon, Michael
1999-01-01
Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…
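The retrieval measures used in studies like this one are straightforward to compute: precision@k is the fraction of the top-k results that are relevant, and recall@k is the fraction of all relevant documents that appear in the top k. A minimal sketch with toy data:

```python
# Precision and recall at a cutoff k, as used in retrieval-effectiveness studies.

def precision_at_k(results, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in results[:k] if d in relevant) / k

def recall_at_k(results, relevant, k):
    """Fraction of all relevant documents retrieved in the top k."""
    return sum(1 for d in results[:k] if d in relevant) / len(relevant)

results = ["d1", "d2", "d3", "d4", "d5"]   # ranked engine output (toy data)
relevant = {"d1", "d3", "d6", "d7"}        # judged relevant set (toy data)

print(precision_at_k(results, relevant, 5))  # 0.4 (2 of 5 retrieved are relevant)
print(recall_at_k(results, relevant, 5))     # 0.5 (2 of 4 relevant retrieved)
```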
Fabricant, Peter D; Dy, Christopher J; Patel, Ronak M; Blanco, John S; Doyle, Shevaun M
2013-06-01
The recent emphasis on shared decision-making has increased the role of the Internet as a readily accessible medical reference source for patients and families. However, the lack of professional review creates concern over the quality, accuracy, and readability of medical information available to patients on the Internet. Three Internet search engines (Google, Yahoo, and Bing) were evaluated prospectively using 3 different search terms of varying sophistication ("congenital hip dislocation," "developmental dysplasia of the hip," and "hip dysplasia in children"). Sixty-three unique Web sites were evaluated by each of 3 surgeons (2 fellowship-trained pediatric orthopaedic attendings and 1 orthopaedic chief resident) for quality and accuracy using a set of scoring criteria based on the AAOS/POSNA patient education Web site. The readability (literacy grade level) of each Web site was assessed using the Flesch-Kincaid score. There were significant differences in quality, accuracy, and readability of information depending on the search term used. The search term "developmental dysplasia of the hip" yielded higher quality and accuracy than the search term "congenital hip dislocation." Of the 63 total Web sites, 1 (1.6%) was below the sixth grade reading level recommended by the NIH for health education materials and 8 (12.7%) Web sites were below the average American reading level (eighth grade). The quality and accuracy of information available on the Internet regarding developmental hip dysplasia varied significantly with the search term used. Patients seeking information about DDH on the Internet may not understand the materials found, because nearly all of the Web sites are written at a level above that recommended for publicly distributed health information. 
Physicians should advise their patients to search for information using the term "developmental dysplasia of the hip" or, better yet, should refer patients to Web sites that they have personally reviewed for content and clarity. Orthopaedic surgeons, professional societies, and search engines should undertake efforts to ensure that patients have access to information about DDH that is both accurate and easily understandable.
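The Flesch-Kincaid grade level used in this study is a fixed formula over word, sentence, and syllable counts. The sketch below uses a rough vowel-group heuristic for syllables, so its output is approximate rather than a faithful reimplementation of any particular readability tool.

```python
# Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowels; every word gets at least one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(fk_grade_level("The cat sat on the mat."), 2))  # -1.45
```

Very simple sentences can score below zero, which is why the FKGL scale is quoted above as running from -3.4 upward.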
Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália
2016-07-01
Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative for creating personalized and enhanced search experiences. This work therefore introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to combine the core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds, and organisms, and enables the use of domain-specific controlled vocabularies. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework, and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the search engine to be integrated into existing systems, or its web interface to be completely personalized. The construction of the Smart Drug Search is described as a proof of concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence, and similar topics. The keyword-based queries of users are transformed into concepts, and search results are presented and ranked accordingly. The semantic graph view portrays all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. 
The number of occurrences of a concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations meaningful to that particular scope of research. Conversely, indirect concept associations, i.e., concepts related through other intermediary concepts, can be useful for integrating information from different studies and examining non-trivial relations. The BIOMedical Search Engine Framework supports the development of domain-specific search engines. The key strengths of the framework are modularity and extensibility in terms of software design, the use of consolidated open-source Web technologies, and the ability to integrate any number of biomedical text mining tools and information resources. Currently, the Smart Drug Search holds over 1,186,000 documents, containing more than 11,854,000 annotations for 77,200 different concepts. The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
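Concept annotation of the kind described above can be sketched, in its simplest form, as dictionary matching against a controlled vocabulary, with results summarized by concept frequency. The vocabulary entries below are toy assumptions, not the framework's actual taggers.

```python
# Minimal dictionary-based concept tagger with frequency summary.
import re
from collections import Counter

# Toy controlled vocabulary: surface term -> concept type.
VOCABULARY = {
    "ciprofloxacin": "DRUG",
    "escherichia coli": "ORGANISM",
    "gyrA": "GENE",
}

def annotate(text):
    """Return (term, concept_type) pairs for every vocabulary match."""
    found = []
    for term, ctype in VOCABULARY.items():
        for _ in re.finditer(re.escape(term), text, re.IGNORECASE):
            found.append((term, ctype))
    return found

doc = "Mutations in gyrA confer ciprofloxacin resistance in Escherichia coli."
counts = Counter(ctype for _, ctype in annotate(doc))
print(counts)  # each concept type appears once in this toy document
```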
The Arctic Observing Viewer: A Web-mapping Application for U.S. Arctic Observing Activities
NASA Astrophysics Data System (ADS)
Kassin, A.; Gaylord, A. G.; Manley, W. F.; Villarreal, S.; Tweedie, C. E.; Cody, R. P.; Copenhaver, W.; Dover, M.; Score, R.; Habermann, T.
2014-12-01
Although a great deal of progress has been made with various arctic observing efforts, that progress can be difficult to assess when so many agencies, organizations, and research groups are advancing on so many fronts. To help meet the strategic needs of the U.S. SEARCH-AON program and facilitate the development of SAON and related initiatives, the Arctic Observing Viewer (AOV; http://ArcticObservingViewer.org) has been developed. This web mapping application compiles detailed information pertaining to U.S. Arctic Observing efforts. Contributing partners include the U.S. NSF, USGS, ACADIS, ADIwg, AOOS, a2dc, AON, ARMAP, BAID, IASOA, INTERACT, and others. Over 6100 sites are currently in the AOV database, and the application allows users to visualize, navigate, select, perform advanced searches, draw, print, and more. AOV is founded on principles of software and data interoperability and includes an emerging "Project" metadata standard, which uses ISO 19115-1 and compatible web services. In the last year, substantial efforts have focused on maintaining and centralizing all database information. To keep up with emerging technologies and demand for the application, the AOV data set has been restructured and centralized within a relational database, and the application front-end has been ported to HTML5. Porting the application to HTML5 provides access to mobile users on tablets and phones. Other enhancements include an embedded Apache Solr search platform, which gives users the capability to perform advanced searches throughout the AOV dataset, and a web-based administrative data management system that allows administrators to add, update, and delete data in real time. We encourage all collaborators to use AOV tools and services for their own purposes and to help us extend the impact of our efforts and ensure AOV complements other cyber-resources. 
Reinforcing dispersed but interoperable resources in this way will help to ensure improved capacities for conducting activities such as assessing the status of arctic observing efforts, optimizing logistic operations, and for quickly accessing external and project-focused web resources for more detailed information and data.
Choi, Jihye; Cho, Youngtae; Shim, Eunyoung; Woo, Hyekyung
2016-12-08
Emerging and re-emerging infectious diseases are a significant public health concern, and early detection and immediate response is crucial for disease control. These challenges have led to the need for new approaches and technologies to reinforce the capacity of traditional surveillance systems for detecting emerging infectious diseases. In the last few years, the availability of novel web-based data sources has contributed substantially to infectious disease surveillance. This study explores the burgeoning field of web-based infectious disease surveillance systems by examining their current status, importance, and potential challenges. A systematic review framework was applied to the search, screening, and analysis of web-based infectious disease surveillance systems. We searched PubMed, Web of Science, and Embase databases to extensively review the English literature published between 2000 and 2015. Eleven surveillance systems were chosen for evaluation according to their high frequency of application. Relevant terms, including newly coined terms, development and classification of the surveillance systems, and various characteristics associated with the systems were studied. Based on a detailed and informative review of the 11 web-based infectious disease surveillance systems, it was evident that these systems exhibited clear strengths, as compared to traditional surveillance systems, but with some limitations yet to be overcome. The major strengths of the newly emerging surveillance systems are that they are intuitive, adaptable, low-cost, and operated in real-time, all of which are necessary features of an effective public health tool. The most apparent potential challenges of the web-based systems are those of inaccurate interpretation and prediction of health status, and privacy issues, based on an individual's internet activity. 
Despite being in a nascent stage with further modification needed, web-based surveillance systems have evolved to complement traditional national surveillance systems. This review highlights ways in which the strengths of existing systems can be maintained and weaknesses alleviated to implement optimal web surveillance systems.
Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.
2006-12-01
The goal for search engines is to return results that are both accurate and complete: to find only what you really want, and to find everything you really want. Search engines (even meta-search engines) lack semantics. Search is based simply on string matching between the user's query term and the resource database; the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure"-related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query, ensuring that the search results are both accurate and complete. The domain ontologies guide the user in refining the search query, reducing the burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations, and related concepts. Noesis also serves as a resource aggregator: it categorizes the search results from different online resources, such as education materials, publications, datasets, and web search engines, that might be of interest to the user.
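The kind of ontology-driven query refinement described above can be sketched by expanding a query term with its synonyms, specializations, and related concepts. The ontology entries below are illustrative, not Noesis's actual domain ontologies.

```python
# Toy ontology-scoped query expansion for an atmospheric-science term.
ONTOLOGY = {
    "pressure": {
        "synonyms": ["atmospheric pressure", "barometric pressure"],
        "narrower": ["sea-level pressure", "vapor pressure"],
        "related": ["geopotential height"],
    }
}

def scoped_query(term):
    """OR the term together with its ontology expansions to scope the search."""
    entry = ONTOLOGY.get(term, {})
    expansions = [term]
    for rel in ("synonyms", "narrower", "related"):
        expansions.extend(entry.get(rel, []))
    return " OR ".join(f'"{t}"' for t in expansions)

print(scoped_query("pressure"))
```

Scoping the query this way keeps results within the intended domain sense (atmospheric pressure) rather than, say, blood pressure.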
Current Experiments in Particle Physics. 1996 Edition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galic, Hrvoje
2003-06-27
This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries.
The Quality of Health Information Available on the Internet for Patients With Pelvic Organ Prolapse.
Solomon, Ellen R; Janssen, Kristine; Krajewski, Colleen M; Barber, Matthew D
2015-01-01
This study aimed to assess the quality of Web sites that provide information on pelvic organ prolapse using validated quality measurement tools. The Google search engine was used to perform a search of the following 4 terms: "pelvic organ prolapse," "dropped bladder," "cystocele," and "vaginal mesh." The DISCERN appraisal tool and JAMA benchmark criteria were used to determine the quality of health information of each Web site. Cohen κ was performed to determine interrater reliability between reviewers. Kruskal-Wallis and Wilcoxon rank sum tests were used to compare DISCERN scores and JAMA criteria among search terms. Interrater reliability between the two reviewers using DISCERN was κ = 0.71 [95% confidence interval (CI), 0.68-0.74] and using JAMA criteria was κ = 0.98 (95% CI, 0.74-1.0). On the basis of the DISCERN appraisal tool, the search term "vaginal mesh" had significantly lower Web site quality than "pelvic organ prolapse" and "cystocele," respectively [mean difference of DISCERN score, -14.65 (95% CI, -25.50 to 8.50, P < 0.0001) and -12.55 (95% CI, -24.00 to 7.00, P = 0.0007)]. "Dropped bladder" had significantly lower Web site quality compared to "pelvic organ prolapse" and "cystocele," respectively (mean difference of DISCERN score, -9.55 (95% CI, -20.00 to 3.00, P = 0.0098) and -7.80 (95% CI, -18.00 to 1.00, P = 0.0348). Using JAMA criteria, there were no statistically significant differences between Web sites. Web sites queried under search terms "vaginal mesh" and "dropped bladder" are lower in quality compared with the Web sites found using the search terms "pelvic organ prolapse" and "cystocele."
Hofmeister, Erik H; Watson, Victoria; Snyder, Lindsey B C; Love, Emma J
2008-12-15
To determine the validity of the information on the World Wide Web concerning veterinary anesthesia in dogs and to determine the methods dog owners use to obtain that information. Web-based search and client survey. 73 Web sites and 92 clients. Web sites were scored on a 5-point scale for completeness and accuracy of information about veterinary anesthesia by 3 board-certified anesthesiologists. A search for anesthetic information regarding 49 specific breeds of dogs was also performed. A survey was distributed to the clients who visited the University of Georgia Veterinary Teaching Hospital during a 4-month period to solicit data about sources used by clients to obtain veterinary medical information and the manner in which information obtained from Web sites was used. The general search identified 73 Web sites that included information on veterinary anesthesia; these sites received a mean score of 3.4 for accuracy and 2.5 for completeness. Of 178 Web sites identified through the breed-specific search, 57 (32%) indicated that a particular breed was sensitive to anesthesia. Of 83 usable, completed surveys, 72 (87%) indicated the client used the Web for veterinary medical information. Fifteen clients (18%) indicated they believed their animal was sensitive to anesthesia because of its breed. Information available on the internet regarding anesthesia in dogs is generally not complete and may be misleading with respect to risks to specific breeds. Consequently, veterinarians should appropriately educate clients regarding anesthetic risk to their particular dog.
MetaSEEk: a content-based metasearch engine for images
NASA Astrophysics Data System (ADS)
Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu
1997-12-01
Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time consuming. Systems that integrate such search tools, called meta-search engines, enable users to access information across the world in a transparent and efficient manner. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple online image search engines by ranking their performance for different classes of user queries. User feedback is also integrated into the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine that does not use the past performance of the different search engines when recommending target search engines for future queries.
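Selecting target engines by past performance per query class, refined by user feedback, can be sketched as below. The query-class names, engine names, and the exponential update rule are illustrative assumptions, not MetaSEEk's actual scheme.

```python
# Sketch of meta-search engine selection driven by per-class feedback scores.
from collections import defaultdict

class MetaSearch:
    def __init__(self, engines):
        # scores[query_class][engine] -> running relevance score
        self.scores = defaultdict(lambda: {e: 0.0 for e in engines})

    def rank_engines(self, query_class):
        """Recommend engines for this class, best past performer first."""
        by_score = self.scores[query_class]
        return sorted(by_score, key=by_score.get, reverse=True)

    def feedback(self, query_class, engine, relevant):
        """Fold one item of user relevance feedback into the running score."""
        s = self.scores[query_class]
        s[engine] = 0.8 * s[engine] + 0.2 * (1.0 if relevant else -1.0)

ms = MetaSearch(["engineA", "engineB"])
ms.feedback("color-histogram", "engineB", relevant=True)
print(ms.rank_engines("color-histogram"))  # ['engineB', 'engineA']
```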
Ramu, Chenna
2003-07-01
SIRW (http://sirw.embl.de/) is a World Wide Web interface to the Simple Indexing and Retrieval System (SIR) that is capable of parsing and indexing various flat file databases. In addition it provides a framework for doing sequence analysis (e.g. motif pattern searches) for selected biological sequences through keyword search. SIRW is an ideal tool for the bioinformatics community for searching as well as analyzing biological sequences of interest.
Quality of Web-based Information for the 10 Most Common Fractures.
Memon, Muzammil; Ginsberg, Lydia; Simunovic, Nicole; Ristevski, Bill; Bhandari, Mohit; Kleinlugtenbelt, Ydo Vincent
2016-06-17
In today's technologically advanced world, 75% of patients have used Google to search for health information. As a result, health care professionals fear that patients may be misinformed. Currently, there is a paucity of data on the quality and readability of Web-based health information on fractures. In this study, we assessed the quality and readability of Web-based health information related to the 10 most common fractures. Using the Google search engine, we assessed websites from the first results page for the 10 most common fractures using lay search terms. Website quality was measured using the DISCERN instrument, which scores websites as very poor (15-22.5), poor (22.5-37.5), fair (37.5-52.5), good (52.5-67.5), or excellent (67.5-75). The presence of Health on the Net code (HONcode) certification was assessed for all websites. Website readability was measured using the Flesch Reading Ease Score (0-100), where 60-69 is ideal for the general public, and the Flesch-Kincaid Grade Level (FKGL; -3.4 to ∞), where the mean FKGL of the US adult population is 8. Overall, website quality was "fair" for all fractures, with a mean (standard deviation) DISCERN score of 50.3 (5.8). The DISCERN score correlated positively with a higher website position on the search results page (r(2)=0.1, P=.002) and with HONcode certification (P=.007). The mean (standard deviation) Flesch Reading Ease Score and FKGL for all fractures were 62.2 (9.1) and 6.7 (1.6), respectively. The quality of Web-based health information on fracture care is fair, and its readability is appropriate for the general public. To obtain higher quality information, patients should select HONcode-certified websites. Furthermore, patients should select websites that are positioned higher on the results page because the Google ranking algorithms appear to rank the websites by quality.
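The DISCERN bands quoted above can be applied mechanically. Since the published cut-points (22.5, 37.5, 52.5, 67.5) are shared between adjacent bands, the sketch below assigns boundary values to the lower band's upper neighbor, which is an assumption rather than part of the instrument.

```python
# Map a DISCERN total score (15-75) to its quality band.
DISCERN_BANDS = [
    (67.5, "excellent"),
    (52.5, "good"),
    (37.5, "fair"),
    (22.5, "poor"),
    (15.0, "very poor"),
]

def discern_band(score):
    for cutoff, label in DISCERN_BANDS:
        if score >= cutoff:
            return label
    raise ValueError("DISCERN total scores range from 15 to 75")

print(discern_band(50.3))  # 'fair', matching the mean score reported above
```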
Leonardi, Michael J; McGory, Marcia L; Ko, Clifford Y
2007-09-01
To explore hospital comparison Web sites for general surgery based on: (1) a systematic Internet search, (2) Web site quality evaluation, and (3) exploration of possible areas of improvement. A systematic Internet search was performed to identify hospital quality comparison Web sites in September 2006. Publicly available Web sites were rated on accessibility, data/statistical transparency, appropriateness, and timeliness. A sample search was performed to determine ranking consistency. Six national hospital comparison Web sites were identified: 1 government (Hospital Compare [Centers for Medicare and Medicaid Services]), 2 nonprofit (Quality Check [Joint Commission on Accreditation of Healthcare Organizations] and Hospital Quality and Safety Survey Results [Leapfrog Group]), and 3 proprietary sites (names withheld). For accessibility and data transparency, the government and nonprofit Web sites were best. For appropriateness, the proprietary Web sites were best, comparing multiple surgical procedures using a combination of process, structure, and outcome measures. However, none of these sites explicitly defined terms such as complications. Two proprietary sites allowed patients to choose ranking criteria. Most data on these sites were 2 years old or older. A sample search of 3 surgical procedures at 4 hospitals demonstrated significant inconsistencies. Patients undergoing surgery are increasingly using the Internet to compare hospital quality. However, a review of available hospital comparison Web sites shows suboptimal measures of quality and inconsistent results. This may be partially because of a lack of complete and timely data. Surgeons should be involved with quality comparison Web sites to ensure appropriate methods and criteria.
Global Ocean Currents Database
NASA Astrophysics Data System (ADS)
Boyer, T.; Sun, L.
2016-02-01
NOAA's National Centers for Environmental Information has released an ocean currents database portal that aims 1) to integrate global ocean currents observations from a variety of instruments, with different resolution, accuracy, and response to spatial and temporal variability, into a uniform network common data form (NetCDF) format and 2) to provide dedicated online data discovery and access to NCEI-hosted and distributed sources of ocean currents data. The portal provides a tailored web application that allows users to search for ocean currents data by platform type and by spatial/temporal ranges of interest. The dedicated web application is available at http://www.nodc.noaa.gov/gocd/index.html. The NetCDF format supports widely used data access protocols and catalog services such as OPeNDAP (Open-source Project for a Network Data Access Protocol) and THREDDS (Thematic Real-time Environmental Distributed Data Services), so that GOCD users can work with the data files in their favorite analysis and visualization client software without downloading them to their local machines. The potential users of the ocean currents database include, but are not limited to, 1) ocean modelers for assessing model skill, 2) scientists and researchers studying the impact of ocean circulation on climate variability, 3) the ocean shipping industry for safe navigation and for finding optimal routes for ship fuel efficiency, 4) ocean resources managers planning optimal sites for waste and sewage dumping and for renewable hydrokinetic energy, and 5) state and federal governments, which can use historical (analyzed) ocean circulations as an aid for search and rescue.
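OPeNDAP access of the kind described above is driven by constraint-expression URLs of the form `dataset.dods?var[start:stride:stop]`. The sketch below builds such a request URL in Python; the server address, dataset path, and variable names are hypothetical placeholders, not actual GOCD identifiers.

```python
from urllib.parse import quote

def dap2_url(base, dataset, variables, hyperslabs=None):
    """Build a DAP2 constraint-expression URL for an OPeNDAP server.

    hyperslabs maps a variable name to a list of (start, stride, stop)
    index ranges, rendered in DAP2's [start:stride:stop] syntax.
    Request construction only; no network call is made here.
    """
    hyperslabs = hyperslabs or {}
    parts = []
    for v in variables:
        slabs = "".join(f"[{a}:{b}:{c}]" for a, b, c in hyperslabs.get(v, []))
        parts.append(v + slabs)
    # Keep the DAP2 projection characters unescaped.
    return f"{base}/{dataset}.dods?" + quote(",".join(parts), safe="[],:")

url = dap2_url(
    "http://example.org/opendap",     # hypothetical server
    "gocd/adcp_2015",                 # hypothetical dataset path
    ["u_current", "v_current"],       # hypothetical variable names
    {"u_current": [(0, 1, 99)], "v_current": [(0, 1, 99)]},
)
```

A client library (or a THREDDS catalog browser) would then fetch this URL and receive only the requested subset, which is what lets users analyze the data without downloading whole files.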
Interface Between CDS/ISIS and the Web at the Library of the Cagliari Observatory
NASA Astrophysics Data System (ADS)
Mureddu, Leonardo; Denotti, Franca; Alvito, Gianni
The library catalog of the Cagliari Observatory was digitized some years ago, by using CDS/ISIS with a practical format named ``ASTCA'' derived from the well-known ``BIBLO''. Recently the observatory has put some effort into the creation and maintenance of a Web site; on that occasion the library database has been interfaced to the Web server by means of the software WWWISIS and a locally created search form. Both books and journals can be searched by remote users. Book searches can be made by authors, titles or keywords.
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-01-01
Question: How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). Setting: The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. Method: The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added online services. Main Results: Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was more frequently adopted than manual customization by users. Users who did customize modified My E-Resources and Stay Current more often than Quick Search and File Cabinet; most of those who customized did so only once. Conclusion: Users did not always take advantage of the services designed to aid their library research experiences. When personalization was available at registration, users readily accepted it. Customization tools were used less frequently; however, more research is needed to determine why this was the case. PMID:20428276
Cañada, Andres; Rabal, Obdulia; Oyarzabal, Julen; Valencia, Alfonso
2017-01-01
A considerable effort has been devoted to systematically retrieving information for genes and proteins as well as relationships between them. Despite the importance of chemical compounds and drugs as central bio-entities in pharmacological and biological research, only a limited number of freely available chemical text-mining/search engine technologies are currently accessible. Here we present LimTox (Literature Mining for Toxicology), a web-based online biomedical search tool with a special focus on adverse hepatobiliary reactions. It integrates a range of text mining, named entity recognition and information extraction components. LimTox relies on machine-learning, rule-based, pattern-based and term-lookup strategies. This system processes scientific abstracts, a set of full-text articles and medical agency assessment reports. Although the main focus of LimTox is on adverse liver events, it also enables basic searches for other organ-level toxicity associations (nephrotoxicity, cardiotoxicity, thyrotoxicity and phospholipidosis). This tool supports specialized search queries for: chemical compounds/drugs, genes (with additional emphasis on key enzymes in drug metabolism, namely the P450 cytochromes, CYPs) and biochemical liver markers. The LimTox website is free and open to all users and there is no login requirement. LimTox can be accessed at: http://limtox.bioinfo.cnio.es PMID:28531339
Smarter Earth Science Data System
NASA Technical Reports Server (NTRS)
Huang, Thomas
2013-01-01
The explosive growth in Earth observational data in the recent decade demands a better method of interoperability across heterogeneous systems. The Earth science data system community has mastered the art of storing large volumes of observational data, but it is still unclear how this traditional method will scale over time as we enter the age of Big Data. Indexed search solutions such as Apache Solr (Smiley and Pugh, 2011) provide fast, scalable search via keywords or phrases, without any reasoning or inference. Modern search solutions such as Google's Knowledge Graph (Singhal, 2012) and Microsoft Bing all utilize semantic reasoning to improve the accuracy of their searches. The Earth science user community is demanding an intelligent solution to help them find the right data for their research. The Ontological System for Context Artifacts and Resources (OSCAR) (Huang et al., 2012) was created in response to the DARPA Adaptive Vehicle Make (AVM) program's need for an intelligent context models management system to empower its terrain simulation subsystem. The core component of OSCAR, the Environmental Context Ontology (ECO), is built using the Semantic Web for Earth and Environmental Terminology (SWEET) (Raskin and Pan, 2005). This paper presents the current data archival methodology within NASA Earth science data centers and discusses using the semantic web to improve the way we capture and serve data to our users.
Internet Usage by Low-Literacy Adults Seeking Health Information: An Observational Analysis
Birru, Mehret S; Monaco, Valerie M; Charles, Lonelyss; Drew, Hadiya; Njie, Valerie; Bierria, Timothy; Detlefsen, Ellen
2004-01-01
Background Adults with low literacy may encounter informational obstacles on the Internet when searching for health information, in part because most health Web sites require at least high-school reading proficiency for optimal access. Objective The purpose of this study was to 1) determine how low-literacy adults independently access and evaluate health information on the Internet, and 2) identify challenges and areas of proficiency in the Internet-searching skills of low-literacy adults. Methods Subjects (n=8) were enrolled in a reading assistance program at Bidwell Training Center in Pittsburgh, PA, and read at a 3rd- to 8th-grade level. Subjects conducted self-directed Internet searches for designated health topics while following a think-aloud protocol. Subjects' keystrokes and comments were recorded using Camtasia Studio screen-capture software. The search terms used to find health information, the amount of time spent on each Web site, the number of Web sites accessed, the reading level of Web sites accessed, and the responses of subjects to questionnaires were assessed. Results Subjects collectively answered 8 out of 24 questions correctly. Seven out of 8 subjects selected "sponsored sites" (paid Web advertisements) over search engine-generated links when answering health questions. On average, subjects accessed health Web sites written at or above a 10th-grade reading level. Standard methodologies used for measuring health literacy and for prompting subjects to verbalize responses to Web-site form and content had limited utility in this population. Conclusion This study demonstrates that Web health information requires a reading level that prohibits optimal access by some low-literacy adults. These results highlight the low-literacy adult population as a potential audience for Web health information, and indicate some areas of difficulty that these individuals face when using the Internet and health Web sites to find information on specific health topics.
PMID:15471751
Keynote Talk: Mining the Web 2.0 for Improved Image Search
NASA Astrophysics Data System (ADS)
Baeza-Yates, Ricardo
There are several semantic sources that can be found in the Web that are either explicit, e.g. Wikipedia, or implicit, e.g. derived from Web usage data. Most of them are related to user generated content (UGC) or what is today called the Web 2.0. In this talk we show how to use these sources of evidence in Flickr, such as tags, visual annotations or clicks, which represent the wisdom of crowds behind UGC, to improve image search. These results are the work of the multimedia retrieval team at Yahoo! Research Barcelona and they are already being used in Yahoo! image search. This work is part of a larger effort to produce a virtuous data feedback circuit based on the right combination of many different technologies to leverage the Web itself.
WEB-server for search of a periodicity in amino acid and nucleotide sequences
NASA Astrophysics Data System (ADS)
Frenkel, F. E.; Skryabin, K. G.; Korotkov, E. V.
2017-12-01
A new web server (http://victoria.biengi.ac.ru/splinter/login.php) was designed and developed to search for periodicity in nucleotide and amino acid sequences. The web server operation is based upon a new mathematical method of searching for multiple alignments, which is founded on the optimization of position weight matrices as well as on the implementation of two-dimensional dynamic programming. This approach allows the construction of multiple alignments of indistinctly similar amino acid and nucleotide sequences that have accumulated more than 1.5 substitutions per single amino acid or nucleotide, without performing pairwise comparisons of the sequences. The article examines the principles of the web server operation and two examples of studying amino acid and nucleotide sequences, as well as information that could be obtained using the web server.
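This is not the server's actual algorithm (which couples position weight matrix optimization with two-dimensional dynamic programming), but the core object, a position weight matrix scored against candidate windows, can be illustrated in a few lines. In the sketch below, the alphabet, pseudocount, and log-odds-versus-uniform-background formulation are illustrative choices.

```python
import math
from collections import Counter

def pwm_from_columns(instances, alphabet="ACGT", pseudo=1.0):
    """Build a position weight matrix (log-odds vs. a uniform background)
    from equal-length instances of a putative period."""
    length = len(instances[0])
    bg = 1.0 / len(alphabet)
    pwm = []
    for i in range(length):
        counts = Counter(seq[i] for seq in instances)  # column i composition
        total = len(instances) + pseudo * len(alphabet)
        pwm.append({a: math.log(((counts[a] + pseudo) / total) / bg)
                    for a in alphabet})
    return pwm

def score(pwm, window):
    """Sum of per-position log-odds for one candidate period instance."""
    return sum(col[base] for col, base in zip(pwm, window))
```

Windows matching the consensus of the aligned instances score higher than unrelated windows, which is the signal a periodicity search accumulates along a sequence.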
ERIC Educational Resources Information Center
Bilal, Dania
2002-01-01
Reports findings of a three-part research project that examined the information seeking behavior and success of 22 seventh-grade science students in using the Web. Discusses problems encountered, including inadequate knowledge of how to use the search engine and poor level of research skills; and considers implications for Web training and system…
Law, Michael R; Mintzes, Barbara; Morgan, Steven G
2011-03-01
The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. Our objective was to investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. For about three quarters of both brand and generic name searches on Google USA, the first result linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.
Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.
Muin, Michael; Fontelo, Paul
2006-11-03
The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
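The backend described above talks to NCBI's E-utilities. As a sketch of the request side only (the parameter names follow the public ESearch interface; no network call is made, and the example query term is arbitrary), the same query construction looks like this:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(term, retmax=20, mindate=None, maxdate=None):
    """Build an E-utilities ESearch request URL for MEDLINE/PubMed.

    Request construction only; a real client would fetch this URL and
    parse the XML result list, as PubMed Interact's PHP backend does.
    """
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    if mindate and maxdate:
        # Restrict by publication date range.
        params.update({"datetype": "pdat",
                       "mindate": mindate, "maxdate": maxdate})
    return f"{EUTILS}/esearch.fcgi?" + urlencode(params)
```

In an interface like PubMed Interact, slider values (e.g. the result-count limit) map directly onto parameters such as `retmax`, which is what makes the client-side controls cheap to wire up.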
Global polar geospatial information service retrieval based on search engine and ontology reasoning
Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang
2007-01-01
In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse candidate services on the web, and the ontology reasoning refines the coarse candidates into the desired service. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.
A Study of HTML Title Tag Creation Behavior of Academic Web Sites
ERIC Educational Resources Information Center
Noruzi, Alireza
2007-01-01
The HTML title tag information should identify and describe exactly what a Web page contains. This paper analyzes the "Title element" and raises a significant question: "Why is the title tag important?" Search engines base search results and page rankings on certain criteria. Among the most important criteria is the presence of the search keywords…
Teaching AI Search Algorithms in a Web-Based Educational System
ERIC Educational Resources Information Center
Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis
2013-01-01
In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-21
... any of the following methods: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search.../reading-rm/adams.html . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room (PDR...
Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools
NASA Astrophysics Data System (ADS)
McCutchan, Elizabeth
2017-01-01
The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
Global reaction to the recent outbreaks of Zika virus: Insights from a Big Data analysis.
Bragazzi, Nicola Luigi; Alicino, Cristiano; Trucchi, Cecilia; Paganino, Chiara; Barberis, Ilaria; Martini, Mariano; Sticchi, Laura; Trinka, Eugen; Brigo, Francesco; Ansaldi, Filippo; Icardi, Giancarlo; Orsi, Andrea
2017-01-01
The recent spreading of Zika virus represents an emerging global health threat. As such, it is attracting public interest worldwide, generating a great amount of related Internet searches and social media interactions. The aim of this research was to understand Zika-related digital behavior throughout the epidemic spreading and to assess its consistency with real-world epidemiological data, using a behavioral informatics and analytics approach. In this study, the global web interest in and reaction to the recent outbreaks of the Zika virus were analyzed in terms of tweets and Google Trends (GT), Google News, YouTube, and Wikipedia search queries. These data streams were mined from 1st January 2004 to 31st October 2016, with a focus on the period November 2015-October 2016. This analysis was complemented with the use of epidemiological data. Spearman's correlation was performed to correlate all Zika-related data. Moreover, a multivariate regression was performed using Zika-related search queries as the dependent variable, and epidemiological data, the number of inhabitants in 2015, and the Human Development Index as predictor variables. Overall, 3,864,395 tweets and 284,903 accesses to Wikipedia pages dedicated to the Zika virus were analyzed during the study period. All web-data sources showed that the main spike of searches and interactions occurred in February 2016, with a second peak in August 2016. Activity across all novel data streams increased markedly during the epidemic period with respect to the pre-epidemic period, when no web activity was detected. Correlations between data from all these web platforms were very high and statistically significant. The countries in which web searches were particularly concentrated are mainly in Central and South America. The majority of queries concerned the symptoms of the Zika virus, its vector of transmission, and its possible effects on babies, including microcephaly.
No statistically significant correlation was found between novel data streams and global real-world epidemiological data. At the country level, a correlation between digital interest in the Zika virus and the Zika incidence rate or microcephaly cases was detected. An increasing public interest and reaction to the current Zika virus outbreak was documented by all web-data sources, and a similar pattern of web reactions was detected. Public opinion seems to be particularly worried by the alert of teratogenicity of the Zika virus. Stakeholders and health authorities could usefully exploit these internet tools to collect the concerns of the public and reply to them, disseminating key information.
NASA Astrophysics Data System (ADS)
Brauer, U.
2007-08-01
The Open Navigator Framework (ONF) was developed to provide a unified and scalable platform for user interface integration. The main objective for the framework was to raise the usability of monitoring and control consoles and to provide reuse of software components in different application areas. ONF is currently applied for the Columbus onboard crew interface, the commanding application for the Columbus Control Centre, the specialized user interfaces of the Columbus user facilities, the Mission Execution Crew Assistant (MECA) study and EADS Astrium internal R&D projects. ONF provides a well documented and proven middleware for GUI components (Java plugin interface, simplified concept similar to Eclipse). The overall application configuration is performed within a graphical user interface for layout and component selection; the end user does not have to work in the underlying XML configuration files. ONF was optimized to provide harmonized user interfaces for monitoring and command consoles. It provides many convenience functions designed together with flight controllers and onboard crew: user-defined workspaces, including support for multiple screens; an efficient communication mechanism between the components; integrated web browsing and documentation search & viewing; consistent and integrated menus and shortcuts; common logging and application configuration (properties); and a supervision interface for remote plugin GUI access (web based). A large number of operationally proven ONF components have been developed: Command Stack & History (release of commands and follow-up of command acknowledgements); System Message Panel (browse, filter and search system messages/events); Unified Synoptic System (generic synoptic display system); Situational Awareness (show overall subsystem status based on monitoring of key parameters); System Model Browser (browse mission database definitions: measurements, commands, events); Flight Procedure Executor (execute checklist and logical-flow interactive procedures); Web Browser (integrated browser for reference documentation and operations data); Timeline Viewer (view the master timeline as a Gantt chart); and Search (local search of operations products, e.g. documentation, procedures, displays). All GUI components access the underlying spacecraft data (commanding, reporting data, events, command history) via a common library providing adaptors for the current deployments (Columbus MCS, Columbus onboard Data Management System, Columbus Trainer raw packet protocol). New adaptors are easy to develop; currently an adaptor to SCOS 2000 is being developed as part of a study for the ESTEC standardization section ("USS for ESTEC Reference Facility").
Cooper, Chris; Booth, Andrew; Britten, Nicky; Garside, Ruth
2017-11-28
The purpose and contribution of supplementary search methods in systematic reviews is increasingly acknowledged. Numerous studies have demonstrated their potential in identifying studies or study data that would have been missed by bibliographic database searching alone. What is less certain is how supplementary search methods actually work, how they are applied, and the consequent advantages, disadvantages and resource implications of each search method. The aim of this study is to compare current practice in using supplementary search methods with methodological guidance. Four methodological handbooks informing systematic review practice in the UK were read and audited to establish current methodological guidance. Studies evaluating the use of supplementary search methods were identified by searching five bibliographic databases. Studies were included if they (1) reported practical application of a supplementary search method (descriptive) or (2) examined the utility of a supplementary search method (analytical) or (3) identified/explored factors that impact on the utility of a supplementary method, when applied in practice. Thirty-five studies were included in this review in addition to the four methodological handbooks. Studies were published between 1989 and 2016, and dates of publication of the handbooks ranged from 1994 to 2014. Five supplementary search methods were reviewed: contacting study authors, citation chasing, handsearching, searching trial registers and web searching. There is reasonable consistency between recommended best practice (handbooks) and current practice (methodological studies) as it relates to the application of supplementary search methods. The methodological studies provide useful information on the effectiveness of the supplementary search methods, often seeking to evaluate aspects of the method to improve effectiveness or efficiency. In this way, the studies advance the understanding of the supplementary search methods.
Further research is required, however, so that a rational choice can be made about which supplementary search strategies should be used, and when.
NASA Astrophysics Data System (ADS)
Civera Lorenzo, Tamara
2017-10-01
Brief presentation about the J-PLUS EDR data access web portal (http://archive.cefca.es/catalogues/jplus-edr), covering the different services available to retrieve images and catalogue data. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data, which include parameters measured from the images. The J-PLUS web portal offers catalogue data and images through several different online data access tools or services, each suited to a particular need. The services offered are: coverage map; sky navigator; object visualization; image search; cone search; object list search; and Virtual Observatory services (Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, Table Access Protocol).
Essie: A Concept-based Search Engine for Structured Biomedical Text
Ide, Nicholas C.; Loane, Russell F.; Demner-Fushman, Dina
2007-01-01
This article describes the algorithms implemented in the Essie search engine that is currently serving several Web sites at the National Library of Medicine. Essie is a phrase-based search engine with term and concept query expansion and probabilistic relevancy ranking. Essie’s design is motivated by the observation that query terms are often conceptually related to terms in a document without actually occurring in the document text. Essie’s performance was evaluated using data and standard evaluation methods from the 2003 and 2006 Text REtrieval Conference (TREC) Genomics track. Essie was the best-performing search engine in the 2003 TREC Genomics track and achieved results comparable to those of the highest-ranking systems on the 2006 TREC Genomics track task. Essie shows that a judicious combination of exploiting document structure, phrase searching, and concept-based query expansion is a useful approach for information retrieval in the biomedical domain. PMID:17329729
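The concept-based expansion Essie performs can be illustrated with a toy sketch; the synonym table below is invented for illustration and is not Essie's actual concept data:

```python
# Toy synonym table standing in for a UMLS-style concept index;
# the entries are illustrative only.
SYNONYMS = {
    "heart attack": {"myocardial infarction", "mi"},
    "kidney stone": {"renal calculus", "nephrolith"},
}

def expand_query(phrase):
    # Map the phrase to every string naming the same concept, so a
    # document can match even when the query term never occurs in it.
    phrase = phrase.lower()
    return {phrase} | SYNONYMS.get(phrase, set())
```

A query for "Heart Attack" would then also retrieve documents mentioning only "myocardial infarction".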
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
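One simple way to operationalize the "visibility" measured above is the rank at which a catalog's domain first appears in a search engine's result list; a minimal sketch (the function name and example domains are ours, not the study's):

```python
def visibility_rank(result_urls, catalog_domain):
    # 1-based rank of the first result pointing into the catalog's
    # domain, or None if the catalog does not appear at all.
    for rank, url in enumerate(result_urls, start=1):
        if catalog_domain in url:
            return rank
    return None
```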
Documentation systems for educators seeking academic promotion in U.S. medical schools.
Simpson, Deborah; Hafler, Janet; Brown, Diane; Wilkerson, LuAnn
2004-08-01
To explore the state and use of teaching portfolios in promotion and tenure in U.S. medical schools. A two-phase qualitative study using a Web-based search procedure and telephone interviews was conducted. The first phase assessed the penetration of teaching portfolio-like systems in U.S. medical schools using a keyword search of medical school Web sites. The second phase examined the current use of teaching portfolios in 16 U.S. medical schools that reported their use in a survey in 1992. The individual designated as having primary responsibility for faculty appointments/promotions was contacted to participate in a 30-60 minute interview. The Phase 1 search of U.S. medical schools' Web sites revealed that 76 medical schools have Web-based access to information on documenting educational activities for promotion. A total of 16 of 17 medical schools responded to Phase 2. All 16 continued to use a portfolio-like system in 2003. Two documentation categories, honors/awards and philosophy/personal statement regarding education, were included by six more of these schools than used these categories in 1992. Dissemination of work to colleagues is now a key inclusion at 15 of the Phase 2 schools. The most common type of evidence used to document education was learner and/or peer ratings with infrequent use of outcome measures and internal/external review. The number of medical schools whose promotion packets include portfolio-like documentation associated with a faculty member's excellence in education has increased by more than 400% in just over ten years. Among early-responder schools the types of documentation categories have increased, but students' ratings of teaching remain the primary evidence used to document the quality or outcomes of the educational efforts reported.
Health literacy and usability of clinical trial search engines.
Utami, Dina; Bickmore, Timothy W; Barry, Barbara; Paasche-Orlow, Michael K
2014-01-01
Several web-based search engines have been developed to assist individuals to find clinical trials for which they may be interested in volunteering. However, these search engines may be difficult for individuals with low health and computer literacy to navigate. The authors present findings from a usability evaluation of clinical trial search tools with 41 participants across the health and computer literacy spectrum. The study consisted of 3 parts: (a) a usability study of an existing web-based clinical trial search tool; (b) a usability study of a keyword-based clinical trial search tool; and (c) an exploratory study investigating users' information needs when deciding among 2 or more candidate clinical trials. From the first 2 studies, the authors found that users with low health literacy have difficulty forming queries using keywords and have significantly more difficulty using a standard web-based clinical trial search tool compared with users with adequate health literacy. From the third study, the authors identified the search factors most important to individuals searching for clinical trials and how these varied by health literacy level.
SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services
Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T
2009-01-01
Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous, disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remainder are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at , developer tools at , and a portal to third-party ontologies at (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation.
SSWAP is novel by establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, to allow discovery servers to offer semantically rich search engines, to allow clients to discover and invoke those resources, and to allow providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs. PMID:19775460
Study on online community user motif using web usage mining
NASA Astrophysics Data System (ADS)
Alphy, Meera; Sharma, Ajay
2016-04-01
Web usage mining is the application of data mining used to extract useful information about the online community. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web and at least 228.52 million pages according to the Dutch Indexed Web as of Thursday, 6 August 2015. It is difficult to get the needed data from these billions of web pages on the World Wide Web; herein lies the importance of web usage mining. Personalizing the search engine helps the web user identify the most used data in an easy way. It reduces time consumption and supports automatic site search and automatic restoration of useful sites. This study surveys the techniques used for pattern discovery and analysis in web usage mining from 1996 to 2015, from the oldest to the latest. Analyzing user motif helps in the improvement of business, e-commerce, personalisation and improvement of websites.
BingEO: Enable Distributed Earth Observation Data for Environmental Research
NASA Astrophysics Data System (ADS)
Wu, H.; Yang, C.; Xu, Y.
2010-12-01
Our planet is facing great environmental challenges including global climate change, environmental vulnerability, extreme poverty, and a shortage of clean, cheap energy. To address these problems, scientists are developing various models to analyze, forecast and simulate geospatial phenomena to support critical decision making. These models not only challenge our computing technology, but also challenge us to meet their huge demand for earth observation data. Through various policies and programs, open and free sharing of earth observation data is advocated in earth science. Currently, thousands of data sources are freely available online through open standards such as Web Map Service (WMS), Web Feature Service (WFS) and Web Coverage Service (WCS). Seamless sharing of and access to these resources call for a spatial Cyberinfrastructure (CI) to enable the use of spatial data for the advancement of related applied sciences, including environmental research. Based on the Microsoft Bing Search Engine and Bing Maps, a seamlessly integrated and visual tool is under development to bridge the gap between researchers/educators and earth observation data providers. With this tool, earth science researchers/educators can easily and visually find the best data sets for their research and education. The tool includes a registry and its related supporting module at the server side and an integrated portal as its client. The proposed portal, Bing Earth Observation (BingEO), is based on Bing Search and Bing Maps to: 1) use Bing Search to discover Web Map Service (WMS) resources available over the internet; 2) develop and maintain a registry to manage all the available WMS resources and constantly monitor their service quality; 3) allow users to manually register data services; 4) provide a Bing Maps-based Web application to visualize the data on a high-quality and easy-to-manipulate map platform and enable users to select the best data layers online.
Given the amount of observation data accumulated already and still growing, BingEO will allow these resources to be utilized more widely, intensively, efficiently and economically in earth science applications.
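The WMS resources such a registry catalogs are queried through a standard OGC request vocabulary; a GetMap request, for example, is an HTTP GET with fixed parameters. A sketch (the endpoint and layer name are hypothetical):

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=512):
    # OGC WMS 1.1.1 GetMap request; BBOX is minx,miny,maxx,maxy in
    # the coordinate system named by SRS.
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{endpoint}?{urlencode(params)}"

# Hypothetical service endpoint and layer, for illustration only.
url = wms_getmap_url("http://data.example.org/wms", "global_temperature",
                     (-180, -90, 180, 90))
```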
Eysenbach, Gunther; Powell, John; Kuss, Oliver; Sa, Eun-Ryoung
The quality of consumer health information on the World Wide Web is an important issue for medicine, but to date no systematic and comprehensive synthesis of the methods and evidence has been performed. To establish a methodological framework on how quality on the Web is evaluated in practice, to determine the heterogeneity of the results and conclusions, and to compare the methodological rigor of these studies, to determine to what extent the conclusions depend on the methodology used, and to suggest future directions for research. We searched MEDLINE and PREMEDLINE (1966 through September 2001), Science Citation Index (1997 through September 2001), Social Sciences Citation Index (1997 through September 2001), Arts and Humanities Citation Index (1997 through September 2001), LISA (1969 through July 2001), CINAHL (1982 through July 2001), PsychINFO (1988 through September 2001), EMBASE (1988 through June 2001), and SIGLE (1980 through June 2001). We also conducted hand searches, general Internet searches, and a personal bibliographic database search. We included published and unpublished empirical studies in any language in which investigators searched the Web systematically for specific health information, evaluated the quality of Web sites or pages, and reported quantitative results. We screened 7830 citations and retrieved 170 potentially eligible full articles. A total of 79 distinct studies met the inclusion criteria, evaluating 5941 health Web sites and 1329 Web pages, and reporting 408 evaluation results for 86 different quality criteria. Two reviewers independently extracted study characteristics, medical domains, search strategies used, methods and criteria of quality assessment, results (percentage of sites or pages rated as inadequate pertaining to a quality criterion), and quality and rigor of study methods and reporting. Most frequently used quality criteria used include accuracy, completeness, readability, design, disclosures, and references provided. 
Fifty-five studies (70%) concluded that quality is a problem on the Web, 17 (22%) remained neutral, and 7 studies (9%) came to a positive conclusion. Positive studies scored significantly lower in search (P =.02) and evaluation (P =.04) methods. Due to differences in study methods and rigor, quality criteria, study population, and topic chosen, study results and conclusions on health-related Web sites vary widely. Operational definitions of quality criteria are needed.
2013-01-01
Background Due to the growing number of biomedical entries in data repositories of the National Center for Biotechnology Information (NCBI), it is difficult to collect, manage and process all of these entries in one place by third-party software developers without significant investment in hardware and software infrastructure, its maintenance and administration. Web services allow development of software applications that integrate in one place the functionality and processing logic of distributed software components, without integrating the components themselves and without integrating the resources to which they have access. This is achieved by appropriate orchestration or choreography of available Web services and their shared functions. After the successful application of Web services in the business sector, this technology can now be used to build composite software tools that are oriented towards biomedical data processing. Results We have developed a new tool for efficient and dynamic data exploration in GenBank and other NCBI databases. A dedicated search GenBank system makes use of NCBI Web services and a package of Entrez Programming Utilities (eUtils) in order to provide extended searching capabilities in NCBI data repositories. In search GenBank users can use one of the three exploration paths: simple data searching based on the specified user’s query, advanced data searching based on the specified user’s query, and advanced data exploration with the use of macros. search GenBank orchestrates calls of particular tools available through the NCBI Web service providing requested functionality, while users interactively browse selected records in search GenBank and traverse between NCBI databases using available links. On the other hand, by building macros in the advanced data exploration mode, users create choreographies of eUtils calls, which can lead to the automatic discovery of related data in the specified databases. 
Conclusions search GenBank extends the standard capabilities of the NCBI Entrez search engine in querying biomedical databases. The possibility of creating and saving macros in search GenBank is a unique feature and has great potential. The potential will grow further in the future with the increasing density of networks of relationships between data stored in particular databases. search GenBank is available for public use at http://sgb.biotools.pl/. PMID:23452691
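A search GenBank-style macro is, at bottom, a chain of Entrez eUtils calls (ESearch to find IDs, then ESummary/EFetch, or ELink to hop to a related database). A minimal sketch of composing one such call; the query term is chosen purely for illustration:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    # ESearch returns the IDs of records matching a query; later steps
    # in a macro would feed those IDs to ESummary, EFetch or ELink.
    query = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS}/esearch.fcgi?{query}"

url = esearch_url("nucleotide", "BRCA1[gene] AND human[orgn]")
```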
Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Siążnik, Artur
2013-03-01
Using the Internet in Career Education. Practice Application Brief No. 1.
ERIC Educational Resources Information Center
Wagner, Judith O.
The World Wide Web has a wealth of information on career planning, individual jobs, and job search methods that counselors and teachers can use. Search engines such as Yahoo! and Magellan are organized like library tools, while engines such as AltaVista and HotBot search by words or phrases. Web indexes offer a variety of features. The criteria for…
A Framework for Integrating Oceanographic Data Repositories
NASA Astrophysics Data System (ADS)
Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.
2010-12-01
Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. 
S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.
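OpenSearch, which S2S adopts as its standard web-service description, advertises each search service as a URL template with named placeholders that a federated client fills in per repository. A minimal sketch (the template URL and parameter values are hypothetical):

```python
def fill_template(template, **params):
    # Substitute OpenSearch URL-template placeholders such as
    # {searchTerms} and {startPage} with concrete values.
    for name, value in params.items():
        template = template.replace("{" + name + "}", str(value))
    return template

url = fill_template(
    "http://repo.example.org/search?q={searchTerms}&page={startPage}",
    searchTerms="ctd_temperature", startPage=1)
```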
Context-Aware Online Commercial Intention Detection
NASA Astrophysics Data System (ADS)
Hu, Derek Hao; Shen, Dou; Sun, Jian-Tao; Yang, Qiang; Chen, Zheng
With more and more commercial activities moving onto the Internet, people tend to purchase what they need through the Internet or conduct some online research before the actual transactions happen. For many Web users, their online commercial activities start from submitting a search query to search engines. Just like common Web search queries, queries with commercial intention are usually very short. Recognizing the queries with commercial intention against the common queries will help search engines provide proper search results and advertisements, help Web users obtain the right information they desire, and help advertisers benefit from the potential transactions. However, the intentions behind a query vary a lot for users with different backgrounds and interests. The intentions can even be different for the same user when the query is issued in different contexts. In this paper, we present a new algorithm framework based on skip-chain conditional random fields (SCCRF) for automatically classifying Web queries according to context-based online commercial intention. We analyze our algorithm's performance both theoretically and empirically. Extensive experiments on several real search engine log datasets show that our algorithm can improve the F1 score by more than 10% over previous algorithms for commercial intention detection.
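As a crude illustration of why session context matters for commercial-intention detection (this is not the paper's SCCRF, which learns its evidence from labeled query sessions rather than a fixed lexicon), one can score a query together with its preceding queries against a cue-word list:

```python
# Illustrative cue list only; invented for this sketch.
COMMERCIAL_CUES = {"buy", "price", "cheap", "review", "deal", "coupon"}

def commercial_score(query, context_queries=()):
    # Share of commercial cue words across the query plus its session
    # context; context can flip the label of an ambiguous query.
    tokens = query.lower().split()
    for prev in context_queries:
        tokens += prev.lower().split()
    hits = sum(1 for tok in tokens if tok in COMMERCIAL_CUES)
    return hits / max(len(tokens), 1)
```

The ambiguous query "jaguar" scores zero on its own but becomes commercial when the session also contains "jaguar price".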
Geomorphology and the World Wide Web
NASA Astrophysics Data System (ADS)
Shroder, John F.; Bishop, Michael P.; Olsenholler, Jeffrey; Craiger, J. Philip
2002-10-01
The Internet and the World Wide Web have brought many dimensions of new technology to education and research in geomorphology. As with other disciplines on the Web, Web-based geomorphology has become an eclectic mix of whatever material an individual deems worthy of presentation, and in many cases is without quality control. Nevertheless, new electronic media can facilitate education and research in geomorphology. For example, virtual field trips can be developed and accessed to reinforce concepts in class. Techniques for evaluating Internet references help students to write traditional term papers, but professional presentations can also involve student papers that are published on the Web. Faculty can also address plagiarism issues by using search engines. Because of the lack of peer review of much of the content on the Web, care must be exercised in using it for reference searches. Today, however, refereed journals are going online and can be accessed through subscription or payment per article viewed. Library reference desks regularly use the Web for searches of refereed articles. Research on the Web ranges from communication between investigators, data acquisition, scientific visualization, or comprehensive searches of refereed sources, to interactive analyses of remote data sets. The Nanga Parbat and the Global Land Ice Measurements from Space (GLIMS) Projects are two examples of geomorphologic research that are achieving full potential through use of the Web. Teaching and research in geomorphology are undergoing a beneficial, but sometimes problematic, transition with the new technology. The learning curve is steep for some users but the view from the top is bright. Geomorphology can only prosper from the benefits offered by computer technologies.
Towards Web 3.0: taxonomies and ontologies for medical education -- a systematic review.
Blaum, Wolf E; Jarczweski, Anne; Balzer, Felix; Stötzner, Philip; Ahlers, Olaf
2013-01-01
Both for curricular development and mapping, and for orientation within the growing supply of learning resources in medical education, the Semantic Web ("Web 3.0") offers a low-threshold, effective tool that enables identification of content-related items across system boundaries. Replacing the manual linking currently required with automatically generated links based on content and semantics requires the use of a suitably structured vocabulary for a machine-readable description of object content. The aim of this study is to compile the existing taxonomies and ontologies used for the annotation of medical content and learning resources, to compare them using selected criteria, and to verify their suitability in the context described above. Based on a systematic literature search, existing taxonomies and ontologies for the description of medical learning resources were identified. Through web searches and/or direct contact with the respective editors, each of the structured vocabularies thus identified was examined with regard to topic, structure, language, scope, maintenance, and technology. In addition, suitability for use in the Semantic Web was verified. Among 20 identified publications, 14 structured vocabularies were identified, which differed rather strongly in language, scope, currency, and maintenance. None of the identified vocabularies fulfilled the necessary criteria for content description of medical curricula and learning resources in the German-speaking world. While moving towards Web 3.0, a significant problem lies in the selection and use of an appropriate German vocabulary for the machine-readable description of object content. Possible solutions include development, translation and/or combination of existing vocabularies, possibly including partial translations of English vocabularies.
Getting to the top of Google: search engine optimization.
Maley, Catherine; Baum, Neil
2010-01-01
Search engine optimization is the process of making your Web site appear at or near the top of popular search engines such as Google, Yahoo, and MSN. This is accomplished not by luck or by knowing someone who works for the search engines, but by understanding how search engines select Web sites for placement at the top of, or on, the first page of results. This article reviews that process and provides methods and techniques you can use to have your site ranked at or very near the top.
The Giardia genome project database.
McArthur, A G; Morrison, H G; Nixon, J E; Passamaneck, N Q; Kim, U; Hinkle, G; Crocker, M K; Holder, M E; Farr, R; Reich, C I; Olsen, G E; Aley, S B; Adam, R D; Gillin, F D; Sogin, M L
2000-08-15
The Giardia genome project database provides an online resource for Giardia lamblia (WB strain, clone C6) genome sequence information. The database includes edited single-pass reads, the results of BLASTX searches, and details of progress towards sequencing the entire 12 million-bp Giardia genome. Pre-sorted BLASTX results can be retrieved based on keyword searches and BLAST searches of the high throughput Giardia data can be initiated from the web site or through NCBI. Descriptions of the genomic DNA libraries, project protocols and summary statistics are also available. Although the Giardia genome project is ongoing, new sequences are made available on a bi-monthly basis to ensure that researchers have access to information that may assist them in the search for genes and their biological function. The current URL of the Giardia genome project database is www.mbl.edu/Giardia.
Speeding on the Information Superhighway: Strategies for Saving Time on the Web.
ERIC Educational Resources Information Center
Colaric, Susan M.; Carr-Chellman, Alison A.
2000-01-01
Outlines ways to make online searching more efficient. Highlights include starting with printed materials; online reference libraries; subject directories such as Yahoo; search engines; evaluating Web sites, including reliability; bookmarking helpful sites; and using links. (LRW)
2009-06-01
search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages regardless of domain. In addition, this thesis discusses potential evaluation and monitoring techniques to be used in conjunction
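The idea of revealing "which web pages are more important via references from other web pages" is classically computed with PageRank-style power iteration over the link graph; a compact sketch over an adjacency dictionary (the damping factor 0.85 is the conventional default, not a value taken from the thesis):

```python
def pagerank(links, damping=0.85, iterations=50):
    # Power-iteration PageRank over a {page: [outlinks]} dict.
    pages = set(links) | {p for outs in links.values() for p in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        nxt = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            outs = links.get(page, [])
            if outs:
                share = damping * rank[page] / len(outs)
                for out in outs:
                    nxt[out] += share
            else:  # dangling page: spread its mass uniformly
                for p in pages:
                    nxt[p] += damping * rank[page] / n
        rank = nxt
    return rank

# "a" and "b" cite each other; "c" cites both but is cited by no one.
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a", "b"]})
```

Pages cited from many domains accumulate rank regardless of which domain the citation comes from, which is the cross-domain property the thesis exploits.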
Alderdice, Fiona; Gargan, Phyl; McCall, Emma; Franck, Linda
2018-01-30
Online resources are a source of information for parents of premature babies when their baby is discharged from hospital. To explore what topics parents deemed important after returning home from hospital with their premature baby and to evaluate the quality of existing websites that provide information for parents post-discharge. In stage 1, 23 parents living in Northern Ireland participated in three focus groups and shared their information and support needs following the discharge of their infant(s). In stage 2, a World Wide Web (WWW) search was conducted using the Google, Yahoo and Bing search engines. Websites meeting pre-specified inclusion criteria were reviewed using two website assessment tools and by calculating a readability score. Website content was compared to the topics identified by parents in the focus groups. Five overarching topics were identified across the three focus groups: life at home after neonatal care, taking care of our family, taking care of our premature baby, baby's growth and development, and help with getting support and advice. Twenty-nine sites were identified that met the systematic web search inclusion criteria. Fifteen (52%) covered all five topics identified by parents to some extent and 9 (31%) provided current, accurate and relevant information based on the assessment criteria. Parents reported the need for information and support post-discharge from hospital. This was not always available to them, and relevant online resources were of varying quality. Listening to parents' needs and preferences can facilitate the development of high-quality, evidence-based, parent-centred resources. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.
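Readability scores like the one calculated in stage 2 are typically variants of the Flesch Reading Ease formula; the study does not say which score it used, so the following is an illustrative sketch that estimates syllables by counting vowel groups, a common rough heuristic:

```python
import re

def flesch_reading_ease(text):
    # Flesch Reading Ease:
    #   206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    # Higher scores mean easier text.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(
        max(len(re.findall(r"[aeiouy]+", w.lower())), 1) for w in words)
    n = max(len(words), 1)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Short sentences of short words score far higher than dense polysyllabic prose, which is what makes the measure useful for vetting parent-facing websites.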
Googling suicide: surfing for suicide information on the Internet.
Recupero, Patricia R; Harms, Samara E; Noble, Jeffrey M
2008-06-01
This study examined the types of resources a suicidal person might find through search engines on the Internet. We were especially interested in determining the accessibility of potentially harmful resources, such as pro-suicide forums, as such resources have been implicated in completed suicides and are known to exist on the Web. Using 5 popular search engines (Google, Yahoo!, Ask.com, Lycos, and Dogpile) and 4 suicide-related search terms (suicide, how to commit suicide, suicide methods, and how to kill yourself), we collected quantitative and qualitative data about the search results. The searches were conducted in August and September 2006. Several co-raters assigned codes and characterizations to the first 30 Web sites per search term combination (and "sponsored links" on those pages), which were then confirmed by consensus ratings. Search results were classified as being pro-suicide, anti-suicide, suicide-neutral, not a suicide site, or error (i.e., page would not load). Additional information was collected to further characterize the nature of the information on these Web sites. Suicide-neutral and anti-suicide pages occurred most frequently (of 373 unique Web pages, 115 were coded as suicide-neutral, and 109 were anti-suicide). While pro-suicide resources were less frequent (41 Web pages), they were nonetheless easily accessible. Detailed how-to instructions for unusual and lethal suicide methods were likewise easily located through the searches. Mental health professionals should ask patients about their Internet use. Depressed, suicidal, or potentially suicidal patients who use the Internet may be especially at risk. Clinicians may wish to assist patients in locating helpful, supportive resources online so that patients' Internet use may be more therapeutic than harmful.
Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches
Muin, Michael; Fontelo, Paul
2006-01-01
Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allows instant feedback without reloading or refreshing the page, resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
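The backend pattern this abstract describes, server-side scripts calling NCBI E-Utilities and parsing the returned XML, can be sketched in Python. The ESearch endpoint and parameters below are the standard public E-Utilities ones; the sample XML response is invented and abbreviated, and no network call is made in this sketch.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Build an E-Utilities ESearch request URL, as a backend engine like
# PubMed Interact's PHP scripts might.
base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {"db": "pubmed", "term": "asthma AND children", "retmax": 20}
url = base + "?" + urlencode(params)

# Parse the kind of XML that ESearch returns (sample, abbreviated).
sample = """<eSearchResult>
  <Count>2</Count>
  <IdList><Id>123456</Id><Id>789012</Id></IdList>
</eSearchResult>"""
root = ET.fromstring(sample)
count = int(root.findtext("Count"))           # total matching citations
pmids = [e.text for e in root.find("IdList")]  # PMIDs to fetch next
```

A real application would follow up with an EFetch call for the PMIDs and render the abstracts client-side, which is the round trip Ajax avoids repeating on every interaction.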
Chan, Emily H; Sahai, Vikram; Conrad, Corrie; Brownstein, John S
2011-05-01
A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003-2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance.
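The core of the method above, a univariate linear model fit to search volume and validated by correlation, can be sketched with ordinary least squares. The weekly values below are invented for illustration; the actual study used country-specific Google query fractions and official case counts.

```python
import math

# Univariate least-squares fit: y = a*x + b, where x is the weekly fraction
# of dengue-related search volume and y is the official case count.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

# Pearson correlation, the validation statistic reported in the study.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x)
                    * sum((yi - my) ** 2 for yi in y))
    return num / den

# Invented weekly data: search fraction (x) vs reported dengue cases (y).
x = [0.10, 0.15, 0.22, 0.30, 0.41]
y = [120, 180, 260, 350, 480]
slope, intercept = fit_line(x, y)
r = pearson(x, y)
```

In practice the model is fit on a training window and the correlation is computed on held-out weeks, which is what the reported 0.82 to 0.99 range refers to.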
Citation searching: a systematic review case study of multiple risk behaviour interventions
2014-01-01
Background The value of citation searches as part of the systematic review process is currently unknown. While the major guides to conducting systematic reviews state that citation searching should be carried out in addition to searching bibliographic databases there are still few studies in the literature that support this view. Rather than using a predefined search strategy to retrieve studies, citation searching uses known relevant papers to identify further papers. Methods We describe a case study about the effectiveness of using the citation sources Google Scholar, Scopus, Web of Science and OVIDSP MEDLINE to identify records for inclusion in a systematic review. We used the 40 included studies identified by traditional database searches from one systematic review of interventions for multiple risk behaviours. We searched for each of the included studies in the four citation sources to retrieve the details of all papers that have cited these studies. We carried out two analyses; the first was to examine the overlap between the four citation sources to identify which citation tool was the most useful; the second was to investigate whether the citation searches identified any relevant records in addition to those retrieved by the original database searches. Results The highest number of citations was retrieved from Google Scholar (1680), followed by Scopus (1173), then Web of Science (1095) and lastly OVIDSP (213). To retrieve all the records identified by citation tracking, searching all four resources was required. Google Scholar identified the highest number of unique citations. The citation tracking identified 9 studies that met the review’s inclusion criteria. Eight of these had already been identified by the traditional database searches and identified in the screening process, while the ninth was not available in any of the databases when the original searches were carried out.
It would, however, have been identified by two of the database search strategies if searches had been carried out later. Conclusions Based on the results from this investigation, citation searching as a supplementary search method for systematic reviews may not be the best use of valuable time and resources. It would be useful to verify these findings in other reviews. PMID:24893958
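The overlap analysis described above, finding which source contributes records no other source retrieves, is essentially set algebra over citing-paper identifiers. The record IDs below are invented; the study worked with thousands of retrieved citations per source.

```python
# Which citation source contributes unique records? Compare retrieved
# citing-paper IDs across the four tools with set operations.
sources = {
    "Google Scholar": {"p1", "p2", "p3", "p4"},
    "Scopus": {"p1", "p2", "p5"},
    "Web of Science": {"p2", "p3"},
    "OVIDSP": {"p1"},
}

# Union: everything citation tracking found across all four tools.
all_records = set().union(*sources.values())

# Records each source found that no other source did.
unique = {
    name: ids - set().union(*(s for n, s in sources.items() if n != name))
    for name, ids in sources.items()
}
```

If any source's unique set is non-empty, dropping that source loses records, which is why the study concluded that retrieving everything required searching all four resources.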
A web access script language to support clinical application development.
O'Kane, K C; McColligan, E E
1998-02-01
This paper describes the development of a script language to support the implementation of decentralized, clinical information applications on the World Wide Web (Web). The goal of this work is to facilitate construction of low overhead, fully functional clinical information systems that can be accessed anywhere by low cost Web browsers to search, retrieve and analyze stored patient data. The Web provides a model of network access to databases on a global scale. Although it was originally conceived as a means to exchange scientific documents, Web browsers and servers currently support access to a wide variety of audio, video, graphical and text based data to a rapidly growing community. Access to these services is via inexpensive client software browsers that connect to servers by means of the open architecture of the Internet. In this paper, the design and implementation of a script language that supports the development of low cost, Web-based, distributed clinical information systems for both internet and intranet use is presented. The language is based on the Mumps language and, consequently, supports many legacy applications with few modifications. Several enhancements, however, have been made to support modern programming practices and the Web interface. The interpreter for the language also supports standalone program execution on Unix, MS-Windows, OS/2 and other operating systems.
Can people find patient decision aids on the Internet?
Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette
2008-12-01
To determine if people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines. We chose five medical conditions for which English language PtDAs were available from at least three different developers. The search engines used were: Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first page results linked to informational Web pages about the condition; only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). The searches using Yahoo and Google search engines were more likely to find PtDAs. The following combination of search terms: condition, treatment, decision (e.g. breast cancer surgery decision) was most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved with use of standardized labelling, providing patients with specific Web site addresses or access to an independent PtDA clearinghouse.
Web 2.0 for Health Promotion: Reviewing the Current Evidence
Prestin, Abby; Lyons, Claire
2013-01-01
As Web 2.0 and social media make the communication landscape increasingly participatory, empirical evidence is needed regarding their impact on and utility for health promotion. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we searched 4 medical and social science databases for literature (2004–present) on the intersection of Web 2.0 and health. A total of 514 unique publications matched our criteria. We classified references as commentaries and reviews (n = 267), descriptive studies (n = 213), and pilot intervention studies (n = 34). The scarcity of empirical evidence points to the need for more interventions with participatory and user-generated features. Innovative study designs and measurement methods are needed to understand the communication landscape and to critically assess intervention effectiveness. To address health disparities, interventions must consider accessibility for vulnerable populations. PMID:23153164
Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.
Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain
2016-01-14
Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multioption user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F(1,19)=37.3, P<.001), with a main effect of task (F(3,57)=6.3, P<.001). 
Further, participants completed the task significantly faster using the Web search interface (F(1,19)=18.0, P<.001). There was also a main effect of task (F(2,38)=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation.
Towards a semantic PACS: Using Semantic Web technology to represent imaging data.
Van Soest, Johan; Lustberg, Tim; Grittner, Detlef; Marshall, M Scott; Persoon, Lucas; Nijsten, Bas; Feltens, Peter; Dekker, Andre
2014-01-01
The DICOM standard is ubiquitous within medicine. However, improved DICOM semantics would significantly enhance search operations. Furthermore, the databases of current PACS systems are not flexible enough for the demands of image analysis research. In this paper, we investigated whether Semantic Web technology can be used to store and represent metadata of DICOM image files, and to link additional computational results to that metadata. Therefore, we developed a proof of concept containing two applications: one to store commonly used DICOM metadata in an RDF repository, and one to calculate imaging biomarkers based on DICOM images and store the biomarker values in an RDF repository. This enabled us to search for all patients with a gross tumor volume calculated to be larger than 50 cc. We have shown that we can successfully store the DICOM metadata in an RDF repository and are refining our proof of concept with regards to volume naming, value representation, and the applications themselves.
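The paper's example query, all patients whose gross tumor volume exceeds 50 cc, can be sketched over an in-memory list of RDF-style triples. The predicate name and patient identifiers below are invented; a real deployment would run a SPARQL query such as `SELECT ?patient WHERE { ?patient ex:hasGTVcc ?v . FILTER(?v > 50) }` against the RDF repository.

```python
# RDF triples as (subject, predicate, object) tuples: each patient's
# computed gross tumor volume (GTV) biomarker, in cc. Names are invented.
triples = [
    ("patient:001", "ex:hasGTVcc", 62.5),
    ("patient:002", "ex:hasGTVcc", 41.0),
    ("patient:003", "ex:hasGTVcc", 55.2),
]

# The FILTER step of the SPARQL query above, expressed directly in Python.
large_gtv = [s for s, p, o in triples if p == "ex:hasGTVcc" and o > 50]
```

The point of storing biomarker values as triples is exactly this kind of query: the repository can answer questions that a conventional PACS database schema was never designed for.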
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
Index Compression and Efficient Query Processing in Large Web Search Engines
ERIC Educational Resources Information Center
Ding, Shuai
2013-01-01
The inverted index is the main data structure used by all the major search engines. Search engines build an inverted index on their collection to speed up query processing. As the size of the web grows, the length of the inverted list structures, which can easily grow to hundreds of MBs or even GBs for common terms (roughly linear in the size of…
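The inverted index the abstract describes can be sketched as a toy: each term maps to a postings list of document IDs, and a conjunctive query intersects those lists. This is an illustrative sketch with invented documents; production engines compress the postings lists, which is the compression problem the dissertation studies.

```python
from collections import defaultdict

# Build a toy inverted index: term -> sorted postings list of doc IDs.
docs = {
    1: "web search engines build an inverted index",
    2: "the inverted index speeds up query processing",
    3: "query processing in large web search engines",
}
index = defaultdict(list)
for doc_id, text in docs.items():
    for term in dict.fromkeys(text.split()):  # unique terms, in order
        index[term].append(doc_id)

# Conjunctive (AND) query: intersect the postings lists of all terms.
def search(*terms):
    postings = [set(index[t]) for t in terms]
    return sorted(set.intersection(*postings))
```

For a common term the postings list grows roughly with the collection size, so real engines store gaps between doc IDs and compress them, trading decompression cost against index size at query time.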
ERIC Educational Resources Information Center
Levay, Paul; Ainsworth, Nicola; Kettle, Rachel; Morgan, Antony
2016-01-01
Aim: To examine how effectively forwards citation searching with Web of Science (WOS) or Google Scholar (GS) identified evidence to support public health guidance published by the National Institute for Health and Care Excellence. Method: Forwards citation searching was performed using GS on a base set of 46 publications and replicated using WOS.…
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.
Methodologies for Crawler Based Web Surveys.
ERIC Educational Resources Information Center
Thelwall, Mike
2002-01-01
Describes Web survey methodologies used to study the content of the Web, and discusses search engines and the concept of crawling the Web. Highlights include Web page selection methodologies; obstacles to reliable automatic indexing of Web sites; publicly indexable pages; crawling parameters; and tests for file duplication. (Contains 62…
The Charlie Sheen Effect on Rapid In-home Human Immunodeficiency Virus Test Sales.
Allem, Jon-Patrick; Leas, Eric C; Caputi, Theodore L; Dredze, Mark; Althouse, Benjamin M; Noar, Seth M; Ayers, John W
2017-07-01
One in eight of the 1.2 million Americans living with human immunodeficiency virus (HIV) are unaware of their positive status, and untested individuals are responsible for most new infections. As a result, testing is the most cost-effective HIV prevention strategy and must be accelerated when opportunities are presented. Web searches for HIV spiked around actor Charlie Sheen's HIV-positive disclosure. However, it is unknown whether Sheen's disclosure impacted offline behaviors like HIV testing. The goal of this study was to determine if Sheen's HIV disclosure was a record-setting HIV prevention event and determine if Web searches presage increases in testing allowing for rapid detection and reaction in the future. Sales of OraQuick rapid in-home HIV test kits in the USA were monitored weekly from April 12, 2014, to April 16, 2016, alongside Web searches including the terms "test," "tests," or "testing" and "HIV" as accessed from Google Trends. Changes in OraQuick sales around Sheen's disclosure and prediction models using Web searches were assessed. OraQuick sales rose 95% (95% CI, 75-117; p < 0.001) in the week of Sheen's disclosure and remained elevated for 4 more weeks (p < 0.05). In total, there were 8225 more sales than expected around Sheen's disclosure, surpassing World AIDS Day by a factor of about 7. Moreover, Web searches mirrored OraQuick sales trends (r = 0.79), demonstrating their ability to presage increases in testing. The "Charlie Sheen effect" represents an important opportunity for a public health response, and in the future, Web searches can be used to detect and act on more opportunities to foster prevention behaviors.
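The excess-sales logic behind figures like "8225 more sales than expected" can be sketched by comparing observed weekly sales with a pre-event baseline. The weekly numbers below are invented; the study fit a proper forecasting model to two years of sales data rather than a simple baseline mean.

```python
# Excess-sales estimate around a media event: compare observed weekly
# sales with the mean of a pre-event baseline window (numbers invented).
baseline = [100, 104, 98, 102, 96]       # weekly sales before the event
event_weeks = [195, 160, 140, 120, 110]  # weeks from the disclosure on

expected = sum(baseline) / len(baseline)           # counterfactual level
excess = sum(w - expected for w in event_weeks)    # total extra sales
pct_rise_week1 = 100 * (event_weeks[0] - expected) / expected
```

The same comparison, run against search-query time series instead of sales, is what lets near-real-time Web search data flag such events before sales figures arrive.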
BIOSMILE web search: a web application for annotating biomedical entities and relations.
Dai, Hong-Jie; Huang, Chi-Hsin; Lin, Ryan T K; Tsai, Richard Tzong-Han; Hsu, Wen-Lian
2008-07-01
BIOSMILE web search (BWS) is a web-based NCBI-PubMed search application that can analyze articles for selected biomedical verbs and give users relational information, such as subject, object, location, manner, and time. After receiving keyword query input, BWS retrieves matching PubMed abstracts and lists them along with snippets by order of relevancy to protein-protein interaction. Users can then select articles for further analysis, and BWS will find and mark up biomedical relations in the text. The analysis results can be viewed in the abstract text or in table form. To date, BWS has been field tested by over 30 biologists and questionnaires have shown that subjects are highly satisfied with its capabilities and usability. BWS is accessible free of charge at http://bioservices.cse.yzu.edu.tw/BWS.
Fazeli Dehkordy, Soudabeh; Carlos, Ruth C; Hall, Kelli S; Dalton, Vanessa K
2014-09-01
Millions of people use online search engines every day to find health-related information and voluntarily share their personal health status and behaviors on various Web sites. Thus, data from tracking online information seekers' behavior offer potential opportunities for use in public health surveillance and research. Google Trends is a feature of Google that allows Internet users to graph the frequency of searches for a single term or phrase over time or by geographic region. We used Google Trends to describe patterns of information-seeking behavior in the subject of dense breasts and to examine their correlation with the passage or introduction of dense breast notification legislation. To capture the temporal variations of information seeking about dense breasts, the Web search query "dense breast" was entered in the Google Trends tool. We then mapped the dates of legislative actions regarding dense breasts that received widespread coverage in the lay media to information-seeking trends about dense breasts over time. Newsworthy events and legislative actions appear to correlate well with peaks in search volume of "dense breast". Geographic regions with the highest search volumes have passed, denied, or are currently considering the dense breast legislation. Our study demonstrated that any legislative action and respective news coverage correlate with increases in information seeking for "dense breast" on Google, suggesting that Google Trends has the potential to serve as a data source for policy-relevant research. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles
Iwema, Carrie L.; LaDue, John; Zack, Angela; Chattopadhyay, Ansuman
2016-01-01
The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1) to make their research immediately and freely available and (2) to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers is gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as preprints are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint. PMID:27508060
Boyer, C; Baujard, V; Scherrer, J R
2001-01-01
Any new user to the Internet may think that retrieving a relevant document is an easy task, especially given the wealth of sources available on this medium, but this is not the case. Even experienced users have difficulty formulating the right query to make the most of a search tool and efficiently obtain an accurate result. The goal of this work is to reduce the time and energy necessary to search for and locate medical and health information. To reach this goal we have developed HONselect [1]. The aim of HONselect is not only to improve efficiency in retrieving documents but to respond to an increased need for obtaining a selection of relevant and accurate documents from a breadth of knowledge databases, including scientific bibliographical references, clinical trials, daily news, multimedia illustrations, conferences, forums, Web sites, clinical cases, and others. The authors based their approach on knowledge representation using the National Library of Medicine's Medical Subject Headings (NLM, MeSH) vocabulary and classification [2,3]. The innovation is to propose a multilingual "one-stop searching" (one Web interface to databases currently in English, French and German) with full navigational and connectivity capabilities. The user may choose, from a given selection of related terms, the one that best suits their search, navigate the term's hierarchical tree, and directly access a selection of documents from high-quality knowledge suppliers such as the MEDLINE database, the NLM's ClinicalTrials.gov server, the NewsPage's daily news, the HON media gallery, conference listings and MedHunt's Web sites [4, 5, 6, 7, 8, 9]. HONselect, developed by HON, a non-profit organisation [10], is a free, multilingual online tool based on the MeSH thesaurus to index, select, retrieve and display accurate, up-to-date, high-level and quality documents.
Sandhu, Maninder; Sureshkumar, V; Prakash, Chandra; Dixit, Rekha; Solanke, Amolkumar U; Sharma, Tilak Raj; Mohapatra, Trilochan; S V, Amitha Mithra
2017-09-30
Genome-wide microarrays have enabled the development of robust databases for functional genomics studies in rice. However, such databases do not directly cater to the needs of breeders. Here, we have attempted to develop a web interface which combines the information from functional genomic studies across different genetic backgrounds with DNA markers so that they can be readily deployed in crop improvement. In the current version of the database, we have included drought and salinity stress studies, since these two are the major abiotic stresses in rice. RiceMetaSys, a user-friendly and freely available web interface, provides comprehensive information on salt responsive genes (SRGs) and drought responsive genes (DRGs) across genotypes, crop development stages and tissues, identified from multiple microarray datasets. 'Physical position search' is an attractive tool for those using a QTL-based approach for dissecting tolerance to salt and drought stress, since it can provide the list of SRGs and DRGs in any physical interval. To identify robust candidate genes for use in crop improvement, the 'common genes across varieties' search tool is useful. Graphical visualization of expression profiles across genes and rice genotypes has been enabled to facilitate the user and to make comparisons more impactful. Simple Sequence Repeat (SSR) search in the SRGs and DRGs is a valuable tool for fine mapping and marker assisted selection, since it provides primers for surveys of polymorphism. An external link to intron-specific markers is also provided for this purpose. Bulk retrieval of data without any limit has been enabled for locus and SSR searches. The aim of this database is to provide users with simple and straightforward search options for identifying robust candidate genes from among thousands of SRGs and DRGs, so as to link variation in expression profiles to variation in phenotype. Database URL: http://14.139.229.201.
[Development of domain specific search engines].
Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T
2000-01-01
As cyberspace explodes at a pace that nobody had ever imagined, it has become very important to search it efficiently and effectively. One solution to this problem is search engines, and many commercial search engines are already on the market. However, these engines return results so cumbersome that domain experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response to chemical hazards. They have been available on our Web site for testing.
'Sciencenet'--towards a global search and share engine for all scientific knowledge.
Lütjohann, Dominic S; Shah, Asmi H; Christen, Michael P; Richter, Florian; Knese, Karsten; Liebel, Urban
2011-06-15
Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in the life sciences does not exist. We have developed a prototype distributed scientific search engine technology, 'Sciencenet', which facilitates rapid searching over this large data space. By 'bringing the search engine to the data', we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoffs. The free-to-use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the 'AskMe' experiment publisher is written in Python 2.7, and the backend 'YaCy' search engine is based on Java 1.6.
ICTNET at Web Track 2009 Diversity task
2009-11-01
performance. On the World Wide Web, there exist many documents which represent several implicit subtopics. We used commercial search engines to gather those...documents. In this task, our work can be divided into five steps. First, we collect documents returned by commercial search engines, and considered
Instruction for Web Searching: An Empirical Study.
ERIC Educational Resources Information Center
Colaric, Susan M.
2003-01-01
Discussion of problems that users have with Web searching focuses on a study of undergraduates that investigated three instructional methods (instruction by example, conceptual models without illustrations, and conceptual models with illustrations) to determine differences in knowledge acquisition related to three types of knowledge (declarative,…
Are Google or Yahoo a good portal for getting quality healthcare web information?
Chang, Polun; Hou, I-Ching; Hsu, Chiao-Ling; Lai, Hsiang-Fen
2006-01-01
We examined the ranks of 50 award-winning health websites in Taiwan against the search results of two popular portals for 6 common diseases. The results showed that the portal search results do not rank these quality web sites appropriately.
Research on the optimization strategy of web search engine based on data mining
NASA Astrophysics Data System (ADS)
Chen, Ronghua
2018-04-01
With the wide application of search engines, Web sites have become an important way for people to obtain information. However, Web information is growing in an increasingly explosive manner, and it has become very difficult for people to find the information they need; current search engines cannot meet this need, so there is an urgent need for Web sites to provide personalized information services, and data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on Web sites, a Web search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time they spend finding it. It has important practical value.
[Study on Information Extraction of Clinic Expert Information from Hospital Portals].
Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li
2015-12-01
Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to make a judgment on forms. This paper proposes a novel method based on a domain model, which is a tree structure constructed by the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experiment results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.
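The form-judgment step described above can be illustrated with a toy sketch: a search interface is assigned to a domain when enough of its field labels match that domain's attribute set. The attribute sets, threshold, and function names below are invented for illustration and are not the paper's actual domain model.

```python
# Hypothetical domain attribute sets (the paper's tree structure is
# flattened here into plain sets for brevity).
DOMAIN_ATTRIBUTES = {
    "clinic_expert": {"name", "department", "specialty", "title"},
    "flight": {"origin", "destination", "date", "class"},
}

def classify_form(field_labels, threshold=0.5):
    """Return the best-matching domain if enough labels match, else None."""
    labels = {label.lower() for label in field_labels}
    best, best_score = None, 0.0
    for domain, attrs in DOMAIN_ATTRIBUTES.items():
        score = len(labels & attrs) / len(attrs)  # fraction of attrs covered
        if score > best_score:
            best, best_score = domain, score
    return best if best_score >= threshold else None
```

Once a form is classified, domain keywords (e.g., common department names) can be filled into its fields to probe the deep web behind it.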
Age differences in search of web pages: the effects of link size, link number, and clutter.
Grahame, Michael; Laberge, Jason; Scialfa, Charles T
2004-01-01
Reaction time, eye movements, and errors were measured during visual search of Web pages to determine age-related differences in performance as a function of link size, link number, link location, and clutter. Participants (15 young adults, M = 23 years; 14 older adults, M = 57 years) searched Web pages for target links that varied from trial to trial. During one half of the trials, links were enlarged from 10-point to 12-point font. Target location was distributed among the left, center, and bottom portions of the screen. Clutter was manipulated according to the percentage of used space, including graphics and text, and the number of potentially distracting nontarget links was varied. Increased link size improved performance, whereas increased clutter and links hampered search, especially for older adults. Results also showed that links located in the left region of the page were found most easily. Actual or potential applications of this research include Web site design to increase usability, particularly for older adults.
EVpedia: a community web portal for extracellular vesicles research.
Kim, Dae-Kyum; Lee, Jaewook; Kim, Sae Rom; Choi, Dong-Sic; Yoon, Yae Jin; Kim, Ji Hyun; Go, Gyeongyun; Nhung, Dinh; Hong, Kahye; Jang, Su Chul; Kim, Si-Hyun; Park, Kyong-Su; Kim, Oh Youn; Park, Hyun Taek; Seo, Ji Hye; Aikawa, Elena; Baj-Krzyworzeka, Monika; van Balkom, Bas W M; Belting, Mattias; Blanc, Lionel; Bond, Vincent; Bongiovanni, Antonella; Borràs, Francesc E; Buée, Luc; Buzás, Edit I; Cheng, Lesley; Clayton, Aled; Cocucci, Emanuele; Dela Cruz, Charles S; Desiderio, Dominic M; Di Vizio, Dolores; Ekström, Karin; Falcon-Perez, Juan M; Gardiner, Chris; Giebel, Bernd; Greening, David W; Gross, Julia Christina; Gupta, Dwijendra; Hendrix, An; Hill, Andrew F; Hill, Michelle M; Nolte-'t Hoen, Esther; Hwang, Do Won; Inal, Jameel; Jagannadham, Medicharla V; Jayachandran, Muthuvel; Jee, Young-Koo; Jørgensen, Malene; Kim, Kwang Pyo; Kim, Yoon-Keun; Kislinger, Thomas; Lässer, Cecilia; Lee, Dong Soo; Lee, Hakmo; van Leeuwen, Johannes; Lener, Thomas; Liu, Ming-Lin; Lötvall, Jan; Marcilla, Antonio; Mathivanan, Suresh; Möller, Andreas; Morhayim, Jess; Mullier, François; Nazarenko, Irina; Nieuwland, Rienk; Nunes, Diana N; Pang, Ken; Park, Jaesung; Patel, Tushar; Pocsfalvi, Gabriella; Del Portillo, Hernando; Putz, Ulrich; Ramirez, Marcel I; Rodrigues, Marcio L; Roh, Tae-Young; Royo, Felix; Sahoo, Susmita; Schiffelers, Raymond; Sharma, Shivani; Siljander, Pia; Simpson, Richard J; Soekmadji, Carolina; Stahl, Philip; Stensballe, Allan; Stępień, Ewa; Tahara, Hidetoshi; Trummer, Arne; Valadi, Hadi; Vella, Laura J; Wai, Sun Nyunt; Witwer, Kenneth; Yáñez-Mó, María; Youn, Hyewon; Zeidler, Reinhard; Gho, Yong Song
2015-03-15
Extracellular vesicles (EVs) are spherical bilayered proteolipids, harboring various bioactive molecules. Due to the complexity of the vesicular nomenclatures and components, online searches for EV-related publications and vesicular components are currently challenging. We present an improved version of EVpedia, a public database for EVs research. This community web portal contains a database of publications and vesicular components, identification of orthologous vesicular components, bioinformatic tools and a personalized function. EVpedia includes 6879 publications, 172 080 vesicular components from 263 high-throughput datasets, and has been accessed more than 65 000 times from more than 750 cities. In addition, about 350 members from 73 international research groups have participated in developing EVpedia. This free web-based database might serve as a useful resource to stimulate the emerging field of EV research. The web site was implemented in PHP, Java, MySQL and Apache, and is freely available at http://evpedia.info. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965
Using Open Web APIs in Teaching Web Mining
ERIC Educational Resources Information Center
Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju
2009-01-01
With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…
Visualization of usability and functionality of a professional website through web-mining.
Jones, Josette F; Mahoui, Malika; Gopa, Venkata Devi Pragna
2007-10-11
Functional interface design requires understanding of the information system structure and the user. Web logs record user interactions with the interface, and thus provide some insight into user search behavior and the efficiency of the search process. The present study uses a data-mining approach with techniques such as association rules, clustering and classification to visualize the usability and functionality of a digital library through in-depth analyses of web logs.
Web page sorting algorithm based on query keyword distance relation
NASA Astrophysics Data System (ADS)
Yang, Han; Cui, Hong Gang; Tang, Hao
2017-08-01
In order to optimize page sorting, a query-keyword clustering idea is proposed based on the positional relationships among the search keywords within a web page, and it is converted into a degree of aggregation of the search keywords in the page. Based on the PageRank algorithm, the clustering-degree factor of the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relations between search keywords. The experimental results show the feasibility and effectiveness of the method.
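A minimal sketch of the idea: pages where the query keywords cluster tightly get a higher query-dependent factor, which is blended with the static PageRank score. The clustering measure and the linear blend below are illustrative assumptions; the abstract does not give the paper's exact formula.

```python
def aggregation_degree(text, keywords):
    """Score in [0, 1]: higher when query keywords occur close together
    in the page text (assumed clustering measure, not the paper's)."""
    tokens = text.lower().split()
    positions = [i for i, tok in enumerate(tokens) if tok in keywords]
    if not positions:
        return 0.0
    span = positions[-1] - positions[0]      # distance covered by the matches
    return len(positions) / (span + 1)       # denser clusters score higher

def combined_score(pagerank, text, keywords, alpha=0.5):
    """Blend static PageRank with the query-dependent clustering factor."""
    return alpha * pagerank + (1 - alpha) * aggregation_degree(text, keywords)
```

With this blend, two pages of equal PageRank are ranked by how tightly the query terms co-occur in their text.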
Going beyond Google for Faster and Smarter Web Searching
ERIC Educational Resources Information Center
Vine, Rita
2004-01-01
With more than 4 billion web pages in its database, Google is suitable for many different kinds of searches. When you know what you are looking for, Google can be a pretty good first choice, as long as you want to search a word pattern that can be expected to appear on any results pages. The problem starts when you don't know exactly what you're…
Cañada, Andres; Capella-Gutierrez, Salvador; Rabal, Obdulia; Oyarzabal, Julen; Valencia, Alfonso; Krallinger, Martin
2017-07-03
A considerable effort has been devoted to systematically retrieving information for genes and proteins as well as relationships between them. Despite the importance of chemical compounds and drugs as central bio-entities in pharmacological and biological research, only a limited number of freely available chemical text-mining/search engine technologies are currently accessible. Here we present LimTox (Literature Mining for Toxicology), a web-based online biomedical search tool with a special focus on adverse hepatobiliary reactions. It integrates a range of text mining, named entity recognition and information extraction components. LimTox relies on machine-learning, rule-based, pattern-based and term lookup strategies. This system processes scientific abstracts, a set of full text articles and medical agency assessment reports. Although the main focus of LimTox is on adverse liver events, it also enables basic searches for other organ-level toxicity associations (nephrotoxicity, cardiotoxicity, thyrotoxicity and phospholipidosis). This tool supports specialized search queries for: chemical compounds/drugs, genes (with additional emphasis on key enzymes in drug metabolism, namely the cytochrome P450s (CYPs)) and biochemical liver markers. The LimTox website is free and open to all users and there is no login requirement. LimTox can be accessed at: http://limtox.bioinfo.cnio.es. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Burden of neurological diseases in the US revealed by web searches.
Baeza-Yates, Ricardo; Sangal, Puneet Mohan; Villoslada, Pablo
2017-01-01
Analyzing the disease-related web searches of Internet users provides insight into the interests of the general population as well as the healthcare industry, which can be used to shape health care policies. We analyzed the searches related to neurological diseases and drugs used in neurology using the most popular search engines in the US, Google and Bing/Yahoo. We found that the most frequently searched diseases were common diseases such as dementia or Attention Deficit/Hyperactivity Disorder (ADHD), as well as medium-frequency diseases with high social impact such as Parkinson's disease, MS and ALS. The most frequently searched CNS drugs were generic drugs used for pain, followed by those for sleep disorders, dementia, ADHD, stroke and Parkinson's disease. Regarding the interests of the healthcare industry, ADHD, Alzheimer's disease, MS, ALS, meningitis, and hypersomnia received the highest advertising bids among neurological diseases, while painkillers and drugs for neuropathic pain, drugs for dementia or insomnia, and triptans had the highest advertising bidding prices. Web searches reflect the interests of people and the healthcare industry, based either on the frequency or the social impact of the disease.
ERIC Educational Resources Information Center
Jacso, Peter
2001-01-01
Describes indexes to Web resources that have been created by librarians to be more discriminating than the usual Web search engines, some of which are organized by standard classification systems. Includes indexes by solo librarians as well as by groups of librarians, some in public libraries and some in higher education. (LRW)
Valmaggia, Lucia R; Latif, Leila; Kempton, Matthew J; Rus-Calafell, Maria
2016-02-28
The aim of this paper is to provide a review of controlled studies of the use of Virtual Reality in psychological treatment (VRT). Medline, PsychInfo, Embase and Web of Science were searched. Only studies comparing immersive virtual reality to a control condition were included. The search resulted in 1180 articles published between 2012 and 2015, of these, 24 were controlled studies. The reviewed studies confirm the effectiveness of VRT compared to treatment as usual, and show similar effectiveness when VRT is compared to conventional treatments. Current developments and future research are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba
2013-02-01
Human musculoskeletal system resources (HMSR) are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to dynamically acquire accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to remotely access useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
Search 3.0: Present, Personal, Precise
NASA Astrophysics Data System (ADS)
Spivack, Nova
The next generation of Web search is already beginning to emerge. With it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Parallelizing Data-Centric Programs
2013-09-25
results than current techniques, such as ImageWebs [HGO+10], given the same budget of matches performed. 4.2 Scalable Parallel Similarity Search The work...algorithms. 5 Data-Driven Applications in the Cloud In this project, we investigated what happens when data-centric software is moved from expensive custom ...returns appropriate answer tuples. Figure 9 (b) shows the mutual constraint satisfaction that takes place in answering for 122. The intent is that
Plans for Follow-Up Observations of Kepler Planet Candidates
NASA Astrophysics Data System (ADS)
Gautier, Thomas N., III
2009-05-01
Ground based follow-up observations of transiting planet candidates identified by Kepler are pursued to identify false positives and to search for non-transiting planets in the systems of true transiting planets. I will describe the observational protocols developed by the Kepler team and the web based infrastructure we are using to support the observations. The current state of the Kepler follow-up observations will be reported.
NASA Astrophysics Data System (ADS)
Verdaasdonck, Rudolf M.; van Swol, Christiaan F. P.
1997-06-01
This proceeding gives a summary of the slides presented at the meeting; references are included for a detailed description of the research and clinical applications. An update of current research and clinical activities, along with links to other laser sites, can be found on the web page of the medical laser center, www.cv.ruu.nl/LaserCenter, where reprints can also be requested.
A Semantic Approach for Knowledge Discovery to Help Mitigate Habitat Loss in the Gulf of Mexico
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Graves, S.; Hardin, D.
2008-12-01
Noesis is a meta-search engine and a resource aggregator that uses domain ontologies to provide scoped search capabilities. Ontologies enable Noesis to help users refine their searches for information on the open web and in hidden web locations such as data catalogues with standardized but discipline-specific vocabularies. Through its ontologies Noesis provides a guided refinement of search queries which produces complete and accurate searches while reducing the user's burden to experiment with different search strings. All search results are organized by categories (e.g., all results from Google are grouped together) which may be selected or omitted according to the desires of the user. During the past two years ontologies were developed for sea grasses in the Gulf of Mexico and were used to support a habitat restoration demonstration project. Currently these ontologies are being augmented to address the special characteristics of mangroves. These new ontologies will extend the demonstration project to broader regions of the Gulf, including protected mangrove locations in coastal Mexico. Noesis contributes to the decision-making process by producing a comprehensive list of relevant resources based on the semantic information contained in the ontologies. Ontologies are organized in tree-like taxonomies, where the child nodes represent the Specializations and the parent nodes represent the Generalizations of a node or concept. Specializations can be used to provide a more detailed search, while generalizations are used to make the search broader. Ontologies are also used to link two syntactically different terms to one semantic concept (synonyms). Appending a synonym to the query expands the search, thus providing better search coverage. Every concept has a set of properties that are neither in the same inheritance hierarchy (Specializations / Generalizations) nor equivalent (synonyms).
These are called Related Concepts and they are captured in the ontology through property relationships. By using Related Concepts users can search for resources with respect to a particular property. Noesis automatically generates searches that include all of these capabilities, removing the burden from the user and producing broader and more accurate search results. This presentation will demonstrate the features of Noesis and describe its application to habitat studies in the Gulf of Mexico.
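The way synonyms broaden a query while specializations narrow it can be sketched as follows. The ontology fragment, function names, and query syntax are illustrative assumptions, not Noesis's actual implementation.

```python
# Invented ontology fragment in the spirit of the sea-grass ontologies
# described above.
ONTOLOGY = {
    "seagrass": {
        "synonyms": ["sea grass"],
        "specializations": ["turtle grass", "shoal grass"],
    },
}

def broaden(term):
    """Expand a query with synonyms so any surface form matches."""
    entry = ONTOLOGY.get(term, {})
    terms = [term] + entry.get("synonyms", [])
    return " OR ".join(f'"{t}"' for t in terms)

def narrow(term):
    """Offer the child concepts as more specific follow-up queries."""
    entry = ONTOLOGY.get(term, {})
    return [f'"{c}"' for c in entry.get("specializations", [])]
```

A meta-search engine would send the broadened string to each underlying engine, and present the narrowed queries as guided-refinement choices.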
The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion
Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B.; Bono, Christopher M.
2015-01-01
Study Design An Internet-based evaluation of Web sites regarding lumbar fusion. Objective The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods Using five search engines and three different search terms of varying complexity (“low back fusion,” “lumbar fusion,” and “lumbar arthrodesis”), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care. PMID:26933614
The Quality and Readability of Information Available on the Internet Regarding Lumbar Fusion.
Zhang, Dafang; Schumacher, Charles; Harris, Mitchel B; Bono, Christopher M
2016-03-01
Study Design An Internet-based evaluation of Web sites regarding lumbar fusion. Objective The Internet has become a major resource for patients; however, the quality and readability of Internet information regarding lumbar fusion is unclear. The objective of this study is to evaluate the quality and readability of Internet information regarding lumbar fusion and to determine whether these measures changed with Web site modality, complexity of the search term, or Health on the Net Code of Conduct certification. Methods Using five search engines and three different search terms of varying complexity ("low back fusion," "lumbar fusion," and "lumbar arthrodesis"), we identified and reviewed 153 unique Web site hits for information quality and readability. Web sites were specifically analyzed by search term and Web site modality. Information quality was evaluated on a 5-point scale. Information readability was assessed using the Flesch-Kincaid score for reading grade level. Results The average quality score was low. The average reading grade level was nearly six grade levels above that recommended by National Work Group on Literacy and Health. The quality and readability of Internet information was significantly dependent on Web site modality. The use of more complex search terms yielded information of higher reading grade level but not higher quality. Conclusions Higher-quality information about lumbar fusion conveyed using language that is more readable by the general public is needed on the Internet. It is important for health care providers to be aware of the information accessible to patients, as it likely influences their decision making regarding care.
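The Flesch-Kincaid reading grade level used in both versions of this study is a standard formula: grade = 0.39 x (words/sentences) + 11.8 x (syllables/words) - 15.59. A rough sketch follows, with a crude vowel-group syllable counter standing in for whatever tool the authors actually used.

```python
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (a crude common heuristic)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid reading grade level of a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Longer sentences and more polysyllabic words both push the grade level up, which is why dense medical prose scores several grades above recommended patient-education levels.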
A knowledge base for tracking the impact of genomics on population health.
Yu, Wei; Gwinn, Marta; Dotson, W David; Green, Ridgely Fisk; Clyne, Mindy; Wulf, Anja; Bowen, Scott; Kolor, Katherine; Khoury, Muin J
2016-12-01
We created an online knowledge base (the Public Health Genomics Knowledge Base (PHGKB)) to provide systematically curated and updated information that bridges population-based research on genomics with clinical and public health applications. Weekly horizon scanning of a wide variety of online resources is used to retrieve relevant scientific publications, guidelines, and commentaries. After curation by domain experts, links are deposited into Web-based databases. PHGKB currently consists of nine component databases. Users can search the entire knowledge base or search one or more component databases directly and choose options for customizing the display of their search results. PHGKB offers researchers, policy makers, practitioners, and the general public a way to find information they need to understand the complicated landscape of genomics and population health. Genet Med 18(12), 1312-1314.
Lawrence; Giles
1998-04-03
The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the "indexable Web," the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages.
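The lower bound reported here follows a capture-recapture argument: if two engines sample the Web roughly independently, the size of the indexable Web can be estimated from their individual coverages and their overlap. A sketch with illustrative numbers chosen only to reproduce the ~320 million figure (they are not the paper's actual per-engine counts):

```python
def estimate_web_size(n_a, n_b, n_overlap):
    """Capture-recapture estimate: if engine A indexes n_a pages, engine B
    indexes n_b, and n_overlap pages appear in both, the total population
    is roughly n_a * n_b / n_overlap (assuming independent sampling)."""
    if n_overlap == 0:
        raise ValueError("no overlap: estimate is unbounded")
    return n_a * n_b / n_overlap
```

Because engines do not crawl independently (popular pages are over-represented in every index), the overlap is inflated and the estimate is a lower bound on the true size.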
Lau, Annie Y S; Coiera, Enrico W
2008-01-22
The World Wide Web has increasingly become an important source of information in health care consumer decision making. However, little is known about whether searching online resources actually improves consumers' understanding of health issues. The aim was to study whether searching on the World Wide Web improves consumers' accuracy in answering health questions and whether consumers' understanding of health issues is subject to further change under social feedback. This was a pre/post prospective online study. A convenience sample of 227 undergraduate students was recruited from the population of the University of New South Wales. Subjects used a search engine that retrieved online documents from PubMed, MedlinePlus, and HealthInsite and answered a set of six questions (before and after use of the search engine) designed for health care consumers. They were then presented with feedback consisting of a summary of the post-search answers provided by previous subjects for the same questions and were asked to answer the questions again. There was an improvement in the percentage of correct answers after searching (pre-search 61.2% vs post-search 82.0%, P <.001) and after feedback with other subjects' answers (pre-feedback 82.0% vs post-feedback 85.3%, P =.051). The proportion of subjects with highly confident correct answers (ie, confident or very confident) and the proportion with highly confident incorrect answers significantly increased after searching (correct pre-search 61.6% vs correct post-search 95.5%, P <.001; incorrect pre-search 55.3% vs incorrect post-search 82.0%, P <.001). Subjects who were not as confident in their post-search answers were 28.5% more likely than those who were confident or very confident to change their answer after feedback with other subjects' post-search answers (χ²(1) = 66.65, P <.001). Searching across quality health information sources on the Web can improve consumers' accuracy in answering health questions. 
However, a consumer's confidence in an answer is not a good indicator of the answer being correct. Consumers who are not confident in their answers after searching are more likely to be influenced to change their views when provided with feedback from other consumers.
Russian-American health care: bridging the communication gap between physicians and patients.
Shpilko, Inna
2006-12-01
The objectives of this article are two-fold: (1) to gather in one place reliable information about Russian-Americans' past medical practices and their current outlook on health care and to provide health care professionals with an overview of the major afflictions suffered by this ethnic group; and (2) to educate Russian-speaking patients about the American health care system and social services geared towards immigrants by locating and evaluating free, culturally appropriate patient education Web sites available in Russian. In order to draw data on specific diseases and conditions affecting the Russian-speaking population, the author searched various scholarly health-related electronic databases. A number of well-established U.S. government consumer-health Web sites were searched to locate patient education resources that can be utilized by recent Russian immigrants. The author provides an overview of the major health problems encountered by the Russian-speaking population before emigration and potential health concerns for Russian immigrant communities. In addition, the author provides a scholarly exploration of patient education materials available in Russian. In this increasingly diverse society, physicians are faced with the challenge of providing culturally sensitive health care. Multicultural Web-based health resources can serve as a valuable tool for reducing communication barriers between patients and health care providers, thus improving the delivery of quality health care services. Recommendations for further research are indicated. The author offers recommendations for practitioners serving Russian-speaking immigrants. Suggestions on utilization of Web resources are also provided.
Web-based information on oral dysplasia and precancer of the mouth - Quality and readability.
Alsoghier, Abdullah; Ni Riordain, Richeal; Fedele, Stefano; Porter, Stephen
2018-07-01
The number of individuals with oral cancer is increasing. This cancer is preceded by oral epithelial dysplasia (OED). There remains no detailed study of the online information presently available for patients with OED or indeed what information such patients may require to be appropriately informed regarding their condition. Hence, the aim of the present study is to assess the patient-oriented web content with respect to OED. The first 100 websites yielded from nine searches performed using different search terms and engines were considered. These were assessed for content, quality (DISCERN instrument, Journal of the American Medical Association benchmarks, and Health on Net seal) and readability (Flesch Reading Ease Score and Flesch-Kincaid Grade Level). There was a general scarcity of OED content across the identified websites. Information about authors, sources used to compile the publication, treatment, and shared decision-making was limited or absent. Only 6% and 27% of the websites achieved all four JAMA benchmarks and the HON seal, respectively. The average readability level was at 10th grade (US schools), which far exceeds the recommended levels of written health information. At present, patients seeking information on OED are likely to have difficulty in finding reliable information from the Web about this disorder and its possible impact upon their life. Further work is thus required to develop a web-based resource regarding OED that addresses the shortfalls demonstrated by the current study. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
MEGANTE: A Web-Based System for Integrated Plant Genome Annotation
Numa, Hisataka; Itoh, Takeshi
2014-01-01
The recent advancement of high-throughput genome sequencing technologies has resulted in a considerable increase in demands for large-scale genome annotation. While annotation is a crucial step for downstream data analyses and experimental studies, this process requires substantial expertise and knowledge of bioinformatics. Here we present MEGANTE, a web-based annotation system that makes plant genome annotation easy for researchers unfamiliar with bioinformatics. Without any complicated configuration, users can perform genomic sequence annotations simply by uploading a sequence and selecting the species to query. MEGANTE automatically runs several analysis programs and integrates the results to select the appropriate consensus exon–intron structures and to predict open reading frames (ORFs) at each locus. Functional annotation, including a similarity search against known proteins and a functional domain search, are also performed for the predicted ORFs. The resultant annotation information is visualized with a widely used genome browser, GBrowse. For ease of analysis, the results can be downloaded in Microsoft Excel format. All of the query sequences and annotation results are stored on the server side so that users can access their own data from virtually anywhere on the web. The current release of MEGANTE targets 24 plant species from the Brassicaceae, Fabaceae, Musaceae, Poaceae, Salicaceae, Solanaceae, Rosaceae and Vitaceae families, and it allows users to submit a sequence up to 10 Mb in length and to save up to 100 sequences with the annotation information on the server. The MEGANTE web service is available at https://megante.dna.affrc.go.jp/. PMID:24253915
Getting Answers to Natural Language Questions on the Web.
ERIC Educational Resources Information Center
Radev, Dragomir R.; Libner, Kelsey; Fan, Weiguo
2002-01-01
Describes a study that investigated the use of natural language questions on Web search engines. Highlights include query languages; differences in search engine syntax; and results of logistic regression and analysis of variance that showed aspects of questions that predicted significantly different performances, including the number of words,…
ERIC Educational Resources Information Center
Varnhagen, Connie K.; McFall, G. Peggy; Figueredo, Lauren; Takach, Bonnie Sadler; Daniels, Jason; Cuthbertson, Heather
2008-01-01
Correct spelling is increasingly important in our technological world. We examined children's and adults' Web search behavior for easy and more difficult to spell target keywords. Grade 4 children and university students searched for the life cycle of the lemming (easy to spell target keyword) or the ptarmigan (difficult to spell target keyword).…
NASA Technical Reports Server (NTRS)
Albornoz, Caleb Ronald
2012-01-01
Billions of documents are stored and updated daily on the World Wide Web, yet most of this information is not organized efficiently enough to build knowledge from the stored data. Nowadays, search engines are mainly used by users who rely on their own skills to look for the information they need. This paper presents different techniques search engine users can apply in Google Search to improve the relevancy of search results. According to the Pew Research Center, the average person spends eight hours a month searching for the right information. For instance, a company that employs 1000 people wastes $2.5 million looking for nonexistent or unfound information. The cost is very high because decisions are made based on the information that is readily available to use. Whenever the information necessary to formulate an argument is not available or found, poor decisions may be made and mistakes become more likely. The survey also indicates that only 56% of Google users feel confident in their current search skills, and that just 76% of the information available on the Internet is accurate.
ERIC Educational Resources Information Center
Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.
2000-01-01
These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)
Reconsidering the Rhizome: A Textual Analysis of Web Search Engines as Gatekeepers of the Internet
NASA Astrophysics Data System (ADS)
Hess, A.
Critical theorists have often drawn from Deleuze and Guattari's notion of the rhizome when discussing the potential of the Internet. While the Internet may structurally appear as a rhizome, its day-to-day usage by millions via search engines precludes experiencing the random interconnectedness and potential democratizing function. Through a textual analysis of four search engines, I argue that Web searching has grown hierarchies, or "trees," that organize data in tracts of knowledge and place users in marketing niches rather than assist in the development of new knowledge.
NASA Astrophysics Data System (ADS)
Banerji, Anirban; Magarkar, Aniket
2012-09-01
We feel happy when web browsing operations provide us with necessary information; otherwise, we feel bitter. How can this happiness (or bitterness) be measured? How does the profile of happiness grow and decay during the course of web browsing? We propose a probabilistic framework that models the evolution of user satisfaction, on top of his/her continuous frustration at not finding the required information. It is found that the cumulative satisfaction profile of a web-searching individual can be modeled effectively as the sum of a random number of random terms, where each term is a mutually independent random variable originating from a ‘memoryless’ Poisson flow. The evolution of satisfaction over the entire time interval of a user’s browsing was modeled using auto-correlation analysis. A utilitarian marker, whose magnitude exceeding unity describes happy web-searching operations, and an empirical limit that connects the user’s satisfaction with his frustration level are also proposed. The presence of pertinent information on the very first page of a website and the magnitude of the decay parameter of user satisfaction (frustration, irritation, etc.) are found to be two key aspects that dominate the web user’s psychology. The proposed model employed different combinations of decay parameter, searching time and number of helpful websites. The obtained results are found to match the results from three real-life case studies.
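The "sum of a random number of random terms" that this abstract describes is a compound Poisson process. A minimal simulation sketch of such a cumulative satisfaction profile follows; the arrival rate, mean gain, and browsing horizon are illustrative assumptions, not values from the paper:

```python
import random

def compound_poisson_satisfaction(rate, mean_gain, horizon, rng):
    """Simulate cumulative satisfaction as a compound Poisson sum:
    helpful-page arrivals follow a memoryless Poisson flow with the
    given rate, and each arrival contributes an independent,
    exponentially distributed satisfaction increment."""
    t, total = 0.0, 0.0
    while True:
        t += rng.expovariate(rate)                  # memoryless inter-arrival time
        if t > horizon:
            break
        total += rng.expovariate(1.0 / mean_gain)   # random per-page gain
    return total

rng = random.Random(42)
# 1000 simulated browsing sessions with illustrative parameters.
profile = [compound_poisson_satisfaction(rate=0.5, mean_gain=2.0, horizon=60.0, rng=rng)
           for _ in range(1000)]
# Theoretical expected value of a compound Poisson sum: rate * horizon * mean_gain = 60.
mean_sat = sum(profile) / len(profile)
```

The sample mean should approach the analytic expectation λ·T·g (here 0.5 × 60 × 2 = 60) as the number of simulated sessions grows.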
Meeting Reference Responsibilities through Library Web Sites.
ERIC Educational Resources Information Center
Adams, Michael
2001-01-01
Discusses library Web sites and explains some of the benefits when libraries make their sites into reference portals, linking them to other useful Web sites. Topics include print versus Web information sources; limitations of search engines; what Web sites to include, including criteria for inclusions; and organizing the sites. (LRW)
ACHP | Web Site Privacy Policy
Extracting Macroscopic Information from Web Links.
ERIC Educational Resources Information Center
Thelwall, Mike
2001-01-01
Discussion of Web-based link analysis focuses on an evaluation of Ingversen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…
Collier, James H; Lesk, Arthur M; Garcia de la Banda, Maria; Konagurthu, Arun S
2012-07-01
Searching for well-fitting 3D oligopeptide fragments within a large collection of protein structures is an important task central to many analyses involving protein structures. This article reports a new web server, Super, dedicated to the task of rapidly screening the protein data bank (PDB) to identify all fragments that superpose with a query under a prespecified threshold of root-mean-square deviation (RMSD). Super relies on efficiently computing a mathematical bound on the commonly used structural similarity measure, RMSD of superposition. This allows the server to filter out a large proportion of fragments that are unrelated to the query; >99% of the total number of fragments in some cases. For a typical query, Super scans the current PDB containing over 80,500 structures (with ∼40 million potential oligopeptide fragments to match) in under a minute. Super web server is freely accessible from: http://lcb.infotech.monash.edu.au/super.
Are we there yet? An examination of online tailored health communication.
Suggs, L Suzanne; McIntyre, Chris
2009-04-01
Increasingly, the Internet is playing an important role in consumer health and patient-provider communication. Seventy-three percent of American adults are now online, and 79% have searched for health information on the Internet. This study provides a baseline understanding of the extent to which health consumers are able to find tailored communication online. It describes the current behavioral focus, the channels being used to deliver the tailored content, and the level of tailoring in online-tailored communication. A content analysis of 497 health Web sites found few examples of personalized, targeted, or tailored health sites freely available online. Tailored content was provided in 13 Web sites, although 15 collected individual data. More health risk assessment (HRA) sites included tailored feedback than other topics. The patterns that emerged from the analysis demonstrate that online health users can access a number of Web sites with communication tailored to their needs.
SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services.
Gessler, Damian D G; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T
2009-09-23
SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). 
SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, to allow discovery servers to offer semantically rich search engines, to allow clients to discover and invoke those resources, and to allow providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs.
Journal searching in non-MEDLINE resources on Internet Web sites.
Lingle, V A
1997-01-01
Internet access to the medical journal literature is absorbing the attention of all relevant parties, i.e., publishers, journal vendors, librarians, commercial providers, government agencies, and end users. Journal content on the Web sites spans the range from advertising and ordering information for the print version, to table of contents and abstracts, to downloadable full text and graphics of articles. The searching parameters for systems other than MEDLINE also differ extensively with a wide variety of features and resulting retrieval. This discussion reviews a selection of providers of medical information (particularly the journal literature) on the Internet, making a comparison of what is available on Web sites and how it can be searched.
Chan, Emily H.; Sahai, Vikram; Conrad, Corrie; Brownstein, John S.
2011-01-01
Background: A variety of obstacles including bureaucracy and lack of resources have interfered with timely detection and reporting of dengue cases in many endemic countries. Surveillance efforts have turned to modern data sources, such as Internet search queries, which have been shown to be effective for monitoring influenza-like illnesses. However, few have evaluated the utility of web search query data for other diseases, especially those of high morbidity and mortality or where a vaccine may not exist. In this study, we aimed to assess whether web search queries are a viable data source for the early detection and monitoring of dengue epidemics. Methodology/Principal Findings: Bolivia, Brazil, India, Indonesia and Singapore were chosen for analysis based on available data and adequate search volume. For each country, a univariate linear model was then built by fitting a time series of the fraction of Google search query volume for specific dengue-related queries from that country against a time series of official dengue case counts for a time-frame within 2003–2010. The specific combination of queries used was chosen to maximize model fit. Spurious spikes in the data were also removed prior to model fitting. The final models, fit using a training subset of the data, were cross-validated against both the overall dataset and a holdout subset of the data. All models were found to fit the data quite well, with validation correlations ranging from 0.82 to 0.99. Conclusions/Significance: Web search query data were found to be capable of tracking dengue activity in Bolivia, Brazil, India, Indonesia and Singapore. Whereas traditional dengue data from official sources are often not available until after some substantial delay, web search query data are available in near real-time. These data represent a valuable complement to traditional dengue surveillance. PMID:21647308
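A per-country model of the kind described reduces to an ordinary least-squares fit of case counts against query volume, validated by correlation on held-out weeks. A sketch with synthetic data; all numbers here are illustrative assumptions, whereas the real inputs would be Google query fractions and official dengue counts:

```python
import random

def fit_line(x, y):
    """Ordinary least-squares fit y ≈ a + b*x for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def pearson(u, v):
    """Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((c - mv) ** 2 for c in v) ** 0.5
    return cov / (su * sv)

rng = random.Random(7)
# Synthetic weekly data: case counts that roughly track query volume plus noise.
query_volume = [rng.uniform(0.1, 1.0) for _ in range(52)]
case_counts = [30 + 200 * q + rng.gauss(0, 5) for q in query_volume]

# Fit on a training subset of weeks, then validate on the holdout weeks.
a, b = fit_line(query_volume[:40], case_counts[:40])
predicted = [a + b * q for q in query_volume[40:]]
validation_r = pearson(case_counts[40:], predicted)
```

With strongly coupled synthetic series like these, the holdout correlation sits near the top of the 0.82–0.99 range the study reports for real data.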
What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.
Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W
2015-06-01
Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
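The reranking step described above can be sketched as blending each candidate page's text-retrieval score with a visual score derived from its images; the fixed blend weight and the toy candidate set below are assumptions for illustration only, not the paper's scoring function:

```python
def rerank(candidates, weight=0.3):
    """Rerank text-retrieved pages by blending in visual evidence:
    final = (1 - weight) * text_score + weight * visual_score."""
    scored = [((1 - weight) * c["text_score"] + weight * c["visual_score"], c["url"])
              for c in candidates]
    return [url for _, url in sorted(scored, reverse=True)]

# Hypothetical candidate set: page b.html has a weaker text match,
# but its images fit the query well.
candidates = [
    {"url": "a.html", "text_score": 0.80, "visual_score": 0.10},
    {"url": "b.html", "text_score": 0.75, "visual_score": 0.90},
]
ranking = rerank(candidates)  # b.html overtakes a.html once visual evidence is blended in
```

Because only the candidate set is rescored, this keeps the text engine's efficiency: the visual features are consulted for a handful of pages per query, not the whole index.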
Ertl, Peter; Patiny, Luc; Sander, Thomas; Rufener, Christian; Zasso, Michaël
2015-01-01
Wikipedia, the world's largest and most popular encyclopedia, is an indispensable source of chemistry information. Among other things, it contains entries for over 15,000 chemicals, including metabolites, drugs, agrochemicals and industrial chemicals. To provide easy access to this wealth of information, we decided to develop a substructure and similarity search tool for chemical structures referenced in Wikipedia. We extracted chemical structures from entries in Wikipedia and implemented a web system allowing structure and similarity searching on these data. The whole search and visualization system is written in JavaScript and can therefore run locally within a web page without requiring a central server. The Wikipedia Chemical Structure Explorer is accessible online at www.cheminfo.org/wikipedia and is also available as an open source project from GitHub for local installation. The web-based Wikipedia Chemical Structure Explorer provides a useful resource for research as well as for chemical education, enabling both researchers and students easy and user-friendly chemistry searching and identification of relevant information in Wikipedia. The tool can also help to improve the quality of chemical entries in Wikipedia by providing potential contributors with a regularly updated list of entries with problematic structures. Last but not least, this search system is a nice example of how modern web technology can be applied in the field of cheminformatics. Graphical abstract: Wikipedia Chemical Structure Explorer allows substructure and similarity searches on molecules referenced in Wikipedia.
Current knowledge and trends in age-related macular degeneration: today's and future treatments.
Velez-Montoya, Raul; Oliver, Scott C N; Olson, Jeffrey L; Fine, Stuart L; Mandava, Naresh; Quiroz-Mercado, Hugo
2013-09-01
To address the most dynamic and current issues concerning today's treatment options and promising research efforts regarding treatment for age-related macular degeneration. This review is intended to serve as a practical reference for more in-depth reviews on the subject. An online review of the PubMed and Ovid databases was performed, searching for the key words age-related macular degeneration, AMD, VEGF, treatment, PDT, steroids, bevacizumab, ranibizumab, VEGF-trap, radiation, combined therapy, as well as their compound phrases. The search was limited to articles published since 1985. All returned articles were carefully screened, and their references were manually reviewed for additional relevant data. The web page www.clinicaltrials.gov was also accessed in search of relevant research trials. A total of 363 articles were reviewed, including 64 additional articles extracted from the references. In the end, only 160 references were included in this review. Treatment for age-related macular degeneration is a very dynamic research field. While current treatments are mainly aimed at blocking vascular endothelial growth factor, future treatments seek to prevent vision loss due to scarring. Promising efforts have been made to address the dry form of the disease, which has lacked effective treatment.
ERIC Educational Resources Information Center
Porter, Brandi
2009-01-01
Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…
ERIC Educational Resources Information Center
Hwang, Gwo-Jen; Kuo, Fan-Ray
2015-01-01
Web-based problem-solving, a compound ability of critical thinking, creative thinking, reasoning thinking and information-searching abilities, has been recognised as an important competence for elementary school students. Some researchers have reported the possible correlations between problem-solving competence and information searching ability;…
Analysis of Scifinder Scholar and Web of Science Citation Searches.
ERIC Educational Resources Information Center
Whitley, Katherine M.
2002-01-01
With "Chemical Abstracts" and "Science Citation Index" both now available for citation searching, this study compares the duplication and uniqueness of citing references for works of chemistry researchers for the years 1999-2001. The two indexes cover very similar source material. This analysis of SciFinder Scholar and Web of…
Undergraduate Students Searching and Reading Web Sources for Writing
ERIC Educational Resources Information Center
Li, Yongyan
2012-01-01
With the Internet-evoked paradigm shift in the academy, there has been a growing interest in students' Web-based information-seeking and source-use practices. Nevertheless, little is known as to how individual students go about searching for sources online and selecting source material for writing particular assignments. This exploratory study…
A Smart Itsy Bitsy Spider for the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Chung, Yi-Ming; Ramsey, Marshall; Yang, Christopher C.
1998-01-01
This study tested two Web personal spiders (i.e., agents that take users' requests and perform real-time customized searches) based on best first-search and genetic-algorithm techniques. Both results were comparable and complementary, although the genetic algorithm obtained higher recall value. The Java-based interface was found to be necessary…
FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?
ERIC Educational Resources Information Center
Koehler, Wallace; Mincey, Danielle
1996-01-01
Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)
Differences and Similarities in Information Seeking: Children and Adults as Web Users.
ERIC Educational Resources Information Center
Bilal, Dania; Kirby, Joe
2002-01-01
Analyzed and compared the success and information seeking behaviors of seventh grade science students and graduate students in using the Yahooligans! Web search engine. Discusses cognitive, affective, and physical behaviors during a fact-finding task, including searching, browsing, and time to complete the task; navigational styles; and focus on…
Ocean Drilling Program: Web Site Access Statistics
ERIC Educational Resources Information Center
Callery, Anne
The Internet has the potential to be the ultimate information resource, but it needs to be organized in order to be useful. This paper discusses how the subject guide, "Yahoo!" is different from most web search engines, and how best to search for information on Yahoo! The strength in Yahoo! lies in the subject hierarchy. Advantages to…
Intelligent web image retrieval system
NASA Astrophysics Data System (ADS)
Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook
2001-07-01
Recently, web sites such as e-business and shopping mall sites deal with large amounts of image information. To find a specific image from these image sources, we usually use web search engines or image database engines, which rely on keyword-only retrievals or color-based retrievals with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, the texture- and color-based image classification and indexing techniques, and representation schemes of user usage patterns. The query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of users' preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, some experimental results showing recall and precision are also explained.
The Number of Scholarly Documents on the Public Web
Khabsa, Madian; Giles, C. Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
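The capture/recapture method mentioned above is, in its simplest form, the Lincoln-Petersen estimator: if one engine covers n1 documents, a second covers n2, and m documents appear in both samples, the population is estimated as n1·n2/m. A toy calculation with illustrative counts (not the paper's actual data):

```python
def lincoln_petersen(n1, n2, overlap):
    """Capture/recapture population estimate N ≈ n1 * n2 / m, assuming the
    two 'captures' (here, two search engines) sample independently."""
    if overlap == 0:
        raise ValueError("no overlap between samples: estimate is undefined")
    return n1 * n2 / overlap

# Illustrative: engine A indexes 100M documents, engine B 50M, with 44M shared.
estimate = lincoln_petersen(100e6, 50e6, 44e6)
# With these made-up counts the estimate comes to about 113.6 million documents.
```

The estimator's independence assumption is a known limitation here, since academic engines crawl overlapping sources rather than sampling at random.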
Computer applications in the search for unrelated stem cell donors.
Müller, Carlheinz R
2002-08-01
The majority of patients who are eligible for a blood stem cell transplantation from an allogeneic donor do not have a suitable related donor, so an efficient unrelated donor search is a prerequisite for this treatment. Currently, there are over 7 million volunteer donors in the files of 50 registries in the world, and in most countries the majority of transplants are performed from a foreign donor. Evidently, computer and communication technology must play a crucial role in the complex donor search process on the national and international level. This article describes the structural elements of the donor search process and discusses major systematic and technical issues to be addressed in the development and evolution of the supporting telematic systems. The theoretical considerations are complemented by a concise overview of the current state of the art, given by describing the scope, relevance, interconnection and technical background of three major national and international computer applications: the German Marrow Donor Information System (GERMIS) and the European Marrow Donor Information System (EMDIS) are interoperable business-to-business e-commerce systems, and Bone Marrow Donors World Wide (BMDW) is the basic international donor information desk on the web.
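The matchmaking at the core of a donor search can be sketched very roughly as counting shared HLA alleles and ranking registry donors. Everything below (the three loci, the allele names, the 6/6 scoring) is a deliberately simplified illustration; real systems such as GERMIS, EMDIS, and BMDW use far richer typing, ambiguity resolution, and match-probability models.

```python
# Toy sketch: count matching HLA alleles at the A, B, and DRB1 loci.
from typing import Dict, List, Tuple

Typing = Dict[str, Tuple[str, str]]  # locus -> pair of alleles

def match_score(patient: Typing, donor: Typing) -> int:
    score = 0
    for locus in ("A", "B", "DRB1"):
        donor_alleles = list(donor[locus])
        for allele in patient[locus]:
            if allele in donor_alleles:
                donor_alleles.remove(allele)  # each allele matches at most once
                score += 1
    return score  # 6/6 is a full match on these three loci

def rank_donors(patient: Typing, registry: Dict[str, Typing]) -> List[Tuple[str, int]]:
    """Registry donors sorted from best to worst match."""
    return sorted(((donor_id, match_score(patient, t)) for donor_id, t in registry.items()),
                  key=lambda x: -x[1])

patient = {"A": ("A*01", "A*02"), "B": ("B*07", "B*08"), "DRB1": ("DRB1*03", "DRB1*15")}
registry = {
    "D1": {"A": ("A*01", "A*02"), "B": ("B*07", "B*08"), "DRB1": ("DRB1*03", "DRB1*15")},
    "D2": {"A": ("A*01", "A*03"), "B": ("B*07", "B*44"), "DRB1": ("DRB1*03", "DRB1*04")},
}
print(rank_donors(patient, registry))  # D1 scores 6/6, D2 scores 3/6
```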
Results from a Web Impact Factor Crawler.
ERIC Educational Resources Information Center
Thelwall, Mike
2001-01-01
Discusses Web impact factors (WIFs), Web versions of the impact factors for journals, and how they can be calculated by using search engines. Highlights include HTML and document indexing; Web page links; a Web crawler designed for calculating WIFs; and WIFs for United Kingdom universities that measured research profiles or capability. (Author/LRW)
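A Web impact factor is, broadly, a count of pages linking into a site divided by the number of pages the site hosts, with both counts obtained from search-engine or crawler queries. The following minimal sketch assumes that simple definition; the paper discusses several variants.

```python
def web_impact_factor(external_inlinks: int, site_pages: int) -> float:
    """Simple WIF: pages linking into a site / pages the site hosts.

    Both counts would come from crawler or search-engine link queries.
    """
    if site_pages == 0:
        raise ValueError("site has no pages")
    return external_inlinks / site_pages

# Hypothetical counts for one university site:
print(web_impact_factor(12400, 3100))  # 4.0
```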
RUC at TREC 2014: Select Resources Using Topic Models
2014-11-01
federated search techniques in a realistic Web setting with a large number of online Web search services. This year the track contains three tasks...
Quality of Web-based information on obsessive compulsive disorder.
Klila, Hedi; Chatton, Anne; Zermatten, Ariane; Khan, Riaz; Preisig, Martin; Khazaal, Yasser
2013-01-01
The Internet is increasingly used as a source of information for mental health issues. The burden of obsessive compulsive disorder (OCD) may lead persons with diagnosed or undiagnosed OCD, and their relatives, to search for good quality information on the Web. This study aimed to evaluate the quality of Web-based information on English-language sites dealing with OCD and to compare the quality of websites found through a general and a medically specialized search engine. Keywords related to OCD were entered into Google and OmniMedicalSearch. Websites were assessed on the basis of accountability, interactivity, readability, and content quality. The "Health on the Net" (HON) quality label and the Brief DISCERN scale score were used as possible content quality indicators. Of the 235 links identified, 53 websites were analyzed. The content quality of the OCD websites examined was relatively good. The use of a specialized search engine did not offer an advantage in finding websites with better content quality. A score ≥16 on the Brief DISCERN scale is associated with better content quality. This study shows the acceptability of the content quality of OCD websites. There is no advantage in searching for information with a specialized search engine rather than a general one. The Internet offers a number of high quality OCD websites. It remains critical, however, to have a provider-patient talk about the information found on the Web.
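The Brief DISCERN cutoff mentioned above can be shown as a tiny scorer. The six-item, 1-to-5 scoring used here is an assumption of this sketch (only the >=16 threshold comes from the abstract).

```python
def brief_discern_total(item_scores):
    """Total a set of Brief DISCERN item ratings and flag content quality.

    Assumes six items each rated 1-5; the >=16 cutoff is the one the
    study associates with better content quality.
    """
    if any(not 1 <= s <= 5 for s in item_scores):
        raise ValueError("each item is rated 1-5")
    total = sum(item_scores)
    return total, total >= 16

print(brief_discern_total([3, 3, 2, 4, 3, 3]))  # (18, True)
```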
Annotating images by mining image search results.
Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying
2008-11-01
Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text are collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework where a query keyword is provided along with the uncaptioned image to improve both the effectiveness and efficiency. This is helpful when the collected data set is not dense everywhere. In this sense, our approach contains three steps: 1) the search process to discover visually and semantically similar search results, 2) the mining process to identify salient terms from textual descriptions of the search results, and 3) the annotation rejection process to filter out noisy terms yielded by Step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes, and the other is to implement it as a distributed system, of which the search and mining processes are provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotating with unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation. We provide experimental results to prove this.
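The "mining" step (step 2) can be sketched generically: score terms by their frequency among the visually similar neighbors, discounted by how common they are across the whole collection. This is a tf-idf-style stand-in, not the paper's actual saliency measure, and the texts below are invented.

```python
import math
from collections import Counter

def salient_terms(neighbor_texts, collection_texts, top_k=3):
    """Rank terms that are frequent among neighbors but rare overall."""
    doc_freq = Counter()
    for text in collection_texts:
        doc_freq.update(set(text.split()))
    n_docs = len(collection_texts)
    tf = Counter(w for text in neighbor_texts for w in text.split())
    scores = {w: c * math.log(n_docs / (1 + doc_freq[w])) for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]]

collection = ["sunset beach sea", "city street night", "beach sand sea", "mountain snow"]
neighbors = ["sunset beach sea", "beach sand sea"]  # "search results" for the query image
print(salient_terms(neighbors, collection))
```

An annotation-rejection step (step 3) would then drop low-scoring or query-inconsistent terms from this list.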
The HMMER Web Server for Protein Sequence Similarity Search.
Prakash, Ananth; Jeffryes, Matt; Bateman, Alex; Finn, Robert D
2017-12-08
Protein sequence similarity search is one of the most commonly used bioinformatics methods for identifying evolutionarily related proteins. In general, sequences that are evolutionarily related share some degree of similarity, and sequence-search algorithms use this principle to identify homologs. The requirement for a fast and sensitive sequence search method led to the development of the HMMER software, which in the latest version (v3.1) uses a combination of sophisticated acceleration heuristics and mathematical and computational optimizations to enable the use of profile hidden Markov models (HMMs) for sequence analysis. The HMMER Web server provides a common platform by linking the HMMER algorithms to databases, thereby enabling the search for homologs, as well as providing sequence and functional annotation by linking external databases. This unit describes three basic protocols and two alternate protocols that explain how to use the HMMER Web server using various input formats and user defined parameters. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.
Zhang, Lu; Du, Hongru; Zhao, Yannan; Wu, Rongwei; Zhang, Xiaolei
2017-01-01
"The Belt and Road" initiative has been expected to facilitate interactions among numerous city centers. This initiative would generate a number of centers, both economic and political, which would facilitate greater interaction. To explore how information flows are merged and the specific opportunities that may be offered, Chinese cities along "the Belt and Road" are selected for a case study. Furthermore, urban networks in cyberspace have been characterized by their infrastructure orientation, which implies that there is a relative dearth of studies focusing on the investigation of urban hierarchies by capturing information flows between Chinese cities along "the Belt and Road". This paper employs Baidu, the main web search engine in China, to examine urban hierarchies. The results show that urban networks become more balanced, shifting from a polycentric to a homogenized pattern. Furthermore, cities in networks tend to have both a hierarchical system and a spatial concentration primarily in regions such as Beijing-Tianjin-Hebei, Yangtze River Delta and the Pearl River Delta region. Urban hierarchy based on web search activity does not follow the existing hierarchical system based on geospatial and economic development in all cases. Moreover, urban networks, under the framework of "the Belt and Road", show several significant corridors and more opportunities for more cities, particularly western cities. Furthermore, factors that may influence web search activity are explored. The results show that web search activity is significantly influenced by the economic gap, geographical proximity and administrative rank of the city.
Dehkordy, Soudabeh Fazeli; Carlos, Ruth C.; Hall, Kelli S.; Dalton, Vanessa K.
2015-01-01
Rationale and Objectives Millions of people use online search engines every day to find health-related information and voluntarily share their personal health status and behaviors on various Web sites. Thus, data from tracking online information seekers' behavior offer potential opportunities for use in public health surveillance and research. Google Trends is a feature of Google that allows Internet users to graph the frequency of searches for a single term or phrase over time or by geographic region. We used Google Trends to describe patterns of information-seeking behavior on the subject of dense breasts and to examine their correlation with the passage or introduction of dense breast notification legislation. Materials and Methods In order to capture the temporal variations of information seeking about dense breasts, the web search query "dense breast" was entered in the Google Trends tool. We then mapped the dates of legislative actions regarding dense breasts that received widespread coverage in the lay media to information seeking trends about dense breasts over time. Results Newsworthy events and legislative actions appear to correlate well with peaks in search volume of "dense breast". Geographic regions with the highest search volumes have either passed, denied, or are currently considering the dense breast legislation. Conclusions Our study demonstrated that legislative actions and the respective news coverage correlate with increases in information seeking for "dense breast" on Google, suggesting that Google Trends has the potential to serve as a data source for policy-relevant research. PMID:24998689
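The mapping of event dates onto search-volume peaks can be sketched with plain Python: flag the weeks whose volume spikes above the series mean, then check which event weeks fall near a spike. The volumes and event weeks below are invented for the example; the study worked with real Google Trends data.

```python
def weeks_with_spikes(volumes, threshold_ratio=1.5):
    """Indices of weeks whose volume exceeds threshold_ratio x the mean."""
    mean = sum(volumes) / len(volumes)
    return [i for i, v in enumerate(volumes) if v > threshold_ratio * mean]

def events_near_spikes(event_weeks, spike_weeks, window=1):
    """Events that fall within `window` weeks of some volume spike."""
    return [e for e in event_weeks
            if any(abs(e - s) <= window for s in spike_weeks)]

volumes = [10, 12, 11, 40, 13, 12, 35, 11, 10, 12]   # weekly search volume
spikes = weeks_with_spikes(volumes)                  # weeks 3 and 6
print(events_near_spikes([3, 7, 9], spikes))         # [3, 7]
```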
Optimizing Earth Data Search Ranking using Deep Learning and Real-time User Behaviour
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; McGibbney, L. J.; Greguska, F. R., III
2017-12-01
Finding Earth science data has been a challenging problem given both the quantity of data available and the heterogeneity of the data across a wide variety of domains. Current search engines in most geospatial data portals tend to induce end users to focus on one single data characteristic dimension (e.g., term frequency-inverse document frequency (TF-IDF) score, popularity, release date, etc.). This approach largely fails to take account of users' multidimensional preferences for geospatial data, and hence may likely result in a less than optimal user experience in discovering the most applicable dataset out of a vast range of available datasets. With users interacting with search engines, sufficient information is already hidden in the log files. Compared with explicit feedback data, information that can be derived/extracted from log files is virtually free and substantially more timely. In this dissertation, I propose an online deep learning framework that can quickly update the learning function based on real-time user clickstream data. The contributions of this framework include 1) a log processor that can ingest, process and create training data from web logs in a real-time manner; 2) a query understanding module to better interpret users' search intent using web log processing results and metadata; 3) a feature extractor that identifies ranking features representing users' multidimensional interests of geospatial data; and 4) a deep learning based ranking algorithm that can be trained incrementally using user behavior data. The search ranking results will be evaluated using precision at K and normalized discounted cumulative gain (NDCG).
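The two evaluation metrics named at the end, precision at K and NDCG, can be written out directly. This is the standard graded-relevance formulation, not code from the dissertation, and the relevance labels below are hypothetical.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the ideal (descending-relevance) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def precision_at_k(relevances, k, threshold=1):
    """Fraction of the top-k results whose relevance meets the threshold."""
    return sum(rel >= threshold for rel in relevances[:k]) / k

# Graded relevance labels for a hypothetical ranking (3 = best match):
ranking = [3, 2, 0, 1]
print(precision_at_k(ranking, 4))  # 0.75
print(round(ndcg_at_k(ranking, 4), 4))
```

With user clickstream data, labels like these would be inferred from behavior (clicks, dwell time) rather than assigned by assessors.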
Burden of neurological diseases in the US revealed by web searches
Baeza-Yates, Ricardo; Sangal, Puneet Mohan
2017-01-01
Background Analyzing the disease-related web searches of Internet users provides insight into the interests of the general population as well as the healthcare industry, which can be used to shape health care policies. Methods We analyzed the searches related to neurological diseases and drugs used in neurology using the most popular search engines in the US, Google and Bing/Yahoo. Results We found that the most frequently searched diseases were common diseases such as dementia or Attention Deficit/Hyperactivity Disorder (ADHD), as well as medium frequency diseases with high social impact such as Parkinson’s disease, MS and ALS. The most frequently searched CNS drugs were generic drugs used for pain, followed by sleep disorders, dementia, ADHD, stroke and Parkinson’s disease. Regarding the interests of the healthcare industry, ADHD, Alzheimer’s disease, MS, ALS, meningitis, and hypersomnia received the higher advertising bids for neurological diseases, while painkillers and drugs for neuropathic pain, drugs for dementia or insomnia, and triptans had the highest advertising bidding prices. Conclusions Web searches reflect the interest of people and the healthcare industry, and are based either on the frequency or social impact of the disease. PMID:28531237
Searching for an Acidic Aquifer in the Rio Tinto Basin: First Geobiology Results of MARTE Project
NASA Technical Reports Server (NTRS)
Fernandez-Remolar, D. C.; Prieto-Ballesteros, O.; Stoker, C.
2004-01-01
Among the conceivable modern habitats to be explored in the search for life on Mars are those potentially developed underground. Subsurface habitats are environments that, under certain physicochemical circumstances, have high thermal and hydrochemical stability [1, 2]. On planets like Mars that lack an atmospheric shield, such systems are obviously protected against radiation, which strongly alters the structure of biological macromolecules. Low-porosity but fractured aquifers currently emplaced inside ancient volcano/sedimentary and hydrothermal systems act as excellent habitats [3] due to their thermal and geochemical properties. In these aquifers the temperature is controlled by a thermal balance between conduction and advection processes, which are driven by the rock composition, geological structure, water turnover of aquifers and heat generation from geothermal processes or chemical reactions [4]. Moreover, microbial communities based on chemolithotrophy can obtain energy by the oxidation of metallic ores that are currently associated with these environments. Such a community core may sustain a trophic web composed of non-autotrophic forms like heterotrophic bacteria, fungi and protozoa.
Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.
Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas
2017-07-24
Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from a diverse range of environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is the contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny based distance metric, while providing numerous advantages over the current state of the art tool.
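The Earth Mover Distance at the heart of weighted UniFrac has a simple closed form in one dimension, which is enough to show the idea (this is an illustration only, not Visibiome's implementation): for histograms on unit-spaced bins, EMD is the summed absolute difference of the running totals.

```python
def emd_1d(h1, h2):
    """Earth Mover Distance between two 1-D histograms on unit-spaced bins."""
    assert len(h1) == len(h2)
    total1, total2 = sum(h1), sum(h2)
    p = [x / total1 for x in h1]   # normalize to probability mass
    q = [x / total2 for x in h2]
    emd, carry = 0.0, 0.0
    for a, b in zip(p, q):
        carry += a - b             # mass that must flow to the next bin
        emd += abs(carry)
    return emd

print(emd_1d([1, 0, 0], [0, 0, 1]))  # 2.0 (all mass moves two bins)
```

Weighted UniFrac generalizes this flow idea from a line of bins to the branches of a phylogenetic tree, which is what makes it expensive at scale.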
Evaluation of Web-Based Ostomy Patient Support Resources.
Pittman, Joyce; Nichols, Thom; Rawl, Susan M
To evaluate currently available, no-cost, Web-based patient support resources designed for those who have recently undergone ostomy surgery. Descriptive, correlational study using telephone survey. The sample comprised 202 adults who had ostomy surgery within the previous 24 months in 1 of 5 hospitals within a large healthcare organization in the Midwestern United States. Two of the hospitals were academic teaching hospitals, and 3 were community hospitals. The study was divided into 2 phases: (1) gap analysis of 4 Web sites (labeled A-D) based on specific criteria; and (2) telephone survey of individuals with an ostomy. In phase 1, a comprehensive checklist based on best practice standards was developed to conduct the gap analysis. In phase 2, data were collected from 202 participants by trained interviewers via 1-time structured telephone interviews that required approximately 30 minutes to complete. Descriptive analyses were performed, along with correlational analysis of relationships among Web site usage, acceptability and satisfaction, demographic characteristics, and medical history. Gap analysis revealed that Web site D, managed by a patient advocacy group, received the highest total content score of 155/176 (88%) and the highest usability score of 31.7/35 (91%). Two hundred two participants completed the telephone interview, with 96 (48%) reporting that they used the Internet as a source of information. Sixty participants (30%) reported that friends or family member had searched the Internet for ostomy information on their behalf, and 148 (75%) indicated they were confident they could get information about ostomies on the Internet. 
Of the 90 participants (45%) who reported using the Internet to locate ostomy information, 73 (82%) found the information on the Web easy to understand, 28 (31%) reported being frustrated during their search for information, 24 (27%) indicated it took a lot of effort to get the information they needed, and 39 (43%) were concerned about the quality of the information. Web-based patient support resources may be a cost-effective approach to providing essential ostomy information, self-management training, and support. Additional research is needed to examine the efficacy of Web-based patient support interventions to improve ostomy self-management knowledge, skills, and outcomes for patients.
Yeung, Trevor M; Sacchi, Matteo; Mortensen, Neil J; Spinelli, Antonino
2015-09-01
The Internet is a vast resource for patients to search for health information on the treatment of Crohn's disease. This study examines the quality of Web sites that provide information to adults regarding Crohn's disease, including treatment options and surgery. Two search engines (Google and Yahoo) and the search terms "surgery for Crohn's disease" were used. The first 50 sites of each search were assessed. Sites that fulfilled the inclusion criteria were evaluated for content and scored by using the DISCERN instrument, which evaluates the quality of health information on treatment choices. One hundred sites were examined, of which 13 were duplicates. Sixty-two sites provided patient-orientated information. The other sites included 7 scientific articles, 3 blogs, 2 links, 6 forums, 3 video links, and 4 dead links. Of the 62 Web sites that provided patient information for adults, only 15 (24.2%) had been updated within the past 2 years. Only 9 (14.5%) were affiliated with hospitals and clinics. The majority of sites (33, 53.2%) were associated with private companies with commercial interests. Only half of the Web sites provided details on treatment options, and most Web sites did not provide any information on symptoms and procedure details. Just 5 Web sites (8.1%) described the risks of surgery, and only 7 (11.3%) provided any information on the timescale for recovery. Overall, only 1 Web site (1.6%) was identified as being "good" or "excellent" with the use of the DISCERN criteria. Although the internet is constantly evolving, this study captures data at a specific time point. Search results may vary depending on geographical location. This study only assessed English language websites. The quality of patient information on surgery for Crohn's disease is highly variable and generally poor. There is potential for the Internet to provide valuable information, and clinicians should identify high-quality Web sites to guide their patients.
Systematic Review of Quality of Patient Information on Phalloplasty in the Internet.
Karamitros, Georgios A; Kitsos, Nikolaos A; Sapountzis, Stamatis
2017-12-01
An increasing number of patients considering aesthetic surgery use Internet health information as their first source of information. However, the quality of information available on the Internet about phalloplasty is currently unknown. This study aimed to assess the quality of patient information on phalloplasty available on the Internet. The assessment of the Web sites was based on the modified Ensuring Quality Information for Patients (EQIP) instrument (36 items). Three hundred Web sites were identified by the most popular Web search engines. Ninety Web sites were assessed after duplicates, irrelevant sources, and Web sites in languages other than English were excluded. Only 16 (18%) Web sites addressed >21 items, and scores tended to be higher for Web sites developed by academic centers and the industry than for Web sites developed by private practicing surgeons. The EQIP score achieved by Web sites ranged between 4 and 29 of the total 36 points, with a median value of 17.5 points (interquartile range, 13-21). The top 5 Web sites with the highest scores were identified. The quality of patient information on phalloplasty on the Internet is substandard, and the existing Web sites present inadequate information. There is a dire need to improve the quality of Internet phalloplasty resources for potential patients who might consider this procedure. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
NASA Astrophysics Data System (ADS)
Johns, E. M.; Mayernik, M. S.; Boler, F. M.; Corson-Rikert, J.; Daniels, M. D.; Gross, M. B.; Khan, H.; Maull, K. E.; Rowan, L. R.; Stott, D.; Williams, S.; Krafft, D. B.
2015-12-01
Researchers seek information and data through a variety of avenues: published literature, search engines, repositories, colleagues, etc. In order to build a web application that leverages linked open data to enable multiple paths for information discovery, the EarthCollab project has surveyed two geoscience user communities to consider how researchers find and share scholarly output. EarthCollab, a cross-institutional, EarthCube funded project partnering UCAR, Cornell University, and UNAVCO, is employing the open-source semantic web software, VIVO, as the underlying technology to connect the people and resources of virtual research communities. This study will present an analysis of survey responses from members of the two case study communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. The survey results illustrate the types of research products that respondents indicate should be discoverable within a digital platform and the current methods used to find publications, data, personnel, tools, and instrumentation. The responses showed that scientists rely heavily on general purpose search engines, such as Google, to find information, but that data center websites and the published literature were also critical sources for finding collaborators, data, and research tools. The survey participants also identify additional features of interest for an information platform such as search engine indexing, connection to institutional web pages, generation of bibliographies and CVs, and outward linking to social media. Through the survey, the user communities prioritized the type of information that is most important to display and describe their work within a research profile.
The analysis of this survey will inform our further development of a platform that will facilitate different types of information discovery strategies, and help researchers to find and use the associated resources of a research project.
A comparative study of six European databases of medically oriented Web resources.
Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes
2005-10-01
The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.
The quality of patient-orientated Internet information on oral lichen planus: a pilot study.
López-Jornet, Pía; Camacho-Alonso, Fabio
2010-10-01
This study examines the accessibility and quality of Web pages related to oral lichen planus. Sites were identified using two search engines (Google and Yahoo!) and the search terms 'oral lichen planus' and 'oral lesion lichenoid'. The first 100 sites in each search were visited and classified. The Web sites were evaluated for content quality using the validated DISCERN rating instrument, the JAMA benchmarks, and the 'Health on the Net' (HON) seal. A total of 109,000 sites were recorded in Google using the search terms and 520,000 in Yahoo! A total of 19 Web pages considered relevant were examined on Google and 20 on Yahoo! As regards the JAMA benchmarks, only two pages satisfied the four criteria in Google (10%), and only three (15%) in Yahoo! As regards DISCERN, the overall quality of Web site information was poor, with no site reaching the maximum score. In Google, 78.94% of sites had important deficiencies, versus 50% in Yahoo!, the difference between the two search engines being statistically significant (P = 0.031). Only five pages (17.2%) on Google and eight (40%) on Yahoo! showed the HON code. Based on our review, doctors must assume primary responsibility for educating and counselling their patients. © 2010 Blackwell Publishing Ltd.
Web-based Electronic Sharing and RE-allocation of Assets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverett, Dave; Miller, Robert A.; Berlin, Gary J.
2002-09-09
The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, underutilized, or unutilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare
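The matchmaking the abstract describes (users post assets they have and assets they need, and the system pairs them up) can be sketched as follows. All class and site names are illustrative assumptions; the real system is a Java application backed by OJDBM.

```python
# Sketch of need/offer matchmaking for an asset-sharing board.
from collections import defaultdict

class AssetBoard:
    def __init__(self):
        self.available = defaultdict(list)  # asset name -> holders
        self.needed = defaultdict(list)     # asset name -> requesters

    def offer(self, user, asset):
        self.available[asset.lower()].append(user)

    def request(self, user, asset):
        self.needed[asset.lower()].append(user)

    def matches(self):
        """(asset, requester, holder) triples for every asset both listed."""
        return [(asset, r, h)
                for asset in self.needed
                if asset in self.available
                for r in self.needed[asset]
                for h in self.available[asset]]

board = AssetBoard()
board.offer("site-A", "oscilloscope")
board.request("site-B", "Oscilloscope")   # case-insensitive lookup
print(board.matches())  # [('oscilloscope', 'site-B', 'site-A')]
```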
ERIC Educational Resources Information Center
Raeder, Aggi
1997-01-01
Discussion of ways to promote sites on the World Wide Web focuses on how search engines work and how they retrieve and identify sites. Appropriate Web links for submitting new sites and for Internet marketing are included. (LRW)
Use of WebQuest Design for Inservice Teacher Professional Development
ERIC Educational Resources Information Center
Iskeceli-Tunc, Sinem; Oner, Diler
2016-01-01
This study investigated whether a teacher professional development module built around designing WebQuests could improve teachers' technological and pedagogical skills. The technological skills examined included Web searching and Web evaluating skills. The pedagogical skills targeted were developing a working definition for higher-order thinking…
NASA Technical Reports Server (NTRS)
Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.
2015-01-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently or soon to be available: -Level-1 GPM Microwave Imager (GMI) and partner radiometer products, DPR products -Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products, DPR products -Level-3 daily and monthly products, DPR products -Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final) A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services currently or soon to be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; and monitoring services (e.g., Current Conditions) for applications. The United User Interface (UUI) is the next step in the evolution of the GES DISC web site.
It attempts to provide seamless access to data, information and services through a single interface without sending the user to different applications or URLs (e.g., search, access, subset, Giovanni, documents).
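The OPeNDAP access mentioned in the abstract works by appending a constraint expression (variable name plus index ranges) to a dataset URL. A minimal sketch of how such a subsetting URL could be built follows; the endpoint, granule name, and variable name are illustrative assumptions, not real GES DISC paths.

```python
# Sketch: build an OPeNDAP constraint-expression URL that requests one
# variable over index ranges. All names below are hypothetical examples.

def opendap_subset_url(base, granule, variable, lat_idx, lon_idx):
    """Return an OPeNDAP URL subsetting `variable` over index ranges."""
    lat = "[{}:{}]".format(*lat_idx)
    lon = "[{}:{}]".format(*lon_idx)
    return f"{base}/{granule}.nc4?{variable}{lat}{lon}"

url = opendap_subset_url(
    "https://disc.example.nasa.gov/opendap/GPM_L3",  # assumed endpoint
    "3B-DAY.MS.MRG.3IMERG.20150101",                 # assumed granule name
    "precipitationCal",                              # assumed variable
    lat_idx=(400, 500),
    lon_idx=(1200, 1300),
)
print(url)
```

A client such as a netCDF library would then fetch only the requested slab rather than the full granule.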
BioCarian: search engine for exploratory searches in heterogeneous biological databases.
Zaki, Nazar; Tennakoon, Chandana
2017-10-02
A large number of biological databases are publicly available to scientists on the web, and many private databases are generated in the course of research projects. These databases exist in a wide variety of formats. Web standards have evolved in recent years, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Integration and querying of biological databases can therefore be facilitated by semantic web techniques: heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form, and we first convert these tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed and has additional features such as ranking facet values by several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases; using this feature, users can incorporate federated searches of SPARQL endpoints.
We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. We have developed a search engine to explore RDF databases that can be used by both novice and advanced users; BioCarian is accessible via http://www.biocarian.com.
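The table-to-triples conversion at BioCarian's core can be sketched in plain Python: flatten tabular records into subject-predicate-object triples, then answer an exploratory, SPARQL-style pattern query. A real deployment would use an RDF store and the SPARQL language; the record and property names here are invented for illustration.

```python
# Minimal triple-store sketch: a table becomes (subject, predicate, object)
# triples, and a pattern query with wildcards plays the role of SPARQL.

rows = [("gene1", "chr1"), ("gene2", "chr2"), ("gene3", "chr1")]

# Convert the table to triples.
triples = [(gene, "locatedOn", chrom) for gene, chrom in rows]

def match(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Exploratory query: which subjects are located on chr1?
hits = [s for s, _, _ in match(triples, p="locatedOn", o="chr1")]
print(hits)  # ['gene1', 'gene3']
```

Facets, in this picture, are just precomputed `match` results grouped and counted per object value.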
orthoFind Facilitates the Discovery of Homologous and Orthologous Proteins.
Mier, Pablo; Andrade-Navarro, Miguel A; Pérez-Pulido, Antonio J
2015-01-01
Finding homologous and orthologous protein sequences is often the first step in evolutionary studies, annotation projects, and functional complementation experiments. Despite all currently available computational tools, there is still a need for easy-to-use tools that provide functional information. Here, a new web application called orthoFind is presented, which allows a quick search for homologous and orthologous proteins given one or more query sequences, supports recurrent and exhaustive searches against reference proteomes, and is able to include user databases. It addresses the protein multidomain problem by searching for homologs with the same domain architecture, and it provides a simple functional analysis of the results to aid the annotation process. orthoFind is easy to use and has been proven to provide accurate results with different datasets. Availability: http://www.bioinfocabd.upo.es/orthofind/.
Air Markets Program Data (AMPD)
The Air Markets Program Data tool allows users to search EPA data to answer scientific, general, policy, and regulatory questions about industry emissions. Air Markets Program Data (AMPD) is a web-based application that allows users easy access to both current and historical data collected as part of EPA's emissions trading programs. This site allows you to create and view reports and to download emissions data for further analysis. AMPD provides a query tool so users can create custom queries of industry source emissions data, allowance data, compliance data, and facility attributes. In addition, AMPD provides interactive maps, charts, reports, and pre-packaged datasets. AMPD does not require any additional software, plug-ins, or security controls and can be accessed using a standard web browser.
Increasing efficiency of information dissemination and collection through the World Wide Web
Daniel P. Huebner; Malchus B. Baker; Peter F. Ffolliott
2000-01-01
Researchers, managers, and educators have access to revolutionary technology for information transfer through the World Wide Web (Web). Using the Web to effectively gather and distribute information is addressed in this paper. Tools, tips, and strategies are discussed. Companion Web sites are provided to guide users in selecting the most appropriate tool for searching...
EPA's Web Taxonomy is a faceted hierarchical vocabulary used to tag web pages with terms from a controlled vocabulary. Tagging enables search and discovery of EPA's Web-based information assets. EPA's Web Taxonomy is being provided in Simple Knowledge Organization System (SKOS) format. SKOS is a standard for sharing and linking knowledge organization systems that promises to make Federal terminology resources more interoperable.
LymPHOS 2.0: an update of a phosphosite database of primary human T cells
Nguyen, Tien Dung; Vidal-Cortes, Oriol; Gallardo, Oscar; Abian, Joaquin; Carrascal, Montserrat
2015-01-01
LymPHOS is a web-oriented database containing peptide and protein sequences and spectrometric information on the phosphoproteome of primary human T-Lymphocytes. Current release 2.0 contains 15 566 phosphorylation sites from 8273 unique phosphopeptides and 4937 proteins, which correspond to a 45-fold increase over the original database description. It now includes quantitative data on phosphorylation changes after time-dependent treatment with activators of the TCR-mediated signal transduction pathway. Sequence data quality has also been improved with the use of multiple search engines for database searching. LymPHOS can be publicly accessed at http://www.lymphos.org. Database URL: http://www.lymphos.org. PMID:26708986
Some Non-FDA Approved Uses for Neuromodulation: A Review of the Evidence.
Lee, Samuel; Abd-Elsayed, Alaa
2016-09-01
Neuromodulation, including spinal cord stimulation and peripheral nerve field stimulation, has been used with success in treating several painful conditions. The FDA has approved the use of neuromodulation for a few indications. We review the evidence for neuromodulation in treating some important painful conditions that are not currently FDA approved. This review included an online search restricted to clinical trials testing the efficacy of neuromodulation in treating coronary artery disease, peripheral vascular disease (PVD), headache, and peripheral field stimulation. Our systematic literature search found 10, 6, and 3 controlled studies relating to coronary artery disease, PVD, and headache, respectively. Our review also included 5 noncontrolled studies relating to peripheral field stimulation, as no controlled studies had been completed. This review article shows compelling evidence, based on clinical trials, that neuromodulation can be of benefit for patients with serious painful conditions for which its use is not currently approved by the FDA. © 2015 World Institute of Pain.
Impact of organisational change on mental health: a systematic review.
Bamberger, Simon Grandjean; Vinding, Anker Lund; Larsen, Anelia; Nielsen, Peter; Fonager, Kirsten; Nielsen, René Nesgaard; Ryom, Pia; Omland, Øyvind
2012-08-01
Although limited evidence is available, organisational change is often cited as a cause of mental health problems. This paper provides an overview of the current literature regarding the impact of organisational change on mental health. A systematic search was conducted in PubMed, PsycINFO and Web of Knowledge, combining MeSH search terms for exposure and outcome. The criterion for inclusion was original data on exposure to organisational change with mental health problems as the outcome. Both cross-sectional and longitudinal studies were included. In 11 out of 17 studies, an association between organisational change and an elevated risk of mental health problems was observed, with a less pronounced association in the longitudinal studies. Based on the current research, this review cannot provide sufficient evidence of an association between organisational change and an elevated risk of mental health problems. More studies of long-term effects are required, including relevant analyses of confounders.
Stopping Web Plagiarists from Stealing Your Content
ERIC Educational Resources Information Center
Goldsborough, Reid
2004-01-01
This article gives tips on how to avoid having content stolen by plagiarists. Suggestions include: using a Web search service such as Google to search for unique strings of text from the individual's site to uncover other sites with the same content; buying an infringement-detection program; or hiring a public relations firm to do the work. There are…
Web-Based Search and Plot System for Nuclear Reaction Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otuka, N.; Nakagawa, T.; Fukahori, T.
2005-05-24
A web-based search and plot system for nuclear reaction data has been developed, covering experimental data in EXFOR format and evaluated data in ENDF format. The system is implemented for Linux OS, with Perl and MySQL used for CGI scripts and the database manager, respectively. Two prototypes for experimental and evaluated data are presented.
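The query layer of such a system is essentially parameterised SQL behind a search form. A toy version is sketched below using Python's built-in sqlite3 in place of MySQL; the schema and the numbers are invented for illustration, and real EXFOR/ENDF records are far richer.

```python
# Toy nuclear-reaction-data query layer: a table of (target, reaction,
# energy, cross section) rows queried with parameterised SQL, as a CGI
# search form would do. Values are illustrative, not evaluated data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE reactions
                (target TEXT, reaction TEXT, energy_mev REAL, xs_barn REAL)""")
conn.executemany(
    "INSERT INTO reactions VALUES (?, ?, ?, ?)",
    [("Fe-56", "(n,g)", 0.025e-6, 2.59),
     ("Fe-56", "(n,p)", 14.1, 0.112),
     ("U-235", "(n,f)", 0.025e-6, 584.0)],
)

# A search form would translate user input into a parameterised query:
cur = conn.execute(
    "SELECT energy_mev, xs_barn FROM reactions WHERE target=? AND reaction=?",
    ("Fe-56", "(n,g)"),
)
print(cur.fetchall())  # [(2.5e-08, 2.59)]
```

The plotting half of the system would then feed the returned (energy, cross section) pairs to a charting backend.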
With Free Google Alert Services
ERIC Educational Resources Information Center
Gunn, Holly
2005-01-01
Alert services are a great way of keeping abreast of topics that interest you. Rather than searching the Web regularly to find new content about your areas of interest, an alert service keeps you informed by sending you notices when new material is added to the Web that matches your registered search criteria. Alert services are examples of push…
World Wide Web Indexes and Hierarchical Lists: Finding Tools for the Internet.
ERIC Educational Resources Information Center
Munson, Kurt I.
1996-01-01
In World Wide Web indexing: (1) the creation process is automated; (2) the indexes are merely descriptive, not analytical of document content; (3) results may be sorted differently depending on the search engine; and (4) indexes link directly to the resources. This article compares the indexing methods and querying options of the search engines…
ERIC Educational Resources Information Center
Taylor, Arthur; Dalal, Heather A.
2014-01-01
Introduction: This paper aims to determine how appropriate information literacy instruction is for preparing students for unmediated searches using commercial search engines and the Web. Method: A survey was designed using the 2000 Association of College and Research Libraries information literacy competency standards for higher education. Survey…
Source Evaluation of Domain Experts and Novices during Web Search
ERIC Educational Resources Information Center
Brand-Gruwel, S.; Kammerer, Y.; van Meeuwen, L.; van Gog, T.
2017-01-01
Nowadays, almost everyone uses the World Wide Web (WWW) to search for information of any kind. In education, students frequently use the WWW for selecting information to accomplish assignments such as writing an essay or preparing a presentation. The evaluation of sources and information is an important sub-skill in this process. But many students…
ERIC Educational Resources Information Center
Larson, Ray R.
1996-01-01
Examines the bibliometrics of the World Wide Web based on analysis of Web pages collected by the Inktomi "Web Crawler" and on the use of the DEC AltaVista search engine for cocitation analysis of a set of Earth Science related Web sites. Looks at the statistical characteristics of Web documents and their hypertext links, and the…
Search of the Deep and Dark Web via DARPA Memex
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2015-12-01
Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search, and Nutch has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs and can support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and that will directly allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding, object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls.
With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and plotting techniques for exploring crawled information. The platform will help classify, identify, and thwart predators, find victims, and identify buyers in human trafficking, and will deliver technological superiority in search engines for DARPA. We are already transitioning the technologies into Geo and Planetary Science, and Bioinformatics.
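The vertical-crawl loop that Nutch generalises (frontier management, URL filtering, URL selection) can be sketched in a few lines. This is a simplified illustration, not Nutch's actual implementation: `fetch()` is stubbed, the allowed-host rule stands in for Nutch's filter plugins, and a FIFO frontier stands in for Nutch's scoring.

```python
# Sketch of a focused crawl loop: keep a frontier of URLs, filter them
# against domain rules, fetch in order, and enqueue newly discovered links.
from collections import deque

ALLOWED = ("example.org",)  # assumed vertical-search scope

def fetch(url):
    """Stub: pretend each page links to two child pages."""
    return [f"{url}/a", f"{url}/b"]

def crawl(seed, max_pages=5):
    frontier, seen, fetched = deque([seed]), {seed}, []
    while frontier and len(fetched) < max_pages:
        url = frontier.popleft()
        if not any(host in url for host in ALLOWED):
            continue  # URL filtering, as in Nutch's filter plugins
        fetched.append(url)
        for link in fetch(url):
            if link not in seen:
                seen.add(link)
                frontier.append(link)  # URL selection (FIFO here; Nutch scores)
    return fetched

print(crawl("https://example.org"))
```

A real crawler would issue HTTP requests and hand each response to a parser such as Apache Tika to extract text and out-links.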
Mining and integration of pathway diagrams from imaging data.
Kozhenkov, Sergey; Baitaluk, Michael
2012-03-01
Pathway diagrams from PubMed and the World Wide Web (WWW) contain valuable, highly curated information that is difficult to reach without tools specifically designed and customized for the biological semantics and high content density of the images. There is currently no search engine or tool that can analyze pathway images, extract their pathway components (molecules, genes, proteins, organelles, cells, organs, etc.) and indicate their relationships. Here, we describe a resource of pathway diagrams retrieved from article and web-page images through optical character recognition, in conjunction with data mining and data integration methods. The recognized pathways are integrated into the BiologicalNetworks research environment, linking them to a wealth of data available in the BiologicalNetworks knowledgebase, which integrates data from >100 public data sources and the biomedical literature. Multiple search and analytical tools are available that allow the recognized cellular pathways, molecular networks and cell/tissue/organ diagrams to be studied in the context of integrated knowledge, experimental data and the literature. BiologicalNetworks software and the pathway repository are freely available at www.biologicalnetworks.org. Supplementary data are available at Bioinformatics online.
Dimensionality of consumer search space drives trophic interaction strengths.
Pawar, Samraat; Dell, Anthony I; Savage, Van M
2012-06-28
Trophic interactions govern biomass fluxes in ecosystems, and stability in food webs. Knowledge of how trophic interaction strengths are affected by differences among habitats is crucial for understanding variation in ecological systems. Here we show how substantial variation in consumption-rate data, and hence trophic interaction strengths, arises because consumers tend to encounter resources more frequently in three dimensions (3D) (for example, arboreal and pelagic zones) than two dimensions (2D) (for example, terrestrial and benthic zones). By combining new theory with extensive data (376 species, with body masses ranging from 5.24 × 10⁻¹⁴ kg to 800 kg), we find that consumption rates scale sublinearly with consumer body mass (exponent of approximately 0.85) for 2D interactions, but superlinearly (exponent of approximately 1.06) for 3D interactions. These results contradict the currently widespread assumption of a single exponent (of approximately 0.75) in consumer-resource and food-web research. Further analysis of 2,929 consumer-resource interactions shows that dimensionality of consumer search space is probably a major driver of species coexistence, and the stability and abundance of populations.
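The scaling claims above (consumption rate ∝ massᵇ, with b ≈ 0.85 in 2D versus ≈ 1.06 in 3D) describe ordinary power laws, so the exponent can be recovered as the slope of a least-squares fit on log-transformed data. The sketch below demonstrates this on synthetic data, not the study's dataset.

```python
# Recover a power-law exponent b from (mass, rate) pairs by ordinary
# least squares on log(rate) vs log(mass). Data below are synthetic.
import math

def scaling_exponent(masses, rates):
    """Slope of log(rate) vs log(mass), i.e. the power-law exponent b."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

masses = [0.01, 0.1, 1.0, 10.0, 100.0]
rates = [2.0 * m ** 0.85 for m in masses]   # exact 2D-like power law
print(round(scaling_exponent(masses, rates), 2))  # 0.85
```

With real consumption-rate data the fit would carry scatter, and the question becomes whether the 2D and 3D exponent estimates differ significantly.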
Infodemiology: Tracking Flu-Related Searches on the Web for Syndromic Surveillance
Eysenbach, Gunther
2006-01-01
Background Syndromic surveillance uses health-related data that precede diagnosis and signal a sufficient probability of a case or an outbreak to warrant further public health response. Objective While most syndromic surveillance systems rely on data from clinical encounters with health professionals, I started to explore in 2004 whether analysis of trends in Internet searches can be useful to predict outbreaks such as influenza epidemics and prospectively gathered data on Internet search trends for this purpose. Results There is an excellent correlation between the number of clicks on a keyword-triggered link in Google with epidemiological data from the flu season 2004/2005 in Canada (Pearson correlation coefficient of current week clicks with the following week influenza cases r=.91). The “Google ad sentinel method” proved to be more timely, more accurate and – with a total cost of Can$365.64 for the entire flu-season – considerably cheaper than the traditional method of reports on influenza-like illnesses observed in clinics by sentinel physicians. Conclusion Systematically collecting and analyzing health information demand data from the Internet has considerable potential to be used for syndromic surveillance. Tracking web searches on the Internet has the potential to predict population-based events relevant for public health purposes, such as real outbreaks, but may also be confounded by “epidemics of fear”. Data from such “infodemiology studies” should also include longitudinal data on health information supply. PMID:17238340
Shaffer, Victoria A; Owens, Justin; Zikmund-Fisher, Brian J
2013-12-17
Previous research has examined the impact of patient narratives on treatment choices, but to our knowledge, no study has examined the effect of narratives on information search. Further, no research has considered the relative impact of their format (text vs video) on health care decisions in a single study. Our goal was to examine the impact of video and text-based narratives on information search in a Web-based patient decision aid for early stage breast cancer. Fifty-six women were asked to imagine that they had been diagnosed with early stage breast cancer and needed to choose between two surgical treatments (lumpectomy with radiation or mastectomy). Participants were randomly assigned to view one of four versions of a Web decision aid. Two versions of the decision aid included videos of interviews with patients and physicians or videos of interviews with physicians only. To distinguish between the effect of narratives and the effect of videos, we created two text versions of the Web decision aid by replacing the patient and physician interviews with text transcripts of the videos. Participants could freely browse the Web decision aid until they developed a treatment preference. We recorded participants' eye movements using the Tobii 1750 eye-tracking system equipped with Tobii Studio software. A priori, we defined 24 areas of interest (AOIs) in the Web decision aid. These AOIs were either separate pages of the Web decision aid or sections within a single page covering different content. We used multilevel modeling to examine the effect of narrative presence, narrative format, and their interaction on information search. There was a significant main effect of condition, P=.02; participants viewing decision aids with patient narratives spent more time searching for information than participants viewing the decision aids without narratives. The main effect of format was not significant, P=.10. 
However, there was a significant condition by format interaction on fixation duration, P<.001. When comparing the two video decision aids, participants viewing the narrative version spent more time searching for information than participants viewing the control version of the decision aid. In contrast, participants viewing the narrative version of the text decision aid spent less time searching for information than participants viewing the control version of the text decision aid. Further, narratives appear to have a global effect on information search; these effects were not limited to specific sections of the decision aid that contained topics discussed in the patient stories. The observed increase in fixation duration with video patient testimonials is consistent with the idea that the vividness of the video content could cause greater elaboration of the message, thereby encouraging greater information search. Conversely, because reading requires more effortful processing than watching, reading patient narratives may have decreased participant motivation to engage in more reading in the remaining sections of the Web decision aid. These findings suggest that the format of patient stories may be equally as important as their content in determining their effect on decision making. More research is needed to understand why differences in format result in fundamental differences in information search.
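The fixation-duration measure behind these comparisons is derived by mapping each recorded fixation to the area of interest (AOI) containing it and summing durations per AOI. A miniature version of that aggregation is sketched below; the AOI names, rectangles, and fixation values are invented, not the study's.

```python
# Sum eye-tracking fixation durations per rectangular area of interest.
# Coordinates and durations are illustrative only.

aois = {  # name -> (x0, y0, x1, y1) screen rectangle
    "patient_story": (0, 0, 400, 300),
    "risk_table": (400, 0, 800, 300),
}

fixations = [  # (x, y, duration_ms)
    (120, 80, 310), (200, 150, 250), (550, 90, 420), (610, 200, 180),
]

def fixation_time_by_aoi(aois, fixations):
    totals = {name: 0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += dur
                break  # AOIs assumed non-overlapping
    return totals

print(fixation_time_by_aoi(aois, fixations))
# {'patient_story': 560, 'risk_table': 600}
```

Per-AOI totals like these, nested within participants, are what a multilevel model of fixation duration would take as its outcome.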
How Adolescents Search for and Appraise Online Health Information: A Systematic Review.
Freeman, Jaimie L; Caldwell, Patrina H Y; Bennett, Patricia A; Scott, Karen M
2018-04-01
To conduct a systematic review of the evidence concerning whether and how adolescents search for online health information and the extent to which they appraise the credibility of information they retrieve. A systematic search of online databases (MEDLINE, EMBASE, PsycINFO, ERIC) was performed. Reference lists of included papers were searched manually for additional articles. Included were studies on whether and how adolescents searched for and appraised online health information, where adolescent participants were aged 13-18 years. Thematic analysis was used to synthesize the findings. Thirty-four studies met the inclusion criteria. In line with the research questions, 2 key concepts were identified within the papers: whether and how adolescents search for online health information, and the extent to which adolescents appraise online health information. Four themes were identified regarding whether and how adolescents search for online health information: use of search engines, difficulties in selecting appropriate search strings, barriers to searching, and absence of searching. Four themes emerged concerning the extent to which adolescents appraise the credibility of online health information: evaluation based on Web site name and reputation, evaluation based on first impression of Web site, evaluation of Web site content, and absence of a sophisticated appraisal strategy. Adolescents are aware of the varying quality of online health information. Strategies used by individuals for searching and appraising online health information differ in their sophistication. It is important to develop resources to enhance search and appraisal skills and to collaborate with adolescents to ensure that such resources are appropriate for them. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Tsai, Meng-Jung; Hsu, Chung-Yuan; Tsai, Chin-Chung
2012-04-01
Due to a growing trend of exploring scientific knowledge on the Web, a number of studies have been conducted to examine students' online searching strategies. Investigations of online searching generally employ methods such as surveys, interviews, screen capture, or transaction logs. The present study first used a survey, the Online Information Searching Strategies Inventory (OISSI), to examine users' searching strategies in terms of control, orientation, trial and error, problem solving, purposeful thinking, selecting main ideas, and evaluation, defined here as implicit strategies. Second, this study used screen capture to investigate the students' searching behaviors regarding the number of keywords, the quantity and depth of Web page exploration, and time attributes, defined here as explicit strategies. Finally, this study explored the role that these two types of strategies played in predicting the students' online science information searching outcomes. A total of 103 Grade 10 students were recruited from a high school in northern Taiwan. Through Pearson correlation and multiple regression analyses, the results showed that the students' explicit strategies, particularly the time attributes proposed in the present study, were more successful than their implicit strategies in predicting their outcomes of searching for science information. Participants who spent more time on detailed reading (an explicit strategy) and had better skills in evaluating Web information (an implicit strategy) tended to have superior searching performance.
‘Sciencenet’—towards a global search and share engine for all scientific knowledge
Lütjohann, Dominic S.; Shah, Asmi H.; Christen, Michael P.; Richter, Florian; Knese, Karsten; Liebel, Urban
2011-01-01
Summary: Modern biological experiments create vast amounts of data which are geographically distributed. These datasets consist of petabytes of raw data and billions of documents. Yet to the best of our knowledge, a search engine technology that searches and cross-links all different data types in life sciences does not exist. We have developed a prototype distributed scientific search engine technology, ‘Sciencenet’, which facilitates rapid searching over this large data space. By ‘bringing the search engine to the data’, we do not require server farms. This platform also allows users to contribute to the search index and publish their large-scale data to support e-Science. Furthermore, a community-driven method guarantees that only scientific content is crawled and presented. Our peer-to-peer approach is sufficiently scalable for the science web without performance or capacity tradeoff. Availability and Implementation: The free to use search portal web page and the downloadable client are accessible at: http://sciencenet.kit.edu. The web portal for index administration is implemented in ASP.NET, the ‘AskMe’ experiment publisher is written in Python 2.7, and the backend ‘YaCy’ search engine is based on Java 1.6. Contact: urban.liebel@kit.edu Supplementary Material: Detailed instructions and descriptions can be found on the project homepage: http://sciencenet.kit.edu. PMID:21493657
Metabolic effects of exercise on childhood obesity: a current view
Paes, Santiago Tavares; Marins, João Carlos Bouzas; Andreazzi, Ana Eliza
2015-01-01
OBJECTIVE: To review the current literature concerning the effects of physical exercise on several metabolic variables related to childhood obesity. DATA SOURCE: A search was performed in Pubmed/MEDLINE and Web of Science databases. The keywords used were as follows: Obesity, Children Obesity, Childhood Obesity, Exercise and Physical Activity. The online search was based on studies published in English, from April 2010 to December 2013. DATA SYNTHESIS: Search queries returned 88,393 studies based on the aforementioned keywords; 4,561 studies were selected by crossing chosen keywords. After applying inclusion criteria, four studies were selected from 182 eligible titles. Most studies found that aerobic and resistance training improves body composition, lipid profile and metabolic and inflammatory status of obese children and adolescents; however, the magnitude of these effects is associated with the type, intensity and duration of practice. CONCLUSIONS: Regardless of the type, physical exercise promotes positive adaptations to childhood obesity, mainly acting to restore cellular and cardiovascular homeostasis, to improve body composition, and to activate metabolism; therefore, physical exercise acts as a co-factor in fighting obesity. PMID:25662015
Candiru--a little fish with bad habits: need travel health professionals worry? A review.
Bauer, Irmgard L
2013-01-01
Over the last 150 years, a little South American fish with alleged unsavory habits has become the stuff legends are made of. With growing visitor numbers to the Amazon basin, the question of whether the animal poses a threat to the many travelers to the region arises. Scientific literature was identified by searching MEDLINE, ScienceDirect, ProQuest, and Google Scholar. The reference lists of all obtained sources served to refine the search, including the original historical writings where obtainable. Nonscientific material was discovered through extensive web searches. First, the current popular understanding of the fish and its interaction with humans are presented followed by an overview of the historical literature on which this understanding is based. Next, the fish and its supposed attraction to humans are introduced. Finally, this review queries the evidence current medical advice utilizes for the prevention of attacks and the treatment of unfortunate hosts. Until evidence of the fish's threat to humans is forthcoming, there appears to be no need for considering the candiru in health advice for travelers to the Amazon. © 2013 International Society of Travel Medicine.
Contact Allergy: A Review of Current Problems from a Clinical Perspective.
Uter, Wolfgang; Werfel, Thomas; White, Ian R; Johansen, Jeanne D
2018-05-29
Contact allergy is common, affecting 27% of the general population in Europe. Original publications, including case reports, published since 2016 (inclusive) were identified with the aim of collating a full review of current problems in the field. To this end, a literature search employing methods of systematic reviewing was performed in the Medline® and Web of Science™ databases on 28 January 2018, using the search terms ("contact sensitization" or "contact allergy"). Of 446 non-duplicate publications identified by the above search, 147 were excluded based on scrutiny of title, abstract and key words. Of the remaining 299 examined in full text, 291 were deemed appropriate for inclusion, and the main findings were summarised in topic sections. In conclusion, diverse sources of exposure to chemicals of widely differing types and structures continue to induce sensitisation in man and may result in allergic contact dermatitis. Many of the chemicals are "evergreen" but others are "newcomers". Vigilance and proper investigation (patch testing) are required to detect and inform of the presence of these haptens to which our populations remain exposed.
ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function
Moraes, Walas Jhony Lopes; Rodrigues, Thiago de Souza; Bartholomeu, Daniella Castanheira
2015-01-01
Repetitive element sequences are adjacent, repeating patterns, also called motifs, and can be of different lengths; repetitions can involve their exact or approximate copies. They have been widely used as molecular markers in population biology. Given the sizes of sequenced genomes, various bioinformatics tools have been developed for the extraction of repetitive elements from DNA sequences. However, currently available tools do not provide options for identifying repetitive elements in both the genome and the proteome, displaying a user-friendly web interface, and performing exhaustive searches. ProGeRF is a web site for extracting repetitive regions from genome and proteome sequences. It was designed to be efficient, fast, accurate, and above all user-friendly, allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which users can download the source code, and as a web tool. It was developed using a hash table approach to extract perfect and imperfect repetitive regions from a (multi)FASTA file in linear time. PMID:25811026
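The hash-table strategy described here (index the positions of every fixed-length motif in one linear scan, then keep the motifs seen more than once) can be sketched as follows; the function name and parameters are illustrative, not ProGeRF's actual interface:

```python
from collections import defaultdict

def find_perfect_repeats(seq, motif_len, min_count=2):
    """Hash every motif_len-mer to its start positions in a single pass,
    then report motifs occurring at least min_count times."""
    table = defaultdict(list)
    for i in range(len(seq) - motif_len + 1):
        table[seq[i:i + motif_len]].append(i)
    return {motif: pos for motif, pos in table.items() if len(pos) >= min_count}

# e.g. find_perfect_repeats("ACGTACGTTT", 4) reports ACGT at positions 0 and 4
```

Imperfect (approximate) repeats need an extra tolerance step when comparing candidate occurrences, but the hash index above is what keeps the overall scan linear in the sequence length.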
Yang, Guo-Liang; Lim, C C Tchoyoson
2006-08-01
Radiology education is heavily dependent on visual images, and case-based teaching files comprising medical images can be an important tool for teaching diagnostic radiology. Currently, hardcopy film is being rapidly replaced by digital radiological images in teaching hospitals, and an electronic teaching file (ETF) library would be desirable. Furthermore, a repository of ETFs deployed on the World Wide Web has the potential for e-learning applications to benefit a larger community of learners. In this paper, we describe a Singapore National Medical Image Resource Centre (SN.MIRC) that can serve as a World Wide Web resource for teaching diagnostic radiology. On SN.MIRC, ETFs can be created using a variety of mechanisms including file upload and online form-filling, and users can search for cases using the Medical Image Resource Center (MIRC) query schema developed by the Radiological Society of North America (RSNA). The system can be improved with future enhancements, including multimedia interactive teaching files and distance learning for continuing professional development. However, significant challenges exist when exploring the potential of using the World Wide Web for radiology education.
Classifying Web Pages by Using Knowledge Bases for Entity Retrieval
NASA Astrophysics Data System (ADS)
Kiritani, Yusuke; Ma, Qiang; Yoshikawa, Masatoshi
In this paper, we propose a novel method to classify Web pages by using knowledge bases for entity search, which is a kind of typical Web search for information related to a person, location or organization. First, we map a Web page to entities according to the similarities between the page and the entities. Various methods for computing such similarity are applied. For example, we can compute the similarity between a given page and a Wikipedia article describing a certain entity. The frequency of an entity appearing in the page is another factor used in computing the similarity. Second, we construct a directed acyclic graph, named PEC graph, based on the relations among Web pages, entities, and categories, by referring to YAGO, a knowledge base built on Wikipedia and WordNet. Finally, by analyzing the PEC graph, we classify Web pages into categories. The results of some preliminary experiments validate the methods proposed in this paper.
Assessment and revision of clinical pharmacy practice internet web sites.
Edwards, Krystal L; Salvo, Marissa C; Ward, Kristina E; Attridge, Russell T; Kiser, Katie; Pinner, Nathan A; Gallegos, Patrick J; Kesteloot, Lori Lynn; Hylton, Ann; Bookstaver, P Brandon
2014-02-01
Health care professionals, trainees, and patients use the Internet extensively. Editable Web sites may contain inaccurate, incomplete, and/or outdated information that may mislead the public's perception of the topic. To evaluate the editable, online descriptions of clinical pharmacy and the clinical pharmacist and attempt to improve their accuracy. The authors identified key areas within clinical pharmacy to evaluate for accuracy and appropriateness on the Internet. Current descriptions that were reviewed on public domain Web sites included: (1) clinical pharmacy and the clinical pharmacist, (2) pharmacy education, (3) clinical pharmacy and development and provision for reimbursement, (4) clinical pharmacists and advanced specialty certifications/training opportunities, (5) pharmacists and advocacy, and (6) clinical pharmacists and interdisciplinary/interprofessional content. The authors assessed each content area to determine accuracy and prioritized the need for updating, when applicable, to achieve consistency in descriptions and relevancy. The authors found that Wikipedia, a public domain Web site that users can edit, was consistently the most common Web site produced in search results. The authors' evaluation resulted in the creation or revision of 14 Wikipedia Web pages. However, rejection of 3 proposed newly created Web pages affected the authors' ability to address identified content areas with deficiencies and/or inaccuracies. Through assessing and updating editable Web sites, the authors strengthened the online representation of clinical pharmacy in a clear, cohesive, and accurate manner. However, ongoing assessments of the Internet are continually needed to ensure accuracy and appropriateness.
Improving PHENIX search with Solr, Nutch and Drupal.
NASA Astrophysics Data System (ADS)
Morrison, Dave; Sourikova, Irina
2012-12-01
During its 20 years of R&D, construction and operation the PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC) has accumulated large amounts of proprietary collaboration data that are hosted on many servers around the world and are not open to commercial search engines for indexing and searching. The legacy search infrastructure did not scale well with the fast-growing PHENIX document base and produced results inadequate in both precision and recall. After considering the possible alternatives that would provide an aggregated, fast, full text search of a variety of data sources and file formats we decided to use Nutch [1] as a web crawler and Solr [2] as a search engine. To present XML-based Solr search results in a user-friendly format we use Drupal [3] as a web interface to Solr. We describe the experience of building a federated search for a heterogeneous collection of 10 million PHENIX documents with Nutch, Solr and Drupal.
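A Solr core of the kind described is conventionally queried through its standard /select handler with the q, rows, and wt parameters. A minimal sketch that only builds the request URL; the base URL and core name are illustrative, not PHENIX's real deployment:

```python
from urllib.parse import urlencode

def solr_select_url(base, query, rows=10, fmt="json"):
    """Build a standard Solr /select request URL.
    'base' points at a hypothetical core; only q/rows/wt are set here."""
    params = urlencode({"q": query, "rows": rows, "wt": fmt})
    return f"{base}/select?{params}"

# A crawler such as Nutch fills the index; a front end such as Drupal
# would issue requests like this one and render the returned results.
url = solr_select_url("http://localhost:8983/solr/phenix", "title:calorimeter")
```

In a deployment like the one described, Drupal's role is essentially to translate user input into such requests and to reformat Solr's XML or JSON responses into themed pages.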
Exploring Gendered Notions: Gender, Job Hunting and Web Searches
NASA Astrophysics Data System (ADS)
Martey, R. M.
Based on analysis of a series of interviews, this chapter suggests that in looking for jobs online, women confront gendered notions of the Internet as well as gendered notions of the jobs themselves. It argues that the social and cultural contexts of both the search tools and the search tasks should be considered in exploring how Web-based technologies serve women in a job search. For these women, the opportunities and limitations of online job-search tools were intimately related to their personal and social needs, especially needs for part-time work, maternity benefits, and career advancement. Although job-seeking services such as Monster.com were used frequently by most of these women, search services did not completely fulfill all their informational needs, and became an — often frustrating — initial starting point for a job search rather than an end-point.
Variability of patient spine education by Internet search engine.
Ghobrial, George M; Mehdi, Angud; Maltenfort, Mitchell; Sharan, Ashwini D; Harrop, James S
2014-03-01
Patients are increasingly reliant upon the Internet as a primary source of medical information. The educational experience varies by search engine, search term, and changes daily. There are no tools for critical evaluation of spinal surgery websites. To highlight the variability between common search engines for the same search terms. To detect bias, by prevalence of specific kinds of websites for certain spinal disorders. To demonstrate a simple scoring system for spinal disorder websites that patients can use to maximize the quality of information exposed to the patient. Ten common search terms were used to query three of the most common search engines. The top fifty results of each query were tabulated. A negative binomial regression was performed to highlight the variation across each search engine. Google was more likely than the Bing and Yahoo search engines to return hospital ads (P=0.002) and more likely to return scholarly sites of peer-reviewed literature (P=0.003). Educational web sites, surgical group sites, and online web communities had a significantly higher likelihood of returning on any search, regardless of search engine or search string (P=0.007). Likewise, professional websites, including hospital run, industry sponsored, legal, and peer-reviewed web pages were less likely to be found on a search overall, regardless of engine and search string (P=0.078). The Internet is a rapidly growing body of medical information that can serve as a useful tool for patient education. High quality information is readily available, provided that the patient uses a consistent, focused metric for evaluating online spine surgery information, as there is a clear variability in the way search engines present information to the patient. Published by Elsevier B.V.
Evaluation of Federated Searching Options for the School Library
ERIC Educational Resources Information Center
Abercrombie, Sarah E.
2008-01-01
Three hosted federated search tools, Follett One Search, Gale PowerSearch Plus, and WebFeat Express, were configured and implemented in a school library. Databases from five vendors and the OPAC were systematically searched. Federated search results were compared with each other and to the results of the same searches in the database's native…
Web Searching: A Process-Oriented Experimental Study of Three Interactive Search Paradigms.
ERIC Educational Resources Information Center
Dennis, Simon; Bruza, Peter; McArthur, Robert
2002-01-01
Compares search effectiveness when using query-based Internet search via the Google search engine, directory-based search via Yahoo, and phrase-based query reformulation-assisted search via the Hyperindex browser by means of a controlled, user-based experimental study of undergraduates at the University of Queensland. Discusses cognitive load,…
Mahroum, Naim; Adawi, Mohammad; Sharif, Kassem; Waknin, Roy; Mahagna, Hussein; Bisharat, Bishara; Mahamid, Mahmud; Abu-Much, Arsalan; Amital, Howard; Luigi Bragazzi, Nicola; Watad, Abdulla
2018-01-01
The recent outbreak of Chikungunya virus in Italy represents a serious public health concern, which is attracting media coverage and generating public interest in terms of Internet searches and social media interactions. Here, we sought to assess Chikungunya-related digital behavior and the interplay between epidemiological figures and novel data-stream traffic. Reaction to the recent outbreak was analyzed in terms of Google Trends, Google News and Twitter traffic, Wikipedia visits and edits, and PubMed articles, using structural equation modelling. A total of 233,678 page-views and 150 edits on the Italian Wikipedia page, 3,702 tweets, 149 scholarly articles, and 3,073 news articles were retrieved. The relationship between overall Chikungunya cases, as well as autochthonous cases, and tweet production was found to be fully mediated by Chikungunya-related web searches. However, in the allochthonous/imported cases model, tweet production was not found to be significantly mediated by epidemiological figures, with web searches still significantly mediating tweet production. Inconsistent relationships were detected in mediation models involving Wikipedia usage as a mediator variable. Similarly, the effect between news consumption and tweet production was suppressed by Wikipedia usage. A further inconsistent mediation was found in the case of the effect between Wikipedia usage and tweet production, with web searches as a mediator variable. When adjusting for the Internet penetration index, similar findings were obtained, with the important exception that in the adjusted model the relationship between Google News and Twitter was found to be partially mediated by Wikipedia usage. Furthermore, the link between Wikipedia usage and PubMed/MEDLINE was fully mediated by Google News, differently from what was found in the unadjusted model. In conclusion, a significant public reaction to the current Chikungunya outbreak was documented.
Health authorities should be aware of this, recognizing the role of new technologies in collecting public concerns and replying to them, disseminating awareness, and avoiding misleading information.
Providing Multi-Page Data Extraction Services with XWRAPComposer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ling; Zhang, Jianjun; Han, Wei
2008-04-30
Dynamic Web data sources – sometimes known collectively as the Deep Web – increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed that of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DYNABOT, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DYNABOT has three unique characteristics. First, DYNABOT utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DYNABOT employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DYNABOT incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
Effectiveness of off-line and web-based promotion of health information web sites.
Jones, Craig E; Pinnock, Carole B
2002-01-01
The relative effectiveness of off-line and web-based promotional activities in increasing the use of health information web sites by target audiences was compared. Visitor sessions were classified according to their method of arrival at the site (referral) as external web site, search engine, or "no referrer" (i.e., visitor arriving at the site by inputting URL or using bookmarks). The number of Australian visitor sessions correlated with no referrer referrals but not web site or search-engine referrals. Results showed that the targeted consumer group is more likely to access the web site as a result of off-line promotional activities. The properties of target audiences likely to influence the effectiveness of off-line versus on-line promotional strategies include the size of the Internet-using population of the target audience, their proficiency in the use of the Internet, and the increase in effectiveness of off-line promotional activities when applied to locally defined target audiences.
Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches.
Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole
2015-01-01
Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important as shown by the long delay and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27% and 59% and 32%, 18%, 34% and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other 3 search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of Doctor's dilemma questions. Advances in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise.
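recall@k as used above is simply the fraction of cases whose correct diagnosis appears among the top k returned results. A minimal sketch, with made-up case data (the study's actual 56 cases are not reproduced here):

```python
def recall_at_k(ranked_results, correct, k):
    """Fraction of cases whose correct answer appears in the top k results.
    ranked_results[i] is the ranked candidate list for case i."""
    hits = sum(1 for ranks, answer in zip(ranked_results, correct)
               if answer in ranks[:k])
    return hits / len(correct)

# Three illustrative cases, each a ranked list of candidate diagnoses.
cases = [["lupus", "sarcoidosis"], ["marfan"], ["fabry", "gaucher", "pompe"]]
answers = ["sarcoidosis", "ehlers-danlos", "pompe"]
# recall_at_k(cases, answers, 2) -> 1/3 (only case 1 is a top-2 hit)
```

By construction recall@k is non-decreasing in k, which is why the paper's recall@20 figures are uniformly at least as high as the recall@10 figures.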
White, Ryen W; Horvitz, Eric
2017-03-01
A statistical model that predicts the appearance of strong evidence of a lung carcinoma diagnosis via analysis of large-scale anonymized logs of web search queries from millions of people across the United States. To evaluate the feasibility of screening patients at risk of lung carcinoma via analysis of signals from online search activity. We identified people who issue special queries that provide strong evidence of a recent diagnosis of lung carcinoma. We then considered patterns of symptoms expressed as searches about concerning symptoms over several months prior to the appearance of the landmark web queries. We built statistical classifiers that predict the future appearance of landmark queries based on the search log signals. This was a retrospective log analysis of the online activity of millions of web searchers seeking health-related information online. Of web searchers who queried for symptoms related to lung carcinoma, some (n = 5443 of 4 813 985) later issued queries that provide strong evidence of recent clinical diagnosis of lung carcinoma and are regarded as positive cases in our analysis. Additional evidence on the reliability of these queries as representing clinical diagnoses is based on the significant increase in follow-on searches for treatments and medications for these searchers and on the correlation between lung carcinoma incidence rates and our log-based statistics. The remaining symptom searchers (n = 4 808 542) are regarded as negative cases. Performance of the statistical model for early detection from online search behavior, for different lead times, different sets of signals, and different cohorts of searchers stratified by potential risk. 
The statistical classifier predicting the future appearance of landmark web queries based on search log signals identified searchers who later input queries consistent with a lung carcinoma diagnosis, with a true-positive rate ranging from 3% to 57% for false-positive rates ranging from 0.00001 to 0.001, respectively. The methods can be used to identify people at highest risk up to a year in advance of the inferred diagnosis time. The 5 factors associated with the highest relative risk (RR) were evidence of family history (RR = 7.548; 95% CI, 3.937-14.470), age (RR = 3.558; 95% CI, 3.357-3.772), radon (RR = 2.529; 95% CI, 1.137-5.624), primary location (RR = 2.463; 95% CI, 1.364-4.446), and occupation (RR = 1.969; 95% CI, 1.143-3.391). Evidence of smoking (RR = 1.646; 95% CI, 1.032-2.260) was important but not top-ranked, which was due to the difficulty of identifying smoking history from search terms. Pattern recognition based on data drawn from large-scale web search queries holds opportunity for identifying risk factors and frames new directions with early detection of lung carcinoma.
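Relative risks with 95% confidence intervals of the kind quoted above are conventionally computed from a 2x2 table via the log-normal approximation. A minimal sketch with made-up counts; the paper's own RR estimates come from its log-based statistical model, so this shows only the textbook calculation:

```python
from math import exp, log, sqrt

def relative_risk(a, b, c, d):
    """RR and 95% CI from a 2x2 table:
    exposed group: a events, b non-events; unexposed: c events, d non-events.
    CI uses the standard log-normal approximation."""
    rr = (a / (a + b)) / (c / (c + d))
    se = sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Illustrative counts only: 10/100 events among exposed vs 5/100 unexposed
rr, (lo, hi) = relative_risk(10, 90, 5, 95)  # rr == 2.0
```

An interval that excludes 1.0, as in all five top-ranked factors reported above, is what marks the factor as statistically associated with the outcome.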
Du, Hongru; Zhao, Yannan; Wu, Rongwei; Zhang, Xiaolei
2017-01-01
“The Belt and Road” initiative has been expected to facilitate interactions among numerous city centers. This initiative would generate a number of centers, both economic and political, which would facilitate greater interaction. To explore how information flows are merged and the specific opportunities that may be offered, Chinese cities along “the Belt and Road” are selected for a case study. Furthermore, urban networks in cyberspace have been characterized by their infrastructure orientation, which implies that there is a relative dearth of studies focusing on the investigation of urban hierarchies by capturing information flows between Chinese cities along “the Belt and Road”. This paper employs Baidu, the main web search engine in China, to examine urban hierarchies. The results show that urban networks become more balanced, shifting from a polycentric to a homogenized pattern. Furthermore, cities in networks tend to have both a hierarchical system and a spatial concentration primarily in regions such as Beijing-Tianjin-Hebei, Yangtze River Delta and the Pearl River Delta region. Urban hierarchy based on web search activity does not follow the existing hierarchical system based on geospatial and economic development in all cases. Moreover, urban networks, under the framework of “the Belt and Road”, show several significant corridors and more opportunities for more cities, particularly western cities. Furthermore, factors that may influence web search activity are explored. The results show that web search activity is significantly influenced by the economic gap, geographical proximity and administrative rank of the city. PMID:29200421
An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling.
Devi, R Suganya; Manjula, D; Siddharth, R K
2015-01-01
Web Crawling has acquired tremendous significance in recent times and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, thus making the retrieved results less applicable to the analysers. However, recently, Web Crawling solely focuses on obtaining the links of the corresponding documents. Today, there exist various algorithms and software which are used to crawl links from the web which has to be further processed for future use, thereby increasing the overload of the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. In this paper, firstly the links are crawled from the specified uniform resource locator (URL) using a modified version of Depth First Search Algorithm which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code and its metadata such as title, keywords, and description are extracted. This content is very essential for any type of analyser work to be carried on the Big Data obtained as a result of Web Crawling.
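The pipeline described above (depth-first traversal of links, then extraction of the title and meta keywords/description from each page's source) can be sketched with Python's standard-library parser. The site is given as an in-memory {url: html} map, since the paper's actual crawler and URL handling are not shown:

```python
from html.parser import HTMLParser

class PageMeta(HTMLParser):
    """Collect the <title>, keywords/description <meta> tags, and links."""
    def __init__(self):
        super().__init__()
        self.title, self.meta, self.links = "", {}, []
        self._in_title = False
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") in ("keywords", "description"):
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

def crawl(pages, start):
    """Iterative depth-first scan of a hypothetical site {url: html},
    returning (url, title, meta) tuples in visit order."""
    seen, order, stack = set(), [], [start]
    while stack:
        url = stack.pop()
        if url in seen or url not in pages:
            continue
        seen.add(url)
        p = PageMeta()
        p.feed(pages[url])
        order.append((url, p.title, p.meta))
        stack.extend(reversed(p.links))  # keep left-to-right DFS order
    return order
```

A real crawler would fetch each URL over the network and normalize relative links; the DFS bookkeeping (visited set plus explicit stack) is what yields the complete hierarchical scan the abstract describes.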
Spectroscopic data for an astronomy database
NASA Technical Reports Server (NTRS)
Parkinson, W. H.; Smith, Peter L.
1995-01-01
Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.
Of Ivory and Smurfs: Loxodontan MapReduce Experiments for Web Search
2009-11-01
i.e., index construction may involve multiple flushes to local disk and on-disk merge sorts outside of MapReduce). Once the local indexes have been...contained 198 cores, which, with current dual-processor quad-core configurations, could fit into 25 machines—a far more modest cluster with today's...significant impact on effectiveness. Our simple pruning technique was performed at query time and hence could be adapted to query-dependent
Googling endometriosis: a systematic review of information available on the Internet.
Hirsch, Martin; Aggarwal, Shivani; Barker, Claire; Davis, Colin J; Duffy, James M N
2017-05-01
The demand for health information online is increasing rapidly without clear governance. We aim to evaluate the credibility, quality, readability, and accuracy of online patient information concerning endometriosis. We searched 5 popular Internet search engines: aol.com, ask.com, bing.com, google.com, and yahoo.com. We developed a search strategy in consultation with patients with endometriosis, to identify relevant World Wide Web pages. Pages containing information related to endometriosis for women with endometriosis or the public were eligible. Two independent authors screened the search results. World Wide Web pages were evaluated using validated instruments across 3 of the 4 following domains: (1) credibility (White Paper instrument; range 0-10); (2) quality (DISCERN instrument; range 0-85); (3) readability (Flesch-Kincaid instrument; range 0-100); and (4) accuracy (assessed by prioritized criteria developed in consultation with health care professionals, researchers, and women with endometriosis based on the European Society of Human Reproduction and Embryology guidelines [range 0-30]). We summarized these data in diagrams, tables, and narrative form. We identified 750 World Wide Web pages, of which 54 were included. Over a third of Web pages did not attribute authorship and almost half the included pages did not report the sources of information or academic references. No World Wide Web page provided information assessed as being written in plain English. A minority of web pages were assessed as high quality. A single World Wide Web page provided accurate information: evidentlycochrane.net. Available information was, in general, skewed toward the diagnosis of endometriosis. There were 16 credible World Wide Web pages; however, content limitations were infrequently discussed. No World Wide Web page scored highly across all 4 domains.
In the unlikely event that a World Wide Web page reports high-quality, accurate, and credible health information it is typically challenging for a lay audience to comprehend. Health care professionals, and the wider community, should inform women with endometriosis of the risk of outdated, inaccurate, or even dangerous information online. The implementation of an information standard will incentivize providers of online information to establish and adhere to codes of conduct. Copyright © 2016 Elsevier Inc. All rights reserved.
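The readability instrument used above, with its 0-100 range, is the Flesch Reading Ease score: 206.835 minus 1.015 times the average sentence length minus 84.6 times the average syllables per word. A minimal sketch, using a crude vowel-group heuristic for syllables rather than a pronunciation dictionary:

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease (0-100 scale; higher = easier to read).
    Syllables are approximated by counting vowel groups per word,
    a rough heuristic that real readability tools refine."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w)))
                    for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Note the formula is not clamped: trivially short text can score above 100, while dense clinical prose of the kind the review flags can score near or below 0.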
Building a Propulsion Experiment Project Management Environment
NASA Technical Reports Server (NTRS)
Keiser, Ken; Tanner, Steve; Hatcher, Danny; Graves, Sara
2004-01-01
What do you get when you cross rocket scientists with computer geeks? It is an interactive, distributed computing web of tools and services providing a more productive environment for propulsion research and development. The Rocket Engine Advancement Program 2 (REAP2) project involves researchers at several institutions collaborating on propulsion experiments and modeling. In an effort to facilitate these collaborations among researchers at different locations and with different specializations, researchers at the Information Technology and Systems Center, University of Alabama in Huntsville, are creating a prototype web-based interactive information system in support of propulsion research. This system, to be based on experience gained in creating similar systems for NASA Earth science field experiment campaigns such as the Convection and Moisture Experiments (CAMEX), will assist in the planning and analysis of model and experiment results across REAP2 participants. The initial version of the Propulsion Experiment Project Management Environment (PExPM) consists of a controlled-access web portal facilitating the drafting and sharing of working documents and publications. Interactive tools for building and searching an annotated bibliography of publications related to REAP2 research topics have been created to help organize and maintain the results of literature searches. Work is also underway, with some initial prototypes in place, on interactive project management tools that allow project managers to schedule experiment activities, track status, and report on results. This paper describes current successes, plans, and expected challenges for this project.
Aguillo, I
2000-01-01
Although the Internet is already a valuable information resource in medicine, there are important challenges to be faced before physicians and general users will have extensive access to this information. As a result of a research effort to compile a health-related Internet directory, new tools and strategies have been developed to solve key problems derived from the explosive growth of medical information on the Net and the great concern over the quality of such critical information. The current Internet search engines lack some important capabilities. We suggest using second-generation tools (client-side based) able to deal with large quantities of data and to increase the usability of the records recovered. We tested the capabilities of these programs to solve health-related information problems, recognising six groups according to the kinds of problems addressed: Z39.50 clients, downloaders, multisearchers, tracing agents, indexers and mappers. The evaluation of the quality of health information available on the Internet could require a large amount of human effort. A possible solution may be to use quantitative indicators based on the hypertext visibility of the Web sites. The cybermetric measures are valid for quality evaluation if they are derived from indirect peer review by experts with Web pages citing the site. The hypertext links acting as citations need to be extracted from a controlled sample of quality super-sites.
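The cybermetric indicator described here treats links from quality super-sites as indirect citations, so a site's visibility reduces to counting distinct citing sites. A minimal sketch; the link-graph format, deduplication rule, and self-link exclusion are illustrative assumptions:

```python
from collections import Counter

def visibility_scores(link_graph: dict) -> Counter:
    """In-link counts: how many distinct citing sites link to each target.

    link_graph maps a citing site to the list of sites it links to,
    e.g. {"supersite-a.org": ["target1.org", "target2.org"]}.
    """
    counts = Counter()
    for source, targets in link_graph.items():
        for target in set(targets):     # one citation per citing site
            if target != source:        # self-links are not peer review
                counts[target] += 1
    return counts
```

In the paper's framing, the citing sites would be drawn from a controlled sample of quality super-sites, so the counts function as an indirect peer-review signal rather than raw popularity.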
Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.
Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan
2018-05-02
The purpose of this study was to utilize sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". Sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance. Tukey test evaluated differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories (P < 0.001). Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical," "reference," and "patient generated" had slightly positive mean sentiments. Tukey test showed differences between all category pairs except the "medical" versus "reference" comparison, with the largest mean difference (-0.13) seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall mean neutral sentiment, and Web sites categorized as "medical," "reference," and "patient generated" have significantly higher sentiment scores than Web sites in the "legal" and "news/media" categories.
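The study scores each page's polarity on a -1 to +1 scale and then compares category means. A minimal lexicon-based sketch of that pipeline; the word lists and category labels are illustrative assumptions, not the tool the authors used:

```python
from statistics import mean

# Illustrative polarity lexicons (assumed, not from the study).
POSITIVE = {"safe", "effective", "relief", "improved", "successful"}
NEGATIVE = {"lawsuit", "complication", "pain", "defective", "erosion"}

def sentiment(text: str) -> float:
    """Polarity in [-1, +1]: (pos - neg) / matched words; 0.0 if no matches."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def mean_sentiment(pages: list) -> dict:
    """Mean polarity per category; pages is a list of (category, text) pairs."""
    by_cat = {}
    for category, text in pages:
        by_cat.setdefault(category, []).append(sentiment(text))
    return {c: mean(scores) for c, scores in by_cat.items()}
```

A real analysis would follow the per-category means with a 1-way ANOVA and Tukey post hoc test, as the study did; that statistical step is omitted here to keep the sketch dependency-free.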
Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T
2016-08-01
The Internet is a common resource for applicants to hand surgery fellowships; however, the quality and accessibility of fellowship information online are unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) fellowship directory on November 16, 2014, reviewing the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of accessible programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites.
CellAtlasSearch: a scalable search engine for single cells.
Srivastava, Divyanshu; Iyer, Arvind; Kumar, Vibhor; Sengupta, Debarka
2018-05-21
Owing to the advent of high-throughput single-cell transcriptomics, the past few years have seen exponential growth in the production of gene expression data. Recently, efforts have been made by various research groups to homogenize and store single-cell expression data from a large number of studies. The true value of this ever-increasing data deluge can be unlocked by making it searchable. To this end, we propose CellAtlasSearch, a novel search architecture for high-dimensional expression data, which is massively parallel as well as lightweight, and thus highly scalable. In CellAtlasSearch, we use a Graphics Processing Unit (GPU)-friendly version of Locality Sensitive Hashing (LSH) for unmatched speedup in data processing and query. Currently, CellAtlasSearch features over 300 000 reference expression profiles, including both bulk and single-cell data. It enables the user to query individual single-cell transcriptomes and find matching samples from the database along with the necessary meta information. CellAtlasSearch aims to assist researchers and clinicians in characterizing unannotated single cells. It also facilitates noise-free, low-dimensional representation of single-cell expression profiles by projecting them on a wide variety of reference samples. The web-server is accessible at: http://www.cellatlassearch.com.
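LSH of the random-hyperplane kind maps high-dimensional expression vectors to short bit signatures so that similar cells tend to collide, which is what makes a database of this scale cheap to query. A minimal CPU sketch in Python; the dimensions, bit counts, and hashing details are illustrative assumptions, not CellAtlasSearch's GPU implementation:

```python
import random

def make_hyperplanes(dim: int, n_bits: int, seed: int = 0) -> list:
    """Random Gaussian hyperplanes defining one signed-projection hash."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_signature(vec: list, planes: list) -> int:
    """Pack the sign of each projection into an n_bits-wide bit signature."""
    sig = 0
    for i, plane in enumerate(planes):
        if sum(v * w for v, w in zip(vec, plane)) >= 0:
            sig |= 1 << i
    return sig

def hamming(a: int, b: int) -> int:
    """Hamming distance between signatures approximates angular distance."""
    return bin(a ^ b).count("1")
```

A query then reduces to computing the query cell's signature and retrieving reference profiles whose signatures lie within a small Hamming radius, which is far cheaper than comparing full expression vectors.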