Sample records for mining google web

  1. Beyond Google: The Invisible Web in the Academic Library

    ERIC Educational Resources Information Center

    Devine, Jane; Egger-Sider, Francine

    2004-01-01

    This article analyzes the concept of the Invisible Web and its implication for academic librarianship. It offers a guide to tools that can be used to mine the Invisible Web and discusses the benefits of using the Invisible Web to promote interest in library services. In addition, the article includes an expanded definition, a literature review,…

  2. Development of a Google-based search engine for data mining radiology reports.

    PubMed

    Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul

    2009-08-01

    The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. Comprising 7.9 GB of disk space, 2.9 million text reports were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
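The pipeline described above (XML-indexed reports queried through a form, with ranked results rendered for the browser) can be sketched in miniature. The XML layout and field names below are illustrative assumptions, not Google Desktop's actual response schema.

```python
import xml.etree.ElementTree as ET

# Illustrative XML search response; element names are assumptions,
# not Google Desktop's actual schema.
SAMPLE_RESPONSE = """
<results total="3">
  <result><title>CT chest: pneumothorax</title><relevance>0.91</relevance></result>
  <result><title>CXR: small apical pneumothorax</title><relevance>0.87</relevance></result>
  <result><title>CT abdomen: no acute findings</title><relevance>0.12</relevance></result>
</results>
"""

def ranked_results(xml_text, top_n=10):
    """Parse an XML search response and return titles sorted by relevance."""
    root = ET.fromstring(xml_text)
    hits = [(float(r.findtext("relevance")), r.findtext("title"))
            for r in root.findall("result")]
    hits.sort(reverse=True)  # highest relevance first
    return [title for _, title in hits[:top_n]]

print(ranked_results(SAMPLE_RESPONSE, top_n=2))
```

In the study's actual system, a PERL script performed this step server-side and the query and results were additionally logged for HIPAA compliance.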

  3. Web-Based Collaborative Writing in L2 Contexts: Methodological Insights from Text Mining

    ERIC Educational Resources Information Center

    Yim, Soobin; Warschauer, Mark

    2017-01-01

    The increasingly widespread use of social software (e.g., Wikis, Google Docs) in second language (L2) settings has brought a renewed attention to collaborative writing. Although the current methodological approaches to examining collaborative writing are valuable to understand L2 students' interactional patterns or perceived experiences, they can…

  4. Googling trends in conservation biology.

    PubMed

    Proulx, Raphaël; Massicotte, Philippe; Pépino, Marc

    2014-02-01

    Web-crawling approaches, that is, automated programs data mining the internet to obtain information about a particular process, have recently been proposed for monitoring early signs of ecosystem degradation or for establishing crop calendars. However, lack of a clear conceptual and methodological framework has prevented the development of such approaches within the field of conservation biology. Our objective was to illustrate how Google Trends, a freely accessible web-crawling engine, can be used to track changes in timing of biological processes, spatial distribution of invasive species, and level of public awareness about key conservation issues. Google Trends returns the number of internet searches that were made for a keyword in a given region of the world over a defined period. Using data retrieved online for 13 countries, we exemplify how Google Trends can be used to study the timing of biological processes, such as the seasonal recurrence of pollen release or mosquito outbreaks across a latitudinal gradient. We mapped the spatial extent of results from Google Trends for 5 invasive species in the United States and found geographic patterns in invasions that are consistent with their coarse-grained distribution at state levels. From 2004 through 2012, Google Trends showed that the level of public interest and awareness about conservation issues related to ecosystem services, biodiversity, and climate change increased, decreased, and followed both trends, respectively. Finally, to further the development of research approaches at the interface of conservation biology, collective knowledge, and environmental management, we developed an algorithm that allows the rapid retrieval of Google Trends data. © 2013 Society for Conservation Biology.
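Tracking the timing of a biological process with search-volume data, as described above, reduces to locating the seasonal peak of a monthly series. A minimal sketch on synthetic data (the volumes are invented, not real Google Trends output):

```python
# Hypothetical monthly search volumes for a keyword such as "pollen";
# the numbers are synthetic, for illustration only.
monthly_volume = {
    "Jan": 12, "Feb": 18, "Mar": 55, "Apr": 80, "May": 64, "Jun": 30,
    "Jul": 22, "Aug": 20, "Sep": 17, "Oct": 14, "Nov": 11, "Dec": 10,
}

def peak_month(series):
    """Return the month with the highest search volume."""
    return max(series, key=series.get)

print(peak_month(monthly_volume))  # -> "Apr"
```

Comparing the peak month of the same keyword across countries at different latitudes is what lets a latitudinal gradient in timing emerge.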

  5. IRIS Earthquake Browser with Integration to the GEON IDV for 3-D Visualization of Hypocenters.

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.

    2007-12-01

    We present a new-generation web-based earthquake query tool, the IRIS Earthquake Browser (IEB). The IEB combines the DMC's large set of earthquake catalogs (provided by USGS/NEIC, ISC and the ANF) with the popular Google Maps web interface. With the IEB you can quickly and easily find earthquakes in any region of the globe. Using Google's detailed satellite images, earthquakes can be easily co-located with natural geographic features such as volcanoes as well as man-made features such as commercial mines. A set of controls allows earthquakes to be filtered by time, magnitude, and depth range as well as catalog name, contributor name and magnitude type. Displayed events can be easily exported in NetCDF format into the GEON Integrated Data Viewer (IDV), where hypocenters may be visualized in three dimensions. Looking "under the hood", the IEB is based on AJAX technology and utilizes REST-style web services hosted at the IRIS DMC. The IEB is part of a broader effort at the DMC aimed at making our data holdings available via web services. The IEB is useful both educationally and as a research tool.
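The filtering controls described above (time, magnitude, and depth range) amount to a simple predicate over catalog events. A minimal sketch; the tuple layout and values are illustrative assumptions, not the IEB's actual data model:

```python
# Each event is (magnitude, depth_km, year); values are invented for illustration.
events = [
    (6.1, 10.0, 2006),
    (4.5, 35.0, 2007),
    (7.2, 600.0, 2005),
    (5.0, 8.0, 2007),
]

def filter_events(catalog, min_mag=0.0, max_depth=1000.0, year=None):
    """Apply magnitude, depth, and time filters like the IEB's controls."""
    return [e for e in catalog
            if e[0] >= min_mag and e[1] <= max_depth
            and (year is None or e[2] == year)]

print(filter_events(events, min_mag=5.0, max_depth=100.0))
```

A real deployment would push these filters into the REST query string so the server, not the browser, does the winnowing.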

  6. A Web-Based Interactive Mapping System of State Wide School Performance: Integrating Google Maps API Technology into Educational Achievement Data

    ERIC Educational Resources Information Center

    Wang, Kening; Mulvenon, Sean W.; Stegman, Charles; Anderson, Travis

    2008-01-01

    Google Maps API (Application Programming Interface), released in late June 2005 by Google, is an amazing technology that allows users to embed Google Maps in their own Web pages with JavaScript. Google Maps API has accelerated the development of new Google Maps based applications. This article reports a Web-based interactive mapping system…

  7. Towards the Geospatial Web: Media Platforms for Managing Geotagged Knowledge Repositories

    NASA Astrophysics Data System (ADS)

    Scharl, Arno

    International media have recognized the visual appeal of geo-browsers such as NASA World Wind and Google Earth, for example, when Web and television coverage on Hurricane Katrina used interactive geospatial projections to illustrate its path and the scale of destruction in August 2005. Yet these early applications only hint at the true potential of geospatial technology to build and maintain virtual communities and to revolutionize the production, distribution and consumption of media products. This chapter investigates this potential by reviewing the literature and discussing the integration of geospatial and semantic reference systems, with an emphasis on extracting geospatial context from unstructured text. A content analysis of news coverage based on a suite of text mining tools (webLyzard) sheds light on the popularity and adoption of geospatial platforms.

  8. Croatian Medical Journal citation score in Web of Science, Scopus, and Google Scholar.

    PubMed

    Sember, Marijan; Utrobicić, Ana; Petrak, Jelka

    2010-04-01

    To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006 based on data from the Web of Science, Scopus, and Google Scholar. Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlap and unique citations were compared and analyzed. Proportions were compared using the chi-squared test. Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%), and Web of Science (38%). Almost a half (49%) of articles had no citations and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found by Google Scholar (n=86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Google Scholar shares a total of 42% of the citations returned by the two other, more influential bibliographic resources. The list of unique citations in Google Scholar is predominantly journal based, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals, but Google Scholar may serve as an alternative bibliometric tool for an orientational citation insight.
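Overlap figures like those reported above are straightforward to compute once each database's citations are represented as a set. A minimal sketch with invented citation identifiers, using one common overlap definition (intersection over union), which may differ from the study's exact formula:

```python
# Hypothetical citation sets for one article in three databases;
# the identifiers are made up for illustration.
wos    = {"c1", "c2", "c3", "c4"}
scopus = {"c2", "c3", "c4", "c5"}
gs     = {"c3", "c4", "c5", "c6", "c7"}

def overlap_pct(a, b):
    """Percentage of the union of two citation sets found in both."""
    return round(100 * len(a & b) / len(a | b))

print(overlap_pct(wos, scopus))  # intersection {c2,c3,c4} over union of 5
```

Unique citations per database fall out of the same set algebra, e.g. `gs - (wos | scopus)`.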

  9. Curating the Web: Building a Google Custom Search Engine for the Arts

    ERIC Educational Resources Information Center

    Hennesy, Cody; Bowman, John

    2008-01-01

    Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…

  10. Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches.

    PubMed

    Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole

    2015-01-01

    Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important as shown by the long delay and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's Dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27% and 59% and 32%, 18%, 34% and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other 3 search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of the Doctor's Dilemma questions. Advances in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise.
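The recall@k metric used above is computed directly from ranked result lists. A minimal sketch with placeholder diagnoses:

```python
def recall_at_k(results_per_case, k):
    """Fraction of cases whose correct diagnosis appears in the top k results.

    results_per_case is a list of (ranked result list, correct answer) pairs.
    """
    hits = sum(1 for ranked, answer in results_per_case if answer in ranked[:k])
    return hits / len(results_per_case)

# Toy example with three cases; the diagnoses are placeholders.
cases = [
    (["dx_a", "dx_b", "dx_c"], "dx_b"),   # correct at rank 2
    (["dx_d", "dx_e", "dx_f"], "dx_f"),   # correct at rank 3
    (["dx_g", "dx_h", "dx_i"], "dx_z"),   # correct answer never retrieved
]

print(recall_at_k(cases, 2))  # 1 of 3 cases hit in the top 2
```

Running the same function with k=10 and k=20 over 56 such cases per search engine reproduces the style of comparison reported in the abstract.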

  11. Rare disease diagnosis: A review of web search, social media and large-scale data-mining approaches

    PubMed Central

    Svenstrup, Dan; Jørgensen, Henrik L; Winther, Ole

    2015-01-01

    Physicians and the general public are increasingly using web-based tools to find answers to medical questions. The field of rare diseases is especially challenging and important as shown by the long delay and many mistakes associated with diagnoses. In this paper we review recent initiatives on the use of web search, social media and data mining in data repositories for medical diagnosis. We compare the retrieval accuracy on 56 rare disease cases with known diagnosis for the web search tools google.com, pubmed.gov, omim.org and our own search tool findzebra.com. We give a detailed description of IBM's Watson system and make a rough comparison between findzebra.com and Watson on subsets of the Doctor's Dilemma dataset. The recall@10 and recall@20 (fraction of cases where the correct result appears in the top 10 and top 20) for the 56 cases are found to be 29%, 16%, 27% and 59% and 32%, 18%, 34% and 64%, respectively. Thus, FindZebra has a significantly (p < 0.01) higher recall than the other 3 search engines. When tested under the same conditions, Watson and FindZebra showed similar recall@10 accuracy. However, the tests were performed on different subsets of the Doctor's Dilemma questions. Advances in technology and access to high quality data have opened new possibilities for aiding the diagnostic process. Specialized search engines, data mining tools and social media are some of the areas that hold promise. PMID:26442199

  12. Differences in the quality of information on the internet about lung cancer between the United States and Japan.

    PubMed

    Goto, Yasushi; Sekine, Ikuo; Sekiguchi, Hiroshi; Yamada, Kazuhiko; Nokihara, Hiroshi; Yamamoto, Noboru; Kunitoh, Hideo; Ohe, Yuichiro; Tamura, Tomohide

    2009-07-01

    Quality of information available over the Internet has been a cause for concern. Our goal was to evaluate the quality of information available on lung cancer in the United States and Japan and assess the differences between the two. We conducted a prospective, observational Web review by searching the word "lung cancer" in Japanese and English, using Google Japan (Google-J), Google United States (Google-U), and Yahoo Japan (Yahoo-J). The first 50 Web sites displayed were evaluated from the ethical perspective and for the validity of the information. The administrator of each Web site was also investigated. Ethical policies were generally well described in the Web sites displayed by Google-U but less well so in the sites displayed by Google-J and Yahoo-J. The differences in the validity of the information available were more striking, in that 80% of the Web sites generated by Google-U described the most appropriate treatment methods, whereas less than 50% of the Web sites displayed by Google-J and Yahoo-J recommended the standard therapy, and more than 10% advertised alternative therapy. Nonprofit organizations and public institutions were the primary Web site administrators in the United States, whereas commercial or personal Web sites were more frequent in Japan. Differences in the quality of information on lung cancer available over the Internet were apparent between Japan and the United States. The reasons for such differences might be traced to the administrators of the Web sites. Nonprofit organizations and public institutions are the up-and-coming Web site administrators for relaying reliable medical information.

  13. Google Scholar Goes to School: The Presence of Google Scholar on College and University Web Sites

    ERIC Educational Resources Information Center

    Neuhaus, Chris; Neuhaus, Ellen; Asher, Alan

    2008-01-01

    This study measured the degree of Google Scholar adoption within academia by analyzing the frequency of Google Scholar appearances on 948 campus and library Web sites, and by ascertaining the establishment of link resolution between Google Scholar and library resources. Results indicate a positive correlation between the implementation of Google…

  14. Characterization of topological structure on complex networks.

    PubMed

    Nakamura, Ikuo

    2003-10-01

    Characterizing the topological structure of complex networks is a significant problem, especially from the viewpoint of data mining on the World Wide Web. "Page rank", used in the commercial search engine Google, is such a measure of authority to rank all the nodes matching a given query. We have investigated the page-rank distribution of the real Web and a growing network model, both of which have directed links and exhibit power-law distributions of in-degree (the number of incoming links to a node) and out-degree (the number of outgoing links from a node). We find a concentration of page rank on a small number of nodes and low page rank in high-degree regimes in the real Web, which can be explained by topological properties of the network, e.g., network motifs, and connectivities of nearest neighbors.
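The page-rank measure discussed above is typically computed by power iteration over the link graph. A minimal self-contained sketch (uniform teleportation; dangling nodes spread their rank evenly):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank for a directed graph given as
    {node: [nodes it links to]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            targets = links[u]
            if targets:
                share = damping * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:  # dangling node: distribute its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Tiny web: "hub" is linked to by everyone, so rank concentrates on it,
# mirroring the concentration the abstract observes in the real Web.
web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # -> "hub"
```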

  15. Croatian Medical Journal Citation Score in Web of Science, Scopus, and Google Scholar

    PubMed Central

    Šember, Marijan; Utrobičić, Ana; Petrak, Jelka

    2010-01-01

    Aim To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006 based on data from the Web of Science, Scopus, and Google Scholar. Methods Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlap and unique citations were compared and analyzed. Proportions were compared using χ2-test. Results Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%), and Web of Science (38%). Almost a half (49%) of articles had no citations and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found by Google Scholar (n = 86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Conclusion Google Scholar shares a total of 42% citations returned by two others, more influential, bibliographic resources. The list of unique citations in Google Scholar is predominantly journal based, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals but Google Scholar may serve as an alternative bibliometric tool for an orientational citation insight. PMID:20401951

  16. How to Optimize Your Web Site

    ERIC Educational Resources Information Center

    Dysart, Joe

    2008-01-01

    Given Google's growing market share--69% of all searches by the close of 2007--it's absolutely critical for any school on the Web to ensure its site is Google-friendly. A Google-optimized site ensures that students and parents can quickly find one's district on the Web even if they don't know the address. Plus, good search optimization simply…

  17. Flipping the Online Classroom with Web 2.0: The Asynchronous Workshop

    ERIC Educational Resources Information Center

    Cummings, Lance

    2016-01-01

    This article examines how Web 2.0 technologies can be used to "flip" the online classroom by creating asynchronous workshops in social environments where immediacy and social presence can be maximized. Using experience teaching several communication and writing classes in Google Apps (Google+, Google Hangouts, Google Drive, etc.), I…

  18. Taking advantage of Google's Web-based applications and services.

    PubMed

    Brigham, Tara J

    2014-01-01

    Google is a company that is constantly expanding and growing its services and products. While most librarians possess a "love/hate" relationship with Google, there are a number of reasons you should consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column addresses some of the issues users should be aware of before signing up for Google's tools, describes some of Google's Web applications and services, and explains how they can be useful to librarians in health care.

  19. Radon

    MedlinePlus

    Page last reviewed: March 3, 2011.

  20. Mercury

    MedlinePlus

    Page last reviewed: February 12, 2013.

  21. Asbestos

    MedlinePlus

    Page last reviewed: March 3, 2011.

  22. Beryllium Toxicity

    MedlinePlus

    Beryllium Toxicity Patient Education Care Instruction Sheet. Page last reviewed: May 23, 2008.

  23. ToxFAQs

    MedlinePlus

    Page last reviewed: June 24, 2014.

  24. Side by Side: What a Comparative Usability Study Told Us about a Web Site Redesign

    ERIC Educational Resources Information Center

    Dougan, Kirstin; Fulton, Camilla

    2009-01-01

    Library Web sites must compete against easy-to-use sites, such as Google Scholar, Google Books, and Wikipedia, for students' time and attention. Library Web sites must therefore be designed with aesthetics and user perceptions at the forefront. The Music and Performing Arts Library at Urbana-Champaign's Web site was overcrowded and in much need of…

  25. Total Petroleum Hydrocarbons (TPH): ToxFAQs

    MedlinePlus

    Page last reviewed: February 4, 2014.

  26. ToxGuides: Quick Reference Pocket Guide for Toxicological Profiles

    MedlinePlus

    Page last reviewed: January 21, 2015.

  27. Open data mining for Taiwan's dengue epidemic.

    PubMed

    Wu, ChienHsing; Kao, Shu-Chen; Shih, Chia-Hung; Kan, Meng-Hsuan

    2018-07-01

    By using a quantitative approach, this study examines the applicability of data mining technique to discover knowledge from open data related to Taiwan's dengue epidemic. We compare results when Google trend data are included or excluded. Data sources are government open data, climate data, and Google trend data. Research findings from analysis of 70,914 cases are obtained. Location and time (month) in open data show the highest classification power followed by climate variables (temperature and humidity), whereas gender and age show the lowest values. Both prediction accuracy and simplicity decrease when Google trends are considered (respectively 0.94 and 0.37, compared to 0.96 and 0.46). The article demonstrates the value of open data mining in the context of public health care. Copyright © 2018 Elsevier B.V. All rights reserved.
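The "classification power" of a single feature, as discussed above, can be approximated by the accuracy of predicting the label from that feature's per-value majority class. A minimal sketch on synthetic records; the data are invented, not Taiwan's actual dengue cases, and this measure is an assumption, not necessarily the study's exact one:

```python
from collections import Counter, defaultdict

# Toy case records (location, month, label); values are synthetic.
records = [
    ("south", "Sep", 1), ("south", "Sep", 1), ("south", "Oct", 1),
    ("north", "Feb", 0), ("north", "Mar", 0), ("south", "Feb", 0),
]

def feature_accuracy(data, idx):
    """Accuracy of predicting the label from one feature's majority class."""
    by_value = defaultdict(list)
    for row in data:
        by_value[row[idx]].append(row[-1])
    correct = sum(Counter(labels).most_common(1)[0][1]
                  for labels in by_value.values())
    return correct / len(data)

print(feature_accuracy(records, 0))  # location
print(feature_accuracy(records, 1))  # month
```

Ranking features this way mirrors the abstract's finding that location and month carry more signal than gender or age.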

  28. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals.

    PubMed

    Kulkarni, Abhaya V; Aziz, Brittany; Shams, Iffat; Busse, Jason W

    2009-09-09

    Until recently, Web of Science was the only database available to track citation counts for published articles. Other databases are now available, but their relative performance has not been established. To compare the citation count profiles of articles published in general medical journals among the citation databases of Web of Science, Scopus, and Google Scholar. Cohort study of 328 articles published in JAMA, Lancet, or the New England Journal of Medicine between October 1, 1999, and March 31, 2000. Total citation counts for each article up to June 2008 were retrieved from Web of Science, Scopus, and Google Scholar. Article characteristics were analyzed in linear regression models to determine interaction with the databases. Number of citations received by an article since publication and article characteristics associated with citation in databases. Google Scholar and Scopus retrieved more citations per article with a median of 160 (interquartile range [IQR], 83 to 324) and 149 (IQR, 78 to 289), respectively, than Web of Science (median, 122; IQR, 66 to 241) (P < .001 for both comparisons). Compared with Web of Science, Scopus retrieved more citations from non-English-language sources (median, 10.2% vs 4.1%) and reviews (30.8% vs 18.2%), and fewer citations from articles (57.2% vs 70.5%), editorials (2.1% vs 5.9%), and letters (0.8% vs 2.6%) (all P < .001). On a log(10)-transformed scale, fewer citations were found in Google Scholar to articles with declared industry funding (nonstandardized regression coefficient, -0.09; 95% confidence interval [CI], -0.15 to -0.03), reporting a study of a drug or medical device (-0.05; 95% CI, -0.11 to 0.01), or with group authorship (-0.29; 95% CI, -0.35 to -0.23). In multivariable analysis, group authorship was the only characteristic that differed among the databases; Google Scholar had significantly fewer citations to group-authored articles (-0.30; 95% CI, -0.36 to -0.23) compared with Web of Science. Web of Science, Scopus, and Google Scholar produced quantitatively and qualitatively different citation counts for articles published in 3 general medical journals.
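Summaries like "median 160 (IQR, 83 to 324)" come from order statistics over the per-article citation counts. A minimal sketch with invented counts (note that quartile conventions vary; Python's default "exclusive" method may not match the study's software exactly):

```python
import statistics

def median_iqr(counts):
    """Median and (Q1, Q3) bounds, as reported per database."""
    qs = statistics.quantiles(counts, n=4)  # default 'exclusive' method
    return statistics.median(counts), (qs[0], qs[2])

# Hypothetical per-article citation counts from one database.
counts = [66, 83, 122, 149, 160, 241, 289, 324]
med, (q1, q3) = median_iqr(counts)
print(med, q1, q3)
```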

  29. Going beyond Google for Faster and Smarter Web Searching

    ERIC Educational Resources Information Center

    Vine, Rita

    2004-01-01

    With more than 4 billion web pages in its database, Google is suitable for many different kinds of searches. When you know what you are looking for, Google can be a pretty good first choice, as long as you want to search a word pattern that can be expected to appear on any results pages. The problem starts when you don't know exactly what you're…

  30. The quality of patient-orientated Internet information on oral lichen planus: a pilot study.

    PubMed

    López-Jornet, Pía; Camacho-Alonso, Fabio

    2010-10-01

    This study examines the accessibility and quality of Web pages related to oral lichen planus. Sites were identified using two search engines (Google and Yahoo!) and the search terms 'oral lichen planus' and 'oral lesion lichenoid'. The first 100 sites in each search were visited and classified. The web sites were evaluated for content quality using the validated DISCERN rating instrument, JAMA benchmarks, and the 'Health on the Net' (HON) seal. A total of 109,000 sites were recorded in Google using the search terms and 520,000 in Yahoo! A total of 19 Web pages considered relevant were examined on Google and 20 on Yahoo! As regards the JAMA benchmarks, only two pages satisfied the four criteria in Google (10%), and only three (15%) in Yahoo! As regards DISCERN, the overall quality of web site information was poor, with no site reaching the maximum score. In Google, 78.94% of sites had important deficiencies, compared with 50% in Yahoo!, the difference between the two search engines being statistically significant (P = 0.031). Only five pages (17.2%) on Google and eight (40%) on Yahoo! showed the HON code. Based on our review, doctors must assume primary responsibility for educating and counselling their patients. © 2010 Blackwell Publishing Ltd.

  31. What can Google and Wikipedia tell us about a disease? Big Data trends analysis in Systemic Lupus Erythematosus.

    PubMed

    Sciascia, Savino; Radin, Massimo

    2017-11-01

    To investigate trends of Internet search volumes linked to Systemic Lupus Erythematosus (SLE), on-going clinical trials and research developments associated with the disease, using Big Data monitoring and data mining. We performed a longitudinal analysis based on the large amount of data generated by Google Trends and scientific search tools (SCOPUS, Medline/PubMed, ClinicalTrials.gov), considering 'SLE' and 'lupus' in a 5-year web-based search. Wikipedia page views were also analysed using WikiTrends and the results were compared with the search volumes generated by Google Trends. We observed an overall higher distribution of search volumes from Google Trends in the United States, South America, Canada, South Africa, Australia and Europe (mainly Italy, United Kingdom, Spain, France, Germany), showing geographic heterogeneity in the health-related behaviour of the different populations towards SLE. By comparing the search volumes with the Wikipedia page views of both SLE and belimumab, we found a close peak trend, reflecting the knowledge translation after the approval of belimumab for the treatment of SLE. When focusing on search volumes from Google Trends, we noticed that the highest peaks were related to news headlines that involved celebrities affected by SLE, even when compared with the peak generated by the approval of belimumab. This new approach, able to investigate health information seeking, might give an estimate of the health-related demand and even of the health-related behaviour towards SLE, bringing new light to unanswered questions. Copyright © 2017 Elsevier B.V. All rights reserved.

  32. Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar

    PubMed Central

    Hartemink, Alfred E.; McBratney, Alex; Jang, Ho-Jun

    2013-01-01

    Citation metrics and h indices differ using different bibliometric databases. We compiled the number of publications, number of citations, h index and year since the first publication from 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher and the number of citations is 1.9 times higher compared to the data in the Web of Science. Scopus metrics are slightly higher than that of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than Web of Science, and the h index in Scopus is on average 1.1 times larger than Web of Science. Over time, the metrics increase in all three databases but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his/her first publication. There is a large difference between the number of citations, number of publications and the h index using the three databases. From this analysis it can be concluded that the choice of the database affects widely-used citation and evaluation metrics but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between journal’s impact factor and Google Scholar’s h5-index. The h5-index is a better measure of a journal’s citation than the 2 or 5 year window impact factor. PMID:24167778
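The h index compared across databases above has a simple definition: the largest h such that h of a researcher's papers each have at least h citations. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that the researcher has h papers with >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```

Because Google Scholar indexes more citing documents per paper, the same researcher's sorted citation list sits higher there, which is why the abstract finds the Google Scholar h index roughly 1.4 times the Web of Science value.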

  13. Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar.

    PubMed

    Minasny, Budiman; Hartemink, Alfred E; McBratney, Alex; Jang, Ho-Jun

    2013-01-01

Citation metrics and h indices differ between bibliometric databases. We compiled the number of publications, number of citations, h index and years since first publication for 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher, and the number of citations 1.9 times higher, than in the Web of Science. Scopus metrics are slightly higher than those of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than in Web of Science, and the h index in Scopus is on average 1.1 times larger. Over time, the metrics increase in all three databases, but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his or her first publication. There is a large difference between the number of citations, number of publications and the h index across the three databases. From this analysis it can be concluded that the choice of database affects widely used citation and evaluation metrics, but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between a journal's impact factor and Google Scholar's h5-index. The h5-index is a better measure of a journal's citation impact than the 2- or 5-year window impact factor.

  14. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.

    PubMed

    Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios

    2008-02-01

    The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.

  15. Using Open Web APIs in Teaching Web Mining

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  16. PhyloGeoViz: a web-based program that visualizes genetic data on maps.

    PubMed

    Tsai, Yi-Hsin E

    2011-05-01

    The first step of many population genetic studies is the simple visualization of allele frequencies on a landscape. This basic data exploration can be challenging without proprietary software, and the manual plotting of data is cumbersome and unfeasible at large sample sizes. I present an open source, web-based program that plots any kind of frequency or count data as pie charts in Google Maps (Google Inc., Mountain View, CA). Pie polygons are then exportable to Google Earth (Google Inc.), a free Geographic Information Systems platform. Import of genetic data into Google Earth allows phylogeographers access to a wealth of spatial information layers integral to forming hypotheses and understanding patterns in the data. © 2010 Blackwell Publishing Ltd.
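Plotting a frequency pie chart at a sampling site amounts to generating slice polygons in latitude/longitude for export to KML. A sketch of that geometry (hypothetical function names, not PhyloGeoViz's actual implementation; it uses a small-distance equirectangular approximation that is adequate for map symbols):

```python
import math

def pie_slice(lat, lon, radius_km, start_frac, end_frac, steps=16):
    """Approximate one pie-slice polygon (as lat/lon vertices) centred on
    a sample site. Fractions are of the full circle; 111.32 km is the
    approximate length of one degree of latitude."""
    verts = [(lat, lon)]
    for i in range(steps + 1):
        ang = 2 * math.pi * (start_frac + (end_frac - start_frac) * i / steps)
        dlat = (radius_km / 111.32) * math.cos(ang)
        dlon = (radius_km / (111.32 * math.cos(math.radians(lat)))) * math.sin(ang)
        verts.append((lat + dlat, lon + dlon))
    verts.append((lat, lon))  # close the polygon back at the centre
    return verts

def pie_chart(lat, lon, radius_km, freqs):
    """One slice per allele frequency; `freqs` should sum to 1."""
    slices, start = [], 0.0
    for f in freqs:
        slices.append(pie_slice(lat, lon, radius_km, start, start + f))
        start += f
    return slices
```

Each returned vertex list can be written into a KML `<Polygon>` for viewing in Google Earth.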

  17. The Adversarial Route Analysis Tool: A Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casson, William H. Jr.

    2012-08-02

The Adversarial Route Analysis Tool is a web-based geospatial application similar to Google Maps: a kind of Google Maps for adversaries. It helps the U.S. government plan operations by predicting where an adversary might be. It is easily accessible and maintainable, and simple to use without much training.

  18. The Privilege of Ranking: Google Plays Ball.

    ERIC Educational Resources Information Center

    Wiggins, Richard

    2003-01-01

    Discussion of ranking systems used in various settings, including college football and academic admissions, focuses on the Google search engine. Explains the PageRank mathematical formula that scores Web pages by connecting the number of links; limitations, including authenticity and accuracy of ranked Web pages; relevancy; adjusting algorithms;…
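The PageRank formula the article explains can be sketched with a few lines of power iteration. An illustrative toy example (the three-page web graph is invented; production implementations add more careful dangling-node and convergence handling):

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration sketch of the PageRank formula:
    PR(p) = (1-d)/N + d * sum(PR(q)/outdegree(q) for q linking to p).
    `links` maps each page to the list of pages it links to."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for q, outs in links.items():
            if outs:
                share = d * pr[q] / len(outs)
                for p in outs:
                    new[p] += share
            else:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += d * pr[q] / n
        pr = new
    return pr

# Toy web: page c is linked to by both a and b, so it ranks highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
```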

  19. Through the Google Goggles: Sociopolitical Bias in Search Engine Design

    NASA Astrophysics Data System (ADS)

    Diaz, A.

    Search engines like Google are essential to navigating the Web's endless supply of news, political information, and citizen discourse. The mechanisms and conditions under which search results are selected should therefore be of considerable interest to media scholars, political theorists, and citizens alike. In this chapter, I adopt a "deliberative" ideal for search engines and examine whether Google exhibits the "same old" media biases of mainstreaming, hypercommercialism, and industry consolidation. In the end, serious objections to Google are raised: Google may favor popularity over richness; it provides advertising that competes directly with "editorial" content; it so overwhelmingly dominates the industry that users seldom get a second opinion, and this is unlikely to change. Ultimately, however, the results of this analysis may speak less about Google than about contradictions in the deliberative ideal and the so-called "inherently democratic" nature of the Web.

  20. Finding research information on the web: how to make the most of Google and other free search tools.

    PubMed

    Blakeman, Karen

    2013-01-01

The Internet and the World Wide Web have had a major impact on the accessibility of research information. The move towards open access and the development of institutional repositories have resulted in increasing amounts of information being made available free of charge. Many of these resources are not included in conventional subscription databases, and Google is not always the best way to ensure that one is picking up all relevant material on a topic. This article looks at how Google's search engine works and how to use Google more effectively for identifying research information, considers alternatives to Google, and reviews some of the specialist tools that have evolved to cope with the diverse forms of information that now exist in electronic form.

  1. Confessions of a Librarian or: How I Learned to Stop Worrying and Love Google

    ERIC Educational Resources Information Center

    Gunnels, Claire B.; Sisson, Amy

    2009-01-01

    Have you ever stopped to think about life before Google? We will make the argument that Google is the first manifestation of Web 2.0, of the power and promise of social networking and the ubiquitous wiki. We will discuss the positive influence of Google and how Google and other social networking tools afford librarians leading-edge technologies…

  2. MaRGEE: Move and Rotate Google Earth Elements

    NASA Astrophysics Data System (ADS)

    Dordevic, Mladen M.; Whitmeyer, Steven J.

    2015-12-01

    Google Earth is recognized as a highly effective visualization tool for geospatial information. However, there remain serious limitations that have hindered its acceptance as a tool for research and education in the geosciences. One significant limitation is the inability to translate or rotate geometrical elements on the Google Earth virtual globe. Here we present a new JavaScript web application to "Move and Rotate Google Earth Elements" (MaRGEE). MaRGEE includes tools to simplify, translate, and rotate elements, add intermediate steps to a transposition, and batch process multiple transpositions. The transposition algorithm uses spherical geometry calculations, such as the haversine formula, to accurately reposition groups of points, paths, and polygons on the Google Earth globe without distortion. Due to the imminent deprecation of the Google Earth API and browser plugin, MaRGEE uses a Google Maps interface to facilitate and illustrate the transpositions. However, the inherent spatial distortions that result from the Google Maps Web Mercator projection are not apparent once the transposed elements are saved as a KML file and opened in Google Earth. Potential applications of the MaRGEE toolkit include tectonic reconstructions, the movements of glaciers or thrust sheets, and time-based animations of other large- and small-scale geologic processes.
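The spherical translation the abstract describes can be sketched from the haversine formula and its companion destination-point formula. A minimal reconstruction (hypothetical function names, not the MaRGEE source): every vertex of a path is moved the same great-circle distance along the same bearing, so shapes are repositioned without the distortion a flat-map translation would introduce.

```python
import math

R = 6371.0  # mean Earth radius, km

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def move_point(lat, lon, bearing_deg, dist_km):
    """Destination point: translate one vertex along a great circle."""
    b = math.radians(bearing_deg)
    p1, l1 = math.radians(lat), math.radians(lon)
    d = dist_km / R  # angular distance
    p2 = math.asin(math.sin(p1) * math.cos(d)
                   + math.cos(p1) * math.sin(d) * math.cos(b))
    l2 = l1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(p1),
                         math.cos(d) - math.sin(p1) * math.sin(p2))
    return math.degrees(p2), (math.degrees(l2) + 540) % 360 - 180

def translate_path(path, bearing_deg, dist_km):
    """Apply the same spherical translation to every vertex of a path."""
    return [move_point(la, lo, bearing_deg, dist_km) for la, lo in path]
```

Moving a point 111.19 km due east along the equator shifts it almost exactly one degree of longitude, which is a handy sanity check.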

  3. Google Analytics: Single Page Traffic Reports

    EPA Pesticide Factsheets

    These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.

  4. Web mining in soft computing framework: relevance, state of the art and future directions.

    PubMed

    Pal, S K; Talwar, V; Mitra, P

    2002-01-01

The paper summarizes the different characteristics of Web data, the basic components of Web mining and its different types, and the current state of the art. The reason for considering Web mining as a field separate from data mining is explained. The limitations of some existing Web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs)) is highlighted. A survey of the existing literature on "soft Web mining" is provided, along with the commercially available systems. The prospective areas of Web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing "soft Web mining" systems is explained. An extensive bibliography is also provided.

  5. Scholars Are Wary of Deal on Google's Book Search

    ERIC Educational Resources Information Center

    Howard, Jennifer

    2009-01-01

    Google's Book Search program mines the holdings of research libraries for texts to digitize. Some of that material is out of copyright; a lot of it isn't. A lawsuit came about when some authors and publishers decided that Google's project exceeded the bounds of fair use. As part of a settlement, the parties have proposed creating a Book Rights…

  6. 100 Colleges Sign Up with Google to Speed Access to Library Resources

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2005-01-01

    More than 100 colleges and universities have arranged to give people using the Google Scholar search engine on their campuses more-direct access to library materials. Google Scholar is a free tool that searches scholarly materials on the Web and in academic databases. The new arrangements essentially let Google know which online databases the…

  7. Applying Web Usage Mining for Personalizing Hyperlinks in Web-Based Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Romero, Cristobal; Ventura, Sebastian; Zafra, Amelia; de Bra, Paul

    2009-01-01

    Nowadays, the application of Web mining techniques in e-learning and Web-based adaptive educational systems is increasing exponentially. In this paper, we propose an advanced architecture for a personalization system to facilitate Web mining. A specific Web mining tool is developed and a recommender engine is integrated into the AHA! system in…

  8. Googling DNA sequences on the World Wide Web.

    PubMed

    Hajibabaei, Mehrdad; Singer, Gregory A C

    2009-11-10

New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm, and implemented it for searching species-specific genomic sequences (DNA barcodes), using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm based on dividing a sequence library (DNA barcodes) and the query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages, with pre- and post-processing software providing customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data, and provides a robust and efficient solution for sequence search on the web. Integrating our search method with large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
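The word-division idea is the core of the method: chop every barcode into fixed-length words, let a text search engine index them, and rank barcodes by shared words. A toy alignment-free sketch (invented sequences and function names; the real system delegates indexing and querying to Google Desktop Search rather than a Python dictionary):

```python
def words(seq, k=8):
    """Non-overlapping k-letter words, mirroring the paper's idea of
    dividing sequences into words a text search engine can index."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(0, len(seq) - k + 1, k)}

def build_index(library, k=8):
    """Inverted index: word -> set of barcode ids containing it."""
    index = {}
    for name, seq in library.items():
        for w in words(seq, k):
            index.setdefault(w, set()).add(name)
    return index

def search(query, index, k=8):
    """Rank barcodes by how many query words they share
    (alignment-free, like ranking documents by keyword hits)."""
    hits = {}
    for w in words(query, k):
        for name in index.get(w, ()):
            hits[name] = hits.get(name, 0) + 1
    return sorted(hits.items(), key=lambda kv: -kv[1])

# Hypothetical two-species barcode library:
library = {"sp1": "ACGTACGTAAAATTTT", "sp2": "GGGGCCCCGGGGCCCC"}
index = build_index(library, k=4)
top = search("ACGTACGTAAAATTTT", index, k=4)
```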

  9. Utility of Web search query data in testing theoretical assumptions about mephedrone.

    PubMed

    Kapitány-Fövény, Máté; Demetrovics, Zsolt

    2017-05-01

With growing access to the Internet, people who use drugs and traffickers started to obtain information about novel psychoactive substances (NPS) via online platforms. This paper aims to analyze whether decreasing Web interest in formerly banned substances (cocaine, heroin, and MDMA) and the legislative status of mephedrone predict Web interest in this NPS. Google Trends was used to measure changes in Web interest in cocaine, heroin, MDMA, and mephedrone. Google search results for mephedrone within the same time frame were analyzed and categorized. Web interest in the classic drugs was found to be more persistent. Regarding geographical distribution, the locations of Web searches for heroin and cocaine were less centralized. The illicit status of mephedrone was a negative predictor of its Web search query rates. The connection between mephedrone-related Web search rates and the legislative status of the substance was significantly mediated by ecstasy-related Web search queries, the number of documentaries, and forum/blog entries about mephedrone. The results might support the hypothesis that mephedrone's popularity was highly correlated with its legal status and that it functioned as a potential substitute for MDMA. Google Trends was found to be a useful tool for testing theoretical assumptions about NPS. Copyright © 2017 John Wiley & Sons, Ltd.

  10. IPAT: a freely accessible software tool for analyzing multiple patent documents with inbuilt landscape visualizer.

    PubMed

    Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T

    2015-01-01

Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool that uses a text mining algorithm to extract specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and to generate various patent landscape graphs and charts. Coded in C# in Visual Studio 2010, it extracts publicly available patent information from web pages such as Google Patents and simultaneously studies technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization acts as an excellent technology assessment tool in competitive intelligence and due diligence for forecasting future R&D.

  11. Visualization of Client-Side Web Browsing and Email Activity

    DTIC Science & Technology

    2009-06-01


  12. Lecturers’ Understanding on Indexing Databases of SINTA, DOAJ, Google Scholar, SCOPUS, and Web of Science: A Study of Indonesians

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Kurniasih, Nuning; Irawan, Dasapta Erwin; Utami Sutiksno, Dian; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Simarmata, Janner; Hidayat, Rahmat; Busro; Abdullah, Dahlan; Rahim, Robbi; Abraham, Juneman

    2018-01-01

The Ministry of Research, Technology and Higher Education of Indonesia has introduced several national and international indexers of scientific works. This policy serves as a guideline for lecturers and researchers in choosing reputable publications. This study aimed to describe the level of understanding among Indonesian lecturers of indexing databases, i.e. SINTA, DOAJ, Scopus, Web of Science, and Google Scholar. The research used a descriptive design and survey method. The population comprised Indonesian lecturers and researchers. Primary data were obtained from a questionnaire completed by 316 lecturers and researchers from 33 provinces in Indonesia, recruited with a convenience sampling technique in October-November 2017. Data analysis was performed using frequency distribution tables, cross tabulation and descriptive analysis. The results showed that, on average, 66.5% of Indonesian lecturers and researchers knew about SINTA, DOAJ, Scopus, Web of Science and Google Scholar. However, 76% of them had never published in journals or proceedings indexed in Scopus.

  13. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers

    PubMed Central

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

Web spammers aim to obtain higher ranks for their web pages by including spam content that deceives search engines into including their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of page language on spam detection features and demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web-spam technique for Google's search engine. Using spam pages in Arabic as a case study, we show that, unlike for similar English pages, Google's anti-spam techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam content. PMID:27855179
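The paper's point that detection features vary by page language implies training a separate classifier per language. A toy word-level Naive Bayes sketch of that idea (the training snippets and function names are invented; the actual study uses a much richer feature set than raw words):

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(pages):
    """Tiny Naive Bayes over page words; `pages` is [(text, label)].
    Training one model per page language is the point: features that
    signal spam in English need not transfer to Arabic."""
    counts = {"spam": Counter(), "benign": Counter()}
    totals = {"spam": 0, "benign": 0}
    for text, label in pages:
        toks = tokenize(text)
        counts[label].update(toks)
        totals[label] += len(toks)
    vocab = set(counts["spam"]) | set(counts["benign"])
    return counts, totals, vocab

def classify(text, model):
    counts, totals, vocab = model
    scores = {}
    for label in counts:
        s = 0.0
        for tok in tokenize(text):
            # Laplace smoothing keeps unseen words from zeroing the score
            s += math.log((counts[label][tok] + 1) / (totals[label] + len(vocab)))
        scores[label] = s
    return max(scores, key=scores.get)

# Hypothetical two-page training corpus:
model = train([
    ("cheap pills buy now cheap pills", "spam"),
    ("soil survey of citation metrics", "benign"),
])
```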

  14. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    PubMed

    Alsaleh, Mansour; Alarifi, Abdulrahman

    2016-01-01

Web spammers aim to obtain higher ranks for their web pages by including spam content that deceives search engines into including their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of page language on spam detection features and demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web-spam technique for Google's search engine. Using spam pages in Arabic as a case study, we show that, unlike for similar English pages, Google's anti-spam techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam content.

  15. How To: Maximize Google

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2004-01-01

    Google is shaking out to be the leading Web search engine, with recent research from Nielsen NetRatings reporting about 40 percent of all U.S. households using the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.

  16. Finding Citations to Social Work Literature: The Relative Benefits of Using "Web of Science," "Scopus," or "Google Scholar"

    ERIC Educational Resources Information Center

    Bergman, Elaine M. Lasda

    2012-01-01

    Past studies of citation coverage of "Web of Science," "Scopus," and "Google Scholar" do not demonstrate a consistent pattern that can be applied to the interdisciplinary mix of resources used in social work research. To determine the utility of these tools to social work researchers, an analysis of citing references to well-known social work…

  17. Identifying Evidence for Public Health Guidance: A Comparison of Citation Searching with Web of Science and Google Scholar

    ERIC Educational Resources Information Center

    Levay, Paul; Ainsworth, Nicola; Kettle, Rachel; Morgan, Antony

    2016-01-01

    Aim: To examine how effectively forwards citation searching with Web of Science (WOS) or Google Scholar (GS) identified evidence to support public health guidance published by the National Institute for Health and Care Excellence. Method: Forwards citation searching was performed using GS on a base set of 46 publications and replicated using WOS.…

  18. Design and Implementation WebGIS for Improving the Quality of Exploration Decisions at Sin-Quyen Copper Mine, Northern Vietnam

    NASA Astrophysics Data System (ADS)

    Quang Truong, Xuan; Luan Truong, Xuan; Nguyen, Tuan Anh; Nguyen, Dinh Tuan; Cong Nguyen, Chi

    2017-12-01

The objective of this study is to design and implement a WebGIS Decision Support System (WDSS) for reducing uncertainty and helping to improve the quality of exploration decisions in the Sin-Quyen copper mine, northern Vietnam. The main distinctive feature of the Sin-Quyen deposit is an unusual composition of ores. Computers and software applied to the exploration problem have had a significant impact on the exploration process over the past 25 years, but until now no online system had been developed. The system was built entirely on open source technology and the Open Geospatial Consortium Web Services (OWS). The input data include remote sensing (RS) and Geographical Information System (GIS) data together with drillhole exploration data; the drillhole data sets were designed as a geodatabase and stored in PostgreSQL. The WDSS can process exploration data and lets users access 2-dimensional (2D) or 3-dimensional (3D) cross-sections and maps of borehole exploration data and drill holes. The interface was designed to interact with base maps (e.g., Digital Elevation Model, Google Maps, OpenStreetMap) and thematic maps (e.g., land use and land cover, administrative map, drillhole exploration map), and to provide GIS functions such as creating a new map, updating an existing map, querying, and statistical charts. In addition, the system provides geological cross-sections of ore bodies based on Inverse Distance Weighting (IDW), nearest neighbour interpolation, and Kriging methods (e.g., Simple Kriging, Ordinary Kriging, Indicator Kriging and CoKriging). Results based on the available data (23 borehole exploration data sets) indicate that the best method for estimating geological cross-sections of ore bodies in the Sin-Quyen copper mine is Ordinary Kriging. The WDSS could provide useful information to improve drilling efficiency in mineral exploration and for management decision making.
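Of the interpolation methods listed, Inverse Distance Weighting is the simplest to sketch: a grade estimate at an unsampled point is a distance-weighted average of the borehole samples. A minimal illustration (hypothetical coordinates and values; the study's preferred method, Ordinary Kriging, additionally models spatial covariance, which IDW ignores):

```python
def idw(points, x, y, power=2.0):
    """Inverse Distance Weighting: estimate a grade at (x, y) from
    borehole samples [(xi, yi, value)]. Nearby holes dominate;
    `power` controls how quickly influence decays with distance."""
    num = den = 0.0
    for xi, yi, v in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # exactly at a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den
```

Evaluating `idw` over a grid of (x, y) points yields the raster behind a cross-section plot; a point equidistant from two samples simply gets their mean.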

  19. Estimating search engine index size variability: a 9-year longitudinal study.

    PubMed

    van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice

    One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
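The extrapolation behind these estimates is straightforward: if a word occurs in a known fraction of a representative corpus, the engine's reported hit count for that word scales up to an index-size estimate, and averaging over many words damps the noise. A simplified sketch (invented numbers and function names; the authors' method also corrects for corpus representativeness and engine hit-count quirks):

```python
def estimate_index_size(corpus_doc_freqs, corpus_size, engine_hit_counts):
    """Median of per-word extrapolations: hits / (df / corpus_size)."""
    estimates = []
    for word, hits in engine_hit_counts.items():
        df = corpus_doc_freqs.get(word, 0)
        if df == 0:
            continue  # no corpus evidence for this word
        estimates.append(hits / (df / corpus_size))
    estimates.sort()
    return estimates[len(estimates) // 2]  # median resists outliers

# Hypothetical 1000-document corpus and engine hit counts:
corpus_df = {"the": 900, "web": 300, "mining": 100}
hits = {"the": 9_000_000, "web": 3_000_000, "mining": 1_000_000}
estimate = estimate_index_size(corpus_df, 1000, hits)
```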

  20. Leveraging Google Geo Tools for Interactive STEM Education: Insights from the GEODE Project

    NASA Astrophysics Data System (ADS)

    Dordevic, M.; Whitmeyer, S. J.; De Paor, D. G.; Karabinos, P.; Burgin, S.; Coba, F.; Bentley, C.; St John, K. K.

    2016-12-01

Web-based imagery and geospatial tools have transformed our ability to immerse students in global virtual environments. Google's suite of geospatial tools, such as Google Earth (± Engine), Google Maps, and Street View, allow developers and instructors to create interactive and immersive environments, where students can investigate and resolve common misconceptions in STEM concepts and natural processes. The GEODE (.net) project is developing digital resources to enhance STEM education. These include virtual field experiences (VFEs), such as an interactive visualization of the breakup of the Pangaea supercontinent, a "Grand Tour of the Terrestrial Planets," and GigaPan-based VFEs of sites like the Canadian Rockies. Web-based challenges, such as EarthQuiz (.net) and the "Fold Analysis Challenge," incorporate scaffolded investigations of geoscience concepts. EarthQuiz features web-hosted imagery, such as Street View, Photo Spheres, GigaPans, and Satellite View, as the basis for guided inquiry. In the Fold Analysis Challenge, upper-level undergraduates use Google Earth to evaluate a doubly-plunging fold at Sheep Mountain, WY. GEODE.net also features: "Reasons for the Seasons"—a Google Earth-based visualization that addresses misconceptions that abound amongst students, teachers, and the public, many of whom believe that seasonality is caused by large variations in Earth's distance from the Sun; "Plate Euler Pole Finder," which helps students understand rotational motion of tectonic plates on the globe; and "Exploring Marine Sediments Using Google Earth," an exercise that uses empirical data to explore the surficial distribution of marine sediments in the modern ocean. The GEODE research team includes the authors and: Heather Almquist, Cinzia Cervato, Gene Cooper, Helen Crompton, Terry Pavlis, Jen Piatek, Bill Richards, Jeff Ryan, Ron Schott, Barb Tewksbury, and their students and collaborating colleagues. We are supported by NSF DUE 1323419 and a Google Geo Curriculum Award.

  1. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui

    PubMed Central

    2012-01-01

Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945

  2. Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.

    PubMed

    Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz

    2012-09-24

    The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying the spatial output from an R script directly onto a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically, so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks, and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly, it allows statisticians working in R and developing methods in spatial statistics to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.
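
    The workflow both records describe, an R script's tabular spatial output rendered as map markers and map mouse clicks fed back to the script as a region of interest, can be sketched generically. The following is a hypothetical Python stand-in for that glue step, not Rwui's generated Java code; all names and values are assumptions for illustration.

```python
# Hypothetical sketch of the web-app glue described by Rwui: turn a
# script's "lat,lon,value" output lines into marker records for a dynamic
# map, and turn map clicks into a bounding box passed back to the script.

def rows_to_markers(rows):
    """rows: 'lat,lon,value' lines as a spatial script might print them."""
    markers = []
    for line in rows:
        lat, lon, value = line.split(",")
        markers.append({"lat": float(lat), "lon": float(lon),
                        "label": value.strip()})
    return markers

def region_from_clicks(clicks):
    """Bounding box (south, west, north, east) from map mouse clicks."""
    lats = [lat for lat, _ in clicks]
    lons = [lon for _, lon in clicks]
    return (min(lats), min(lons), max(lats), max(lons))

# Hypothetical script output and user clicks.
markers = rows_to_markers(["52.2,0.12,high", "51.5,-0.13,low"])
region = region_from_clicks([(51.5, -0.13), (52.2, 0.12)])
```

    In the real system the marker records would be serialized for the Google Maps layer and the region coordinates handed to the R script as parameters.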

  3. Using Mobile App Development Tools to Build a GIS Application

    NASA Astrophysics Data System (ADS)

    Mital, A.; Catchen, M.; Mital, K.

    2014-12-01

    Our group designed and built working web, Android, and iOS applications using different mapping libraries as bases on which to overlay fire data from NASA. The group originally planned to make app versions for Google Maps, Leaflet, and OpenLayers. However, because the Leaflet library did not properly load on Android, the group focused its efforts on the other two mapping libraries. For Google Maps, the group first designed a UI for the web app and made a working version of the app. After updating the source of fire data to one which also provided historical fire data, the design had to be modified to include the extra data. After completing a working version of the web app, the group used WebView in Android, a built-in component which allowed porting the web app to Android without rewriting the code for Android. Upon completing this, the group found that Apple iOS devices had a similar capability, and so decided to add an iOS app to the project using a function similar to WebView. Alongside this effort, the group began implementing an OpenLayers fire map using a simpler UI. This web app was completed fairly quickly relative to the Google Maps one; however, it did not include functionality such as satellite imagery or searchable locations. The group finished the project with a working Android version of the Google Maps-based app supporting API levels 14-19 and an OpenLayers-based app supporting API levels 8-19, as well as a Google Maps-based iOS app supporting both old and new screen formats. This project was implemented by high school and college students under an SGT Inc. STEM internship program.

  4. A Map Mash-Up Application: Investigating the Temporal Effects of Climate Change on Salt Lake Basin

    NASA Astrophysics Data System (ADS)

    Kirtiloglu, O. S.; Orhan, O.; Ekercin, S.

    2016-06-01

    The main purpose of this paper is to investigate climate change effects that have occurred since the beginning of the twenty-first century in the Konya Closed Basin (KCB), located in the semi-arid central Anatolian region of Turkey, and particularly in the Salt Lake region, where many major wetlands of the KCB are situated, and to share the analysis results online in a Web Geographical Information System (GIS) environment. 71 Landsat 5-TM, 7-ETM+ and 8-OLI images and meteorological data obtained from 10 meteorological stations were used within the scope of this work. 56 of the Landsat images were used to extract the Salt Lake surface area from multi-temporal Landsat imagery collected from 2000 to 2014 in the Salt Lake basin. 15 of the Landsat images were used to make thematic maps of the Normalised Difference Vegetation Index (NDVI) in the KCB, and data from the 10 meteorological stations were used to generate the Standardized Precipitation Index (SPI), which is used in drought studies. For the purpose of visualizing and sharing the results, a Web GIS-like environment was established using Google Maps and its data storage and manipulation product Fusion Tables, both free-of-charge Google Web service elements. The infrastructure of the web application includes HTML5, CSS3, JavaScript, Google Maps API V3 and Google Fusion Tables API technologies. These technologies make it possible to build effective "map mash-ups" by embedding a Google Map in a Web page, storing the spatial or tabular data in Fusion Tables, and adding these data as a map layer on the embedded map. The analysis process and the map mash-up application are discussed in detail in the main sections of this paper.
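
    The NDVI maps mentioned above are computed per pixel from the red and near-infrared bands. A minimal sketch of that standard formula, NDVI = (NIR - Red) / (NIR + Red), using hypothetical reflectance values rather than real Landsat data:

```python
# Per-pixel NDVI from near-infrared and red reflectances.
# Band values below are hypothetical, not real Landsat measurements.

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); 0.0 where both bands are zero."""
    return 0.0 if nir + red == 0 else (nir - red) / (nir + red)

# Hypothetical pixels: dense vegetation, bare soil, open water.
pixels = [(0.45, 0.05), (0.25, 0.20), (0.02, 0.04)]
values = [round(ndvi(nir, red), 2) for nir, red in pixels]
# Vegetation yields high positive NDVI, soil near zero, water negative.
```
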

  5. Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis.

    PubMed

    Sharpe, J Danielle; Hopkins, Richard S; Cook, Robert L; Striley, Catherine W

    2016-10-20

    Traditional influenza surveillance relies on the influenza-like illness (ILI) syndrome reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as a growing number of people search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise in using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or in pairs without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best before any is adopted to complement traditional methods. The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia would best correspond with CDC ILI data, as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC's change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package "bcp" version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter as well. Wikipedia had the lowest sensitivity, 33%, and the lowest PPV, 40%. Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed.
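
    The sensitivity and PPV figures rest on matching each web source's change points to the CDC's within a one-week window. A minimal sketch of that matching logic, assumed from the abstract's description; the week numbers below are hypothetical, not the study's data:

```python
# Hypothetical sketch: a web-source change point counts as a true positive
# if it falls within `tolerance` weeks of a CDC change point;
# sensitivity = TP / (TP + FN), PPV = TP / (TP + FP).

def match_change_points(source_weeks, cdc_weeks, tolerance=1):
    tp = sum(1 for w in source_weeks
             if any(abs(w - c) <= tolerance for c in cdc_weeks))
    fp = len(source_weeks) - tp
    fn = sum(1 for c in cdc_weeks
             if not any(abs(w - c) <= tolerance for w in source_weeks))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv

cdc = [5, 20, 33, 48]      # hypothetical CDC ILI change-point weeks
google = [4, 21, 33, 40]   # hypothetical Google change-point weeks
sens, ppv = match_change_points(google, cdc)
```
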

  6. [Google Scholar and the h-index in biomedicine: the popularization of bibliometric assessment].

    PubMed

    Cabezas-Clavijo, A; Delgado-López-Cózar, E

    2013-01-01

    The aim of this study is to review the features, benefits and limitations of the new scientific evaluation products derived from Google Scholar, such as Google Scholar Metrics and Google Scholar Citations, as well as the h-index, which is the standard bibliometric indicator adopted by these services. The study also outlines the potential of this new database as a source for studies in Biomedicine, and compares the h-index obtained by the most relevant journals and researchers in the field of intensive care medicine, based on data extracted from the Web of Science, Scopus and Google Scholar. Results show that although the average h-index values in Google Scholar are almost 30% higher than those obtained in Web of Science, and about 15% higher than those collected by Scopus, there are no substantial changes in the rankings generated from one data source or the other. Despite some technical problems, it is concluded that Google Scholar is a valid tool for researchers in Health Sciences, both for purposes of information retrieval and for the computation of bibliometric indicators. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.
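
    The h-index reported by these services is simple to compute from a list of citation counts: an author or journal has index h if h of its papers have at least h citations each. A minimal sketch with made-up counts:

```python
# h-index: the largest h such that h papers have at least h citations each.
# Citation counts below are made up for illustration.

def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank        # this paper still has >= rank citations
        else:
            break           # counts only decrease from here on
    return h

example = h_index([10, 8, 5, 4, 3])  # 4 papers with >= 4 citations each
```
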

  7. Google Is Not the Net: Social Networks Are Surging and Present the Real Service Challenge--And Opportunity--For Libraries

    ERIC Educational Resources Information Center

    Albanese, Andrew Richard

    2006-01-01

    This article observes that it's not hard to understand why Google creates such unease among librarians. The profession, however, can't afford to be myopic when it comes to Google. As inescapable as it is, Google is not the Internet. And as the web evolves, new opportunities and challenges loom larger for libraries than who's capturing the bulk of…

  8. Using Google AdWords in the MBA MIS Course

    ERIC Educational Resources Information Center

    Rosso, Mark A.; McClelland, Marilyn K.; Jansen, Bernard J.; Fleming, Sundar W.

    2009-01-01

    From February to June 2008, Google ran its first ever student competition in sponsored Web search, the 2008 Google Online Marketing Challenge (GOMC). The 2008 GOMC was based on registrations from 61 countries: 629 course sections from 468 universities participated, fielding over 4000 student teams of approximately 21,000 students. Working with a…

  9. Google Scholar Usage: An Academic Library's Experience

    ERIC Educational Resources Information Center

    Wang, Ya; Howard, Pamela

    2012-01-01

    Google Scholar is a free service that provides a simple way to broadly search for scholarly works and to connect patrons with the resources libraries provide. The researchers in this study analyzed Google Scholar usage data from 2006 for three library tools at San Francisco State University: SFX link resolver, Web Access Management proxy server,…

  10. Web Mining for Web Image Retrieval.

    ERIC Educational Resources Information Center

    Chen, Zheng; Wenyin, Liu; Zhang, Feng; Li, Mingjing; Zhang, Hongjiang

    2001-01-01

    Presents a prototype system for image retrieval from the Internet using Web mining. Discusses the architecture of the Web image retrieval prototype; document space modeling; user log mining; and image retrieval experiments to evaluate the proposed system. (AEF)

  11. How Accurately Can the Google Web Speech API Recognize and Transcribe Japanese L2 English Learners' Oral Production?

    ERIC Educational Resources Information Center

    Ashwell, Tim; Elam, Jesse R.

    2017-01-01

    The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…

  12. Shifting Sands: Science Researchers on Google Scholar, Web of Science, and PubMed, with Implications for Library Collections Budgets

    ERIC Educational Resources Information Center

    Hightower, Christy; Caldwell, Christy

    2010-01-01

    Science researchers at the University of California Santa Cruz were surveyed about their article database use and preferences in order to inform collection budget choices. Web of Science was the single most used database, selected by 41.6%. Statistically there was no difference between PubMed (21.5%) and Google Scholar (18.7%) as the second most…

  13. Dr Google

    PubMed Central

    Pías-Peleteiro, Leticia; Cortés-Bordoy, Javier; Martinón-Torres, Federico

    2013-01-01

    Objectives: To assess and analyze the information and recommendations provided by Google Web Search™ (Google) in relation to web searches on the HPV vaccine, indications for females and males and possible adverse effects. Materials and Methods: Descriptive cross-sectional study of the results of 14 web searches. Comprehensive analysis of results based on general recommendation given (favorable/dissuasive), as well as compliance with pre-established criteria, namely design, content and credibility. Sub-analysis of results according to site category: general information, blog / forum and press. Results: In the comprehensive analysis of results, 72.2% of websites offer information favorable to HPV vaccination, with varying degrees of content detail, vs. 27.8% with highly dissuasive content in relation to HPV vaccination. The most frequent type of site is the blog or forum. The information found is frequently incomplete, poorly structured, and often lacking in updates, bibliography and adequate citations, as well as sound credibility criteria (scientific association accreditation and/or trust mark system). Conclusions: Google, as a tool which users employ to locate medical information and advice, is not specialized in providing information that is necessarily rigorous or valid from a scientific perspective. Search results and ranking based on Google's generalized algorithms can lead users to poorly grounded opinions and statements, which may impact HPV vaccination perception and subsequent decision making. PMID:23744505

  14. Infodemiology of systemic lupus erythematous using Google Trends.

    PubMed

    Radin, M; Sciascia, S

    2017-07-01

    Objective People affected by chronic rheumatic conditions, such as systemic lupus erythematosus (SLE), frequently rely on the Internet and search engines to look for terms related to their disease and its possible causes, symptoms and treatments. 'Infodemiology' and 'infoveillance' are two recent terms created to describe a newly developing approach for public health, based on Big Data monitoring and data mining. In this study, we aim to investigate trends in Internet searches linked to SLE and to symptoms associated with the disease, applying a Big Data monitoring approach. Methods We analysed the large amount of data generated by Google Trends, considering 'lupus', 'relapse' and 'fatigue' over a 10-year period of web searches. Google Trends automatically normalizes data for the overall number of searches and presents them as relative search volumes, in order to compare variations of different search terms across regions and periods. The Mann-Kendall test was used to evaluate the overall seasonal trend of each search term and possible correlations between search terms. Results We observed a seasonality in Google search volumes for lupus-related terms. In the Northern hemisphere, relative search volumes for 'lupus' were correlated with 'relapse' (τ = 0.85; p = 0.019) and with 'fatigue' (τ = 0.82; p = 0.003), whereas in the Southern hemisphere we observed a significant correlation between 'fatigue' and 'relapse' (τ = 0.85; p = 0.018). Similarly, a significant correlation between 'fatigue' and 'relapse' (τ = 0.70; p < 0.001) was also seen in the Northern hemisphere. Conclusion Despite the intrinsic limitations of this approach, Internet-acquired data might represent a real-time surveillance tool and an alert for healthcare systems, helping them plan the most appropriate resources for specific periods with a higher disease burden.
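
    The Mann-Kendall statistic used here sums the signs of all pairwise differences in a series; normalizing by the number of pairs gives Kendall's τ, the correlation measure the study reports. A minimal sketch, applied to a hypothetical relative-search-volume series (this is the basic statistic, not the seasonal variant the authors may have used):

```python
# Mann-Kendall S statistic and Kendall's tau for a single series:
# S sums sign(x_j - x_i) over all pairs i < j; tau = S / (n(n-1)/2).
# The input series is a hypothetical monthly search-volume sequence.

def mann_kendall_tau(series):
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

rising = [10, 12, 15, 18, 22, 30]   # hypothetical rising search volumes
tau = mann_kendall_tau(rising)      # +1 for a strictly rising series
```
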

  15. Study on online community user motif using web usage mining

    NASA Astrophysics Data System (ADS)

    Alphy, Meera; Sharma, Ajay

    2016-04-01

    Web usage mining is the application of data mining used to extract useful information from the online community. The World Wide Web contained at least 4.73 billion pages according to the Indexed Web, and at least 228.52 million pages according to the Dutch Indexed Web, on Thursday, 6 August 2015. It is difficult to get the needed data from these billions of web pages on the World Wide Web; here lies the importance of web usage mining. Personalizing the search engine helps the web user identify the most used data in an easy way. It reduces time consumption through automatic site search and automatic restoring of useful sites. This study surveys the techniques used in pattern discovery and analysis in web usage mining from 1996 to 2015, from the oldest to the latest. Analyzing user motifs helps in the improvement of business, e-commerce, personalisation and the improvement of websites.
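
    The pattern-discovery step the study surveys can be illustrated at its simplest: counting page requests per user from server-log lines to surface each user's most-visited pages. The log lines below are hypothetical:

```python
# Minimal web-usage-mining sketch: per-user page-request frequencies
# from hypothetical "user page" server-log lines.
from collections import Counter

log = [
    "u1 /home", "u1 /search", "u1 /search",
    "u2 /home", "u2 /news", "u1 /search",
]

per_user = {}
for line in log:
    user, page = line.split()
    per_user.setdefault(user, Counter())[page] += 1

# The most frequently requested page for user u1.
top_u1 = per_user["u1"].most_common(1)[0]
```

    Real systems mine richer patterns (sessions, navigation paths, dwell times), but frequency counts like this are the starting point for personalization.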

  16. NOAA's Big Data Partnership at the National Centers for Environmental Information

    NASA Astrophysics Data System (ADS)

    Kearns, E. J.

    2015-12-01

    In April of 2015, the U.S. Department of Commerce announced NOAA's Big Data Partnership (BDP) with Amazon Web Services, Google Cloud Platform, IBM, Microsoft Corp., and the Open Cloud Consortium through Cooperative Research and Development Agreements. Recent progress on the activities with these Partners at the National Centers for Environmental Information (NCEI) will be presented. These activities include the transfer of over 350 TB of NOAA's archived data from NCEI's tape-based archive system to BDP cloud providers; new opportunities for data mining and investigation; application of NOAA's data maturity and stewardship concepts to the BDP; and integration of both archived and near-realtime data streams into a synchronized, distributed data system. Both lessons learned and future opportunities for the environmental data community will be presented.

  17. Google Analytics – Index of Resources

    EPA Pesticide Factsheets

    Find how-to and best practice resources and training for accessing and understanding EPA's Google Analytics (GA) tools, including how to create reports that will help you improve and maintain the web areas you manage.

  18. Dark Web 101

    DTIC Science & Technology

    2016-07-21

    Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't

  19. Mining User Dwell Time for Personalized Web Search Re-Ranking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Jiang, Hao; Lau, Francis

    We propose a personalized re-ranking algorithm through mining user dwell times derived from a user's previous online reading or browsing activities. We acquire document-level user dwell times via a customized web browser, from which we then infer concept-word-level user dwell times in order to understand a user's personal interests. According to the estimated concept-word-level user dwell times, our algorithm can estimate a user's potential dwell time over a new document, based on which personalized webpage re-ranking can be carried out. We compare the rankings produced by our algorithm with rankings generated by popular commercial search engines and a recently proposed personalized ranking algorithm. The results clearly show the superiority of our method. In this paper, we propose a new personalized webpage ranking algorithm through mining the dwell times of a user. We introduce a quantitative model to derive concept-word-level user dwell times from the observed document-level user dwell times. Once we have inferred a user's interest over the set of concept words the user has encountered in previous readings, we can then predict the user's potential dwell time over a new document. Such a predicted user dwell time allows us to carry out personalized webpage re-ranking. To explore the effectiveness of our algorithm, we measured its performance under two conditions: one with a relatively limited amount of user dwell time data and the other with a doubled amount. In both evaluation cases, our algorithm's personalized webpage rankings satisfied the user's personal preferences better than those of Google, Yahoo!, and Bing, as well as a recent personalized webpage ranking algorithm.
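
    The core idea, inferring concept-word dwell times, predicting a new document's dwell time from them, and re-ranking by that prediction, can be sketched in a much-simplified form. The averaging model and all numbers below are assumptions for illustration, not the paper's quantitative model:

```python
# Simplified dwell-time re-ranking sketch (hypothetical model):
# predict a document's dwell time as the mean dwell time of its known
# concept words, then sort results by that prediction, descending.

def predict_dwell(doc_words, word_dwell):
    known = [word_dwell[w] for w in doc_words if w in word_dwell]
    return sum(known) / len(known) if known else 0.0

# Hypothetical concept-word dwell times (seconds) inferred from history.
word_dwell = {"mining": 12.0, "web": 3.0, "ranking": 9.0}
docs = {
    "d1": ["web", "mining"],
    "d2": ["web", "news"],
    "d3": ["ranking", "mining"],
}
reranked = sorted(docs, key=lambda d: predict_dwell(docs[d], word_dwell),
                  reverse=True)
```
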

  20. Teaching Google Search Techniques in an L2 Academic Writing Context

    ERIC Educational Resources Information Center

    Han, Sumi; Shin, Jeong-Ah

    2017-01-01

    This mixed-method study examines the effectiveness of teaching Google search techniques (GSTs) to Korean EFL college students in an intermediate-level academic English writing course. 18 students participated in a 4-day GST workshop consisting of an overview session of the web as corpus and Google as a concordancer, and three training sessions…

  1. The Effects of Collaborative Writing Activity Using Google Docs on Students' Writing Abilities

    ERIC Educational Resources Information Center

    Suwantarathip, Ornprapat; Wichadee, Saovapa

    2014-01-01

    Google Docs, a free web-based version of Microsoft Word, offers collaborative features which can be used to facilitate collaborative writing in a foreign language classroom. The current study compared writing abilities of students who collaborated on writing assignments using Google Docs with those working in groups in a face-to-face classroom.…

  2. The design and implementation of web mining in web sites security

    NASA Astrophysics Data System (ADS)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors or information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data. The security of Web servers can thus be enhanced and the damage of illegal access avoided. Firstly, a system for discovering the patterns of information leakage in CGI scripts from Web log data is proposed. Secondly, those patterns are provided so that system administrators can modify their code and enhance Web site security. The following aspects are described: one is to combine the web application log with the web log to extract more information, so that web data mining can be used to mine web logs and discover information that a firewall and an Intrusion Detection System cannot find. Another is to propose an operation module of the web site to enhance Web site security. In the cluster server session, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
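
    The density-based clustering step mentioned for the cluster server session can be illustrated with a one-dimensional toy version: sessions whose feature values lie within a distance threshold of each other form dense clusters, while isolated values are treated as noise. Real density-based methods such as DBSCAN work on multi-dimensional features; the thresholds and feature values here are hypothetical:

```python
# Toy one-dimensional density-based grouping of session feature values
# (e.g. requests per minute). Runs of nearby values (gap <= eps) with at
# least min_pts members form clusters; smaller runs are noise.

def density_clusters(values, eps=1.0, min_pts=2):
    clusters, noise, current = [], [], []
    for v in sorted(values):
        if current and v - current[-1] > eps:
            (clusters if len(current) >= min_pts else noise).append(current)
            current = []
        current.append(v)
    if current:
        (clusters if len(current) >= min_pts else noise).append(current)
    return clusters, [v for run in noise for v in run]

rates = [1.0, 1.2, 1.5, 9.0, 9.3, 30.0]   # hypothetical requests/minute
clusters, noise = density_clusters(rates)  # two dense groups, one outlier
```
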

  3. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  4. Integrating web 2.0 in clinical research education in a developing country.

    PubMed

    Amgad, Mohamed; AlFaar, Ahmad Samir

    2014-09-01

    The use of Web 2.0 tools in education and health care has received heavy attention over the past years. Over two consecutive years, Children's Cancer Hospital - Egypt 57357 (CCHE 57357), in collaboration with Egyptian universities, student bodies, and NGOs, conducted a summer course that supports undergraduate medical students to cross the gap between clinical practice and clinical research. This time, there was a greater emphasis on reaching out to the students using social media and other Web 2.0 tools, which were heavily used in the course, including Google Drive, Facebook, Twitter, YouTube, Mendeley, Google Hangout, Live Streaming, Research Electronic Data Capture (REDCap), and Dropbox. We wanted to investigate the usefulness of integrating Web 2.0 technologies into formal educational courses and modules. The evaluation survey was filled in by 156 respondents, 134 of whom were course candidates (response rate = 94.4 %) and 22 of whom were course coordinators (response rate = 81.5 %). The course participants came from 14 different universities throughout Egypt. Students' feedback was positive and supported the integration of Web 2.0 tools in academic courses and modules. Google Drive, Facebook, and Dropbox were found to be most useful.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None Available

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  6. A Framework for Web Usage Mining in Electronic Government

    NASA Astrophysics Data System (ADS)

    Zhou, Ping; Le, Zhongjian

    Web usage mining has been a major component of management strategy to enhance organizational analysis and decision making. The literature on Web usage mining that deals with strategies and technologies for effectively employing it is quite vast. In recent years, E-government has received much attention from researchers and practitioners. Huge amounts of user access data are produced in e-government Web sites every day. The role of these data in the success of government management cannot be overstated, because they affect government analysis, prediction, strategic, tactical and operational planning, and control. Web usage mining in E-government has an important role to play in setting government objectives, discovering citizen behavior, and determining future courses of action. Yet Web usage mining in E-government has not received adequate attention from researchers or practitioners. We developed a framework to promote a better understanding of the importance of Web usage mining in E-government. Using the current literature, we developed the framework presented herein, in the hope that it will stimulate more interest in this important area.

  7. Feature Positioning on Google Street View Panoramas

    NASA Astrophysics Data System (ADS)

    Tsai, V. J. D.; Chang, C.-T.

    2012-07-01

    Location-based services (LBS) on web-based maps and images have operated in real time since Google launched its Street View imaging services in 2007. This research employs the Google Maps API and Web Service, GAE for Java, AJAX, Proj4js, CSS and HTML in developing an internet platform for accessing the orientation parameters of Google Street View (GSV) panoramas, in order to determine the three-dimensional position of interest features that appear on two overlapping panoramas by geometric intersection. A pair of GSV panoramas was examined using known points located on the Library Building of National Chung Hsing University (NCHU), with root-mean-squared errors of ±0.522 m, ±1.230 m, and ±5.779 m for intersection and ±0.142 m, ±1.558 m, and ±5.733 m for resection in X, Y, and h (elevation), respectively. Potential error sources in GSV positioning were analyzed, illustrating that the errors in the Google-provided GSV positional parameters dominate the errors in geometric intersection. The developed system is suitable for data collection in establishing LBS applications integrated with Google Maps and Google Earth for traffic sign and infrastructure inventory, by adding automatic extraction and matching techniques for points of interest (POI) from GSV panoramas.
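
    The geometric intersection at the heart of the method solves for the point where bearing rays from two known stations meet. A minimal planar (2D) sketch with hypothetical coordinates and azimuths, not the authors' implementation:

```python
# Planar geometric intersection: each panorama station contributes a known
# position and an azimuth (degrees clockwise from north/+Y) toward the
# feature; the feature lies where the two bearing rays cross.
import math

def intersect(p1, az1_deg, p2, az2_deg):
    # Azimuth a maps to direction vector (sin a, cos a).
    d1 = (math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg)))
    d2 = (math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg)))
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-12:
        raise ValueError("rays are parallel")
    # Solve p1 + t*d1 = p2 + s*d2 for t via Cramer's rule.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / cross
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Hypothetical stations at (0,0) and (10,0) sighting the same feature.
x, y = intersect((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
```

    Resection is the inverse problem: with bearings to several known points, solve for the unknown station position.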

  8. An open annotation ontology for science on web 3.0

    PubMed Central

    2011-01-01

    Background There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Methods Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. Results This paper presents the Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables “stand-off” or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO’s Google Code page: http://code.google.com/p/annotation-ontology/ . Conclusions The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors. PMID:21624159
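
    The "stand-off" annotation idea, metadata anchored to positions in a document the annotator does not control, can be illustrated with a toy model. This is not AO's OWL schema; the names and IRIs below are hypothetical:

```python
# Toy stand-off annotation: annotations live outside the document and
# anchor into it by character offsets, pairing a text span with an
# ontology term (hypothetical example IRIs).

DOCUMENT = "BRCA1 mutations raise breast cancer risk."

annotations = [
    # (start, end, ontology term IRI) -- offsets anchor into DOCUMENT
    (0, 5, "http://example.org/gene/BRCA1"),
    (22, 35, "http://example.org/disease/breast-cancer"),
]

def anchored_text(doc, start, end):
    """Resolve an annotation anchor back to the text span it describes."""
    return doc[start:end]

spans = [anchored_text(DOCUMENT, s, e) for s, e, _ in annotations]
```

    Because the metadata is independent of the document, the text can stay under its publisher's control while annotations are created, versioned, and shared separately.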

  9. An open annotation ontology for science on web 3.0.

    PubMed

    Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim

    2011-05-17

    There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/.
The Annotation Ontology meets critical requirements for an open, freely shareable OWL model of annotation metadata for scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors.
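The "stand-off" anchoring idea described above — annotation metadata that points into a document the annotator does not control — can be sketched in a few lines of Python. This is a hypothetical illustration only, not AO's actual OWL vocabulary: the class names and the example term URI are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class TextSelector:
    """Anchors an annotation to a span of a document the annotator
    does not control, by quoting the exact text plus surrounding context."""
    exact: str
    prefix: str
    suffix: str

    def locate(self, document: str) -> int:
        """Return the start offset of the anchored span, or -1 if the
        document has changed so that the anchor no longer resolves."""
        needle = self.prefix + self.exact + self.suffix
        pos = document.find(needle)
        return -1 if pos == -1 else pos + len(self.prefix)

@dataclass
class Annotation:
    """Stand-off annotation: an ontology term applied to a document span.
    The body is a term URI from some biomedical ontology; the target
    document is referenced by URL and never modified."""
    body_uri: str
    document_url: str
    selector: TextSelector
    created_by: str  # a human curator or a text-mining algorithm

doc = "Mutations in BRCA1 increase breast cancer risk."
ann = Annotation(
    body_uri="http://example.org/ontology/BRCA1",  # hypothetical term URI
    document_url="http://example.org/paper123",
    selector=TextSelector(exact="BRCA1", prefix="Mutations in ", suffix=" increase"),
    created_by="curator:alice",
)
offset = ann.selector.locate(doc)
```

Because the annotation lives outside the document, it survives as long as the quoted context still resolves; if the document is edited, `locate` returns -1 and the anchor is known to be stale.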

  10. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    PubMed

    Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G

    2009-09-16

    Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications which, in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone to those they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
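A minimal sketch of the kind of record such a framework exchanges: the phone packages a GPS-stamped observation as JSON for the central database, and the server filters records by the value of a measured variable before display. The field names and schema here are assumptions for illustration, not EpiCollect's actual format.

```python
import json
from datetime import datetime, timezone

def make_record(project, worker_id, lat, lon, fields):
    """Package one field observation, with GPS fix and timestamp,
    as the JSON payload a phone would POST to the project database."""
    return {
        "project": project,
        "worker_id": worker_id,
        "location": {"lat": lat, "lon": lon},
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "fields": fields,
    }

def filter_records(records, field, lo, hi):
    """Server-side filter: keep records whose measured variable falls
    within [lo, hi] -- e.g. for display as markers on a Google Map."""
    return [r for r in records if lo <= r["fields"].get(field, float("-inf")) <= hi]

records = [
    make_record("frog_survey", "w1", 51.5, -0.1, {"count": 4}),
    make_record("frog_survey", "w2", 51.6, -0.2, {"count": 12}),
]
payload = json.dumps(records[0])                    # what the phone transmits
subset = filter_records(records, "count", 10, 20)   # only worker w2's record
```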

  11. Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis

    PubMed Central

    Hopkins, Richard S; Cook, Robert L; Striley, Catherine W

    2016-01-01

    Background Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or in pairs, without comparing all 3 of them; knowing which of the Web-based sources performs best would clarify which should be considered to complement traditional methods. Objective The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia will best correspond with CDC ILI data as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Methods Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC’s change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package “bcp” version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. 
Results During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%. Conclusions Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed. PMID:27765731
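The matching rule described above (a detected change point counts as a true positive if it falls in the same week as a CDC change point, the week before, or the week after) determines the sensitivity and PPV figures. A sketch of that computation, with invented week indices rather than the study's data:

```python
def match_within_one_week(detected, gold):
    """Count a detected change point as a true positive if it falls
    within one week of any gold-standard (CDC) change point."""
    tp = sum(any(abs(d - g) <= 1 for g in gold) for d in detected)
    fp = len(detected) - tp
    fn = sum(not any(abs(d - g) <= 1 for d in detected) for g in gold)
    return tp, fp, fn

def sensitivity_ppv(detected, gold):
    """Sensitivity = TP/(TP+FN); PPV = TP/(TP+FP)."""
    tp, fp, fn = match_within_one_week(detected, gold)
    sens = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sens, ppv

# Week indices of change points (hypothetical data, not the study's):
cdc_weeks = [5, 20, 40, 52]
google_weeks = [5, 21, 41]   # misses the change at week 52
sens, ppv = sensitivity_ppv(google_weeks, cdc_weeks)
```

Here all three detected change points land within one week of a CDC change point (PPV = 1.0), but one of the four CDC change points is missed (sensitivity = 0.75).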

  12. EPA Web Training Classes

    EPA Pesticide Factsheets

    Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.

  13. Introduction to the JASIST Special Topic Issue on Web Retrieval and Mining: A Machine Learning Perspective.

    ERIC Educational Resources Information Center

    Chen, Hsinchun

    2003-01-01

    Discusses information retrieval techniques used on the World Wide Web. Topics include machine learning in information extraction; relevance feedback; information filtering and recommendation; text classification and text clustering; Web mining, based on data mining techniques; hyperlink structure; and Web size. (LRW)

  14. Visual Based Retrieval Systems and Web Mining--Introduction.

    ERIC Educational Resources Information Center

    Iyengar, S. S.

    2001-01-01

    Briefly discusses Web mining and image retrieval techniques, and then presents a summary of articles in this special issue. Articles focus on Web content mining, artificial neural networks as tools for image retrieval, content-based image retrieval systems, and personalizing the Web browsing experience using media agents. (AEF)

  15. Systems and Methods for Decoy Routing and Covert Channel Bonding

    DTIC Science & Technology

    2013-11-26

    34 Proc. R. Soc. A, vol. 463, Jan. 12, 2007, pp. 1-16. "Stupid Censorship Web Proxy," http://www.stupidcensorship.com/, retrieved from the internet on... services such as those offered by Google or Skype, web or microblogs such as Twitter, various social media services such as Facebook, and file... device (e.g., Skype, Google, Jabber, Firefox) to be directed to the proprietary software for processing. For instance, the proprietary software of

  16. Return of the Google Game: More Fun Ideas to Transform Students into Skilled Researchers

    ERIC Educational Resources Information Center

    Watkins, Katrine

    2008-01-01

    Teens are impatient and unsophisticated online researchers who are often limited by their poor reading skills. Because they are attracted to clean and simple Web interfaces, they often turn to Google--and now Wikipedia--to help meet their research needs. The Google Game, co-authored by this author, teaches kids that there is a well-thought-out…

  17. Web usage data mining agent

    NASA Astrophysics Data System (ADS)

    Madiraju, Praveen; Zhang, Yanqing

    2002-03-01

    When a user logs in to a website, behind the scenes the user leaves impressions, usage patterns, and access patterns in the web server's log file. A web usage mining agent can analyze these web logs to help web developers improve the organization and presentation of their websites, and can help system administrators improve system performance. Web logs also provide invaluable help in creating adaptive web sites and in analyzing network traffic. This paper presents the design and implementation of a Web usage mining agent for digging into web log files.
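A minimal sketch of what such an agent does, assuming logs in the Common Log Format produced by most web servers: parse each line, then extract per-page hit counts (for site organization) and per-host access sequences (for usage patterns). The helper names are hypothetical.

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "method path proto" status bytes
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def mine_usage(log_lines):
    """Extract per-page hit counts and per-host access sequences
    from web server log lines, keeping only successful requests."""
    page_hits = Counter()
    sessions = {}
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m or m.group("status") != "200":
            continue
        page_hits[m.group("path")] += 1
        sessions.setdefault(m.group("host"), []).append(m.group("path"))
    return page_hits, sessions

log = [
    '1.2.3.4 - - [10/Mar/2002:10:00:00 +0000] "GET /index.html HTTP/1.0" 200 512',
    '1.2.3.4 - - [10/Mar/2002:10:00:05 +0000] "GET /products.html HTTP/1.0" 200 1024',
    '5.6.7.8 - - [10/Mar/2002:10:01:00 +0000] "GET /index.html HTTP/1.0" 200 512',
    '5.6.7.8 - - [10/Mar/2002:10:01:30 +0000] "GET /missing.html HTTP/1.0" 404 0',
]
hits, sessions = mine_usage(log)
```

The per-host sequences are the raw material for the access-pattern analysis the abstract describes; a real agent would further sessionize by time gaps.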

  18. Boverhof's App Earns Honorable Mention in Amazon's Web Services

    Science.gov Websites

    Boverhof's app earned an honorable mention in a competition held by Amazon Web Services (AWS). Amazon officially announced the winners of its EC2 Spotathon on Monday.

  19. Moving beyond a Google Search: Google Earth, SketchUp, Spreadsheet, and More

    ERIC Educational Resources Information Center

    Siegle, Del

    2007-01-01

    Google has been the search engine of choice for most Web surfers for the past half decade. More recently, the creative founders of the popular search engine have been busily creating and testing a variety of useful products that will appeal to gifted learners of varying ages. The purpose of this paper is to share information about three of these…

  20. Hand Society and Matching Program Web Sites Provide Poor Access to Information Regarding Hand Surgery Fellowship.

    PubMed

    Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T

    2016-08-01

    The Internet is a common resource for applicants to hand surgery fellowships; however, the quality and accessibility of fellowship online information is unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) fellowship directory on November 16, 2014 and the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of the accessible programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites.

  1. Creating Web Area Segments with Google Analytics

    EPA Pesticide Factsheets

    Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.

  2. [Health information on the Internet and trust marks as quality indicators: vaccines case study].

    PubMed

    Mayer, Miguel Angel; Leis, Angela; Sanz, Ferran

    2009-10-01

    To determine the prevalence of quality trust marks on websites and to compare the quality of websites displaying trust marks with those that do not, in order to propose trust marks as a quality indicator. Cross-sectional study. Internet. Websites on vaccines. Using "vacunas OR vaccines" as key words, the features of 40 web pages were analysed. These web pages were selected from the results of two search engines, Google and Yahoo! Based on a total of 9 criteria, the average score of criteria fulfilled was 7 (95% CI 3.96-10.04) points for the web pages offered by Yahoo! and 7.3 (95% CI 3.86-10.74) for those offered by Google. Amongst web pages offered by Yahoo!, there were three with clearly inaccurate information, while there were four in the pages offered by Google. Trust marks were displayed on 20% and 30% of medical web pages, respectively, and websites displaying trust marks fulfilled significantly more quality criteria (P=.033) than those that did not. The search engines returned a wide variety of web pages, many with useless information. Although the websites analysed were generally of good quality, between 15% and 20% showed inaccurate information. Websites where trust marks were displayed had higher quality than those that did not display one, and none of them were among those where inaccurate information was found.

  3. Ajax Architecture Implementation Techniques

    NASA Astrophysics Data System (ADS)

    Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, M. Khader

    2012-03-01

    Today's rich Web applications use a mix of JavaScript and asynchronous communication with the application server. This mechanism is also known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server, and in doing so, use partial page refresh instead of reloading the entire Web page. AJAX is a powerful Web development model for browser-based Web applications. The technologies that form the AJAX model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known. However, AJAX combines these technologies to let Web pages retrieve small amounts of data from the server without having to reload the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0, enabled by the Ajax architecture, has given rise to a new level of user interactivity through web browsers. Many new and extremely popular Web applications have been introduced, such as Google Maps, Google Docs, Flickr, and so on. Ajax toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.

  4. Effect of Temporal Relationships in Associative Rule Mining for Web Log Data

    PubMed Central

    Mohd Khairudin, Nazli; Mustapha, Aida

    2014-01-01

    The advent of web-based applications and services has created diverse and voluminous web log data stored in web servers, proxy servers, client machines, or organizational databases. This paper investigates the effect of a temporal attribute in association rule mining for web log data. We incorporated the characteristics of time in the rule mining process and analysed the effect of various temporal parameters. The rules generated from temporal association rule mining are then compared against those generated from classical rule mining approaches such as the Apriori and FP-Growth algorithms. The results showed that by incorporating the temporal attribute via time, the number of rules generated is smaller but comparable in terms of quality. PMID:24587757

  5. Global reaction to the recent outbreaks of Zika virus: Insights from a Big Data analysis.

    PubMed

    Bragazzi, Nicola Luigi; Alicino, Cristiano; Trucchi, Cecilia; Paganino, Chiara; Barberis, Ilaria; Martini, Mariano; Sticchi, Laura; Trinka, Eugen; Brigo, Francesco; Ansaldi, Filippo; Icardi, Giancarlo; Orsi, Andrea

    2017-01-01

    The recent spreading of Zika virus represents an emerging global health threat. As such, it is attracting public interest worldwide, generating a great amount of related Internet searches and social media interactions. The aim of this research was to understand Zika-related digital behavior throughout the epidemic spreading and to assess its consistency with real-world epidemiological data, using a behavioral informatics and analytics approach. In this study, the global web interest in and reaction to the recently occurred outbreaks of the Zika virus were analyzed in terms of tweets and Google Trends (GT), Google News, YouTube, and Wikipedia search queries. These data streams were mined from 1st January 2004 to 31st October 2016, with a focus on the period November 2015-October 2016. This analysis was complemented with the use of epidemiological data. Spearman's correlation was performed to correlate all Zika-related data. Moreover, a multivariate regression was performed using Zika-related search queries as a dependent variable, and epidemiological data, number of inhabitants in 2015, and Human Development Index as predictor variables. Overall, 3,864,395 tweets and 284,903 accesses to Wikipedia pages dedicated to the Zika virus were analyzed during the study period. All web-data sources showed that the main spike in searches and interactions occurred in February 2016, with a second peak in August 2016. Activity in all of the novel data streams increased markedly during the epidemic period relative to the pre-epidemic period, when no web activity was detected. Correlations between data from all these web platforms were very high and statistically significant. Web searches were concentrated mainly in Central and South America. The majority of queries concerned the symptoms of the Zika virus, its vector of transmission, and its possible effects on babies, including microcephaly. 
No statistically significant correlation was found between the novel data streams and global real-world epidemiological data. At the country level, a correlation between digital interest in the Zika virus and the Zika incidence rate or microcephaly cases was detected. An increasing public interest in and reaction to the current Zika virus outbreak was documented by all web-data sources, and a similar pattern of web reactions was detected. Public opinion seems to be particularly worried by the alert about the teratogenicity of the Zika virus. Stakeholders and health authorities could usefully exploit these Internet tools to collect the concerns of public opinion and reply to them, disseminating key information.

  6. Global reaction to the recent outbreaks of Zika virus: Insights from a Big Data analysis

    PubMed Central

    Trucchi, Cecilia; Paganino, Chiara; Barberis, Ilaria; Martini, Mariano; Sticchi, Laura; Trinka, Eugen; Brigo, Francesco; Ansaldi, Filippo; Icardi, Giancarlo; Orsi, Andrea

    2017-01-01

    Objective The recent spreading of Zika virus represents an emerging global health threat. As such, it is attracting public interest worldwide, generating a great amount of related Internet searches and social media interactions. The aim of this research was to understand Zika-related digital behavior throughout the epidemic spreading and to assess its consistency with real-world epidemiological data, using a behavioral informatics and analytics approach. Methods In this study, the global web interest in and reaction to the recently occurred outbreaks of the Zika Virus were analyzed in terms of tweets and Google Trends (GT), Google News, YouTube, and Wikipedia search queries. These data streams were mined from 1st January 2004 to 31st October 2016, with a focus on the period November 2015—October 2016. This analysis was complemented with the use of epidemiological data. Spearman’s correlation was performed to correlate all Zika-related data. Moreover, a multivariate regression was performed using Zika-related search queries as a dependent variable, and epidemiological data, number of inhabitants in 2015, and Human Development Index as predictor variables. Results Overall, 3,864,395 tweets and 284,903 accesses to Wikipedia pages dedicated to the Zika virus were analyzed during the study period. All web-data sources showed that the main spike in searches and interactions occurred in February 2016, with a second peak in August 2016. Activity in all of the novel data streams increased markedly during the epidemic period relative to the pre-epidemic period, when no web activity was detected. Correlations between data from all these web platforms were very high and statistically significant. Web searches were concentrated mainly in Central and South America. The majority of queries concerned the symptoms of the Zika virus, its vector of transmission, and its possible effects on babies, including microcephaly. 
No statistically significant correlation was found between the novel data streams and global real-world epidemiological data. At the country level, a correlation between digital interest in the Zika virus and the Zika incidence rate or microcephaly cases was detected. Conclusions An increasing public interest in and reaction to the current Zika virus outbreak was documented by all web-data sources, and a similar pattern of web reactions was detected. Public opinion seems to be particularly worried by the alert about the teratogenicity of the Zika virus. Stakeholders and health authorities could usefully exploit these Internet tools to collect the concerns of public opinion and reply to them, disseminating key information. PMID:28934352
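Spearman's correlation, the statistic used above to relate the web-data streams, is simply the Pearson correlation of the rank vectors (with ties given average ranks). A self-contained sketch, using hypothetical weekly counts rather than the study's data:

```python
def ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = mean_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly counts: tweets vs Wikipedia page views.
tweets = [120, 340, 560, 900, 410, 200]
wiki = [1000, 2500, 4100, 7000, 3000, 1500]
rho = spearman(tweets, wiki)
```

The two invented series rise and fall in the same rank order, so rho is 1.0; because Spearman's rho only uses ranks, it captures the kind of monotone co-movement between platforms that the study reports without assuming linearity.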

  7. Working with Data: Discovering Knowledge through Mining and Analysis; Systematic Knowledge Management and Knowledge Discovery; Text Mining; Methodological Approach in Discovering User Search Patterns through Web Log Analysis; Knowledge Discovery in Databases Using Formal Concept Analysis; Knowledge Discovery with a Little Perspective.

    ERIC Educational Resources Information Center

    Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.

    2000-01-01

    These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)

  8. Robotic Prostatectomy on the Web: A Cross-Sectional Qualitative Assessment.

    PubMed

    Borgmann, Hendrik; Mager, René; Salem, Johannes; Bründl, Johannes; Kunath, Frank; Thomas, Christian; Haferkamp, Axel; Tsaur, Igor

    2016-08-01

    Many patients diagnosed with prostate cancer search for information on robotic prostatectomy (RobP) on the Web. We aimed to evaluate the qualitative characteristics of the most frequented Web sites on RobP with a particular emphasis on provider-dependent issues. Google was searched for the term "robotic prostatectomy" in Europe and North America. The most frequented Web sites were selected and classified as physician-provided or publicly provided. Quality was measured using Journal of the American Medical Association (JAMA) benchmark criteria, DISCERN score, and addressing of Trifecta surgical outcomes. Popularity was analyzed using Google PageRank and the Alexa tool. Accessibility, usability, and reliability were investigated using the LIDA tool, and readability was assessed using readability indices. Twenty-eight Web sites were physician-provided and 15 publicly provided. For all Web sites, 88% of JAMA benchmark criteria were fulfilled, the DISCERN quality score was high, and 81% of Trifecta outcome measurements were addressed. Popularity was average according to Google PageRank (mean 2.9 ± 1.5) and Alexa Traffic Rank (median, 49,109; minimum, 7; maximum, 8,582,295). Accessibility (85 ± 7%), usability (92 ± 3%), and reliability scores (88 ± 8%) were moderate to high. Automated Readability Index was 7.2 ± 2.1 and Flesch-Kincaid Grade Level was 9 ± 2, rating the Web sites as difficult to read. Physician-provided Web sites had higher quality scores and lower readability compared with publicly provided Web sites. Web sites providing information on RobP obtained medium to high ratings in all domains of quality in the current assessment. In contrast, readability needs to be significantly improved so that this content becomes accessible to the general public. Copyright © 2015 Elsevier Inc. All rights reserved.
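The Flesch-Kincaid Grade Level cited above is a simple formula over word, sentence, and syllable counts: 0.39·(words/sentences) + 11.8·(syllables/words) − 15.59. A sketch with a deliberately crude syllable heuristic — production readability tools use pronunciation dictionaries or better heuristics:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, treating a trailing 'e' as silent."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

grade = flesch_kincaid_grade(
    "Robotic prostatectomy is a minimally invasive surgical procedure. "
    "It is performed with the assistance of a robotic system."
)
```

A score of 9, as reported for the RobP sites, means roughly ninth-grade reading level — above the sixth-to-eighth-grade level commonly recommended for patient materials.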

  9. Web Analytics

    EPA Pesticide Factsheets

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  10. Accredited hand surgery fellowship Web sites: analysis of content and accessibility.

    PubMed

    Trehan, Samir K; Morrell, Nathan T; Akelman, Edward

    2015-04-01

    To assess the accessibility and content of accredited hand surgery fellowship Web sites. A list of all accredited hand surgery fellowships was obtained from the online database of the American Society for Surgery of the Hand (ASSH). Fellowship program information on the ASSH Web site was recorded. All fellowship program Web sites were located via Google search. Fellowship program Web sites were analyzed for accessibility and content in 3 domains: program overview, application information/recruitment, and education. At the time of this study, there were 81 accredited hand surgery fellowships with 169 available positions. Thirty of 81 programs (37%) had a functional link on the ASSH online hand surgery fellowship directory; however, Google search identified 78 Web sites. Three programs did not have a Web site. Analysis of content revealed that most Web sites contained contact information, whereas information regarding the anticipated clinical, research, and educational experiences during fellowship was less often present. Furthermore, information regarding past and present fellows, salary, application process/requirements, call responsibilities, and case volume was frequently lacking. Overall, 52 of 81 programs (64%) had the minimal online information required for residents to independently complete the fellowship application process. Hand fellowship program Web sites could be accessed either via the ASSH online directory or Google search, except for 3 programs that did not have Web sites. Although most fellowship program Web sites contained contact information, other content such as application information/recruitment and education, was less frequently present. This study provides comparative data regarding the clinical and educational experiences outlined on hand fellowship program Web sites that are of relevance to residents, fellows, and academic hand surgeons. 
This study also draws attention to various ways in which the hand surgery fellowship application process can be made more user-friendly and efficient. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  11. Getting to the top of Google: search engine optimization.

    PubMed

    Maley, Catherine; Baum, Neil

    2010-01-01

    Search engine optimization is the process of making your Web site appear at or near the top of popular search engines such as Google, Yahoo, and MSN. This is achieved not by luck or by knowing someone who works for the search engines, but by understanding how search engines select Web sites for placement at the top of the first page. This article reviews the process and provides methods and techniques for getting your site ranked at or very near the top.

  12. Efficiently Communicating Rich Heterogeneous Geospatial Data from the FeMO2008 Dive Cruise with FlashMap on EarthRef.org

    NASA Astrophysics Data System (ADS)

    Minnett, R. C.; Koppers, A. A.; Staudigel, D.; Staudigel, H.

    2008-12-01

    EarthRef.org is a comprehensive and convenient resource for Earth Science reference data and models. It encompasses four main portals: the Geochemical Earth Reference Model (GERM), the Magnetics Information Consortium (MagIC), the Seamount Biogeosciences Network (SBN), and the Enduring Resources for Earth Science Education (ERESE). Their underlying databases are publicly available, and the scientific community has contributed widely and is urged to continue to do so. However, the net result is a vast and largely heterogeneous warehouse of geospatial data ranging from carefully prepared maps of seamounts to geochemical data/metadata, daily reports from seagoing expeditions, large volumes of raw and processed multibeam data, images of paleomagnetic sampling sites, etc. This presents a considerable obstacle for integrating other rich media content, such as videos, images, data files, cruise tracks, and interoperable database results, without overwhelming the web user. The four EarthRef.org portals clearly lend themselves to a more intuitive user interface and have, therefore, been an invaluable test bed for the design and implementation of FlashMap, a versatile KML-driven geospatial browser written for reliability and speed in Adobe Flash. FlashMap allows layers of content to be loaded and displayed over a streaming high-resolution map which can be zoomed and panned similarly to Google Maps and Google Earth. Many organizations, from National Geographic to the USGS, have begun using Google Earth software to display geospatial content. However, Google Earth, as a desktop application, does not integrate cleanly with existing websites, requiring the user to navigate away from the browser to a separate application; and Google Maps, written in JavaScript, does not scale up reliably to large datasets. 
FlashMap remedies these problems as a web-based application that allows for seamless integration of the real-time display power of Google Earth and the flexibility of the web without losing scalability and control of the base maps. Our Flash-based application is fully compatible with KML (Keyhole Markup Language) 2.2, the most recent iteration of KML, allowing users with existing Google Earth KML files to effortlessly display their geospatial content embedded in a web page. As a test case for FlashMap, the annual Iron-Oxidizing Microbial Observatory (FeMO) dive cruise to the Loihi Seamount, in conjunction with data available from ongoing and published FeMO laboratory studies, showcases the flexibility of this single web-based application. With a KML 2.2-compatible web service providing the content, any database can display results in FlashMap. The user can then hide and show multiple layers of content, potentially from several data sources, and rapidly digest a vast quantity of information to narrow the search results. This flexibility gives experienced users the ability to drill down to exactly the record they are looking for (see SERC at Carleton College's educational application of FlashMap at http://serc.carleton.edu/sp/erese/activities/22223.html) and allows users familiar with Google Earth to load and view geospatial data content within a browser from any computer with an internet connection.
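    The pattern described here is that any web service emitting KML 2.2 can feed the map client. A minimal sketch of generating such a layer in Python with the standard library (the placemark labels and coordinates are invented placeholders, not EarthRef.org data):

```python
# Minimal sketch: build a KML 2.2 layer that a KML-driven map client
# (such as FlashMap or Google Earth) could load from a web service.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def build_kml_layer(name, placemarks):
    """placemarks: iterable of (label, lon, lat) tuples."""
    ET.register_namespace("", KML_NS)  # serialize without a prefix
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    ET.SubElement(doc, f"{{{KML_NS}}}name").text = name
    for label, lon, lat in placemarks:
        pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
        ET.SubElement(pm, f"{{{KML_NS}}}name").text = label
        point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
        # KML coordinate order is lon,lat[,alt]
        ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Illustrative dive-site marker near Hawaii (placeholder coordinates)
layer = build_kml_layer("Dive sites", [("Marker 1", -155.27, 18.92)])
```

    A service would return this string with a KML content type; the same document loads unchanged in Google Earth, which is the interoperability the abstract emphasizes.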

  13. How good is Google? The quality of otolaryngology information on the internet.

    PubMed

    Pusz, Max D; Brietzke, Scott E

    2012-09-01

    To assess the quality of the information a patient (parent) may encounter using a Google search for typical otolaryngology ailments. Cross-sectional study. Tertiary care center. A Google keyword search was performed for 10 common otolaryngology problems including ear infection, hearing loss, tonsillitis, and so on. The top 10 search results for each were critically examined using the 16-item (1-5 scale) standardized DISCERN instrument. The DISCERN instrument was developed to assess the quality and comprehensiveness of patient treatment choice literature. A total of 100 Web sites were assessed. Of these, 19 (19%) were primarily advertisements for products and were excluded from DISCERN scoring. Searches for more typically chronic otolaryngic problems (eg, tinnitus, sleep apnea, etc) resulted in more biased, advertisement-type results than those for typically acute problems (eg, ear infection, sinus infection, P = .03). The search for "sleep apnea treatment" produced the highest scoring results (mean overall DISCERN score = 3.49, range = 1.81-4.56), and the search for "hoarseness treatment" produced the lowest scores (mean = 2.49, range = 1.56-3.56). Results from major comprehensive Web sites (WebMD, EMedicinehealth.com, Wikipedia, etc.) scored higher than other Web sites (mean DISCERN score = 3.46 vs 2.48, P < .001). There is marked variability in the quality of Web site information for the treatment of common otolaryngologic problems. Searches on more chronic problems resulted in a higher proportion of biased advertisement Web sites. Larger, comprehensive Web sites generally provided better information but were less than perfect in presenting complete information on treatment options.

  14. Multi-Resource Fair Queueing for Packet Processing

    DTIC Science & Technology

    2012-06-19

Huawei, Intel, MarkLogic, Microsoft, NetApp, Oracle, Quanta, Splunk, VMware and by DARPA (contract #FA8650-11-C-7136). Multi-Resource Fair Queueing for...Google PhD Fellowship, gifts from Amazon Web Services, Google, SAP, Blue Goji, Cisco, Cloudera, Ericsson, General Electric, Hewlett Packard, Huawei

  15. Google Earth as a (Not Just) Geography Education Tool

    ERIC Educational Resources Information Center

    Patterson, Todd C.

    2007-01-01

    The implementation of Geographic Information Science (GIScience) applications and discussion of GIScience-related themes are useful for teaching fundamental geographic and technological concepts. As one of the newest geographic information tools available on the World Wide Web, Google Earth has considerable potential to enhance methods for…

  16. Evaluation of the Kloswall longwall mining system

    NASA Astrophysics Data System (ADS)

    Guay, P. J.

    1982-04-01

A new longwall mining system specifically designed to extract a very deep web (48 inches or deeper) from a longwall panel was studied. A productivity and cost analysis comparing the new mining system with a conventional longwall operation taking a 30 inch wide web is presented. It is shown that the new system will increase annual production and return on investment in most cases. Conceptual drawings and specifications for a high-capacity three-drum shearer and a unique shield type of roof support specifically designed for very wide web operation are reported. The advantages and problems associated with wide web mining, in general and as they relate specifically to the equipment selected for the new mining system, are discussed.

  17. Web-based surveillance of public information needs for informing preconception interventions.

    PubMed

    D'Ambrosio, Angelo; Agricola, Eleonora; Russo, Luisa; Gesualdo, Francesco; Pandolfi, Elisabetta; Bortolus, Renata; Castellani, Carlo; Lalatta, Faustina; Mastroiacovo, Pierpaolo; Tozzi, Alberto Eugenio

    2015-01-01

The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for the promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages, and activity on social networks regarding preconception health. Based on the American College of Obstetricians and Gynecologists (ACOG) guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases, and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.
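    The keyword-targeting step described here can be illustrated with a simple matcher that assigns raw search queries to preconception-care topics. The topic names follow the abstract, but the keyword sets and the sample query are invented for illustration:

```python
# Illustrative sketch of keyword-based topic assignment for search
# queries. Topic names come from the abstract; keyword lists are made up.
TOPIC_KEYWORDS = {
    "Folic Acid": {"folic", "folate"},
    "Vaccinations": {"vaccine", "vaccination", "rubella"},
    "Nutrition": {"diet", "nutrition", "weight"},
}

def classify_query(query):
    """Return the sorted list of topics whose keywords appear in the query."""
    words = set(query.lower().split())
    return sorted(t for t, kw in TOPIC_KEYWORDS.items() if words & kw)

matches = classify_query("folic acid dose before pregnancy")  # ["Folic Acid"]
```

    A real system would need stemming and Italian-language keyword variants, but the core idea of mapping query volume onto a fixed topic taxonomy is the same.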

  18. Web-Based Surveillance of Public Information Needs for Informing Preconception Interventions

    PubMed Central

    D’Ambrosio, Angelo; Agricola, Eleonora; Russo, Luisa; Gesualdo, Francesco; Pandolfi, Elisabetta; Bortolus, Renata; Castellani, Carlo; Lalatta, Faustina; Mastroiacovo, Pierpaolo; Tozzi, Alberto Eugenio

    2015-01-01

Background The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for the promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages, and activity on social networks regarding preconception health. Methods Based on the American College of Obstetricians and Gynecologists (ACOG) guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. Results We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases, and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Conclusion Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations. PMID:25879682

  19. Novel data sources for women's health research: mapping breast screening online information seeking through Google trends.

    PubMed

    Fazeli Dehkordy, Soudabeh; Carlos, Ruth C; Hall, Kelli S; Dalton, Vanessa K

    2014-09-01

Millions of people use online search engines every day to find health-related information and voluntarily share their personal health status and behaviors on various Web sites. Thus, data from tracking online information seekers' behavior offer potential opportunities for use in public health surveillance and research. Google Trends is a feature of Google that allows Internet users to graph the frequency of searches for a single term or phrase over time or by geographic region. We used Google Trends to describe patterns of information-seeking behavior on the subject of dense breasts and to examine their correlation with the passage or introduction of dense breast notification legislation. To capture the temporal variations of information seeking about dense breasts, the Web search query "dense breast" was entered in the Google Trends tool. We then mapped the dates of legislative actions regarding dense breasts that received widespread coverage in the lay media to information-seeking trends about dense breasts over time. Newsworthy events and legislative actions appear to correlate well with peaks in search volume of "dense breast". Geographic regions with the highest search volumes have passed or denied, or are currently considering, dense breast legislation. Our study demonstrated that legislative actions and the respective news coverage correlate with an increase in information seeking for "dense breast" on Google, suggesting that Google Trends has the potential to serve as a data source for policy-relevant research. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
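    The mapping of events to search-volume peaks described here can be sketched with a toy peak detector. Google Trends reports interest on a normalized 0-100 scale; the weekly values and event weeks below are made up for illustration, not the study's data:

```python
# Sketch: find local peaks in a search-volume series and check which
# coincide with weeks containing newsworthy legislative actions.
def find_peaks(series, threshold):
    """Indices where the value exceeds threshold and both neighbors."""
    return [i for i in range(1, len(series) - 1)
            if series[i] >= threshold
            and series[i] > series[i - 1]
            and series[i] > series[i + 1]]

volume = [5, 8, 6, 12, 40, 22, 9, 7, 55, 30]  # weekly interest (illustrative)
event_weeks = {4, 8}                          # weeks with legislative news
peaks = find_peaks(volume, threshold=20)      # -> [4, 8]
aligned = [w for w in peaks if w in event_weeks]
```

    In the study's terms, a high fraction of peaks falling on event weeks is the qualitative correlation being reported.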

  20. Web Mining: Machine Learning for Web Applications.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Chau, Michael

    2004-01-01

    Presents an overview of machine learning research and reviews methods used for evaluating machine learning systems. Ways that machine-learning algorithms were used in traditional information retrieval systems in the "pre-Web" era are described, and the field of Web mining and how machine learning has been used in different Web mining…

  1. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    NASA Technical Reports Server (NTRS)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

Sally Ride EarthKAM is an educational program funded by NASA that aims to give the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus infeasible for a large number of images. The standard Google Earth program allows the importing of previously created KML (Keyhole Markup Language) files. These KML file-based overlays could then be manually manipulated as image overlays, saved, and uploaded to the project server, where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and to validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to use and thereby contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay and to update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully in EarthKAM.
The use of similar techniques is being investigated to place ground-based observations in a Google Mars environment, allowing the MSL (Mars Science Laboratory) Science Team a means to visualize the rover and its environment.
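    The image overlays being manipulated here are, in KML terms, GroundOverlay elements whose bounding box carries the geo-referencing. A hypothetical sketch of emitting one (the image URL and coordinates are placeholders, not EarthKAM data):

```python
# Hypothetical sketch of a KML GroundOverlay of the kind a
# geo-referencing tool would generate: the LatLonBox holds the
# corrected bounds and rotation for the image overlay.
def ground_overlay_kml(image_url, north, south, east, west, rotation=0.0):
    """Return a standalone KML document with one GroundOverlay."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <Icon><href>{image_url}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
      <rotation>{rotation}</rotation>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

kml = ground_overlay_kml("https://example.org/image.jpg",
                         36.62, 36.04, -120.91, -121.72, rotation=12.5)
```

    Correcting an overlay amounts to rewriting the LatLonBox values; storing those values in a database row instead of a saved KML file is what lets the web interface skip the save/download/re-upload cycle.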

  2. Detecting Runtime Anomalies in AJAX Applications through Trace Analysis

    DTIC Science & Technology

    2011-08-10

    statements by adding the instrumentation to the GWT UI classes, leaving the user code untouched. Some content management frameworks such as Drupal [12...Google web toolkit.” http://code.google.com/webtoolkit/. [12] “Form generation – drupal api.” http://api.drupal.org/api/group/form_api/6. 9

  3. Scan This Book!

    ERIC Educational Resources Information Center

    Albanese, Andrew Richard

    2007-01-01

    In this article, the author presents an interview with Brewster Kahle, leader of the Open Content Alliance (OCA). OCA book scan program is an alternative to Google's library project that aims to make books accessible online. In this interview, Kahle discusses his views on the challenges of getting books on the Web, on Google's library…

  4. (Meta)Search like Google

    ERIC Educational Resources Information Center

    Rochkind, Jonathan

    2007-01-01

    The ability to search and receive results in more than one database through a single interface--or metasearch--is something many users want. Google Scholar--the search engine of specifically scholarly content--and library metasearch products like Ex Libris's MetaLib, Serials Solution's Central Search, WebFeat, and products based on MuseGlobal used…

  5. Environmental asbestos exposure sources in Korea

    PubMed Central

    2016-01-01

Background Because of the long asbestos-related disease latencies (10–50 years), detection, diagnosis, and epidemiologic studies require asbestos exposure history. However, environmental asbestos exposure source (EAES) data are lacking. Objectives To survey the available data for past EAES and supplement these data with interviews. Methods We constructed an EAES database using a literature review and interviews of experts, former traders, and workers. Exposure sources by time period and type were visualized using a geographic information system (ArcGIS), web-based mapping (Google Maps), and OpenWeatherMap. The data were mounted in the GIS to show the exposure source location and trend. Results The majority of asbestos mines, factories, and consumption were located in Chungnam; Gyeonggi, Busan, and Gyeongnam; and Gyeonggi, Daejeon, and Busan, respectively. Shipbuilding and repair companies were mostly located in Busan and Gyeongnam. Conclusions These tools might help evaluate past exposure from EAES and estimate the future asbestos burden in Korea. PMID:27726756

  6. Environmental asbestos exposure sources in Korea.

    PubMed

    Kang, Dong-Mug; Kim, Jong-Eun; Kim, Ju-Young; Lee, Hyun-Hee; Hwang, Young-Sik; Kim, Young-Ki; Lee, Yong-Jin

    2016-10-01

Because of the long asbestos-related disease latencies (10-50 years), detection, diagnosis, and epidemiologic studies require asbestos exposure history. However, environmental asbestos exposure source (EAES) data are lacking. To survey the available data for past EAES and supplement these data with interviews. We constructed an EAES database using a literature review and interviews of experts, former traders, and workers. Exposure sources by time period and type were visualized using a geographic information system (ArcGIS), web-based mapping (Google Maps), and OpenWeatherMap. The data were mounted in the GIS to show the exposure source location and trend. The majority of asbestos mines, factories, and consumption were located in Chungnam; Gyeonggi, Busan, and Gyeongnam; and Gyeonggi, Daejeon, and Busan, respectively. Shipbuilding and repair companies were mostly located in Busan and Gyeongnam. These tools might help evaluate past exposure from EAES and estimate the future asbestos burden in Korea.

  7. Beyond Description: Converting Web Site Usage Statistics into Concrete Site Improvement Ideas

    ERIC Educational Resources Information Center

    Arendt, Julie; Wagner, Cassie

    2010-01-01

    Web site usage statistics are a widely used tool for Web site development, but libraries are still learning how to use them successfully. This case study summarizes how Morris Library at Southern Illinois University Carbondale implemented Google Analytics on its Web site and used the reports to inform a site redesign. As the main campus library at…

  8. Head Lice Surveillance on a Deregulated OTC-Sales Market: A Study Using Web Query Data

    PubMed Central

    Lindh, Johan; Magnusson, Måns; Grünewald, Maria; Hulth, Anette

    2012-01-01

The head louse, Pediculus humanus capitis, is an obligate ectoparasite that causes infestations of humans. Studies have demonstrated a correlation between sales figures for over-the-counter (OTC) treatment products and the number of humans with head lice. The deregulation of the Swedish pharmacy market on July 1, 2009, decreased the possibility of obtaining complete sales figures and thereby yearly trends of head lice infestations. In the present study, we wanted to investigate whether web queries on head lice can be used as a substitute for OTC sales figures. Via Google Insights for Search and the Vårdguiden medical web site, the numbers of queries on “huvudlöss” (head lice) and “hårlöss” (lice in hair) were obtained. The analysis showed that both the Vårdguiden series and the Google series were statistically significant (p<0.001) when added separately, but if the Google series were already included in the model, the Vårdguiden series were not statistically significant (p = 0.5689). In conclusion, web queries can detect whether there is an increase or decrease of head lice-infested humans in Sweden over a period of years, and can be as reliable a proxy as the OTC sales figures. PMID:23144923
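    The proxy idea can be illustrated with a toy least-squares fit relating web-query counts to OTC sales, the relationship the study relies on. The yearly figures below are invented for illustration, not the Swedish data:

```python
# Toy sketch of the proxy: fit a simple least-squares line relating
# yearly web-query counts to OTC sales, then predict sales from queries
# alone (as one would after the sales figures became unavailable).
def fit_line(x, y):
    """Ordinary least squares for a single predictor; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

queries = [120, 150, 90, 200, 170]       # yearly query counts (illustrative)
sales = [1250, 1510, 980, 1940, 1700]    # OTC units sold, same years
slope, intercept = fit_line(queries, sales)
predicted = slope * 160 + intercept      # sales estimate for a query count of 160
```

    A strongly positive slope with small residuals is what justifies treating the query series as a stand-in once direct sales reporting disappears.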

  9. Head lice surveillance on a deregulated OTC-sales market: a study using web query data.

    PubMed

    Lindh, Johan; Magnusson, Måns; Grünewald, Maria; Hulth, Anette

    2012-01-01

The head louse, Pediculus humanus capitis, is an obligate ectoparasite that causes infestations of humans. Studies have demonstrated a correlation between sales figures for over-the-counter (OTC) treatment products and the number of humans with head lice. The deregulation of the Swedish pharmacy market on July 1, 2009, decreased the possibility of obtaining complete sales figures and thereby yearly trends of head lice infestations. In the present study, we wanted to investigate whether web queries on head lice can be used as a substitute for OTC sales figures. Via Google Insights for Search and the Vårdguiden medical web site, the numbers of queries on "huvudlöss" (head lice) and "hårlöss" (lice in hair) were obtained. The analysis showed that both the Vårdguiden series and the Google series were statistically significant (p<0.001) when added separately, but if the Google series were already included in the model, the Vårdguiden series were not statistically significant (p = 0.5689). In conclusion, web queries can detect whether there is an increase or decrease of head lice-infested humans in Sweden over a period of years, and can be as reliable a proxy as the OTC sales figures.

  10. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

Google started an open-source project, the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM Project, VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM Project has already embarked on an ambitious effort to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec, VP9. Although the project is still in its early stages, a set of new experimental coding tools has already been added to baseline VP9, achieving modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.

  11. The Number of Scholarly Documents on the Public Web

    PubMed Central

    Khabsa, Madian; Giles, C. Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
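    The capture/recapture estimate described here follows the classic Lincoln-Petersen form: treat each engine's coverage as one "capture" and the shared documents as the "recapture". A minimal sketch with illustrative counts (not the paper's actual overlap data):

```python
# Capture/recapture (Lincoln-Petersen) sketch: estimate the total
# population from two sample sizes and their observed overlap.
def lincoln_petersen(n1, n2, overlap):
    """Estimated total = n1 * n2 / overlap."""
    if overlap == 0:
        raise ValueError("need a nonzero overlap between the samples")
    return n1 * n2 / overlap

# e.g. engine A covers 100M documents, engine B 50M, with 44M in both
estimate = lincoln_petersen(100_000_000, 50_000_000, 44_000_000)
```

    The estimator assumes the two samples are independent, which is the main caveat when the "samples" are search-engine indexes built by similar crawlers.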

  12. The number of scholarly documents on the public web.

    PubMed

    Khabsa, Madian; Giles, C Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.

  13. HCLS 2.0/3.0: health care and life sciences data mashup using Web 2.0/3.0.

    PubMed

    Cheung, Kei-Hoi; Yip, Kevin Y; Townsend, Jeffrey P; Scotch, Matthew

    2008-10-01

    We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies.

  14. HCLS 2.0/3.0: Health Care and Life Sciences Data Mashup Using Web 2.0/3.0

    PubMed Central

    Cheung, Kei-Hoi; Yip, Kevin Y.; Townsend, Jeffrey P.; Scotch, Matthew

    2010-01-01

    We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies. PMID:18487092

  15. BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju

    2009-01-01

    We demonstrate a Google Earth based novel visualization system for continuous monitoring of biomass at regional and global scales. This system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high temporal resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provides insights into the current conditions of the landscape.

  16. Google classroom as a tool for active learning

    NASA Astrophysics Data System (ADS)

    Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd; Rodzi, Sarah Syamimi Mohamad

    2016-08-01

As the world develops new technologies, the discovery and manipulation of new ideas and concepts in online education are changing rapidly. In response to these changes, many states, institutions, and organizations have been working on strategic plans to implement online education. At the same time, misconceptions and myths related to the difficulty of teaching and learning online, the technologies available to support online instruction, the support and compensation needed for high-quality instructors, and the needs of online students create challenges for such vision statements and planning documents. This paper provides an analysis and evaluation of the effectiveness of Google Classroom's active learning activities for the data mining subject under the Decision Sciences program. The Technology Acceptance Model (TAM) was employed to measure the effectiveness of the learning activities. A total of 100 valid, unduplicated responses from students enrolled in the data mining subject were used in this study. The results indicated that the majority of students were satisfied with the Google Classroom tools introduced in the class. Analysis of the data showed that all ratios were above average. In particular, comparative performance was good in the areas of ease of access, perceived usefulness, communication and interaction, instruction delivery, and students' satisfaction with Google Classroom's active learning activities.

  17. Exploring the Relationship between Self-Regulated Vocabulary Learning and Web-Based Collaboration

    ERIC Educational Resources Information Center

    Liu, Sarah Hsueh-Jui; Lan, Yu-Ju; Ho, Cloudia Ya-Yu

    2014-01-01

    Collaborative learning has placed an emphasis on co-constructing knowledge by sharing and negotiating meaning for problem-solving activities, and this cannot be accomplished without governing the self-regulatory processes of students. This study employed a Web-based tool, Google Docs, to determine the effects of Web-based collaboration on…

  18. Google Wave: Collaboration Reworked

    ERIC Educational Resources Information Center

    Rethlefsen, Melissa L.

    2010-01-01

    Over the past several years, Internet users have become accustomed to Web 2.0 and cloud computing-style applications. It's commonplace and even intuitive to drag and drop gadgets on personalized start pages, to comment on a Facebook post without reloading the page, and to compose and save documents through a web browser. The web paradigm has…

  19. Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles

    ERIC Educational Resources Information Center

    Carlson, Scott

    2006-01-01

    Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…

  20. Novel Data Sources for Women’s Health Research: Mapping Breast Screening Online Information Seeking Through Google Trends

    PubMed Central

    Dehkordy, Soudabeh Fazeli; Carlos, Ruth C.; Hall, Kelli S.; Dalton, Vanessa K.

    2015-01-01

    Rationale and Objectives Millions of people use online search engines every day to find health-related information and voluntarily share their personal health status and behaviors on various Web sites. Thus, data from tracking online information seekers' behavior offer potential opportunities for use in public health surveillance and research. Google Trends is a feature of Google that allows internet users to graph the frequency of searches for a single term or phrase over time or by geographic region. We used Google Trends to describe patterns of information seeking behavior on the subject of dense breasts and to examine their correlation with the passage or introduction of dense breast notification legislation. Materials and Methods In order to capture the temporal variations of information seeking about dense breasts, the web search query "dense breast" was entered in the Google Trends tool. We then mapped the dates of legislative actions regarding dense breasts that received widespread coverage in the lay media to information seeking trends about dense breasts over time. Results Newsworthy events and legislative actions appear to correlate well with peaks in search volume of "dense breast". Geographic regions with the highest search volumes have passed, rejected, or are currently considering dense breast legislation. Conclusions Our study demonstrated that legislative actions and the associated news coverage correlate with increases in information seeking for "dense breast" on Google, suggesting that Google Trends has the potential to serve as a data source for policy-relevant research. PMID:24998689
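The core analysis in the abstract above — aligning peaks in a search-interest series with the dates of legislative actions — can be sketched offline. The weekly interest values, the peak threshold, and the legislation date below are all hypothetical stand-ins; a real series would come from a Google Trends export.

```python
from datetime import date, timedelta

# Hypothetical weekly search-interest values on Google Trends' 0-100 scale
start = date(2013, 1, 6)
weeks = [start + timedelta(weeks=i) for i in range(8)]
interest = [12, 15, 14, 78, 95, 40, 18, 16]

# Hypothetical date of a dense-breast notification law receiving media coverage
legislation_date = date(2013, 1, 27)

def find_peaks(values, threshold):
    """Return indices of weeks whose interest meets or exceeds `threshold`."""
    return [i for i, v in enumerate(values) if v >= threshold]

peaks = find_peaks(interest, threshold=70)
peak_weeks = [weeks[i] for i in peaks]

# Flag peaks that fall within two weeks of the legislative action
near = [w for w in peak_weeks if abs((w - legislation_date).days) <= 14]
print(peak_weeks, len(near))
```

A real study would repeat this per geographic region and normalize across series, since Google Trends reports relative rather than absolute volumes.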

  1. From Analysis to Impact: Challenges and Outcomes from Google's Cloud-based Platforms for Analyzing and Leveraging Petapixels of Geospatial Data

    NASA Astrophysics Data System (ADS)

    Thau, D.

    2017-12-01

    For the past seven years, Google has made petabytes of Earth observation data, and the tools to analyze it, freely available to researchers around the world via cloud computing. These data and tools were initially available via Google Earth Engine and are increasingly available on the Google Cloud Platform. We have introduced a number of APIs for both the analysis and presentation of geospatial data that have been successfully used to create impactful datasets and web applications, including studies of global surface water availability, global tree cover change, and crop yield estimation. Each of these projects used the cloud to analyze thousands to millions of Landsat scenes. The APIs support a range of publishing options, from outputting imagery and data for inclusion in papers, to providing tools for full scale web applications that provide analysis tools of their own. Over the course of developing these tools, we have learned a number of lessons about how to build a publicly available cloud platform for geospatial analysis, and about how the characteristics of an API can affect the kinds of impacts a platform can enable. This study will present an overview of how Google Earth Engine works and how Google's geospatial capabilities are extending to Google Cloud Platform. We will provide a number of case studies describing how these platforms, and the data they host, have been leveraged to build impactful decision support tools used by governments, researchers, and other institutions, and we will describe how the available APIs have shaped (or constrained) those tools. [Image Credit: Tyler A. Erickson]

  2. Multi-cultural Wikipedia mining of geopolitics interactions leveraging reduced Google matrix analysis

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; El Zant, Samer; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2017-09-01

    Geopolitics focuses on political power in relation to geographic space. Interactions among world countries have been widely studied at various scales, observing economic exchanges, world history or international politics among others. This work exhibits the potential of Wikipedia mining for such studies. Indeed, Wikipedia stores valuable fine-grained dependencies among countries by linking webpages together for diverse types of interactions (not only related to economic, political or historical facts). We mine herein the Wikipedia networks of several language editions using the recently proposed method of reduced Google matrix analysis. This approach allows one to establish direct and hidden links between a subset of nodes that belong to a much larger directed network. Our study concentrates on 40 major countries chosen worldwide. Our aim is to offer a multicultural perspective on their interactions by comparing networks extracted from five different Wikipedia language editions, emphasizing the English, Russian and Arabic ones. We demonstrate that this approach allows us to recover meaningful direct and hidden links among the 40 countries of interest.
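The reduced Google matrix method itself is beyond a short sketch, but the full Google matrix it builds on is standard: G = αS + (1 − α)/N, where S is the column-stochastic link matrix and α the damping factor. The toy 4-node graph below is hypothetical; power iteration recovers the PageRank vector.

```python
import numpy as np

# Hypothetical directed adjacency: adj[i][j] = 1 if page i links to page j
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

n = adj.shape[0]
alpha = 0.85  # standard damping factor

# Column-stochastic matrix S: column j spreads page j's rank over its out-links
S = adj.T / adj.sum(axis=1)

# Google matrix: damped links plus uniform teleportation
G = alpha * S + (1 - alpha) / n

# Power iteration converges to the principal eigenvector of G (the PageRank)
p = np.full(n, 1 / n)
for _ in range(100):
    p = G @ p

print(np.round(p, 3))
```

The reduced Google matrix restricts this machinery to a chosen subset of nodes (here, 40 countries) while accounting for indirect paths through the rest of the network.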

  3. Coverage of Google Scholar, Scopus, and Web of Science: a case study of the h-index in nursing.

    PubMed

    De Groote, Sandra L; Raszewski, Rebecca

    2012-01-01

    This study compares the articles cited in CINAHL, Scopus, Web of Science (WOS), and Google Scholar and the h-index ratings provided by Scopus, WOS, and Google Scholar. The publications of 30 College of Nursing faculty at a large urban university were examined. Searches by author name were executed in Scopus, WOS, and POP (Publish or Perish, which searches Google Scholar), and the h-index for each author from each database was recorded. In addition, the citing articles of their published articles were imported into a bibliographic management program. This data was used to determine an aggregated h-index for each author. Scopus, WOS, and Google Scholar provided different h-index ratings for authors and each database found unique and duplicate citing references. More than one tool should be used to calculate the h-index for nursing faculty because one tool alone cannot be relied on to provide a thorough assessment of a researcher's impact. If researchers are interested in a comprehensive h-index, they should aggregate the citing references located by WOS and Scopus. Because h-index rankings differ among databases, comparisons between researchers should be done only within a specified database. Copyright © 2012 Elsevier Inc. All rights reserved.
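The aggregated h-index the study computes follows the standard definition: the largest h such that the author has h papers each cited at least h times. A minimal sketch (function name and citation counts are illustrative):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-paper counts after merging citing references from
# WOS and Scopus and removing duplicates, as the study recommends
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3 papers with >= 3 citations each
```

Because each database finds some unique citing references, the merged counts (and hence the aggregated h-index) can only equal or exceed what any single database reports.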

  4. Automating Information Discovery Within the Invisible Web

    NASA Astrophysics Data System (ADS)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents toward being a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
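The crawl-then-index pipeline described above — an indexer extracting keywords and a database answering keyword queries — can be sketched as an in-memory inverted index. The pages here are hypothetical, already-fetched text (no network access), which is exactly what Deep Web content denies a conventional crawler.

```python
# Hypothetical pages a crawler has already fetched (url -> page text)
pages = {
    "http://example.org/a": "deep web resources are not indexed by crawlers",
    "http://example.org/b": "search engines index the surface web",
    "http://example.org/c": "crawlers follow hyperlinks across the web",
}

# Inverted index: keyword -> set of URLs whose text contains it
index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(query):
    """Return URLs containing every query keyword (AND semantics)."""
    hits = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(sorted(search("web crawlers")))
```

Anything not reachable by following links, or generated only in response to a form submission, never enters `pages`, and so can never appear in any result set — the essence of the Deep Web problem.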

  5. Critical Reading of the Web

    ERIC Educational Resources Information Center

    Griffin, Teresa; Cohen, Deb

    2012-01-01

    The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…

  6. Microsoft or Google Web 2.0 Tools for Course Management

    ERIC Educational Resources Information Center

    Rienzo, Thomas; Han, Bernard

    2009-01-01

    While Web 2.0 has no universal definition, it always refers to online interactions in which user groups both provide and receive content with the aim of collective intelligence. Since 2005, online software has provided Web 2.0 collaboration technologies, for little or no charge, that were formerly available only to wealthy organizations. Academic…

  7. Using Google Scholar to Estimate the Impact of Journal Articles in Education

    ERIC Educational Resources Information Center

    van Aalst, Jan

    2010-01-01

    This article discusses the potential of Google Scholar as an alternative or complement to the Web of Science and Scopus for measuring the impact of journal articles in education. Three handbooks on research in science education, language education, and educational technology were used to identify a sample of 112 accomplished scholars. Google…

  9. "Google Reigns Triumphant"?: Stemming the Tide of Googlitis via Collaborative, Situated Information Literacy Instruction

    ERIC Educational Resources Information Center

    Leibiger, Carol A.

    2011-01-01

    Googlitis, the overreliance on search engines for research and the resulting development of poor searching skills, is a recognized problem among today's students. Google is not an effective research tool because, in addition to encouraging keyword searching at the expense of more powerful subject searching, it only accesses the Surface Web and is…

  10. Data Access and Web Services at the EarthScope Plate Boundary Observatory

    NASA Astrophysics Data System (ADS)

    Matykiewicz, J.; Anderson, G.; Henderson, D.; Hodgkinson, K.; Hoyt, B.; Lee, E.; Persson, E.; Torrez, D.; Smith, J.; Wright, J.; Jackson, M.

    2007-12-01

    The EarthScope Plate Boundary Observatory (PBO) at UNAVCO, Inc., part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 880 continuous GPS stations, 103 borehole strainmeter stations, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations and one previously existing laser strainmeter. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of access methods, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.

  11. The Plate Boundary Observatory: Community Focused Web Services

    NASA Astrophysics Data System (ADS)

    Matykiewicz, J.; Anderson, G.; Lee, E.; Hoyt, B.; Hodgkinson, K.; Persson, E.; Wright, J.; Torrez, D.; Jackson, M.

    2006-12-01

    The Plate Boundary Observatory (PBO), part of the NSF-funded EarthScope project, is designed to study the three-dimensional strain field resulting from deformation across the active boundary zone between the Pacific and North American plates in the western United States. To meet these goals, PBO will install 852 continuous GPS stations, 103 borehole strainmeter stations, 28 tiltmeters, and five laser strainmeters, as well as manage data for 209 previously existing continuous GPS stations. UNAVCO provides access to data products from these stations, as well as general information about the PBO project, via the PBO web site (http://pboweb.unavco.org). GPS and strainmeter data products can be found using a variety of channels, including map searches, text searches, and station specific data retrieval. In addition, the PBO construction status is available via multiple mapping interfaces, including custom web based map widgets and Google Earth. Additional construction details can be accessed from PBO operational pages and station specific home pages. The current state of health for the PBO network is available with the statistical snap-shot, full map interfaces, tabular web based reports, and automatic data mining and alerts. UNAVCO is currently working to enhance the community access to this information by developing a web service framework for the discovery of data products, interfacing with operational engineers, and exposing data services to third party participants. In addition, UNAVCO, through the PBO project, provides advanced data management and monitoring systems for use by the community in operating geodetic networks in the United States and beyond. We will demonstrate these systems during the AGU meeting, and we welcome inquiries from the community at any time.

  12. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely influencing the knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost, labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated against 4 web-retrieved videos and 4 operating-room-recorded videos. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Three options for citation tracking: Google Scholar, Scopus and Web of Science.

    PubMed

    Bakkalbasi, Nisa; Bauer, Kathleen; Glover, Janis; Wang, Lei

    2006-06-29

    Researchers turn to citation tracking to find the most influential articles for a particular topic and to see how often their own published papers are cited. For years researchers looking for this type of information had only one resource to consult: the Web of Science from Thomson Scientific. In 2004 two competitors emerged--Scopus from Elsevier and Google Scholar from Google. The research reported here uses citation analysis in an observational study examining these three databases; comparing citation counts for articles from two disciplines (oncology and condensed matter physics) and two years (1993 and 2003) to test the hypothesis that the different scholarly publication coverage provided by the three search tools will lead to different citation counts from each. Eleven journal titles with varying impact factors were selected from each discipline (oncology and condensed matter physics) using the Journal Citation Reports (JCR). All articles published in the selected titles were retrieved for the years 1993 and 2003, and a stratified random sample of articles was chosen, resulting in four sets of articles. During the week of November 7-12, 2005, the citation counts for each research article were extracted from the three sources. The actual citing references for a subset of the articles published in 2003 were also gathered from each of the three sources. For oncology 1993 Web of Science returned the highest average number of citations, 45.3. Scopus returned the highest average number of citations (8.9) for oncology 2003. Web of Science returned the highest number of citations for condensed matter physics 1993 and 2003 (22.5 and 3.9 respectively). The data showed a significant difference in the mean citation rates between all pairs of resources except between Google Scholar and Scopus for condensed matter physics 2003. 
For articles published in 2003 Google Scholar returned the largest amount of unique citing material for oncology and Web of Science returned the most for condensed matter physics. This study did not identify any one of these three resources as the answer to all citation tracking needs. Scopus showed strength in providing citing literature for current (2003) oncology articles, while Web of Science produced more citing material for 2003 and 1993 condensed matter physics, and 1993 oncology articles. All three tools returned some unique material. Our data indicate that the question of which tool provides the most complete set of citing literature may depend on the subject and publication year of a given article.

  14. Three options for citation tracking: Google Scholar, Scopus and Web of Science

    PubMed Central

    Bakkalbasi, Nisa; Bauer, Kathleen; Glover, Janis; Wang, Lei

    2006-01-01

    Background Researchers turn to citation tracking to find the most influential articles for a particular topic and to see how often their own published papers are cited. For years researchers looking for this type of information had only one resource to consult: the Web of Science from Thomson Scientific. In 2004 two competitors emerged – Scopus from Elsevier and Google Scholar from Google. The research reported here uses citation analysis in an observational study examining these three databases; comparing citation counts for articles from two disciplines (oncology and condensed matter physics) and two years (1993 and 2003) to test the hypothesis that the different scholarly publication coverage provided by the three search tools will lead to different citation counts from each. Methods Eleven journal titles with varying impact factors were selected from each discipline (oncology and condensed matter physics) using the Journal Citation Reports (JCR). All articles published in the selected titles were retrieved for the years 1993 and 2003, and a stratified random sample of articles was chosen, resulting in four sets of articles. During the week of November 7–12, 2005, the citation counts for each research article were extracted from the three sources. The actual citing references for a subset of the articles published in 2003 were also gathered from each of the three sources. Results For oncology 1993 Web of Science returned the highest average number of citations, 45.3. Scopus returned the highest average number of citations (8.9) for oncology 2003. Web of Science returned the highest number of citations for condensed matter physics 1993 and 2003 (22.5 and 3.9 respectively). The data showed a significant difference in the mean citation rates between all pairs of resources except between Google Scholar and Scopus for condensed matter physics 2003. 
For articles published in 2003 Google Scholar returned the largest amount of unique citing material for oncology and Web of Science returned the most for condensed matter physics. Conclusion This study did not identify any one of these three resources as the answer to all citation tracking needs. Scopus showed strength in providing citing literature for current (2003) oncology articles, while Web of Science produced more citing material for 2003 and 1993 condensed matter physics, and 1993 oncology articles. All three tools returned some unique material. Our data indicate that the question of which tool provides the most complete set of citing literature may depend on the subject and publication year of a given article. PMID:16805916

  15. Web GIS in practice III: creating a simple interactive map of England's Strategic Health Authorities using Google Maps API, Google Earth KML, and MSN Virtual Earth Map Control

    PubMed Central

    Boulos, Maged N Kamel

    2005-01-01

    This eye-opener article aims at introducing the health GIS community to the emerging online consumer geoinformatics services from Google and Microsoft (MSN), and their potential utility in creating custom online interactive health maps. Using the programmable interfaces provided by Google and MSN, we created three interactive demonstrator maps of England's Strategic Health Authorities. These can be browsed online at – Google Maps API (Application Programming Interface) version, – Google Earth KML (Keyhole Markup Language) version, and – MSN Virtual Earth Map Control version. Google and MSN's worldwide distribution of "free" geospatial tools, imagery, and maps is to be commended as a significant step towards the ultimate "wikification" of maps and GIS. A discussion is provided of these emerging online mapping trends, their expected future implications and development directions, and associated individual privacy, national security and copyrights issues. Although ESRI have announced their planned response to Google (and MSN), it remains to be seen how their envisaged plans will materialize and compare to the offerings from Google and MSN, and also how Google and MSN mapping tools will further evolve in the near future. PMID:16176577
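The Google Earth KML version described in the abstract boils down to serving an XML document of placemarks. A minimal sketch of generating one (the health-authority name and coordinates below are hypothetical; KML puts longitude before latitude):

```python
import xml.etree.ElementTree as ET

def placemark_kml(name, lon, lat):
    """Build a minimal KML document containing one named point placemark."""
    ns = "http://www.opengis.net/kml/2.2"
    ET.register_namespace("", ns)
    kml = ET.Element(f"{{{ns}}}kml")
    pm = ET.SubElement(kml, f"{{{ns}}}Placemark")
    ET.SubElement(pm, f"{{{ns}}}name").text = name
    point = ET.SubElement(pm, f"{{{ns}}}Point")
    # KML coordinate order is longitude,latitude
    ET.SubElement(point, f"{{{ns}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical centroid of one Strategic Health Authority
print(placemark_kml("Example Strategic Health Authority", -1.5, 53.8))
```

Serving such a file with the `application/vnd.google-earth.kml+xml` content type lets Google Earth (and Google Maps) render the point directly; polygon boundaries for health-authority areas use the same document structure with `Polygon` geometry in place of `Point`.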

  16. Web of Science, Scopus, and Google Scholar citation rates: a case study of medical physics and biomedical engineering: what gets cited and what doesn't?

    PubMed

    Trapp, Jamie

    2016-12-01

    There are often differences in a publication's citation count, depending on the database accessed. Here, aspects of citation counts for medical physics and biomedical engineering papers are studied using papers published in the journal Australasian Physical and Engineering Sciences in Medicine. Comparison is made between the Web of Science, Scopus, and Google Scholar. Papers are categorised into subject matter, and citation trends are examined. It is shown that review papers as a group tend to receive more citations on average; however, the highest-cited individual papers are more likely to be research papers.

  17. Trends in access of plant biodiversity data revealed by Google Analytics

    PubMed Central

    Baxter, David G.; Hagedorn, Gregor; Legler, Ben; Gilbert, Edward; Thiele, Kevin; Vargas-Rodriguez, Yalma; Urbatsch, Lowell E.

    2014-01-01

    The amount of plant biodiversity data available via the web has exploded in the last decade, but making these data available requires a considerable investment of time and work, both vital considerations for organizations and institutions looking to validate the impact factors of these online works. Here we used Google Analytics (GA) to measure the value of this digital presence. In this paper we examine usage trends using 15 different GA accounts, spread across 451 institutions or botanical projects that comprise over five percent of the world's herbaria, studied both over a single year and over the accounts' full lifetimes. User data from the sample reveal: 1) over 17 million web sessions, 2) on five primary operating systems, 3) search and direct traffic dominate with minimal impact from social media, 4) mobile and new device types have doubled each year for the past three years, 5) and web browsers, the tools we use to interact with the web, are changing. Server-side analytics differ from site to site, making the comparison of their data sets difficult. However, use of Google Analytics erases the reporting heterogeneity of unique server-side analytics, as they can now be examined with a standard that provides clarity for data-driven decisions. The knowledge gained here empowers any collection-based environment, regardless of size, with metrics about usability, design, and possible directions for future development. PMID:25425933

  18. Trends in access of plant biodiversity data revealed by Google Analytics.

    PubMed

    Jones, Timothy Mark; Baxter, David G; Hagedorn, Gregor; Legler, Ben; Gilbert, Edward; Thiele, Kevin; Vargas-Rodriguez, Yalma; Urbatsch, Lowell E

    2014-01-01

    The amount of plant biodiversity data available via the web has exploded in the last decade, but making these data available requires a considerable investment of time and work, both vital considerations for organizations and institutions looking to validate the impact factors of these online works. Here we used Google Analytics (GA) to measure the value of this digital presence. In this paper we examine usage trends using 15 different GA accounts, spread across 451 institutions or botanical projects that comprise over five percent of the world's herbaria, studied both over a single year and over the accounts' full lifetimes. User data from the sample reveal: 1) over 17 million web sessions, 2) on five primary operating systems, 3) search and direct traffic dominate with minimal impact from social media, 4) mobile and new device types have doubled each year for the past three years, 5) and web browsers, the tools we use to interact with the web, are changing. Server-side analytics differ from site to site, making the comparison of their data sets difficult. However, use of Google Analytics erases the reporting heterogeneity of unique server-side analytics, as they can now be examined with a standard that provides clarity for data-driven decisions. The knowledge gained here empowers any collection-based environment, regardless of size, with metrics about usability, design, and possible directions for future development.

  19. OntoGene web services for biomedical text mining.

    PubMed

    Rinaldi, Fabio; Clematide, Simon; Marques, Hernani; Ellendorff, Tilia; Romacker, Martin; Rodriguez-Esteban, Raul

    2014-01-01

    Text mining services are rapidly becoming a crucial component of various knowledge management pipelines, for example in the process of database curation, or for exploration and enrichment of biomedical data within the pharmaceutical industry. Traditional architectures, based on monolithic applications, do not offer sufficient flexibility for a wide range of use case scenarios, and therefore open architectures, as provided by web services, are attracting increased interest. We present an approach towards providing advanced text mining capabilities through web services, using a recently proposed standard for textual data interchange (BioC). The web services leverage a state-of-the-art platform for text mining (OntoGene) which has been tested in several community-organized evaluation challenges, with top ranked results in several of them.
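The BioC interchange standard mentioned above is a simple XML layout of collections, documents, and passages. A minimal sketch of constructing such a document (element names follow the BioC convention; the source, id, and text content are hypothetical):

```python
import xml.etree.ElementTree as ET

# Minimal BioC-style collection: one document holding one passage
collection = ET.Element("collection")
ET.SubElement(collection, "source").text = "example"
doc = ET.SubElement(collection, "document")
ET.SubElement(doc, "id").text = "12345"
passage = ET.SubElement(doc, "passage")
ET.SubElement(passage, "offset").text = "0"  # character offset of the passage
ET.SubElement(passage, "text").text = "BRCA1 is associated with breast cancer."

xml_str = ET.tostring(collection, encoding="unicode")
print(xml_str)
```

A text-mining web service in this style accepts such a document, adds annotation elements (entities, relations) inside each passage, and returns the enriched XML, which is what makes the format convenient for chaining services into pipelines.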

  20. OntoGene web services for biomedical text mining

    PubMed Central

    2014-01-01

    Text mining services are rapidly becoming a crucial component of various knowledge management pipelines, for example in the process of database curation, or for exploration and enrichment of biomedical data within the pharmaceutical industry. Traditional architectures, based on monolithic applications, do not offer sufficient flexibility for a wide range of use case scenarios, and therefore open architectures, as provided by web services, are attracting increased interest. We present an approach towards providing advanced text mining capabilities through web services, using a recently proposed standard for textual data interchange (BioC). The web services leverage a state-of-the-art platform for text mining (OntoGene) which has been tested in several community-organized evaluation challenges, with top ranked results in several of them. PMID:25472638

  1. Are Google or Yahoo a good portal for getting quality healthcare web information?

    PubMed

    Chang, Polun; Hou, I-Ching; Hsu, Chiao-Ling; Lai, Hsiang-Fen

    2006-01-01

    We examined the rankings of 50 award-winning health websites in Taiwan against the search results of two popular portals for 6 common diseases. The results showed that the portals' search results do not rank the quality websites reasonably.

  2. An integrated WebGIS framework for volunteered geographic information and social media in soil and water conservation.

    PubMed

    Werts, Joshua D; Mikhailova, Elena A; Post, Christopher J; Sharp, Julia L

    2012-04-01

    Volunteered geographic information and social networking in a WebGIS has the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.

  3. An Integrated WebGIS Framework for Volunteered Geographic Information and Social Media in Soil and Water Conservation

    NASA Astrophysics Data System (ADS)

    Werts, Joshua D.; Mikhailova, Elena A.; Post, Christopher J.; Sharp, Julia L.

    2012-04-01

    Volunteered geographic information and social networking in a WebGIS has the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.

  4. Lexicon Sextant: Modeling a Mnemonic System for Customizable Browser Information Organization and Management

    ERIC Educational Resources Information Center

    Shen, Siu-Tsen

    2016-01-01

    This paper presents an ongoing study of the development of a customizable web browser information organization and management system, which the author has named Lexicon Sextant (LS). LS is a user friendly, graphical web based add-on to the latest generation of web browsers, such as Google Chrome, making it easier and more intuitive to store and…

  5. Why We Are Not Google: Lessons from a Library Web Site Usability Study

    ERIC Educational Resources Information Center

    Swanson, Troy A.; Green, Jeremy

    2011-01-01

    In the Fall of 2009, the Moraine Valley Community College Library, using guidelines developed by Jakob Nielsen, conducted a usability study to determine how students were using the library Web site and to inform the redesign of the Web site. The authors found that Moraine Valley's current gateway design was a more effective access point to library…

  6. Moving Forward: The Next-Gen Catalog and the New Discovery Tools

    ERIC Educational Resources Information Center

    Weare, William H., Jr.; Toms, Sue; Breeding, Marshall

    2011-01-01

    Do students prefer to use Google instead of the library catalog? Ever wondered why? Google is easier to use and delivers plenty of "good enough" resources to meet their needs. The current generation of online catalogs has two main problems. First, the look and feel of the interface doesn't reflect the conventions adhered to elsewhere on the web,…

  7. Development of an Innovative Interactive Virtual Classroom System for K-12 Education Using Google App Engine

    ERIC Educational Resources Information Center

    Mumba, Frackson; Zhu, Mengxia

    2013-01-01

    This paper presents a Simulation-based interactive Virtual ClassRoom web system (SVCR: www.vclasie.com) powered by state-of-the-art cloud computing technology from Google. SVCR integrates popular free open-source math, science, and engineering simulations and provides functions such as secure user access control and management of courses,…

  8. Social Constructivist Approach to Web-Based EFL Learning: Collaboration, Motivation, and Perception on the Use of Google Docs

    ERIC Educational Resources Information Center

    Liu, Sarah Hsueh-Jui; Lan, Yu-Ju

    2016-01-01

    This study reports on the differences in motivation, vocabulary gain, and perceptions of using Google Docs between individual and collaborative learning at the tertiary level. Two classes of English-as-a-Foreign-Language (EFL) students were recruited, and each class was randomly assigned to one of the two groups--individuals or…

  9. Why do people google movement disorders? An infodemiological study of information seeking behaviors.

    PubMed

    Brigo, Francesco; Erro, Roberto

    2016-05-01

    Millions of people worldwide search Google or Wikipedia every day for health-related information. The aim of this study was to evaluate and interpret web search queries for terms related to movement disorders (MD) in English-speaking countries and their changes over time. We analyzed the volume of online searches in Google and Wikipedia for the most common MD and their treatments. We identified the highest search-volume peaks to look for possible relations with online news headlines. The volume of searches for some MD-related queries entered in Google increased enormously over time. Most queries were related to definitions, subtypes, symptoms, and treatment (mostly to adverse effects or, alternatively, to possible alternative treatments). The highest peaks of MD search queries were temporally related to news about celebrities suffering from MD, to specific mass-media events, or to news concerning pharmaceutical companies or scientific discoveries on MD. An increasing number of people use Google and Wikipedia to look for terms related to MD to obtain information on definitions, causes, and symptoms, possibly to aid initial self-diagnosis. MD information demand and the actual prevalence of different MDs do not move in parallel: web search volume may mirror patients' fears and worries about particular disorders perceived as more serious than others, or may be driven by the release of news about celebrities suffering from MD, "breaking news," or specific mass-media events regarding MD.
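
    A peak-identification step like the one described above can be sketched in a few lines: flag weeks whose search volume is a local maximum well above the series median, as candidates to match against news headlines. The data and the threshold factor below are invented for illustration, not taken from the study.

```python
# Hypothetical peak detection over a weekly search-volume series.
# Both the series and the "2x median" threshold are illustrative assumptions.

def find_peaks(volumes, factor=2.0):
    """Return indices that are local maxima and exceed factor * median."""
    ordered = sorted(volumes)
    median = ordered[len(ordered) // 2]
    peaks = []
    for i in range(1, len(volumes) - 1):
        v = volumes[i]
        if v > volumes[i - 1] and v > volumes[i + 1] and v > factor * median:
            peaks.append(i)
    return peaks

weekly = [10, 12, 11, 55, 13, 12, 14, 40, 12]   # invented spikes at weeks 3 and 7
peaks = find_peaks(weekly)
```

    The flagged indices would then be aligned with dated news headlines, as the study does for celebrity and mass-media events.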

  10. "Publish or Perish" as citation metrics used to analyze scientific output in the humanities: International case studies in economics, geography, social sciences, philosophy, and history.

    PubMed

    Baneyx, Audrey

    2008-01-01

    Traditionally, the most commonly used source of bibliometric data is the Thomson ISI Web of Knowledge, in particular the (Social) Science Citation Index and the Journal Citation Reports, which provide the yearly Journal Impact Factors. This database used for the evaluation of researchers is not advantageous in the humanities, mainly because books, conference papers, and non-English journals, which are an important part of scientific activity, are not (well) covered. This paper presents the use of an alternative source of data, Google Scholar, and its benefits in calculating citation metrics in the humanities. Because of its broader range of data sources, the use of Google Scholar generally results in more comprehensive citation coverage in the humanities. This presentation compares and analyzes some international case studies with ISI Web of Knowledge and Google Scholar. The fields of economics, geography, social sciences, philosophy, and history are focused on to illustrate the differences of results between these two databases. To search for relevant publications in the Google Scholar database, the use of "Publish or Perish" and of CleanPoP, which the author developed to clean the results, are compared.

  11. Automatic generation of Web mining environments

    NASA Astrophysics Data System (ADS)

    Cibelli, Maurizio; Costagliola, Gennaro

    1999-02-01

    The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
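
    The three WME stages named above (WEF, DSP, HIES) can be illustrated with a toy pipeline. Everything here, from the scoring rule to the `<hr>` segmentation cue and the `key: value` heuristic, is a hypothetical stand-in for the paper's components, not the authors' implementation.

```python
# Toy sketch of the WME stages: WEF (filter/rank engine results),
# DSP (segment page text on layout cues), HIES (heuristic field extraction).

def wef_filter(results, domain_terms):
    """Web Engine Filter: keep and rank results mentioning domain terms."""
    scored = []
    for r in results:
        score = sum(term in r["text"].lower() for term in domain_terms)
        if score:
            scored.append((score, r))
    return [r for score, r in sorted(scored, key=lambda s: -s[0])]

def dsp_segment(html_text):
    """Data Source Pre-processor: split a page into crude segments on <hr> cues."""
    return [seg.strip() for seg in html_text.split("<hr>") if seg.strip()]

def hies_extract(segment):
    """Heuristic extraction: pull 'key: value' pairs from a segment."""
    fields = {}
    for line in segment.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields

results = [
    {"url": "a", "text": "Used cars for sale, price and model listed"},
    {"url": "b", "text": "Cooking recipes"},
]
kept = wef_filter(results, ["cars", "price", "model"])
page = "model: Fiat 500<hr>price: 4000"
records = [hies_extract(seg) for seg in dsp_segment(page)]
```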

  12. Search engine as a diagnostic tool in difficult immunological and allergologic cases: is Google useful?

    PubMed

    Lombardi, C; Griffiths, E; McLeod, B; Caviglia, A; Penagos, M

    2009-07-01

    Web search engines are an important tool in communication and diffusion of knowledge. Among these, Google appears to be the most popular one: in August 2008, it accounted for 87% of all web searches in the UK, compared with Yahoo's 3.3%. Google's value as a diagnostic guide in general medicine was recently reported. The aim of this comparative cross-sectional study was to evaluate whether searching Google with disease-related terms was effective in the identification and diagnosis of complex immunological and allergic cases. Forty-five case reports were randomly selected by an independent observer from peer-reviewed medical journals. Clinical data were presented separately to three investigators, blinded to the final diagnoses. Investigator A was a Consultant with an expert knowledge in Internal Medicine and Allergy (IM&A) and basic computing skills. Investigator B was a Registrar in IM&A. Investigator C was a Research Nurse. Both Investigators B and C were familiar with computers and search engines. For every clinical case presented, each investigator independently carried out an Internet search using Google to provide a final diagnosis. Their results were then compared with the published diagnoses. Correct diagnoses were provided in 30/45 (66%) cases, 39/45 (86%) cases, and in 29/45 (64%) cases by investigator A, B, and C, respectively. All of the three investigators achieved the correct diagnosis in 19 cases (42%), and all of them failed in two cases. This Google-based search was useful to identify an appropriate diagnosis in complex immunological and allergic cases. Computing skills may help to get better results.

  13. Mining a Web Citation Database for Author Co-Citation Analysis.

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung

    2002-01-01

    Proposes a mining process to automate author co-citation analysis based on the Web Citation Database, a data warehouse for storing citation indices of Web publications. Describes the use of agglomerative hierarchical clustering for author clustering and multidimensional scaling for displaying author cluster maps, and explains PubSearch, a…
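
    The clustering step can be sketched with a toy single-linkage agglomerative procedure over a co-citation matrix. The matrix values and the distance transform (1 / (1 + co-citations)) are illustrative assumptions; the described system works from real citation indices and pairs clustering with multidimensional scaling for the cluster maps.

```python
# Toy single-linkage agglomerative clustering of authors from a
# co-citation matrix. Matrix and distance transform are invented.

def cluster_authors(cocite, k):
    """Merge the two closest clusters (single linkage) until k remain."""
    n = len(cocite)
    dist = [[1.0 / (1 + cocite[i][j]) for j in range(n)] for i in range(n)]
    clusters = [{i} for i in range(n)]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]   # single-linkage merge
        del clusters[b]
    return clusters

# Authors 0 and 1 are often co-cited; author 2 rarely with either.
cocite = [[0, 9, 1],
          [9, 0, 0],
          [1, 0, 0]]
groups = cluster_authors(cocite, 2)
```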

  14. Study of Command and Control (C&C) Structures on Integrating Unmanned Autonomous Systems (UAS) into Manned Environments

    DTIC Science & Technology

    2012-09-01

    and traveled all the way around Lake Tahoe. The self-driving cars have logged over 140,000 miles since October 9, 2010 (Google 2010) pictured here...UNDERWATER VEHICLES (AUV) STARFISH is the name given to a small team of autonomous robotic fish - a project carried out by the Acoustic Research...www.scribd.com/doc/42245301/Manual-Mine-Clearance-Book1. Accessed July 23, 2012. Google. The Self-Driving Car Logs more Miles on New Wheels. August 7

  15. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    PubMed

    Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
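
    The visibility measure implied above can be sketched as the fraction of MeSH-term queries for which a catalog's domain appears in a search engine's result list. The mock result lists below are invented; the study itself sent over half a million real queries.

```python
# Hypothetical visibility score: share of queries whose (mocked) result
# list contains the catalog's domain string.

def visibility(results_per_query, catalog_domain):
    hits = sum(any(catalog_domain in url for url in results)
               for results in results_per_query.values())
    return hits / len(results_per_query)

mock_results = {
    "asthma": ["http://www.chu-rouen.fr/cismef/", "http://example.org"],
    "diabetes": ["http://example.org"],
}
score = visibility(mock_results, "cismef")
```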

  16. [Electronic poison information management system].

    PubMed

    Kabata, Piotr; Waldman, Wojciech; Kaletha, Krystian; Sein Anand, Jacek

    2013-01-01

    We describe deployment of electronic toxicological information database in poison control center of Pomeranian Center of Toxicology. System was based on Google Apps technology, by Google Inc., using electronic, web-based forms and data tables. During first 6 months from system deployment, we used it to archive 1471 poisoning cases, prepare monthly poisoning reports and facilitate statistical analysis of data. Electronic database usage made Poison Center work much easier.

  17. Next-Gen Search Engines

    ERIC Educational Resources Information Center

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  18. Data Mining for Web-Based Support Systems: A Case Study in e-Custom Systems

    NASA Astrophysics Data System (ADS)

    Razmerita, Liana; Kirchner, Kathrin

    This chapter provides an example of a Web-based support system (WSS) used to streamline trade procedures, prevent potential security threats, and reduce tax-related fraud in cross-border trade. The architecture is based on a service-oriented architecture that includes smart seals and Web services. We discuss the implications and suggest further enhancements to demonstrate how such systems can move toward a Web-based decision support system with the support of data mining methods. We provide a concrete example of how data mining can help to analyze the vast amount of data collected while monitoring the container movements along its supply chain.

  19. Semantic web for integrated network analysis in biomedicine.

    PubMed

    Chen, Huajun; Ding, Li; Wu, Zhaohui; Yu, Tong; Dhanapalan, Lavanya; Chen, Jake Y

    2009-03-01

    The Semantic Web technology enables integration of heterogeneous data on the World Wide Web by making the semantics of data explicit through formal ontologies. In this article, we survey the feasibility and state of the art of utilizing the Semantic Web technology to represent, integrate and analyze the knowledge in various biomedical networks. We introduce a new conceptual framework, semantic graph mining, to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Through four case studies, we demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology category cross-talks, drug efficacy analysis and herb-drug interactions analysis.
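
    The idea of combining graph queries with ontology reasoning can be sketched with subject-predicate-object triples and a single level of subclass inference. The triples and class hierarchy below are invented examples, not data from the surveyed biomedical networks.

```python
# Toy "semantic graph mining": a triple store, one inference step over a
# subclass hierarchy, then a graph query that benefits from the inference.

TRIPLES = {("geneA", "associated_with", "parkinsonism"),
           ("herbX", "interacts_with", "drugY")}
SUBCLASS = {"parkinsonism": "movement_disorder"}   # invented hierarchy

def infer(triples, subclass):
    """Add triples implied by the subclass hierarchy (one level only)."""
    inferred = set(triples)
    for s, p, o in triples:
        if o in subclass:
            inferred.add((s, p, subclass[o]))
    return inferred

def genes_for(triples, disease):
    """Graph query: subjects associated with the given disease term."""
    return {s for s, p, o in triples
            if p == "associated_with" and o == disease}

kb = infer(TRIPLES, SUBCLASS)
genes = genes_for(kb, "movement_disorder")
```

    Without the inference step the query would return nothing, since the raw triples only mention the subclass term; this is the point of pairing graph mining with ontology reasoning.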

  20. Can people find patient decision aids on the Internet?

    PubMed

    Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette

    2008-12-01

    To determine if people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines. We chose five medical conditions for which English-language PtDAs were available from at least three different developers. The search engines used were Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first-page results linked to informational Web pages about the condition; only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). Searches using the Yahoo and Google search engines were more likely to find PtDAs. The following combination of search terms: condition, treatment, decision (e.g. breast cancer surgery decision) was most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved by the use of standardized labelling, providing patients with specific Web site addresses, or access to an independent PtDA clearinghouse.

  1. Web data mining

    NASA Astrophysics Data System (ADS)

    Wibonele, Kasanda J.; Zhang, Yanqing

    2002-03-01

    A web data mining system using granular computing and ASP programming is proposed. This is a web-based application that allows web users to submit survey data for many different companies. The survey is a collection of questions that will help these companies develop and improve their business and customer service with their clients by analyzing the survey data. The application allows users to submit data from anywhere. All survey data are collected into a database for further analysis. An administrator can log in to the system and view all submitted data. The application resides on a web server, and the database resides on an MS SQL server.

  2. Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches

    ERIC Educational Resources Information Center

    Adam, Anna; Mowers, Helen

    2008-01-01

    There are a lot of useful tools on the Web, all those social applications, and the like. Still most people go online for one thing--to perform a basic search. For most fact-finding missions, the Web is there. But--as media specialists well know--the sheer wealth of online information can hamper efforts to focus on a few reliable references.…

  3. QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.

    2011-12-01

    The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated into the web environment based on Virtual California. A sample usage scenario is presented that follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.
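
    The "selection of an area of interest via a polygon tool" step amounts to a point-in-polygon test over the data coordinates. A minimal ray-casting sketch, with hypothetical lon/lat values rather than actual UAVSAR coordinates:

```python
# Even-odd ray-casting point-in-polygon test used to keep only the data
# points inside a user-drawn polygon. All coordinates are invented.

def inside(point, polygon):
    """Does point lie inside the closed polygon (vertices in order)?"""
    x, y = point
    hits = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing is to the right
                hits = not hits
    return hits

aoi = [(-118.0, 34.0), (-117.0, 34.0), (-117.0, 35.0), (-118.0, 35.0)]
points = [(-117.5, 34.5), (-116.0, 34.5)]
selected = [p for p in points if inside(p, aoi)]
```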

  4. Virtual Field Trips: Using Google Maps to Support Online Learning and Teaching of the History of Astronomy

    ERIC Educational Resources Information Center

    Fluke, Christopher J.

    2009-01-01

    I report on a pilot study on the use of Google Maps to provide virtual field trips as a component of a wholly online graduate course on the history of astronomy. The Astronomical Tourist Web site (http://astronomy.swin.edu.au/sao/tourist), themed around the role that specific locations on Earth have contributed to the development of astronomical…

  5. The Effectiveness of Web-Based Learning Environment: A Case Study of Public Universities in Kenya

    ERIC Educational Resources Information Center

    Kirui, Paul A.; Mutai, Sheila J.

    2010-01-01

    Web mining is emerging in many aspects of e-learning, aiming to improve online learning and teaching processes and make them more transparent and effective. Researchers using Web mining tools and techniques are challenged to learn more about online students, reshape online courses and educational websites, and create tools for…

  6. Podcast 1 2 3

    ERIC Educational Resources Information Center

    Griffey, Jason

    2007-01-01

    The University of Tennessee at Chattanooga (UTC) offers student workshops that range from Cool New Web Stuff (what is on the web that can help make research, or just plain life, easier) to How To Use Google Scholar. These workshops are brilliant fodder for podcasting. In fact, the initial idea for its podcast project came from a student plagiarism…

  7. Using Web Speech Technology with Language Learning Applications

    ERIC Educational Resources Information Center

    Daniels, Paul

    2015-01-01

    In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allows anyone with an Internet connection and Chrome browser to take advantage of…

  8. Tags Help Make Libraries Del.icio.us: Social Bookmarking and Tagging Boost Participation

    ERIC Educational Resources Information Center

    Rethlefsen, Melissa L.

    2007-01-01

    Traditional library web products, whether online public access catalogs, library databases, or even library web sites, have long been rigidly controlled and difficult to use. Patrons regularly prefer Google's simple interface. Now social bookmarking and tagging tools help librarians bridge the gap between the library's need to offer authoritative,…

  9. Web Analytics Reveal User Behavior: TTU Libraries' Experience with Google Analytics

    ERIC Educational Resources Information Center

    Barba, Ian; Cassidy, Ryan; De Leon, Esther; Williams, B. Justin

    2013-01-01

    Proper planning and assessment surveys of projects for academic library Web sites will not always be predictive of real world use, no matter how many responses they might receive. In this case, multiple-phase development, librarian focus groups, and patron surveys performed before implementation of such a project inaccurately overrated utility and…

  11. Humans Do It Better: Inside the Open Directory Project.

    ERIC Educational Resources Information Center

    Sherman, Chris

    2000-01-01

    Explains the Open Directory Project (ODP), an attempt to catalog the World Wide Web by creating a human-compiled Web directory. Discusses the history of the project; open source models; the use of volunteer editors; quality control; problems and complaints; and use of ODP data by commercial services such as Google. (LRW)

  12. Teaching Lab Science Courses Online: Resources for Best Practices, Tools, and Technology

    ERIC Educational Resources Information Center

    Jeschofnig, Linda; Jeschofnig, Peter

    2011-01-01

    "Teaching Lab Science Courses Online" is a practical resource for educators developing and teaching fully online lab science courses. First, it provides guidance for using learning management systems and other web 2.0 technologies such as video presentations, discussion boards, Google apps, Skype, video/web conferencing, and social media…

  13. A Web Portal-Based Time-Aware KML Animation Tool for Exploring Spatiotemporal Dynamics of Hydrological Events

    NASA Astrophysics Data System (ADS)

    Bao, X.; Cai, X.; Liu, Y.

    2009-12-01

    Understanding spatiotemporal dynamics of hydrological events such as storms and droughts is highly valuable for decision making on disaster mitigation and recovery. Virtual Globe-based technologies such as Google Earth and Open Geospatial Consortium KML standards show great promise for collaborative exploration of such events using visual analytical approaches. However, there are currently two barriers to wider usage of such approaches. First, there is no easy way to use open-source tools to convert legacy or existing data formats such as shapefiles, GeoTIFF, or web-services-based data sources to KML and to produce time-aware KML files. Second, an integrated web portal-based time-aware animation tool is currently not available; users usually share their files in the portal but have no means to visually explore them without leaving the portal environment with which they are familiar. We develop a web portal-based time-aware KML animation tool for viewing extreme hydrologic events. The tool is based on the Google Earth JavaScript API and the Java Portlet standard 2.0 (JSR-286), and it is currently deployable in one of the most popular open-source portal frameworks, namely Liferay. We have also developed an open-source toolkit, kml-soc-ncsa (http://code.google.com/p/kml-soc-ncsa/), to facilitate the conversion of multiple formats into KML and the creation of time-aware KML files. We illustrate our tool with example cases in which drought and storm events with both time and space dimensions can be explored in this web-based KML animation portlet. The tool provides an easy-to-use, browser-based portal environment for multiple users to collaboratively share and explore their time-aware KML files, improving understanding of the spatiotemporal dynamics of hydrological events.
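
    The conversion target described above, a time-aware KML file, is easy to illustrate: each observation becomes a Placemark carrying an OGC KML <TimeStamp>, which Google Earth's time slider can then animate. This is a minimal sketch with invented storm records, not the kml-soc-ncsa toolkit's actual output.

```python
# Generate a minimal time-aware KML document: one Placemark per event,
# each with an OGC KML <TimeStamp>. Event records are invented.

def to_time_aware_kml(events):
    placemarks = []
    for name, lon, lat, when in events:
        placemarks.append(
            "  <Placemark>\n"
            f"    <name>{name}</name>\n"
            f"    <TimeStamp><when>{when}</when></TimeStamp>\n"
            f"    <Point><coordinates>{lon},{lat}</coordinates></Point>\n"
            "  </Placemark>"
        )
    body = "\n".join(placemarks)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            f"<Document>\n{body}\n</Document>\n</kml>\n")

storms = [("Storm A", -88.2, 40.1, "2008-06-05"),
          ("Storm B", -89.4, 41.3, "2008-06-12")]
kml = to_time_aware_kml(storms)
```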

  14. Earth Science Mining Web Services

    NASA Astrophysics Data System (ADS)

    Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.

    2008-12-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.
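
    The orchestration pattern described above can be caricatured as a list of steps run in order, each standing in for a wrapped service. The step names and granule records below are placeholders, not the actual S4PM, ADaM, or ECHO interfaces.

```python
# Toy workflow engine in the spirit of the BPEL orchestration described
# above: each "service" is a function, run in sequence over staged data.

def stage_data(granules):          # placeholder for S4PM staging from ECHO
    return [g for g in granules if g["in_bbox"]]

def subset(granules):              # placeholder for an ADaM mining operator
    return [{"id": g["id"], "values": g["values"][:2]} for g in granules]

def classify(granules):            # second placeholder operator
    return {g["id"]: max(g["values"]) for g in granules}

WORKFLOW = [stage_data, subset, classify]

def run(workflow, data):
    """Execute each step in order, passing the result along."""
    for step in workflow:
        data = step(data)
    return data

granules = [{"id": "g1", "in_bbox": True,  "values": [3, 9, 4]},
            {"id": "g2", "in_bbox": False, "values": [7, 1, 2]}]
result = run(WORKFLOW, granules)
```

    The design point is the same loose coupling the abstract emphasizes: any step can be swapped out without touching the others, because each only sees its predecessor's output.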

  15. Earth Science Mining Web Services

    NASA Technical Reports Server (NTRS)

    Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken

    2008-01-01

    To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache, allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through (Simple Object Access Protocol) SOAP-based Web Services.

  16. Google searches help with diagnosis in dermatology.

    PubMed

    Amri, Montassar; Feroz, Kaliyadan

    2014-01-01

    Several previous studies have tried to assess the usefulness of Google search as a diagnostic aid; the results were discordant and have led to controversies. To investigate how often Google search is helpful in reaching correct diagnoses in dermatology, two fifth-year students (A and B) and one demonstrator (C) participated as investigators in this study. Twenty-five diagnostic dermatological cases were selected from the clinical cases published in the Web-only Images in Clinical Medicine from March 2005 to November 2009. The main outcome measure of our study was to compare the number of correct diagnoses provided by the investigators without, and with, Google search. Investigator A gave correct diagnoses in 9/25 (36%) cases without Google search; his diagnostic success after Google search was 18/25 (72%). Investigator B's results were 11/25 (44%) correct diagnoses without Google search and 19/25 (76%) after this search. For investigator C, the results were 12/25 (48%) without Google search and 18/25 (72%) after the use of this tool. Thus, the total number of correct diagnoses provided by the three investigators was 32 (42.6%) without Google search and 55 (73.3%) when using this facility. The difference between the total number of correct diagnoses given by the three investigators without, and with, Google search was statistically significant (p = 0.0002). In light of our findings, Google search appears to be an interesting diagnostic aid in dermatology. However, we emphasize that diagnosis is primarily an art based on clinical skills and experience.

  17. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-03-16

    of infrastructure-as-a-service (IaaS) cloud computing services such as Amazon Web Services, Google Compute Engine, Rackspace, et al. means that...Implementation We implemented keylime in ∼3.2k lines of Python in four components: registrar, node, CV, and tenant. The registrar offers a REST-based web ...bootstrap key K. It provides an unencrypted REST-based web service for these two functions. As described earlier, the protocols for exchanging data

  18. Successful participant recruitment strategies for an online smokeless tobacco cessation program.

    PubMed

    Gordon, Judith S; Akers, Laura; Severson, Herbert H; Danaher, Brian G; Boles, Shawn M

    2006-12-01

    An estimated 22% of Americans currently use smokeless tobacco (ST). Most live in small towns and rural areas that offer few ST cessation resources. Approximately 94 million Americans use the Internet for health-related information, and on-line access is growing among lower-income and less-educated groups. As part of a randomized clinical trial to assess the reach and effectiveness of Web-based programs for delivering an ST cessation intervention, the authors developed and evaluated several methods for overcoming the recruitment challenges associated with Web-based research. This report describes and evaluates these methods. Participants were recruited through: (a) Thematic promotional "releases" to print and broadcast media, (b) Google ads, (c) placement of a link on other Web sites, (d) limited purchase of paid advertising, (e) direct mailings to ST users, and (f) targeted mailings to health care and tobacco control professionals. Combined recruitment activities resulted in more than 23,500 hits on our recruitment website from distinct IP addresses over 15 months, which yielded 2,523 eligible ST users who completed the registration process and enrolled in the study. Self-reports revealed that at least 1,276 (50.6%) of these participants were recruited via mailings, 874 (34.6%) from Google ads or via search engines or links on another Web site, and 373 (14.8%) from all other methods combined. The use of thematic mailings is novel in research settings. Recruitment of study participants went quickly and smoothly. Google ads and mailings to media outlets were the methods that recruited the highest number of participants.

  19. Lightweight monitoring and control system for coal mine safety using REST style.

    PubMed

    Cheng, Bo; Cheng, Xin; Chen, Junliang

    2015-01-01

    The complex environment of a coal mine requires the underground environment, devices and miners to be constantly monitored to ensure safe coal production. However, existing coal mines do not meet these coverage requirements because blind spots occur when using a wired network. In this paper, we develop a Web-based, lightweight remote monitoring and control platform using a wireless sensor network (WSN) with the REST style to collect temperature, humidity and methane concentration data in a coal mine using sensor nodes. This platform also collects information on personnel positions inside the mine. We implement a RESTful application programming interface (API) that provides access to underground sensors and instruments through the Web such that underground coal mine physical devices can be easily interfaced to remote monitoring and control applications. We also implement three different scenarios for Web-based, lightweight remote monitoring and control of coal mine safety and measure and analyze the system performance. Finally, we present the conclusions from this study and discuss future work. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
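
    The RESTful access pattern described above can be sketched as a tiny client-side helper. The URI layout, node ID, and alarm threshold below are invented for illustration; the paper does not publish its exact API:

```python
import json

# Hypothetical resource layout for illustration; the platform's actual
# URI scheme is not given in the abstract.
BASE = "http://gateway.example/mine/api"

def reading_url(node_id: str, metric: str) -> str:
    """Build a RESTful GET URL for one sensor metric on one node."""
    return f"{BASE}/nodes/{node_id}/readings/{metric}"

def parse_reading(payload: str) -> dict:
    """Decode a JSON sensor reading and flag unsafe methane levels."""
    r = json.loads(payload)
    # ~1% methane by volume is a common alarm threshold; the platform's
    # actual threshold is an assumption here.
    r["alarm"] = r["metric"] == "methane" and r["value"] >= 1.0
    return r

# Example payload a sensor node might return:
sample = '{"node": "n-17", "metric": "methane", "value": 1.4, "unit": "%vol"}'
reading = parse_reading(sample)
print(reading["alarm"])
```

    Exposing each sensor as an addressable resource is what lets remote monitoring applications interface with underground devices over plain HTTP.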

  20. Service-based analysis of biological pathways

    PubMed Central

    Zheng, George; Bouguettaya, Athman

    2009-01-01

    Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation that verifies the user's hypotheses. Conclusion Web service modeling of biological processes allows easy access to and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using our simulation algorithm, also described in this paper. PMID:19796403
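
    The pathway-identification step can be pictured as path search over a graph whose nodes are process service models. A minimal breadth-first sketch; the graph and node names are invented, and PathExplorer's actual algorithms are more involved:

```python
from collections import deque

def find_pathways(graph, start, goal):
    """Enumerate simple paths from start to goal via breadth-first search."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # keep paths simple (no cycles)
                queue.append(path + [nxt])
    return paths

# Toy process graph (invented for illustration):
g = {"ligand": ["receptor"], "receptor": ["kinase", "phosphatase"],
     "kinase": ["gene"], "phosphatase": ["gene"]}
print(find_pathways(g, "ligand", "gene"))
```

    Each returned path is a candidate pathway linking process models, which a simulation step could then accept or reject.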

  1. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  2. Leveraging Google Trends, Twitter, and Wikipedia to Investigate the Impact of a Celebrity's Death From Rheumatoid Arthritis.

    PubMed

    Mahroum, Naim; Bragazzi, Nicola Luigi; Sharif, Kassem; Gianfredi, Vincenza; Nucci, Daniele; Rosselli, Roberto; Brigo, Francesco; Adawi, Mohammad; Amital, Howard; Watad, Abdulla

    2018-06-01

    Technological advancements, such as patient-centered smartphone applications, have made it possible to support self-management of disease. Further, access to health information through the Internet has grown tremendously. This article aimed to investigate how big data can be used to assess the impact of a celebrity's rheumatic disease on public opinion. Various tools and statistical/computational approaches were used, including massive data mining of Google Trends, Wikipedia, and Twitter, and big data analytics. These sources were mined using an in-house script, which facilitated data collection, parsing, handling, processing, and normalization. From Google Trends, the temporal correlation between "Anna Marchesini" and rheumatoid arthritis (RA) queries was 0.66 before Anna Marchesini's death and 0.90 after it. The geospatial correlation between "Anna Marchesini" and RA queries was 0.45 before her death and 0.52 after it. From Wikitrends, the number of accesses to the Wikipedia page for RA increased by 5770% after her death. From Twitter, 1979 tweets were retrieved; numbers of likes, retweets, and hashtags increased over time. Novel data streams and big data analytics are effective for assessing the impact of a famous person's disease on laypeople.
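
    The temporal correlations reported above are Pearson correlations between query-volume time series. A minimal sketch with invented weekly numbers on the 0-100 Google Trends scale; in practice the series would first be fetched from Google Trends:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented weekly relative search volumes (0-100, Google Trends scale):
celebrity = [5, 7, 6, 80, 95, 60, 30]
ra_query  = [10, 12, 11, 55, 70, 48, 25]
r = pearson(celebrity, ra_query)
print(round(r, 2))
```

    Because both invented series spike in the same weeks, their correlation is close to 1, mirroring the post-event rise from 0.66 to 0.90 described in the abstract.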

  3. There comes a baby! What should I do? Smartphones' pregnancy-related applications: A web-based overview.

    PubMed

    Bert, Fabrizio; Passi, Stefano; Scaioli, Giacomo; Gualano, Maria R; Siliquini, Roberta

    2016-09-01

    Our article aims to give an overview of the most frequently mentioned pregnancy-related smartphone applications (Apps). A search string of selected keywords was entered both in a general search engine (Google(®)) and in PubMed. While PubMed returned no pertinent results, a total of 370 web pages were found on Google(®), and 146 of them were selected. All pregnancy-related Apps cited at least eight times were included. Information about each App's producer, price, contents, privacy policy, and presence of a scientific board was collected. Finally, nine Apps were considered. The majority of them were free and available in the two main online markets (Apple(®) App Store and Android(®) Google Play). Five Apps presented a privacy policy statement, while a scientific board was mentioned in only three. Further studies are needed to deepen knowledge of the main risks of these devices, such as privacy loss, content control concerns, the digital divide, and a potential reduction in humanization. © The Author(s) 2015.

  4. Implementing Web 2.0 Tools in the Classroom: Four Teachers' Accounts

    ERIC Educational Resources Information Center

    Kovalik, Cindy; Kuo, Chia-Ling; Cummins, Megan; Dipzinski, Erin; Joseph, Paula; Laskey, Stephanie

    2014-01-01

    In this paper, four teachers shared their experiences using the following free Web 2.0 tools with their students: Jing, Wix, Google Sites, and Blogger. The teachers found that students reacted positively to lessons in which these tools were used, and also noted improvements they could make when using them in the future.

  5. Development of Web-Based Learning Application for Generation Z

    ERIC Educational Resources Information Center

    Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati

    2016-01-01

    This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…

  6. Collaborative Writing with Web 2.0 Technologies: Education Students' Perceptions

    ERIC Educational Resources Information Center

    Brodahl, Cornelia; Hadjerrouit, Said; Hansen, Nils Kristian

    2011-01-01

    Web 2.0 technologies are becoming popular in teaching and learning environments. Among them several online collaborative writing tools, like wikis and blogs, have been integrated into educational settings. Research has been carried out on a wide range of subjects related to wikis, while other, comparable tools like Google Docs and EtherPad remain…

  7. Stopping Web Plagiarists from Stealing Your Content

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2004-01-01

    This article gives tips on how to avoid having content stolen by plagiarists. Suggestions include: using a Web search service such as Google to search for unique strings of text from the individual's site to uncover other sites with the same content; buying an infringement-detection program; or hiring a public relations firm to do the work. There are…

  8. With Free Google Alert Services

    ERIC Educational Resources Information Center

    Gunn, Holly

    2005-01-01

    Alert services are a great way of keeping abreast of topics that interest you. Rather than searching the Web regularly to find new content about your areas of interest, an alert service keeps you informed by sending you notices when new material is added to the Web that matches your registered search criteria. Alert services are examples of push…

  9. 76 FR 20054 - Self-Regulatory Organizations; the NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-11

    ... over 50,000,000 investors on Web sites operated by Google, Interactive Data, and Dow Jones, among... systems (``ATSs''), including dark pools and electronic communication networks (``ECNs''). Each SRO market..., Attain, TracECN, BATS Trading and Direct Edge. Today, BATS publishes its data at no charge on its Web...

  10. A profile of anti-vaccination lobbying on the South African internet, 2011-2013.

    PubMed

    Burnett, Rosemary Joyce; von Gogh, Lauren Jennifer; Moloi, Molelekeng H; François, Guido

    2015-11-01

    The South African Vaccination and Immunisation Centre receives many requests to explain the validity of internet-based anti-vaccination claims. Previous global studies on internet-based anti-vaccination lobbying had not identified anti-vaccination web pages originating in South Africa (SA). To characterise SA internet-based anti-vaccination lobbying. In 2011, searches for anti-vaccination content were performed using Google, Yahoo and MSN-Bing, limited to English-language SA web pages. Content analysis was performed on web pages expressing anti-vaccination sentiment about infant vaccination. This was repeated in 2012 and 2013 using Google, with the first 700 web pages per search being analysed. Blogs/forums, articles and e-shops constituted 40.3%, 55.2% and 4.5% of web pages, respectively. Authors were lay people (63.5%), complementary/alternative medicine (CAM) practitioners (23.1%), medical professionals practising CAM (7.7%) and medical professionals practising only allopathic medicine (5.8%). Advertisements appeared on 55.2% of web pages. Of these, 67.6% were sponsored by or linked to organisations with financial interests in discrediting vaccines, with 80.0% and 24.0% of web pages sponsored by these organisations claiming respectively that vaccines are ineffective and that vaccination is profit driven. The vast majority of web pages (92.5%) claimed that vaccines are not safe, and 77.6% of anti-vaccination claims originated from the USA. South Africans are creating web pages or blogs for local anti-vaccination lobbying. Research is needed to understand what influence internet-based anti-vaccination lobbying has on the uptake of infant vaccination in SA.

  11. Data Mining Web Services for Science Data Repositories

    NASA Astrophysics Data System (ADS)

    Graves, S.; Ramachandran, R.; Keiser, K.; Maskey, M.; Lynnes, C.; Pham, L.

    2006-12-01

    The maturation of web services standards and technologies sets the stage for a distributed "Service-Oriented Architecture" (SOA) for NASA's next generation science data processing. This architecture will allow members of the scientific community to create and combine persistent distributed data processing services and make them available to other users over the Internet. NASA has initiated a project to create a suite of specialized data mining web services designed specifically for science data. The project leverages the Algorithm Development and Mining (ADaM) toolkit as its basis. The ADaM toolkit is a robust, mature and freely available science data mining toolkit that is being used by several research organizations and educational institutions worldwide. These mining services will give the scientific community a powerful and versatile data mining capability that can be used to create higher order products such as thematic maps from current and future NASA satellite data records with methods that are not currently available. The package of mining and related services are being developed using Web Services standards so that community-based measurement processing systems can access and interoperate with them. These standards-based services allow users different options for utilizing them, from direct remote invocation by a client application to deployment of a Business Process Execution Language (BPEL) solutions package where a complex data mining workflow is exposed to others as a single service. The ability to deploy and operate these services at a data archive allows the data mining algorithms to be run where the data are stored, a more efficient scenario than moving large amounts of data over the network. This will be demonstrated in a scenario in which a user uses a remote Web-Service-enabled clustering algorithm to create cloud masks from satellite imagery at the Goddard Earth Sciences Data and Information Services Center (GES DISC).
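
    The idea of chaining mining services into a single workflow (as a BPEL solutions package would) can be sketched locally as function composition. The step names and data below are invented and do not correspond to actual ADaM operators; this is only an analogy for how a workflow engine chains service calls:

```python
def threshold(pixels, cutoff):
    """Toy 'cloud mask' step: flag pixels brighter than a cutoff."""
    return [1 if p > cutoff else 0 for p in pixels]

def erode(mask):
    """Toy cleanup step: keep a flag only if a neighbour is also set."""
    return [m if (i > 0 and mask[i-1]) or (i + 1 < len(mask) and mask[i+1]) else 0
            for i, m in enumerate(mask)]

def run_workflow(data, steps):
    """Apply each mining step in order, as a workflow engine would
    chain the corresponding web services."""
    for step in steps:
        data = step(data)
    return data

pixels = [10, 200, 210, 30, 220, 15]
mask = run_workflow(pixels, [lambda d: threshold(d, 100), erode])
print(mask)
```

    In the real architecture each step would be a remote service invocation, ideally deployed at the archive so the data never leaves the data center.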

  12. An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.

    PubMed Central

    Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.

    2002-01-01

    Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965
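
    The visibility measure used above can be sketched as the fraction of queries for which a catalog's domain appears among the top-ranked results. The result lists below are invented; only the CISMeF host name is taken from the real catalog:

```python
def visibility(results_per_query, catalog_domain, top_k=10):
    """Fraction of queries where the catalog appears in the top-k results."""
    hits = sum(
        any(catalog_domain in url for url in results[:top_k])
        for results in results_per_query
    )
    return hits / len(results_per_query)

# Invented result lists for two MeSH-term queries:
results = [
    ["a.example", "chu-rouen.fr/cismef", "b.example"],
    ["c.example", "d.example"],
]
print(visibility(results, "chu-rouen.fr/cismef"))
```

    Run over half a million queries per engine and language, this kind of score allows the catalogs' relative visibility to be compared.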

  13. Greater freedom of speech on Web 2.0 correlates with dominance of views linking vaccines to autism.

    PubMed

    Venkatraman, Anand; Garg, Neetika; Kumar, Nilay

    2015-03-17

    It is suspected that Web 2.0 web sites, with a lot of user-generated content, often support viewpoints that link autism to vaccines. We assessed the prevalence of the views supporting a link between vaccines and autism online by comparing YouTube, Google and Wikipedia with PubMed. Freedom of speech is highest on YouTube and progressively decreases for the others. Support for a link between vaccines and autism is most prominent on YouTube, followed by Google search results. It is far lower on Wikipedia and PubMed. Anti-vaccine activists use scientific arguments, certified physicians and official-sounding titles to gain credibility, while also leaning on celebrity endorsement and personalized stories. Online communities with greater freedom of speech lead to a dominance of anti-vaccine voices. Moderation of content by editors can offer balance between free expression and factual accuracy. Health communicators and medical institutions need to step up their activity on the Internet. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Google in a Quantum Network

    PubMed Central

    Paparo, G. D.; Martin-Delgado, M. A.

    2012-01-01

    We introduce the characterization of a class of quantum PageRank algorithms in a scenario in which some kind of quantum network is realizable out of the current classical internet web, but no quantum computer is yet available. This class represents a quantization of the PageRank protocol currently employed to list web pages according to their importance. We have found an instance of this class of quantum protocols that outperforms its classical counterpart and may break the classical hierarchy of web pages depending on the topology of the web. PMID:22685626
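
    For reference, the classical PageRank that this protocol quantizes assigns page \(p_i\) an importance via the standard recursive formula, where \(d\) is the damping factor (typically 0.85), \(N\) the number of pages, \(M(p_i)\) the set of pages linking to \(p_i\), and \(L(p_j)\) the out-degree of \(p_j\):

```latex
PR(p_i) \;=\; \frac{1-d}{N} \;+\; d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}
```

    The quantum variants replace the classical random walk underlying this fixed point with a quantum walk, which is what can reorder the resulting hierarchy of pages.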

  15. A construction scheme of web page comment information extraction system based on frequent subtree mining

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent-subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The paper briefly introduces the overall system architecture and its modules, then describes the core of the system in detail, and finally presents a system prototype.
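
    The intuition behind frequent-subtree mining of web pages is that repeated comment blocks share the same DOM subtree shape. A minimal sketch that counts full subtrees by a canonical encoding; real miners such as the one in the paper also handle embedded/induced subtrees and support thresholds, and the page fragments here are invented:

```python
from collections import Counter

def encode(tree):
    """Canonical string for a tree given as (label, [children])."""
    label, children = tree
    return f"{label}({','.join(sorted(encode(c) for c in children))})"

def count_subtrees(tree, counter):
    """Count every full subtree by its canonical encoding."""
    counter[encode(tree)] += 1
    for child in tree[1]:
        count_subtrees(child, counter)

# Two toy web-page fragments sharing a comment-block structure:
page1 = ("div", [("span", []), ("p", [])])
page2 = ("section", [("div", [("span", []), ("p", [])])])

counts = Counter()
for page in (page1, page2):
    count_subtrees(page, counts)

frequent = [t for t, c in counts.items() if c >= 2]
print(sorted(frequent))
```

    Subtrees that recur across pages (here, the div/span/p block) become the extraction templates for comment content.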

  16. On transform coding tools under development for VP10

    NASA Astrophysics Data System (ADS)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
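
    The point of transform coding a prediction residue is energy compaction: a suitable transform concentrates the residue's energy into a few coefficients that are cheap to entropy-code. A minimal sketch of an orthonormal 1-D DCT-II on a toy residue row; this illustrates the principle only, not VP10's actual transform set:

```python
import math

def dct_ii(x):
    """Orthonormal DCT-II of a 1-D signal (the workhorse of residue coding)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

# A smooth toy residue row: energy should concentrate in low frequencies.
residue = [4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
coeffs = dct_ii(residue)
low = sum(c * c for c in coeffs[:2])
total = sum(c * c for c in coeffs)
print(round(low / total, 3))
```

    For this smooth ramp, the first two coefficients carry essentially all the energy; residues with other structures are exactly why VP10 adds more flexible transform choices.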

  17. Infodemiology of status epilepticus: A systematic validation of the Google Trends-based search queries.

    PubMed

    Bragazzi, Nicola Luigi; Bacigaluppi, Susanna; Robba, Chiara; Nardone, Raffaele; Trinka, Eugen; Brigo, Francesco

    2016-02-01

    People increasingly use Google to look for health-related information. We previously demonstrated that in English-speaking countries most people use this search engine to obtain information on status epilepticus (SE) definition, types/subtypes, and treatment. Now, we aimed at providing a quantitative analysis of SE-related web queries. This analysis represents an advancement with respect to what was previously discussed, in that the Google Trends (GT) algorithm has been further refined and correlational analyses have been carried out to validate the GT-based query volumes. Google Trends-based SE-related query volumes were well correlated with information concerning causes and pharmacological and nonpharmacological treatments. Google Trends can provide both researchers and clinicians with data on realities and contexts that are generally overlooked and underexplored by classic epidemiology. In this way, GT can foster new epidemiological studies in the field and can complement traditional epidemiological tools. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. An overview of the web-based Google Earth coincident imaging tool

    USGS Publications Warehouse

    Chander, Gyanesh; Kilough, B.; Gowda, S.

    2010-01-01

    The Committee on Earth Observing Satellites (CEOS) Visualization Environment (COVE) tool is a browser-based application that leverages Google Earth web to display satellite sensor coverage areas. The analysis tool can also be used to identify near simultaneous surface observation locations for two or more satellites. The National Aeronautics and Space Administration (NASA) CEOS System Engineering Office (SEO) worked with the CEOS Working Group on Calibration and Validation (WGCV) to develop the COVE tool. The CEOS member organizations are currently operating and planning hundreds of Earth Observation (EO) satellites. Standard cross-comparison exercises between multiple sensors to compare near-simultaneous surface observations and to identify corresponding image pairs are time-consuming and labor-intensive. COVE is a suite of tools that have been developed to make such tasks easier.
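
    The coincident-imaging search COVE automates amounts to finding overpasses of the same site by two satellites within a short time window. A minimal sketch with invented overpass times; COVE itself works from orbital models and displays the result on Google Earth:

```python
def coincident(passes_a, passes_b, window_minutes=30):
    """Pairs of overpasses of the same site within a time window (minutes)."""
    return [
        (site, ta, tb)
        for site, ta in passes_a
        for site_b, tb in passes_b
        if site == site_b and abs(ta - tb) <= window_minutes
    ]

# Invented overpass times (minutes from a reference epoch):
sat1 = [("site-A", 100), ("site-B", 400)]
sat2 = [("site-A", 115), ("site-B", 700)]
print(coincident(sat1, sat2))
```

    Automating this matching is what removes the labor-intensive step from cross-comparison exercises between sensors.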

  19. ETDEWEB versus the World-Wide-Web: a specific database/web comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, Debbie

    2010-06-28

    A study was performed comparing user search results from the specialized scientific database on energy-related information, ETDEWEB, with search results from the internet search engines Google and Google Scholar. The primary objective of the study was to determine if ETDEWEB (the Energy Technology Data Exchange – World Energy Base) continues to bring the user search results that are not being found by Google and Google Scholar. As a multilateral information exchange initiative, ETDE’s member countries and partners contribute cost- and task-sharing resources to build the largest database of energy-related information in the world. As of early 2010, the ETDEWEB database has 4.3 million citations to world-wide energy literature. One of ETDEWEB’s strengths is its focused scientific content and direct access to full text for its grey literature (over 300,000 documents in PDF available for viewing from the ETDE site and over a million additional links to where the documents can be found at research organizations and major publishers globally). Google and Google Scholar are well-known for the wide breadth of the information they search, with Google bringing in news, factual and opinion-related information, and Google Scholar also emphasizing scientific content across many disciplines. The analysis compared the results of 15 energy-related queries performed on all three systems using identical words/phrases. A variety of subjects was chosen, although the topics were mostly in renewable energy areas due to broad international interest. Over 40,000 search result records from the three sources were evaluated. The study concluded that ETDEWEB is a significant resource to energy experts for discovering relevant energy information. For the 15 topics in this study, ETDEWEB was shown to bring the user unique results not shown by Google or Google Scholar 86.7% of the time. Much was learned from the study beyond just metric comparisons.
Observations about the strengths of each system and factors impacting the search results are also shared along with background information and summary tables of the results. If a user knows a very specific title of a document, all three systems are helpful in finding the user a source for the document. But if the user is looking to discover relevant documents on a specific topic, each of the three systems will bring back a considerable volume of data, but quite different in focus. Google is certainly a highly-used and valuable tool to find significant ‘non-specialist’ information, and Google Scholar does help the user focus on scientific disciplines. But if a user’s interest is scientific and energy-specific, ETDEWEB continues to hold a strong position in the energy research, technology and development (RTD) information field and adds considerable value in knowledge discovery. (auth)

  20. 21st Century Senior Leader Education: Ubiquitous Open Access Learning Environment

    DTIC Science & Technology

    2011-02-22

    Failures: "It's the content, stupid"22 because agencies focus on systems rather than substance and access to the content is critical. The access to Army...Resource Capabilities. 18 As an example to demonstrate how a civilian capability provides learning value to the PLE, the "Google Alerts"® web...technology pushed content to the author for review in the development of this paper. The technology consists of a user creating a Google account, logging

  1. Web GIS in practice V: 3-D interactive and real-time mapping in Second Life

    PubMed Central

    Boulos, Maged N Kamel; Burden, David

    2007-01-01

    This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275
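
    The Google Earth KML feeds mentioned above are plain XML documents of placemarks. A minimal sketch that builds one; the place name and coordinates are invented, and real feeds would add styling, time stamps, and network links:

```python
def kml_placemark(name, lon, lat, description=""):
    """Minimal KML Placemark (coordinates are lon,lat,alt per the KML spec)."""
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<description>{description}</description>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

doc = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    + kml_placemark("News item", -0.1276, 51.5072, "Sample feed entry")
    + "</Document></kml>"
)
print("Placemark" in doc)
```

    A bridge like Daden's consumes documents of this shape to plot points in-world, and can emit them again when users pin locations that should appear back in Google Maps or Google Earth.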

  2. Quantifying the effect of media limitations on outbreak data in a global online web-crawling epidemic intelligence system, 2008–2011

    PubMed Central

    Scales, David; Zelenev, Alexei; Brownstein, John S.

    2013-01-01

    Background This is the first study quantitatively evaluating the effect that media-related limitations have on data from an automated epidemic intelligence system. Methods We modeled time series of HealthMap's two main data feeds, Google News and Moreover, to test for evidence of two potential limitations: first, human resources constraints, and second, high-profile outbreaks “crowding out” coverage of other infectious diseases. Results Google News events declined by 58.3%, 65.9%, and 14.7% on Saturday, Sunday and Monday, respectively, relative to other weekdays. Events were reduced by 27.4% during Christmas/New Years weeks and 33.6% lower during American Thanksgiving week than during an average week for Google News. Moreover data yielded similar results with the addition of Memorial Day (US) being associated with a 36.2% reduction in events. Other holiday effects were not statistically significant. We found evidence for a crowd out phenomenon for influenza/H1N1, where a 50% increase in influenza events corresponded with a 4% decline in other disease events for Google News only. Other prominent diseases in this database – avian influenza (H5N1), cholera, or foodborne illness – were not associated with a crowd out phenomenon. Conclusions These results provide quantitative evidence for the limited impact of editorial biases on HealthMap's web-crawling epidemic intelligence. PMID:24206612
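
    The weekend reductions reported above are relative declines against a weekday baseline. A minimal sketch of that metric with invented daily event counts; the study itself fits time-series models rather than comparing raw means:

```python
def pct_change(weekend_mean, weekday_mean):
    """Percent decline of weekend event counts relative to weekdays."""
    return 100 * (weekday_mean - weekend_mean) / weekday_mean

# Invented daily event counts from a feed like Google News (Mon..Sun);
# the baseline excludes Monday, which the study found also declines.
counts = {"Mon": 70, "Tue": 95, "Wed": 100, "Thu": 98, "Fri": 92,
          "Sat": 40, "Sun": 33}
weekday = sum(counts[d] for d in ("Tue", "Wed", "Thu", "Fri")) / 4
weekend = (counts["Sat"] + counts["Sun"]) / 2
print(round(pct_change(weekend, weekday), 1))
```

    Declines of this size in the raw feed are what the authors interpret as human-resource constraints on media coverage rather than true changes in disease activity.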

  3. Kernel Methods for Mining Instance Data in Ontologies

    NASA Astrophysics Data System (ADS)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principal approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data enjoy promising results and show the usefulness of our approach.
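
    The decomposition idea above relies on closure properties of kernels: a non-negative weighted sum of valid kernels is itself a valid kernel, so specialized sub-kernels for different ontology characteristics can be assembled and tuned. A minimal sketch with invented instance data and sub-kernels (not the paper's actual kernel definitions):

```python
def class_kernel(a, b):
    """Sub-kernel on the ontology class of an instance (delta kernel)."""
    return 1.0 if a["class"] == b["class"] else 0.0

def property_kernel(a, b):
    """Sub-kernel counting shared property values (set-intersection kernel)."""
    return float(len(a["props"] & b["props"]))

def combined_kernel(a, b, weights=(1.0, 0.5)):
    """Weighted sum of sub-kernels; a non-negative sum of valid kernels
    is itself a valid kernel."""
    w1, w2 = weights
    return w1 * class_kernel(a, b) + w2 * property_kernel(a, b)

x = {"class": "Person", "props": {"worksAt:AIFB", "topic:SemanticWeb"}}
y = {"class": "Person", "props": {"topic:SemanticWeb"}}
print(combined_kernel(x, y))
```

    Any kernel machine (e.g. an SVM) can then consume this similarity directly, which is how non-vectorial ontology instances become amenable to standard learning algorithms.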

  4. Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.

    PubMed

    Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan

    2018-05-02

    The purpose of this study was to utilize sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". Sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance. The Tukey test evaluated differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories (P < 0.001). Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical," "reference," and "patient generated" had slightly positive mean sentiments. The Tukey test showed differences between all category pairs except "medical" versus "reference"; the largest mean difference (-0.13) was seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall mean neutral sentiment, and Web sites categorized as "medical," "reference," and "patient generated" have significantly higher sentiment scores than related Web sites in "legal" and "news/media" categories.
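
    A sentiment score on a -1 to +1 scale can be produced by a simple lexicon-based scorer. The word lists and example sentences below are invented, and the study's actual sentiment tool is more sophisticated; this only illustrates the scale:

```python
# Invented mini-lexicons for illustration only:
POSITIVE = {"effective", "safe", "successful", "relief"}
NEGATIVE = {"pain", "erosion", "lawsuit", "complication", "recall"}

def sentiment(text):
    """Score text in [-1, +1] as (pos - neg) / (pos + neg); 0 if neither."""
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

legal = "Mesh erosion lawsuit alleges pain and complication after surgery."
medical = "Repair was safe and effective with durable relief."
print(sentiment(legal), sentiment(medical))
```

    Averaging such scores per category and comparing the means is exactly the shape of the ANOVA-plus-Tukey analysis described above.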

  5. Abandoned Uranium Mines (AUM) Site Screening Map Service, 2016, US EPA Region 9

    EPA Pesticide Factsheets

    As described in detail in the Five-Year Report, US EPA completed on-the-ground screening of 521 abandoned uranium mine areas. US EPA and the Navajo EPA are using the Comprehensive Database and Atlas to determine which mines should be cleaned up first. US EPA continues to research and identify Potentially Responsible Parties (PRPs) under Superfund to contribute to the costs of cleanup efforts. This US EPA Region 9 web service contains the following map layers: Abandoned Uranium Mines, Priority Mines, Tronox Mines, Navajo Environmental Response Trust Mines, Mines with Enforcement Actions, Superfund AUM Regions, Navajo Nation Administrative Boundaries and Chapter Houses. Mine points have a maximum scale of 1:220,000, while Mine polygons have a minimum scale of 1:220,000. Chapter houses have a minimum scale of 1:200,000. BLM Land Status has a minimum scale of 1:150,000. Full FGDC metadata records for each layer can be found by clicking the layer name at the web service endpoint and viewing the layer description. Data used to create this web service are available for download at https://edg.epa.gov/metadata/catalog/data/data.page. Security Classification: Public. Access Constraints: None. Use Constraints: None. Please check sources, scale, accuracy, currentness and other available information. Please confirm that you are using the most recent copy of both data and metadata. Acknowledgement of the EPA would be appreciated.

  6. OntoMaton: a bioportal powered ontology widget for Google Spreadsheets.

    PubMed

    Maguire, Eamonn; González-Beltrán, Alejandra; Whetzel, Patricia L; Sansone, Susanna-Assunta; Rocca-Serra, Philippe

    2013-02-15

    Data collection in spreadsheets is ubiquitous, but current solutions lack support for collaborative semantic annotation that would promote shared and interdisciplinary annotation practices, supporting geographically distributed players. OntoMaton is an open source solution that brings ontology lookup and tagging capabilities into a cloud-based collaborative editing environment, harnessing Google Spreadsheets and the NCBO Web services. It is a general purpose, format-agnostic tool that may serve as a component of the ISA software suite. OntoMaton can also be used to assist the ontology development process. OntoMaton is freely available from Google widgets under the CPAL open source license; documentation and examples at: https://github.com/ISA-tools/OntoMaton.

  7. BioTextQuest(+): a knowledge integration platform for literature mining and concept discovery.

    PubMed

    Papanikolaou, Nikolas; Pavlopoulos, Georgios A; Pafilis, Evangelos; Theodosiou, Theodosios; Schneider, Reinhard; Satagopam, Venkata P; Ouzounis, Christos A; Eliopoulos, Aristides G; Promponas, Vasilis J; Iliopoulos, Ioannis

    2014-11-15

    The iterative process of finding relevant information in biomedical literature and performing bioinformatics analyses might result in an endless loop for an inexperienced user, considering the exponential growth of scientific corpora and the plethora of tools designed to mine PubMed and related biological databases. Herein, we describe BioTextQuest(+), a web-based interactive knowledge exploration platform with significant advances over its predecessor (BioTextQuest), aiming to bridge processes such as bioentity recognition, functional annotation, document clustering and data integration towards literature mining and concept discovery. BioTextQuest(+) enables PubMed and OMIM querying, retrieval of abstracts related to a targeted request and optimal detection of genes, proteins, molecular functions, pathways and biological processes within the retrieved documents. The front-end interface facilitates the browsing of document clustering per subject, the analysis of term co-occurrence, the generation of tag clouds containing highly represented terms per cluster and at-a-glance popup windows with information about relevant genes and proteins. Moreover, to support experimental research, BioTextQuest(+) addresses integration of its primary functionality with biological repositories and software tools able to deliver further bioinformatics services. The Google-like interface extends beyond simple use by offering a range of advanced parameterization for expert users. We demonstrate the functionality of BioTextQuest(+) through several exemplary research scenarios including author disambiguation, functional term enrichment, knowledge acquisition and concept discovery linking major human diseases, such as obesity and ageing. The service is accessible at http://bioinformatics.med.uoc.gr/biotextquest. Contact: g.pavlopoulos@gmail.com or georgios.pavlopoulos@esat.kuleuven.be. Supplementary data are available at Bioinformatics online.

  8. Post-Web 2.0 Pedagogy: From Student-Generated Content to International Co-Production Enabled by Mobile Social Media

    ERIC Educational Resources Information Center

    Cochrane, Thomas; Antonczak, Laurent; Wagner, Daniel

    2013-01-01

    The advent of web 2.0 has enabled new forms of collaboration centred upon user-generated content; however, mobile social media is enabling a new wave of social collaboration. Mobile devices have disrupted and reinvented traditional media markets and distribution: iTunes, Google Play and Amazon now dominate music industry distribution channels,…

  9. What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?

    ERIC Educational Resources Information Center

    Seyedarabi, Faezeh

    2014-01-01

    This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…

  10. Some Features of "Alt" Texts Associated with Images in Web Pages

    ERIC Educational Resources Information Center

    Craven, Timothy C.

    2006-01-01

    Introduction: This paper extends a series on summaries of Web objects, in this case, the alt attribute of image files. Method: Data were logged from 1894 pages from Yahoo!'s random page service and 4703 pages from the Google directory; an img tag was extracted randomly from each where present; its alt attribute, if any, was recorded; and the…

  11. Measuring Link-Resolver Success: Comparing 360 Link with a Local Implementation of WebBridge

    ERIC Educational Resources Information Center

    Herrera, Gail

    2011-01-01

    This study reviewed link resolver success comparing 360 Link and a local implementation of WebBridge. Two methods were used: (1) comparing article-level access and (2) examining technical issues for 384 randomly sampled OpenURLs. Google Analytics was used to collect user-generated OpenURLs. For both methods, 360 Link out-performed the local…

  12. Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick

    2017-04-01

    Google Earth Engine (GEE) is a parallel geospatial processing platform, which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In recent years, WebGL has become a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of a Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image into visually homogeneous regions. It adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to a significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of reservoirs using multispectral satellite imagery.
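    The paper's WebGL shader implementation is not reproduced here; as a rough sketch of the SLIC idea it describes, the pure-Python version below clusters pixels by k-means over joint (x, y, intensity) features seeded on a regular grid. The grid step, compactness weight, and toy image are assumptions for illustration.

```python
# Illustrative SLIC-style superpixel clustering in pure Python.
# Pixels are assigned by k-means over (y, x, intensity), with spatial
# distance weighted by a compactness factor m, as in SLIC's D measure.
import math

def slic(image, step=2, m=10.0, iters=5):
    h, w = len(image), len(image[0])
    # Seed cluster centers [y, x, value] on a regular grid, step pixels apart.
    centers = [[y, x, image[y][x]]
               for y in range(step // 2, h, step)
               for x in range(step // 2, w, step)]
    labels = [[0] * w for _ in range(h)]
    for _ in range(iters):
        # Assignment step: nearest center under joint color+space distance.
        for y in range(h):
            for x in range(w):
                best, bestd = 0, float("inf")
                for k, (cy, cx, cv) in enumerate(centers):
                    dc = image[y][x] - cv
                    ds = math.hypot(y - cy, x - cx)
                    d = math.hypot(dc, (m / step) * ds)
                    if d < bestd:
                        best, bestd = k, d
                labels[y][x] = best
        # Update step: move each center to the mean of its assigned pixels.
        for k in range(len(centers)):
            pts = [(y, x) for y in range(h) for x in range(w)
                   if labels[y][x] == k]
            if pts:
                cy = sum(p[0] for p in pts) / len(pts)
                cx = sum(p[1] for p in pts) / len(pts)
                cv = sum(image[p[0]][p[1]] for p in pts) / len(pts)
                centers[k] = [cy, cx, cv]
    return labels
```

    On a tiny image whose left half is dark and right half bright, the left and right pixels end up in different superpixels; the GPU version parallelizes the assignment step across shader cores.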

  13. A Web-Based Information System for Field Data Management

    NASA Astrophysics Data System (ADS)

    Weng, Y. H.; Sun, F. S.

    2014-12-01

    A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts in the middle tier, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.
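    The XML export step described above can be sketched with the standard library; the record fields below are hypothetical, not the system's actual schema.

```python
# Sketch of exporting a field record to human- and machine-readable XML.
# Field names ("site", "lat", "lon", "lithology") are hypothetical.
import xml.etree.ElementTree as ET

def record_to_xml(record: dict) -> str:
    """Serialize one field record as an XML string."""
    root = ET.Element("fieldRecord")
    for key, value in record.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = record_to_xml({"site": "Outcrop-12", "lat": 41.07,
                         "lon": -81.51, "lithology": "sandstone"})
```

    Because the output is plain XML, any consumer (including a Google Earth/Maps front end, after conversion to KML) can parse it back without knowledge of the server-side stack.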

  14. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
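    The ARGSA algorithm itself is not given in this record; for reference, the Apriori baseline it is compared against can be sketched as follows (the toy learner-activity transactions are made up).

```python
# Baseline Apriori frequent-itemset mining: the comparison algorithm named
# in the abstract, not the paper's ARGSA method.
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: support} for all itemsets meeting min_support."""
    n = len(transactions)
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, k_sets = {}, items
    while k_sets:
        # Count support of each candidate itemset.
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        survivors = {s: c / n for s, c in counts.items()
                     if c / n >= min_support}
        frequent.update(survivors)
        # Join step: build (k+1)-itemsets from surviving k-itemsets.
        keys = list(survivors)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

logs = [frozenset(t) for t in
        [{"quiz", "video"}, {"quiz", "video", "forum"}, {"quiz", "forum"}]]
freq = apriori(logs, min_support=2 / 3)
```

    With min_support = 2/3, the three singletons and the pairs {quiz, video} and {quiz, forum} survive; the candidate explosion at larger sizes is the "rule explosion" the abstract's heuristic search is meant to tame.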

  15. Seeking science information online: Data mining Google to better understand the roles of the media and the education system.

    PubMed

    Segev, Elad; Baram-Tsabari, Ayelet

    2012-10-01

    Which extrinsic cues motivate people to search for science-related information? For many science-related search queries, media attention and time during the academic year are highly correlated with changes in information seeking behavior (expressed by changes in the proportion of Google science-related searches). The data mining analysis presented here shows that changes in the volume of searches for general and well-established science terms are strongly linked to the education system. By contrast, ad-hoc events and current concerns were better aligned with media coverage. The interest and ability to independently seek science knowledge in response to current events or concerns is one of the fundamental goals of the science literacy movement. This method provides a mirror of extrapolated behavior and as such can assist researchers in assessing the role of the media in shaping science interests, and inform the ways in which lifelong interests in science are manifested in real world situations.
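    The kind of correlation the study relies on can be illustrated with a plain Pearson coefficient; the monthly figures below are invented for the sketch, not the study's data.

```python
# Pearson correlation between monthly search volume and media coverage.
# Both series below are hypothetical.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

searches = [12, 15, 30, 80, 75, 20]  # searches for an ad-hoc science term
articles = [1, 2, 5, 14, 12, 3]      # news articles mentioning it that month
r = pearson(searches, articles)
```

    A coefficient near +1, as in this toy example, indicates search interest tracking media coverage, the pattern the study reports for ad-hoc events; school-calendar alignment for established terms could be tested the same way against an academic-year indicator series.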

  16. Citation Analysis of the Korean Journal of Urology From Web of Science, Scopus, Korean Medical Citation Index, KoreaMed Synapse, and Google Scholar

    PubMed Central

    2013-01-01

    The Korean Journal of Urology began to be published exclusively in English in 2010 and is indexed in PubMed Central/PubMed. This study analyzed a variety of citation indicators of the Korean Journal of Urology before and after 2010 to clarify the present position of the journal among the urology category journals. The impact factor, SCImago Journal Rank (SJR), impact index, Z-impact factor (ZIF, impact factor excluding self-citation), and Hirsch Index (H-index) were referenced or calculated from Web of Science, Scopus, SCImago Journal & Country Ranking, Korean Medical Citation Index (KoMCI), KoreaMed Synapse, and Google Scholar. Both the impact factor and the total citations rose rapidly beginning in 2011. The 2012 impact factor corresponded to the upper 84.9% in the nephrology-urology category, whereas the 2011 SJR was in the upper 58.5%. The ZIF in KoMCI was one fifth of the impact factor because there are only two other urology journals in KoMCI. Up to 2009, more than half of the citations in the Web of Science were from Korean researchers, but from 2010 to 2012, more than 85% of the citations were from international researchers. The H-indexes from Web of Science, Scopus, KoMCI, KoreaMed Synapse, and Google Scholar were 8, 10, 12, 9, and 18, respectively. The strategy of the language change in 2010 was successful from the perspective of citation indicators. The values of the citation indicators will continue to increase rapidly and consistently as the research achievement of authors of the Korean Journal of Urology increases. PMID:23614057

  17. Citation Analysis of the Korean Journal of Urology From Web of Science, Scopus, Korean Medical Citation Index, KoreaMed Synapse, and Google Scholar.

    PubMed

    Huh, Sun

    2013-04-01

    The Korean Journal of Urology began to be published exclusively in English in 2010 and is indexed in PubMed Central/PubMed. This study analyzed a variety of citation indicators of the Korean Journal of Urology before and after 2010 to clarify the present position of the journal among the urology category journals. The impact factor, SCImago Journal Rank (SJR), impact index, Z-impact factor (ZIF, impact factor excluding self-citation), and Hirsch Index (H-index) were referenced or calculated from Web of Science, Scopus, SCImago Journal & Country Ranking, Korean Medical Citation Index (KoMCI), KoreaMed Synapse, and Google Scholar. Both the impact factor and the total citations rose rapidly beginning in 2011. The 2012 impact factor corresponded to the upper 84.9% in the nephrology-urology category, whereas the 2011 SJR was in the upper 58.5%. The ZIF in KoMCI was one fifth of the impact factor because there are only two other urology journals in KoMCI. Up to 2009, more than half of the citations in the Web of Science were from Korean researchers, but from 2010 to 2012, more than 85% of the citations were from international researchers. The H-indexes from Web of Science, Scopus, KoMCI, KoreaMed Synapse, and Google Scholar were 8, 10, 12, 9, and 18, respectively. The strategy of the language change in 2010 was successful from the perspective of citation indicators. The values of the citation indicators will continue to increase rapidly and consistently as the research achievement of authors of the Korean Journal of Urology increases.

  18. A Case Study in Web 2.0 Application Development

    NASA Astrophysics Data System (ADS)

    Marganian, P.; Clark, M.; Shelton, A.; McCarty, M.; Sessoms, E.

    2010-12-01

    Recent web technologies focusing on languages, frameworks, and tools are discussed, using the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System as the primary example. Within that example, we use a popular Python web framework, Django, to build the extensive web services for our users. We also use a second complementary server, written in Haskell, to incorporate the core scheduling algorithms. We provide a desktop-quality experience across all the popular browsers for our users with the Google Web Toolkit and judicious use of JQuery in Django templates. Single sign-on and authentication throughout all NRAO web services is accomplished via the Central Authentication Service (CAS) protocol.

  19. Saint: a lightweight integration environment for model annotation.

    PubMed

    Lister, Allyson L; Pocock, Matthew; Taschuk, Morgan; Wipat, Anil

    2009-11-15

    Saint is a web application which provides a lightweight annotation integration environment for quantitative biological models. The system enables modellers to rapidly mark up models with biological information derived from a range of data sources. Saint is freely available for use on the web at http://www.cisban.ac.uk/saint. The web application is implemented in Google Web Toolkit and Tomcat, with all major browsers supported. The Java source code is freely available for download at http://saint-annotate.sourceforge.net. The Saint web server requires an installation of libSBML and has been tested on Linux (32-bit Ubuntu 8.10 and 9.04).

  20. Graph Mining Meets the Semantic Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan

    The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative graph mining algorithms (triangle count, connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on 6 real world data sets and show that graph mining algorithms (that have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
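    The report expresses PageRank as SPARQL queries driven from Python; those queries are not reproduced here, but the underlying power iteration they implement can be sketched in plain Python as follows.

```python
# PageRank power iteration over an edge list, in plain Python.
# (The report's SPARQL-based formulation computes the same fixed point.)
def pagerank(edges, d=0.85, iters=50):
    """Return {node: rank}; ranks sum to 1. d is the damping factor."""
    nodes = {n for e in edges for n in e}
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    rank = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out[u] or list(nodes)  # dangling nodes spread evenly
            for v in targets:
                new[v] += d * rank[u] / len(targets)
        rank = new
    return rank

r = pagerank([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")])
```

    Each iteration is one sparse matrix-vector product, which is the linear-algebra formulation the report maps onto SPARQL aggregate queries.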

  1. Mining Longitudinal Web Queries: Trends and Patterns.

    ERIC Educational Resources Information Center

    Wang, Peiling; Berry, Michael W.; Yang, Yiheng

    2003-01-01

    Analyzed user queries submitted to an academic Web site during a four-year period, using a relational database, to examine users' query behavior, to identify problems they encounter, and to develop techniques for optimizing query analysis and mining. Linguistic analyses focus on query structures, lexicon, and word associations using statistical…

  2. An Expertise Recommender using Web Mining

    NASA Technical Reports Server (NTRS)

    Joshi, Anupam; Chandrasekaran, Purnima; ShuYang, Michelle; Ramakrishnan, Ramya

    2001-01-01

    This report explored techniques to mine the web pages of scientists to extract information regarding their expertise, build expertise chains and referral webs, and semi-automatically combine this information with directory information services to create a recommender system that permits query by expertise. The approach included experimenting with existing techniques that have been reported in the research literature in the recent past, and adapting them as needed. In addition, software tools were developed to capture and use this information.

  3. Visualization of usability and functionality of a professional website through web-mining.

    PubMed

    Jones, Josette F; Mahoui, Malika; Gopa, Venkata Devi Pragna

    2007-10-11

    Functional interface design requires understanding of the information system structure and the user. Web logs record user interactions with the interface, and thus provide some insight into user search behavior and the efficiency of the search process. The present study uses a data-mining approach with techniques such as association rules, clustering and classification to visualize the usability and functionality of a digital library through in-depth analyses of web logs.

  4. Spectral properties of Google matrix of Wikipedia and other networks

    NASA Astrophysics Data System (ADS)

    Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.

    2013-05-01

    We study the properties of eigenvalues and eigenvectors of the Google matrix of the Wikipedia articles hyperlink network and other real networks. With the help of the Arnoldi method, we analyze the distribution of eigenvalues in the complex plane and show that eigenstates with significant eigenvalue modulus are located on well defined network communities. We also show that the correlator between PageRank and CheiRank vectors distinguishes different organizations of information flow on BBC and Le Monde web sites.

  5. Content and Accessibility of Shoulder and Elbow Fellowship Web Sites in the United States.

    PubMed

    Young, Bradley L; Oladeji, Lasun O; Cichos, Kyle; Ponce, Brent

    2016-01-01

    Increasing numbers of training physicians are using the Internet to gather information about graduate medical education programs. The content and accessibility of web sites that provide this information have been demonstrated to influence applicants' decisions. Assessments of orthopedic fellowship web sites, including sports medicine, pediatrics, hand and spine, have found varying degrees of accessibility and material. The purpose of this study was to evaluate the accessibility and content of American Shoulder and Elbow Surgeons (ASES) shoulder and elbow fellowship web sites (SEFWs). A complete list of ASES programs was obtained from a database on the ASES web site. The accessibility of each SEFW was assessed by the existence of a functioning link found in the database and through Google®. Then, the following content areas of each SEFW were evaluated: fellow education, faculty/previous fellow information, and recruitment. At the time of the study, 17 of the 28 (60.7%) ASES programs had web sites accessible through Google®, and only five (17.9%) had functioning links in the ASES database. Nine programs lacked a web site. Concerning web site content, the majority of SEFWs contained information regarding research opportunities, research requirements, case descriptions, meetings and conferences, teaching responsibilities, attending faculty, the application process, and a program description. Fewer than half of the SEFWs provided information regarding rotation schedules, current fellows, previous fellows, on-call expectations, journal clubs, medical school of current fellows, residency of current fellows, employment of previous fellows, current research, and previous research. A large portion of ASES fellowship programs lacked functioning web sites, and even fewer provided functioning links through the ASES database. Valuable information for potential applicants was largely inadequate across present SEFWs.

  6. Should we Google it? Resource use by internal medicine residents for point-of-care clinical decision making.

    PubMed

    Duran-Nelson, Alisa; Gladding, Sophia; Beattie, Jim; Nixon, L James

    2013-06-01

    To determine which resources residents use at the point-of-care (POC) for decision making, the drivers for selection of these resources, and how residents use Google/Google Scholar to answer clinical questions at the POC. In January 2012, 299 residents from three internal medicine residencies were sent an electronic survey regarding resources used for POC decision making. Resource use frequency and factors influencing choice were determined using descriptive statistics. Binary logistic regression analysis was performed to determine relationships between the independent variables. A total of 167 residents (56%) responded; similar numbers responded at each level of training. Residents most frequently reported using UpToDate and Google at the POC at least daily (85% and 63%, respectively), with speed and trust in the quality of information being the primary drivers of selection. Google, used by 68% of residents, was used primarily to locate Web sites and general information about diseases, whereas Google Scholar, used by 30% of residents, tended to be used for treatment and management decisions or locating a journal article. The findings suggest that internal medicine residents use UpToDate most frequently, followed by consultation with faculty and the search engines Google and Google Scholar; speed, trust, and portability are the biggest drivers for resource selection; and time and information overload appear to be the biggest barriers to resources such as Ovid MEDLINE. Residents frequently used Google and may benefit from further training in information management skills.

  7. Breast cancer on the world wide web: cross sectional survey of quality of information and popularity of websites

    PubMed Central

    Meric, Funda; Bernstam, Elmer V; Mirza, Nadeem Q; Hunt, Kelly K; Ames, Frederick C; Ross, Merrick I; Kuerer, Henry M; Pollock, Raphael E; Musen, Mark A; Singletary, S Eva

    2002-01-01

    Objectives: To determine the characteristics of popular breast cancer related websites and whether more popular sites are of higher quality. Design: The search engine Google was used to generate a list of websites about breast cancer. Google ranks search results by measures of link popularity, that is, the number of links to a site from other sites. The top 200 sites returned in response to the query "breast cancer" were divided into "more popular" and "less popular" subgroups by three different measures of link popularity: Google rank and number of links reported independently by Google and by AltaVista (another search engine). Main outcome measures: Type and quality of content. Results: More popular sites according to Google rank were more likely than less popular ones to contain information on ongoing clinical trials (27% v 12%, P=0.01), results of trials (12% v 3%, P=0.02), and opportunities for psychosocial adjustment (48% v 23%, P<0.01). These characteristics were also associated with a higher number of links as reported by Google and AltaVista. More popular sites by number of linking sites were also more likely to provide updates on other breast cancer research, information on legislation and advocacy, and a message board service. Measures of quality such as display of authorship, attribution or references, currency of information, and disclosure did not differ between groups. Conclusions: Popularity of websites is associated with type rather than quality of content. Sites that include content correlated with popularity may best meet the public's desire for information about breast cancer.
    What is already known on this topic: Patients are using the world wide web to search for health information. Breast cancer is one of the most popular search topics. Characteristics of popular websites may reflect the information needs of patients. What this study adds: Type rather than quality of content correlates with popularity of websites. Measures of quality correlate with accuracy of medical information. PMID:11884322

  8. Application of data mining in science and technology management information system based on WebGIS

    NASA Astrophysics Data System (ADS)

    Wu, Xiaofang; Xu, Zhiyong; Bao, Shitai; Chen, Feixiang

    2009-10-01

    With the rapid development of science and technology and the quick growth of information, a great deal of data accumulates in science and technology management departments. Much knowledge and many rules are contained, but concealed, in these data, so fully excavating and using that knowledge is very important for the management of science and technology: it helps to review and approve science and technology projects more scientifically and makes it easier to transform research achievements into real productivity. This paper therefore researches data mining technology and applies it to a science and technology management information system to discover such knowledge. After analyzing the disadvantages of traditional science and technology management information systems, database, data mining, and web geographic information system (WebGIS) technologies are introduced to develop and construct a science and technology management information system based on WebGIS. Key problems such as data mining and statistical analysis are researched in detail. Furthermore, a prototype system is developed and validated using project data from the National Natural Science Foundation Committee. Spatial data mining is performed along the axes of time, space, and other factors, and a variety of knowledge and rules are excavated using data mining technology, which helps to provide effective support for decision-making.

  9. Discrepancies Between Classic and Digital Epidemiology in Searching for the Mayaro Virus: Preliminary Qualitative and Quantitative Analysis of Google Trends

    PubMed Central

    Adawi, Mohammad; Watad, Abdulla; Sharif, Kassem; Amital, Howard; Mahroum, Naim

    2017-01-01

    Background: Mayaro virus (MAYV), first discovered in Trinidad in 1954, is spread by the Haemagogus mosquito. Small outbreaks have been described in the past in the Amazon jungles of Brazil and other parts of South America. Recently, a case was reported in rural Haiti. Objective: Given the emerging importance of MAYV, we aimed to explore the feasibility of exploiting a Web-based tool for monitoring and tracking MAYV cases. Methods: Google Trends is an online tracking system; a Google-based approach is particularly useful for monitoring infectious disease epidemics. We searched Google Trends from its inception (January 2004 through May 2017) for MAYV-related Web searches worldwide. Results: We noted a burst in search volumes in the period from July 2016 (relative search volume [RSV]=13%) to December 2016 (RSV=18%), with a peak in September 2016 (RSV=100%). Before this burst, the average search activity related to MAYV was very low (median 1%). MAYV-related queries were concentrated in the Caribbean. Scientific interest from the research community and media coverage affected digital seeking behavior. Conclusions: MAYV has always circulated in South America. Its recent appearance in the Caribbean has been a source of concern, which resulted in a burst of Internet queries. While Google Trends cannot be used to perform real-time epidemiological surveillance of MAYV, it can be exploited to capture the public's reaction to outbreaks. Public health workers should be aware of this, in that information and communication technologies could be used to communicate with users, reassure them about their concerns, and empower them in making decisions affecting their health. PMID:29196278
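    Google Trends does not expose raw query counts; its relative search volume (RSV) scales a series so that its peak equals 100. A minimal sketch of that normalization follows; the raw monthly counts below are invented to mirror the reported 13%/18%/100% values.

```python
# Relative search volume (RSV): each point scaled so the series peak is 100.
def to_rsv(counts):
    """Scale a series of raw counts to Google Trends-style RSV values."""
    peak = max(counts)
    return [round(100 * c / peak) for c in counts]

# Hypothetical raw monthly counts, July-December 2016
monthly = [130, 220, 1000, 520, 300, 180]
rsv = to_rsv(monthly)
```

    Because every series is rescaled to its own peak, RSV values track relative interest over time but cannot be compared as absolute query volumes, which is one reason Trends suits reaction-tracking rather than case surveillance.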

  10. Discrepancies Between Classic and Digital Epidemiology in Searching for the Mayaro Virus: Preliminary Qualitative and Quantitative Analysis of Google Trends.

    PubMed

    Adawi, Mohammad; Bragazzi, Nicola Luigi; Watad, Abdulla; Sharif, Kassem; Amital, Howard; Mahroum, Naim

    2017-12-01

    Mayaro virus (MAYV), first discovered in Trinidad in 1954, is spread by the Haemagogus mosquito. Small outbreaks have been described in the past in the Amazon jungles of Brazil and other parts of South America. Recently, a case was reported in rural Haiti. Given the emerging importance of MAYV, we aimed to explore the feasibility of exploiting a Web-based tool for monitoring and tracking MAYV cases. Google Trends is an online tracking system. A Google-based approach is particularly useful for monitoring infectious disease epidemics. We searched Google Trends from its inception (January 2004 through May 2017) for MAYV-related Web searches worldwide. We noted a burst in search volumes in the period from July 2016 (relative search volume [RSV]=13%) to December 2016 (RSV=18%), with a peak in September 2016 (RSV=100%). Before this burst, the average search activity related to MAYV was very low (median 1%). MAYV-related queries were concentrated in the Caribbean. Scientific interest from the research community and media coverage affected digital seeking behavior. MAYV has always circulated in South America. Its recent appearance in the Caribbean has been a source of concern, which resulted in a burst of Internet queries. While Google Trends cannot be used to perform real-time epidemiological surveillance of MAYV, it can be exploited to capture the public's reaction to outbreaks. Public health workers should be aware of this, in that information and communication technologies could be used to communicate with users, reassure them about their concerns, and empower them in making decisions affecting their health. ©Mohammad Adawi, Nicola Luigi Bragazzi, Abdulla Watad, Kassem Sharif, Howard Amital, Naim Mahroum. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 01.12.2017.
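
    To make the relative search volume (RSV) figures above concrete: Google Trends rescales query counts so that the busiest period scores 100. A minimal Python sketch of that normalization, using hypothetical monthly counts chosen to reproduce the percentages reported in the abstract (the real underlying counts are not published):

```python
def relative_search_volume(counts):
    """Rescale raw query counts so the busiest period is 100, which is
    how Google Trends reports relative search volume (RSV)."""
    peak = max(counts.values())
    if peak == 0:
        return {period: 0 for period in counts}
    return {period: round(100 * c / peak) for period, c in counts.items()}

# Hypothetical monthly counts for MAYV-related queries (illustrative only)
counts = {"2016-06": 4, "2016-07": 26, "2016-09": 200, "2016-12": 36}
rsv = relative_search_volume(counts)  # the peak month, 2016-09, maps to 100
```

    With these made-up counts the burst months come out at RSV 13, 100, and 18, matching the pattern the study describes.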

  11. COREMIC: a web-tool to search for a niche associated CORE MICrobiome.

    PubMed

    Rodrigues, Richard R; Rodgers, Nyle C; Wu, Xiaowei; Williams, Mark A

    2018-01-01

    Microbial diversity on earth is extraordinary, and soils alone harbor thousands of species per gram of soil. Understanding how this diversity is sorted and selected into habitat niches is a major focus of ecology and biotechnology, but remains only vaguely understood. A systems-biology approach was used to mine information from databases to show how it can be used to answer questions related to the core microbiome of habitat-microbe relationships. By making use of the burgeoning growth of information from databases, our tool "COREMIC" meets a great need in the search for understanding niche partitioning and habitat-function relationships. The work is unique, furthermore, because it provides a user-friendly, statistically robust web-tool (http://coremic2.appspot.com or http://core-mic.com), developed using Google App Engine, to help in the process of database mining to identify the "core microbiome" associated with a given habitat. A case study is presented using data from 31 switchgrass rhizosphere community habitats across a diverse set of soil and sampling environments. The methodology utilizes an outgroup of 28 non-switchgrass (other grasses and forbs) samples to identify a core switchgrass microbiome. Even across a diverse set of soils (five environments) and with conservative statistical criteria (presence in more than 90% of samples and FDR q-val <0.05% for Fisher's exact test), a core set of bacteria associated with switchgrass was observed. These included, among others, closely related taxa from Lysobacter spp., Mesorhizobium spp., and Chitinophagaceae. These bacteria have been shown to have functions related to the production of bacterial and fungal antibiotics and plant growth promotion. COREMIC can be used as a hypothesis-generating or confirmatory tool that shows great potential for identifying taxa that may be important to the functioning of a habitat (e.g. host plant). The case study, in conclusion, shows that COREMIC can identify key habitat-specific microbes across diverse samples, using currently available databases and unique, freely available software.
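
    The statistical filter described above (presence in more than 90% of target-habitat samples, plus a Fisher's exact test against the outgroup with FDR control) can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration, not COREMIC's actual code, and the taxon counts are toy data:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided (enrichment) Fisher's exact p-value for the 2x2 table
    [[a, b], [c, d]]: P(X >= a) under the hypergeometric null."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted q-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    for rank_idx in range(m - 1, -1, -1):
        i = order[rank_idx]
        prev = min(prev, pvals[i] * m / (rank_idx + 1))
        q[i] = prev
    return q

def core_taxa(presence_in, presence_out, n_in, n_out,
              min_presence=0.90, max_q=0.05):
    """Keep taxa present in > min_presence of target samples whose
    enrichment over the outgroup is significant after FDR correction."""
    taxa = list(presence_in)
    pvals = [fisher_one_sided(presence_in[t], n_in - presence_in[t],
                              presence_out.get(t, 0),
                              n_out - presence_out.get(t, 0))
             for t in taxa]
    qvals = benjamini_hochberg(pvals)
    return [t for t, q in zip(taxa, qvals)
            if presence_in[t] / n_in > min_presence and q < max_q]

# Toy data: 31 target-habitat samples vs 28 outgroup samples
core = core_taxa({"Lysobacter": 30, "RareTaxon": 5},
                 {"Lysobacter": 3, "RareTaxon": 4}, 31, 28)
```

    Here only the taxon found in 30 of 31 target samples, and rarely in the outgroup, survives both filters.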

  12. COREMIC: a web-tool to search for a niche associated CORE MICrobiome

    PubMed Central

    Rodgers, Nyle C.; Wu, Xiaowei; Williams, Mark A.

    2018-01-01

    Microbial diversity on earth is extraordinary, and soils alone harbor thousands of species per gram of soil. Understanding how this diversity is sorted and selected into habitat niches is a major focus of ecology and biotechnology, but remains only vaguely understood. A systems-biology approach was used to mine information from databases to show how it can be used to answer questions related to the core microbiome of habitat-microbe relationships. By making use of the burgeoning growth of information from databases, our tool “COREMIC” meets a great need in the search for understanding niche partitioning and habitat-function relationships. The work is unique, furthermore, because it provides a user-friendly, statistically robust web-tool (http://coremic2.appspot.com or http://core-mic.com), developed using Google App Engine, to help in the process of database mining to identify the “core microbiome” associated with a given habitat. A case study is presented using data from 31 switchgrass rhizosphere community habitats across a diverse set of soil and sampling environments. The methodology utilizes an outgroup of 28 non-switchgrass (other grasses and forbs) samples to identify a core switchgrass microbiome. Even across a diverse set of soils (five environments) and with conservative statistical criteria (presence in more than 90% of samples and FDR q-val <0.05% for Fisher’s exact test), a core set of bacteria associated with switchgrass was observed. These included, among others, closely related taxa from Lysobacter spp., Mesorhizobium spp., and Chitinophagaceae. These bacteria have been shown to have functions related to the production of bacterial and fungal antibiotics and plant growth promotion. COREMIC can be used as a hypothesis-generating or confirmatory tool that shows great potential for identifying taxa that may be important to the functioning of a habitat (e.g. host plant). The case study, in conclusion, shows that COREMIC can identify key habitat-specific microbes across diverse samples, using currently available databases and unique, freely available software. PMID:29473009

  13. Implications of Web of Science journal impact factor for scientific output evaluation in 16 institutions and investigators' opinion.

    PubMed

    Wáng, Yì-Xiáng J; Arora, Richa; Choi, Yongdoo; Chung, Hsiao-Wen; Egorov, Vyacheslav I; Frahm, Jens; Kudo, Hiroyuki; Kuyumcu, Suleyman; Laurent, Sophie; Loffroy, Romaric; Maurea, Simone; Morcos, Sameh K; Ni, Yicheng; Oei, Edwin H G; Sabarudin, Akmal; Yu, Xin

    2014-12-01

    Journal-based metrics are known not to be ideal for measuring the quality of an individual researcher's scientific output. For the current report, 16 contributors from Hong Kong SAR, India, Korea, Taiwan, Russia, Germany, Japan, Turkey, Belgium, France, Italy, UK, The Netherlands, Malaysia, and USA were invited. The following six questions were asked: (I) Are the Web of Science journal impact factor (IF) and Institute for Scientific Information (ISI) citations the main academic output evaluation tools in your institution and your country? (II) How do Google citation counts figure in evaluation at your institution and in your country? (III) If a paper is published in a non-SCI journal but is included in PubMed and searchable by Google Scholar, how is it valued compared with a paper published in a journal with an IF? (IV) Do you value publishing a piece of your work in a non-SCI journal as much as publishing in a journal with an IF? (V) What is your personal view on the metric measurement of scientific output? (VI) Overall, do you think the Web of Science journal IF is beneficial, or is it actually doing more harm? The results show that the IF and ISI citations heavily affect academic life in most of the institutions. Google citation counts, while convenient and speedy to use, have not gained wide 'official' recognition as a tool for scientific output evaluation.

  14. Tapir: A web interface for transit/eclipse observability

    NASA Astrophysics Data System (ADS)

    Jensen, Eric

    2013-06-01

    Tapir is a set of tools, written in Perl, that provides a web interface for showing the observability of periodic astronomical events, such as exoplanet transits or eclipsing binaries. The package provides tools for creating finding charts for each target and airmass plots for each event. The code can access target lists that are stored on-line in a Google spreadsheet or in a local text file.
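
    Tapir itself is written in Perl; as an illustration of the kind of calculation behind its airmass plots, the standard plane-parallel approximation (airmass = sec z, where z is the zenith angle) can be sketched in Python:

```python
from math import cos, radians

def airmass(altitude_deg):
    """Plane-parallel airmass approximation, sec(z), with z the zenith
    angle (90 deg minus altitude). Reasonable above roughly 20 deg of
    altitude; real observability tools use refraction-aware formulas
    near the horizon."""
    if altitude_deg <= 0:
        raise ValueError("target below horizon")
    return 1.0 / cos(radians(90.0 - altitude_deg))

# A target at 30 deg altitude (z = 60 deg) has airmass ~2: light passes
# through about twice the atmosphere it would at the zenith.
```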

  15. Using Google Analytics to evaluate the impact of the CyberTraining project.

    PubMed

    McGuckin, Conor; Crowley, Niall

    2012-11-01

    A focus on results and impact should be at the heart of every project's approach to research and dissemination. This article discusses the potential of Google Analytics (GA: http://google.com/analytics ) as an effective resource for measuring the impact of academic research output and understanding the geodemographics of users of specific Web 2.0 content (e.g., intervention and prevention materials, health promotion and advice). This article presents the results of GA analyses as a resource used in measuring the impact of the EU-funded CyberTraining project, which provided a well-grounded, research-based training manual on cyberbullying for trainers through the medium of a Web-based eBook ( www.cybertraining-project.org ). The training manual includes review information on cyberbullying, its nature and extent across Europe, analyses of current projects, and provides resources for trainers working with the target groups of pupils, parents, teachers, and other professionals. Results illustrate the promise of GA as an effective tool for measuring the impact of academic research and project output, with real potential for tracking and understanding intra- and intercountry regional variations in the uptake of prevention and intervention materials, thus enabling attention to be focused precisely on those regions.

  16. Binary Coded Web Access Pattern Tree in Education Domain

    ERIC Educational Resources Information Center

    Gomathi, C.; Moorthi, M.; Duraiswamy, K.

    2008-01-01

    Web Access Pattern (WAP), which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. Sequential Pattern mining is the process of applying data mining techniques to a sequential database for the purposes of discovering the correlation relationships that exist among an ordered list of…
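
    As a concrete illustration of the support counting that underlies sequential pattern mining (a simplified stand-in, not the paper's binary-coded WAP-tree itself), the following sketch measures how often an ordered page sequence occurs across user sessions; the session data are invented:

```python
def contains_subsequence(session, pattern):
    """True if the pages in `pattern` occur in `session` in order
    (not necessarily contiguously), the usual sequential-pattern notion."""
    it = iter(session)
    return all(page in it for page in pattern)

def support(sessions, pattern):
    """Fraction of sessions containing `pattern` as an ordered subsequence;
    patterns whose support exceeds a chosen threshold are 'frequent'."""
    return sum(contains_subsequence(s, pattern) for s in sessions) / len(sessions)

# Invented access sequences for three users of a course website
sessions = [
    ["home", "courses", "quiz", "logout"],
    ["home", "quiz", "courses", "quiz"],
    ["courses", "home", "logout"],
]
```

    For example, the pattern ["home", "quiz"] is supported by two of the three sessions, since only those visit the quiz page after the home page.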

  17. The impact of the web and social networks on vaccination. New challenges and opportunities offered to fight against vaccine hesitancy.

    PubMed

    Stahl, J-P; Cohen, R; Denis, F; Gaudelus, J; Martinot, A; Lery, T; Lepetit, H

    2016-05-01

    Vaccine hesitancy is a growing and threatening trend, increasing the risk of disease outbreaks and potentially defeating health authorities' strategies. We aimed to describe the significant role of social networks and the Internet on vaccine hesitancy, and more generally on vaccine attitudes and behaviors. Presentation and discussion of lessons learnt from: (i) the monitoring and analysis of web and social network contents on vaccination; (ii) the tracking of Google search terms used by web users; (iii) the analysis of Google search suggestions related to vaccination; (iv) results from the Vaccinoscopie© study, online annual surveys of representative samples of 6500 to 10,000 French mothers, monitoring vaccine behaviors and attitude of French parents as well as vaccination coverage of their children, since 2008; and (v) various studies published in the scientific literature. Social networks and the web play a major role in disseminating information about vaccination. They have modified the vaccination decision-making process and, more generally, the doctor/patient relationship. The Internet may fuel controversial issues related to vaccination and durably impact public opinion, but it may also provide new tools to fight against vaccine hesitancy. Vaccine hesitancy should be fought on the Internet battlefield, and for this purpose, communication strategies should take into account new threats and opportunities offered by the web and social networks. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  18. An Evaluation of Web- and Print-Based Methods to Attract People to a Physical Activity Intervention

    PubMed Central

    Jennings, Cally; Plotnikoff, Ronald C; Vandelanotte, Corneel

    2016-01-01

    Background Cost-effective and efficient methods to attract people to Web-based health behavior interventions need to be identified. Traditional print methods including leaflets, posters, and newspaper advertisements remain popular despite the expanding range of Web-based advertising options that have the potential to reach larger numbers at lower cost. Objective This study evaluated the effectiveness of multiple Web-based and print-based methods to attract people to a Web-based physical activity intervention. Methods A range of print-based (newspaper advertisements, newspaper articles, letterboxing, leaflets, and posters) and Web-based (Facebook advertisements, Google AdWords, and community calendars) methods were applied to attract participants to a Web-based physical activity intervention in Australia. The time investment, cost, number of first time website visits, the number of completed sign-up questionnaires, and the demographics of participants were recorded for each advertising method. Results A total of 278 people signed up to participate in the physical activity program. Of the print-based methods, newspaper advertisements totaled AUD $145, letterboxing AUD $135, leaflets AUD $66, posters AUD $52, and newspaper article AUD $3 per sign-up. Of the Web-based methods, Google AdWords totaled AUD $495, non-targeted Facebook advertisements AUD $68, targeted Facebook advertisements AUD $42, and community calendars AUD $12 per sign-up. Although the newspaper article and community calendars cost the least per sign-up, they resulted in only 17 and 6 sign-ups respectively. The targeted Facebook advertisements were the next most cost-effective method and reached a large number of sign-ups (n=184). The newspaper article and the targeted Facebook advertisements required the lowest time investment per sign-up (5 and 7 minutes respectively). 
People reached through the targeted Facebook advertisements were on average older (60 years vs 50 years, P<.001) and had a higher body mass index (32 vs 30, P<.05) than people reached through the other methods. Conclusions Overall, our results demonstrate that targeted Facebook advertising is the most cost-effective and efficient method at attracting moderate numbers to physical activity interventions in comparison to the other methods tested. Newspaper advertisements, letterboxing, and Google AdWords were not effective. The community calendars and newspaper articles may be effective for small community interventions. ClinicalTrial Australian New Zealand Clinical Trials Registry: ACTRN12614000339651; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=363570&isReview=true (Archived by WebCite at http://www.webcitation.org/6hMnFTvBt) PMID:27235075

  19. An Evaluation of Web- and Print-Based Methods to Attract People to a Physical Activity Intervention.

    PubMed

    Alley, Stephanie; Jennings, Cally; Plotnikoff, Ronald C; Vandelanotte, Corneel

    2016-05-27

    Cost-effective and efficient methods to attract people to Web-based health behavior interventions need to be identified. Traditional print methods including leaflets, posters, and newspaper advertisements remain popular despite the expanding range of Web-based advertising options that have the potential to reach larger numbers at lower cost. This study evaluated the effectiveness of multiple Web-based and print-based methods to attract people to a Web-based physical activity intervention. A range of print-based (newspaper advertisements, newspaper articles, letterboxing, leaflets, and posters) and Web-based (Facebook advertisements, Google AdWords, and community calendars) methods were applied to attract participants to a Web-based physical activity intervention in Australia. The time investment, cost, number of first time website visits, the number of completed sign-up questionnaires, and the demographics of participants were recorded for each advertising method. A total of 278 people signed up to participate in the physical activity program. Of the print-based methods, newspaper advertisements totaled AUD $145, letterboxing AUD $135, leaflets AUD $66, posters AUD $52, and newspaper article AUD $3 per sign-up. Of the Web-based methods, Google AdWords totaled AUD $495, non-targeted Facebook advertisements AUD $68, targeted Facebook advertisements AUD $42, and community calendars AUD $12 per sign-up. Although the newspaper article and community calendars cost the least per sign-up, they resulted in only 17 and 6 sign-ups respectively. The targeted Facebook advertisements were the next most cost-effective method and reached a large number of sign-ups (n=184). The newspaper article and the targeted Facebook advertisements required the lowest time investment per sign-up (5 and 7 minutes respectively). 
People reached through the targeted Facebook advertisements were on average older (60 years vs 50 years, P<.001) and had a higher body mass index (32 vs 30, P<.05) than people reached through the other methods. Overall, our results demonstrate that targeted Facebook advertising is the most cost-effective and efficient method at attracting moderate numbers to physical activity interventions in comparison to the other methods tested. Newspaper advertisements, letterboxing, and Google AdWords were not effective. The community calendars and newspaper articles may be effective for small community interventions. Australian New Zealand Clinical Trials Registry: ACTRN12614000339651; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=363570&isReview=true (Archived by WebCite at http://www.webcitation.org/6hMnFTvBt).
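
    The cost-effectiveness comparison above is simple arithmetic: total advertising spend divided by completed sign-ups. A sketch with hypothetical spend totals back-computed for illustration (only the per-sign-up figures appear in the abstract):

```python
def cost_per_signup(total_cost, signups):
    """Cost-effectiveness metric used in the study: advertising spend
    in AUD divided by the number of completed sign-ups it produced."""
    return total_cost / signups

# (total AUD spend, sign-ups) per channel -- spend figures are hypothetical,
# chosen to reproduce the reported AUD $42 and $12 per sign-up
channels = {
    "facebook_targeted": (7728, 184),
    "community_calendar": (72, 6),
}
per_signup = {name: cost_per_signup(cost, n)
              for name, (cost, n) in channels.items()}
```

    Note that a low cost per sign-up is not sufficient on its own: the community calendars were cheapest per sign-up but yielded only 6 sign-ups in total, which is why the study favours targeted Facebook advertising.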

  20. Has the American Public's Interest in Information Related to Relationships Beyond "The Couple" Increased Over Time?

    PubMed

    Moors, Amy C

    2017-01-01

    Finding romance, love, and sexual intimacy is a central part of our life experience. Although people engage in romance in a variety of ways, alternatives to "the couple" are largely overlooked in relationship research. Scholars and the media have recently argued that the rules of romance are changing, suggesting that interest in consensual departures from monogamy may become popular as people navigate their long-term coupling. This study utilizes Google Trends to assess Americans' interest in seeking out information related to consensual nonmonogamous relationships across a 10-year period (2006-2015). Using anonymous Web queries from hundreds of thousands of Google search engine users, results show that searches for words related to polyamory and open relationships (but not swinging) have significantly increased over time. Moreover, the magnitude of the correlation between consensual nonmonogamy Web queries and time was significantly higher than popular Web queries over the same time period, indicating this pattern of increased interest in polyamory and open relationships is unique. Future research avenues for incorporating consensual nonmonogamous relationships into relationship science are discussed.

  1. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model.

    PubMed

    Liu, Tongzhu; Shen, Aizong; Hu, Xiaojian; Tong, Guixian; Gu, Wei

    2017-06-01

    We aimed to apply collaborative business intelligence (BI) system to hospital supply, processing and distribution (SPD) logistics management model. We searched Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (Published from 2011 to 2016), books, Web pages, etc., to understand SPD and BI related theories and recent research status. For the application of collaborative BI technology in the hospital SPD logistics management model, we realized this by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. For the application of BI system, we: (i) proposed a layered structure of collaborative BI system for intelligent management in hospital logistics; (ii) built data warehouse for the collaborative BI system; (iii) improved data mining techniques such as supporting vector machines (SVM) and swarm intelligence firefly algorithm to solve key problems in hospital logistics collaborative BI system; (iv) researched the collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Proper combination of SPD model and BI system will improve the management of logistics in the hospitals. The successful implementation of the study requires: (i) to innovate and improve the traditional SPD model and make appropriate implement plans and schedules for the application of BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal departments in hospital including the department of information, logistics, nursing, medical and financial; (iii) timely response of external suppliers.

  2. The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching

    PubMed Central

    Haddaway, Neal Robert; Collins, Alexandra Mary; Coughlin, Deborah; Kirk, Stuart

    2015-01-01

    Google Scholar (GS), a commonly used web-based academic search engine, catalogues between 2 and 100 million records of both academic and grey literature (articles not formally published by commercial academic publishers). Google Scholar collates results from across the internet and is free to use. As a result it has received considerable attention as a method for searching for literature, particularly in searches for grey literature, as required by systematic reviews. The reliance on GS as a standalone resource has been greatly debated, however, and its efficacy in grey literature searching has not yet been investigated. Using systematic review case studies from environmental science, we investigated the utility of GS in systematic reviews and in searches for grey literature. Our findings show that GS results contain moderate amounts of grey literature, with the majority found on average at page 80. We also found that, when searched for specifically, the majority of literature identified using Web of Science was also found using GS. However, our findings showed moderate/poor overlap in results when similar search strings were used in Web of Science and GS (10–67%), and that GS missed some important literature in five of six case studies. Furthermore, a general GS search failed to find any grey literature from a case study that involved manual searching of organisations’ websites. If used in systematic reviews for grey literature, we recommend that searches of article titles focus on the first 200 to 300 results. We conclude that whilst Google Scholar can find much grey literature and specific, known studies, it should not be used alone for systematic review searches. Rather, it forms a powerful addition to other traditional search methods. In addition, we advocate the use of tools to transparently document and catalogue GS search results to maintain high levels of transparency and the ability to be updated, critical to systematic reviews. PMID:26379270

  3. The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching.

    PubMed

    Haddaway, Neal Robert; Collins, Alexandra Mary; Coughlin, Deborah; Kirk, Stuart

    2015-01-01

    Google Scholar (GS), a commonly used web-based academic search engine, catalogues between 2 and 100 million records of both academic and grey literature (articles not formally published by commercial academic publishers). Google Scholar collates results from across the internet and is free to use. As a result it has received considerable attention as a method for searching for literature, particularly in searches for grey literature, as required by systematic reviews. The reliance on GS as a standalone resource has been greatly debated, however, and its efficacy in grey literature searching has not yet been investigated. Using systematic review case studies from environmental science, we investigated the utility of GS in systematic reviews and in searches for grey literature. Our findings show that GS results contain moderate amounts of grey literature, with the majority found on average at page 80. We also found that, when searched for specifically, the majority of literature identified using Web of Science was also found using GS. However, our findings showed moderate/poor overlap in results when similar search strings were used in Web of Science and GS (10-67%), and that GS missed some important literature in five of six case studies. Furthermore, a general GS search failed to find any grey literature from a case study that involved manual searching of organisations' websites. If used in systematic reviews for grey literature, we recommend that searches of article titles focus on the first 200 to 300 results. We conclude that whilst Google Scholar can find much grey literature and specific, known studies, it should not be used alone for systematic review searches. Rather, it forms a powerful addition to other traditional search methods. In addition, we advocate the use of tools to transparently document and catalogue GS search results to maintain high levels of transparency and the ability to be updated, critical to systematic reviews.

  4. Text and Structural Data Mining of Influenza Mentions in Web and Social Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corley, Courtney D.; Cook, Diane; Mikler, Armin R.

    Text and structural data mining of Web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5-October-2008 to 21-March-2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like-illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags.
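
    The reported link between trends in flu posts and influenza-like-illness (ILI) patient data is a standard Pearson correlation between two weekly time series. A self-contained sketch with invented weekly counts:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, used here to compare weekly
    counts of flu-mentioning posts against ILI patient reports."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented weekly series: blog/social-media flu mentions vs ILI reports
posts = [12, 18, 40, 95, 60, 22]
ili = [100, 150, 360, 800, 520, 200]
```

    A coefficient near 1 for series like these is what would justify treating post volume as a surveillance proxy for patient-report data.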

  5. Optimizing the Information Presentation on Mining Potential by using Web Services Technology with Restful Protocol

    NASA Astrophysics Data System (ADS)

    Abdillah, T.; Dai, R.; Setiawan, E.

    2018-02-01

    This study aims to develop an application of Web Services technology with the RESTful protocol to optimize the presentation of information on mining potential. The study used a User Interface Design approach for information accuracy and relevance, and Web Services for reliability in presenting the information. The results show that the accuracy and relevance of the mining-potential information can be seen in the User Interface implementation of the application, which follows these rules: appropriate colours and objects are chosen, the navigation is easy to use, and users interact with the application through symbols and language they understand. The accuracy and relevance of the information can also be observed in the charts and Tool Tip Text that help users understand the figures provided; the reliability of the information presentation is evident from the results of the Web Services testing in Figure 4.5.6. This study finds that the User Interface Design and Web Services approaches (for access by apps on different platforms) are able to optimize the presentation. The results of this study can be used as a reference for software developers and the Provincial Government of Gorontalo.
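
    As an illustration of the RESTful style the paper builds on, a resource such as a region's mining potential would typically be exposed as a JSON document at a URL like GET /api/regions/<region>/mining-potential, so that apps on any platform can consume it. The endpoint path and field names below are placeholders, not taken from the paper:

```python
import json

def mining_potential_resource(region, deposits):
    """Shape a mining-potential record as the JSON body a RESTful
    endpoint might return. `deposits` is a list of (mineral, tonnage)
    pairs; all names here are illustrative."""
    return json.dumps({
        "region": region,
        "deposits": [
            {"mineral": mineral, "estimated_tonnage": tonnage}
            for mineral, tonnage in deposits
        ],
    })

body = mining_potential_resource("Gorontalo", [("gold", 125000)])
```

    Returning plain JSON over HTTP like this is what lets the same service back both the chart-based web UI and apps on other platforms.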

  6. Using Google Blogs and Discussions to Recommend Biomedical Resources: A Case Study

    PubMed Central

    Reed, Robyn B.; Chattopadhyay, Ansuman; Iwema, Carrie L.

    2013-01-01

    This case study investigated whether data gathered from discussions within the social media provide a reliable basis for a biomedical resources recommendation system. Using a search query to mine text from Google Blogs and Discussions, a ranking of biomedical resources was determined based on those most frequently mentioned. To establish quality, these results were compared to rankings by subject experts. An overall agreement between the frequency of social media discussions and subject expert recommendations was observed when identifying key bioinformatics and consumer health resources. Testing the method in more than one biomedical area implies this procedure could be employed across different subjects. PMID:24180648

  7. Evaluation of the content and accessibility of web sites for accredited orthopaedic sports medicine fellowships.

    PubMed

    Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D

    2013-06-19

    The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. 
Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.

  8. Engaging the YouTube Google-Eyed Generation: Strategies for Using Web 2.0 in Teaching and Learning

    ERIC Educational Resources Information Center

    Duffy, Peter

    2008-01-01

    YouTube, Podcasting, Blogs, Wikis and RSS are buzz words currently associated with the term Web 2.0 and represent a shifting pedagogical paradigm for the use of a new set of tools within education. The implication here is a possible shift from the basic archetypical vehicles used for (e)learning today (lecture notes, printed material, PowerPoint,…

  9. Extensible Probabilistic Repository Technology (XPRT)

    DTIC Science & Technology

    2004-10-01

    projects, such as Centaurus, Evidence Data Base (EDB), etc.; others were fabricated, such as INS and FED, while others contain data from the open...Google Web Report Unlimited SOAP API News BBC News Unlimited WEB RSS 1.0 Centaurus Person Demographics 204,402 people from 240 countries...objects of the domain ontology map to the various simulated data-sources. For example, the PersonDemographics are stored in the Centaurus database, while

  10. Being There is Only the Beginning: Toward More Effective Web 2.0 Use in Academic Libraries

    DTIC Science & Technology

    2010-01-02

    Google is Our Friend,” and “Plagiarism 101.” Also unlike the hard-to-find blogs, many academic libraries, including both Hollins University and Urbana...Effective Web 2.0 Use in Academic Libraries by Hanna C. Bachrach, Pratt Institute...

  11. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    NASA Technical Reports Server (NTRS)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

    Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various data-sharing open sources, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.

  12. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically, three Perl scripts that query the database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.

  13. Directed network modules

    NASA Astrophysics Data System (ADS)

    Palla, Gergely; Farkas, Illés J.; Pollner, Péter; Derényi, Imre; Vicsek, Tamás

    2007-06-01

    A search technique locating network modules, i.e. internally densely connected groups of nodes in directed networks, is introduced by extending the clique percolation method originally proposed for undirected networks. After giving a suitable definition for directed modules we investigate their percolation transition in the Erdős-Rényi graph both analytically and numerically. We also analyse four real-world directed networks, including Google's own web-pages, an email network, a word association graph and the transcriptional regulatory network of the yeast Saccharomyces cerevisiae. The obtained directed modules are validated by additional information available for the nodes. We find that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules. Accordingly, in the word-association network and Google's web-pages, overlaps are likely to contain in-hubs, whereas the modules in the email and transcriptional regulatory network tend to overlap via out-hubs.

  14. Google matrix analysis of directed networks

    NASA Astrophysics Data System (ADS)

    Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.

    2015-10-01

    In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing have become a formidable task for society. Because of the rapid growth of the World Wide Web, and of social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks, demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
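    The Google matrix at the heart of this review can be illustrated with a short sketch: build G = αS + (1−α)/N for a toy directed graph and extract the PageRank vector by power iteration. The four-node graph and damping factor α = 0.85 below are illustrative choices, not taken from the review.

```python
# A minimal sketch of the Google matrix G = alpha*S + (1-alpha)/N and
# PageRank by power iteration, for a toy 4-node directed graph.
# The graph and alpha = 0.85 are illustrative choices.

def google_matrix(links, nodes, alpha=0.85):
    """Build the column-stochastic Google matrix; dangling nodes
    (no out-links) are treated as linking uniformly to all nodes."""
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    S = [[0.0] * n for _ in range(n)]
    for src in nodes:
        dests = links.get(src, [])
        if dests:
            for d in dests:
                S[idx[d]][idx[src]] = 1.0 / len(dests)
        else:
            for i in range(n):
                S[i][idx[src]] = 1.0 / n
    return [[alpha * S[i][j] + (1.0 - alpha) / n for j in range(n)]
            for i in range(n)]

def pagerank(G, iters=100):
    """Power iteration: converges to the eigenvector of G with eigenvalue 1."""
    n = len(G)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(G[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
rank = pagerank(google_matrix(links, ["a", "b", "c", "d"]))
# "c", which collects the most in-links, ends up ranked highest.
```

    Real analyses of web-scale networks replace the dense matrix with sparse operations, since G for billions of pages never fits in memory.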

  15. Participating in the Geospatial Web: Collaborative Mapping, Social Networks and Participatory GIS

    NASA Astrophysics Data System (ADS)

    Rouse, L. Jesse; Bergeron, Susan J.; Harris, Trevor M.

    In 2005, Google, Microsoft and Yahoo! released free Web mapping applications that opened up digital mapping to mainstream Internet users. Importantly, these companies also released free APIs for their platforms, allowing users to geo-locate and map their own data. These initiatives have spurred the growth of the Geospatial Web and represent spatially aware online communities and new ways of enabling communities to share information from the bottom up. This chapter explores how the emerging Geospatial Web can meet some of the fundamental needs of Participatory GIS projects to incorporate local knowledge into GIS, as well as promote public access and collaborative mapping.

  16. Motivation Mining: Prospecting the Web.

    ERIC Educational Resources Information Center

    Small, Ruth V.; Arnone, Marilyn P.

    1999-01-01

    Describes WebMAC instruments, which differ from other Web-evaluation instruments because they have a theoretical base, are user-centered, are designed for students in grades 7 through 12, and assess the motivational quality of Web sites. Examples are given of uses of WebMAC Middle and WebMAC Senior in activities to promote evaluation and…

  17. Analysis of mesenchymal stem cell differentiation in vitro using classification association rule mining.

    PubMed

    Wang, Weiqi; Wang, Yanbo Justin; Bañares-Alcántara, René; Coenen, Frans; Cui, Zhanfeng

    2009-12-01

    In this paper, data mining is used to analyze data on the differentiation of mammalian Mesenchymal Stem Cells (MSCs), aiming at discovering known and hidden rules governing MSC differentiation, following the establishment of a web-based public database containing experimental data on MSC proliferation and differentiation. To this effect, a web-based public interactive database comprising the key parameters which influence the fate and destiny of mammalian MSCs has been constructed and analyzed using Classification Association Rule Mining (CARM) as a data-mining technique. The results show that the proposed approach is technically feasible and performs well with respect to the accuracy of (classification) prediction. Key rules mined from the constructed MSC database are consistent with experimental observations, indicating the validity of the method developed and representing a first step in the application of data mining to the study of MSCs.
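    The support/confidence machinery underlying classification association rule mining can be sketched in a few lines. The "transactions" below are invented stand-ins for (culture condition → differentiation outcome) records, not entries from the MSC database described above.

```python
# Sketch of the support/confidence computation behind classification
# association rule mining (CARM). The transactions are invented examples.

def support(transactions, itemset):
    """Fraction of transactions that contain every item in itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= set(t)) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimated P(consequent | antecedent)."""
    joint = set(antecedent) | set(consequent)
    return support(transactions, joint) / support(transactions, antecedent)

transactions = [
    {"TGF-beta", "pellet-culture", "chondrogenic"},
    {"TGF-beta", "pellet-culture", "chondrogenic"},
    {"TGF-beta", "monolayer", "osteogenic"},
    {"BMP-2", "monolayer", "osteogenic"},
]
# Rule {TGF-beta, pellet-culture} -> {chondrogenic}
conf = confidence(transactions, {"TGF-beta", "pellet-culture"}, {"chondrogenic"})
```

    A CARM system keeps only rules whose support and confidence clear chosen thresholds, then uses the surviving rules as a classifier.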

  18. Mining Student Data Captured from a Web-Based Tutoring Tool: Initial Exploration and Results

    ERIC Educational Resources Information Center

    Merceron, Agathe; Yacef, Kalina

    2004-01-01

    In this article we describe the initial investigations that we have conducted on student data collected from a web-based tutoring tool. We have used some data mining techniques such as association rule and symbolic data analysis, as well as traditional SQL queries to gain further insight on the students' learning and deduce information to improve…

  19. Beyond accuracy: creating interoperable and scalable text-mining web services.

    PubMed

    Wei, Chih-Hsuan; Leaman, Robert; Lu, Zhiyong

    2016-06-15

    The biomedical literature is a knowledge-rich resource and an important foundation for future research. With over 24 million articles in PubMed and an increasing growth rate, research in automated text processing is becoming increasingly important. We report here our recently developed web-based text mining services for biomedical concept recognition and normalization. Unlike most text-mining software tools, our web services integrate several state-of-the-art entity tagging systems (DNorm, GNormPlus, SR4GN, tmChem and tmVar) and offer a batch-processing mode able to process arbitrary text input (e.g. scholarly publications, patents and medical records) in multiple formats (e.g. BioC). We support multiple standards to make our service interoperable and allow simpler integration with other text-processing pipelines. To maximize scalability, we have preprocessed all PubMed articles, and use a computer cluster for processing large requests of arbitrary text. Our text-mining web service is freely available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#curl. Contact: Zhiyong.Lu@nih.gov. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.

  20. Stratification-Based Outlier Detection over the Deep Web.

    PubMed

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in the deep web.

  1. Stratification-Based Outlier Detection over the Deep Web

    PubMed Central

    Xian, Xuefeng; Zhao, Pengpeng; Sheng, Victor S.; Fang, Ligang; Gu, Caidong; Yang, Yuanfeng; Cui, Zhiming

    2016-01-01

    For many applications, finding rare instances or outliers can be more interesting than finding common patterns. Existing work in outlier detection never considers the context of the deep web. In this paper, we argue that, for many scenarios, it is more meaningful to detect outliers over the deep web. In the context of the deep web, users must submit queries through a query interface to retrieve corresponding data. Therefore, traditional data mining methods cannot be directly applied. The primary contribution of this paper is to develop a new data mining method for outlier detection over the deep web. In our approach, the query space of a deep web data source is stratified based on a pilot sample. Neighborhood sampling and uncertainty sampling are developed in this paper with the goal of improving recall and precision based on stratification. Finally, a careful performance evaluation of our algorithm confirms that our approach can effectively detect outliers in the deep web. PMID:27313603
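    The stratification idea can be sketched simply: partition a pilot sample into strata by an attribute's value range, then flag records that deviate strongly from their stratum's mean. The price data, interval edges, and 2-sigma threshold below are invented for illustration; the paper's neighborhood and uncertainty sampling are more sophisticated.

```python
# Hedged sketch of stratification-based outlier flagging; data invented.
import statistics

def stratify(records, key, edges):
    """Assign each record to the stratum whose [lo, hi) interval contains key(record)."""
    strata = {i: [] for i in range(len(edges) - 1)}
    for r in records:
        v = key(r)
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                strata[i].append(r)
                break
    return strata

def outliers(strata, key, k=2.0):
    """Within each stratum, flag values more than k std deviations from the mean."""
    flagged = []
    for members in strata.values():
        if len(members) < 2:
            continue
        vals = [key(r) for r in members]
        mu, sd = statistics.mean(vals), statistics.stdev(vals)
        flagged.extend(r for r in members if sd and abs(key(r) - mu) > k * sd)
    return flagged

records = [{"price": p} for p in [10, 11, 12, 10, 11, 95, 150, 160, 155]]
strata = stratify(records, lambda r: r["price"], [0, 100, 200])
flagged = outliers(strata, lambda r: r["price"])
```

    Stratifying first keeps a legitimately expensive record in the high-price stratum from being mistaken for an outlier among cheap ones.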

  2. Use of Web 2.0 tools by hospital pharmacists.

    PubMed

    Bonaga Serrano, B; Aldaz Francés, R; Garrigues Sebastiá, M R; Hernández San Salvador, M

    2014-04-01

    Web 2.0 tools are transforming the ways health professionals communicate among themselves and with their patients, a shift that requires a corresponding change of mindset to implement them. The aim of our study was to assess the state of knowledge of the main Web 2.0 applications, and how they are used, in a sample of hospital pharmacists. The study was carried out through an anonymous survey of all members of the Spanish Society of Hospital Pharmacy (SEFH) by means of a questionnaire sent via the Google Drive® application. After the 3-month study period was completed, the collected data were compiled and analyzed using SPSS v15.0. The response rate was 7.3%; 70.5% of respondents were female and 76.3% were specialists. The majority of respondents (54.2%) were aged 20 to 35. PubMed was the main way of accessing published articles. 65.2% of pharmacists knew the term "Web 2.0". 45.3% were Twitter users, and over 58.9% used it mainly for professional purposes. Most pharmacists believed that Twitter was a good tool for interacting with professionals and patients. 78.7% did not use an aggregator, but when one was used, Google Reader was the most common. Although Web 2.0 applications are gaining mainstream popularity, some health professionals may resist using them. In fact, more than half of the surveyed pharmacists reported a lack of knowledge about Web 2.0 tools. It would be positive for pharmacists to use these tools properly during their professional practice to get the best out of them. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  3. Basic GA Tools to Evaluate Your Web Area

    EPA Pesticide Factsheets

    Learn steps and tips for creating these Google Analytics (GA) reports, so you can learn which pages are popular or unpopular, which PDFs are getting looked at, who is using your pages, what search terms they used, and more.

  4. From the Director: Surfing the Web for Health Information

    MedlinePlus

    ... Reliable Results Most Internet users first visit a search engine — like Google or Yahoo! — when seeking health information. ... medical terms like "cancer" or "diabetes" into a search engine, the top-ten results will likely include authoritative ...

  5. Visualization of seismic tomography on Google Earth: Improvement of KML generator and its web application to accept the data file in European standard format

    NASA Astrophysics Data System (ADS)

    Yamagishi, Y.; Yanaka, H.; Tsuboi, S.

    2009-12-01

    We have developed a conversion tool for seismic tomography data into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data having longitude, latitude, and seismic velocity anomaly. Each data file contains the data for one depth. Metadata, such as the bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format for representing a tomographic model. Recently the European seismology research project NERIES (Network of Research Infrastructures for European Seismology) has advocated that seismic tomography data should be standardized. They propose a new format based on JSON (JavaScript Object Notation), one of the common data-interchange formats, as a standard for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, so the elements are directly accessible by a script. In addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as the FDSN standard format for seismic tomographic models. This format might be accepted not only by European seismologists but also as a world standard. We have therefore improved our KML generator for seismic tomography to also accept data files in JSON format.
We have also improved the web application of the generator so that JSON-formatted data files can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena for comparing various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for a geoscience browser.
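    The conversion step the abstract describes can be sketched as follows: parse a JSON file holding metadata plus grid-point values, and emit a KML placemark per grid point for one depth slice. The JSON field names (`metadata`, `grid`, `dvs`, `depth_km`) are illustrative guesses, not the actual FDSN/NERIES schema.

```python
# Hedged sketch of a JSON-tomography-to-KML converter; field names are
# illustrative assumptions, not the real standard's schema.
import json

def tomo_json_to_kml(text):
    doc = json.loads(text)
    placemarks = []
    for p in doc["grid"]:
        placemarks.append(
            "<Placemark><description>dVs={dv}% at {depth} km</description>"
            "<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            "</Placemark>".format(dv=p["dvs"],
                                  depth=doc["metadata"]["depth_km"],
                                  lon=p["lon"], lat=p["lat"]))
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

sample = json.dumps({
    "metadata": {"model": "demo-model", "depth_km": 100},
    "grid": [{"lon": 140.0, "lat": 35.0, "dvs": -1.2},
             {"lon": 141.0, "lat": 35.0, "dvs": 0.8}],
})
kml = tomo_json_to_kml(sample)
```

    Because both ends are plain structured text, the whole converter reduces to a mapping from one schema to another, which is exactly what makes a standardized input format valuable.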

  6. Assessing Ebola-related web search behaviour: insights and implications from an analytical study of Google Trends-based query volumes.

    PubMed

    Alicino, Cristiano; Bragazzi, Nicola Luigi; Faccio, Valeria; Amicizia, Daniela; Panatto, Donatella; Gasparini, Roberto; Icardi, Giancarlo; Orsi, Andrea

    2015-12-10

    The 2014 Ebola epidemic in West Africa has attracted public interest worldwide, leading to millions of Ebola-related Internet searches being performed during the period of the epidemic. This study aimed to evaluate and interpret Google search queries for terms related to the Ebola outbreak both at the global level and in all countries where primary cases of Ebola occurred. The study also endeavoured to look at the correlation between the number of overall and weekly web searches and the number of overall and weekly new cases of Ebola. Google Trends (GT) was used to explore Internet activity related to Ebola. The study period was from 29 December 2013 to 14 June 2015. Pearson's correlation was performed to correlate Ebola-related relative search volumes (RSVs) with the number of weekly and overall Ebola cases. Multivariate regression was performed using Ebola-related RSV as a dependent variable, and the overall number of Ebola cases and the Human Development Index were used as predictor variables. The greatest RSV was registered in the three West African countries mainly affected by the Ebola epidemic. The queries varied in the different countries. Both quantitative and qualitative differences between the affected African countries and other Western countries with primary cases were noted, in relation to the different flux volumes and different time courses. In the affected African countries, web query search volumes were mostly concentrated in the capital areas. However, in Western countries, web queries were uniformly distributed over the national territory. In terms of the three countries mainly affected by the Ebola epidemic, the correlation between the number of new weekly cases of Ebola and the weekly GT index varied from weak to moderate. The correlation between the number of Ebola cases registered in all countries during the study period and the GT index was very high. 
Google Trends showed a coarse-grained nature, strongly correlating with global epidemiological data, but was weaker at country level, as it was prone to distortions induced by unbalanced media coverage and the digital divide. Global and local health agencies could usefully exploit GT data to identify disease-related information needs and plan proper communication strategies, particularly in the case of health-threatening events.
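    The study's core statistic, Pearson correlation between weekly relative search volumes and weekly case counts, is simple to sketch. The two weekly series below are invented for illustration, not the study's data.

```python
# Sketch of the Pearson correlation used to compare weekly Google Trends
# relative search volumes (RSV) with weekly new Ebola cases.
# Both series are invented illustrative numbers.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

weekly_rsv   = [5, 20, 55, 90, 100, 70, 40, 15]   # hypothetical RSV
weekly_cases = [2, 10, 60, 80,  95, 65, 30, 10]   # hypothetical new cases
r = pearson(weekly_rsv, weekly_cases)
```

    A strongly co-moving pair like this yields r close to 1; the study found such strong correlation globally but weaker correlation at country level.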

  7. Exploratory Visual Analytics of a Dynamically Built Network of Nodes in a WebGL-Enabled Browser

    DTIC Science & Technology

    2014-01-01

    dimensionality reduction, feature extraction, high-dimensional data, t-distributed stochastic neighbor embedding, neighbor retrieval visualizer, visual...WebGL-enabled rendering is supported natively by browsers such as the latest Mozilla Firefox, Google Chrome, and Microsoft Internet Explorer 11. At the...appropriate names. The resultant 26-node network is displayed in a Mozilla Firefox browser in figure 2 (also see appendix B).

  8. LGscore: A method to identify disease-related genes using biological literature and Google data.

    PubMed

    Kim, Jeongwoo; Kim, Hyunjin; Yoon, Youngmi; Park, Sanghyun

    2015-04-01

    Since the genome project in the 1990s, a number of studies associated with genes have been conducted, and researchers have confirmed that genes are involved in disease. For this reason, identifying the relationships between diseases and genes is important in biology. We propose a method called LGscore, which identifies disease-related genes using Google data and literature data. To implement this method, we first construct a disease-related gene network using text-mining results. We then extract gene-gene interactions based on co-occurrences in abstract data obtained from PubMed, and calculate the weights of edges in the gene network by means of Z-scoring. The weights combine two values: the frequency and the Google search results. The frequency value is extracted from literature data, and the Google search result is obtained using Google. We assign a score to each gene through a network analysis. We assume that genes with a large number of links, numerous Google search results, and high frequency values are more likely to be involved in disease. For validation, we investigated the top 20 inferred genes for five different diseases using answer sets. The answer sets comprised six databases that contain information on disease-gene relationships. We identified a significant number of disease-related genes as well as candidate genes for Alzheimer's disease, diabetes, colon cancer, lung cancer, and prostate cancer. Our method was up to 40% more accurate than existing methods. Copyright © 2015 Elsevier Inc. All rights reserved.
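    The edge-weighting step can be loosely sketched: z-score the per-edge evidence counts, then score each gene by the sum of its incident edge weights. The gene pairs and counts below are invented, and this is a simplification of the authors' actual LGscore formula, not the method itself.

```python
# Loose sketch of z-scored edge weights and weighted-degree gene scoring.
# Gene pairs and evidence counts are invented placeholders.
import statistics

def zscores(values):
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def score_genes(edges):
    """edges: {(gene_a, gene_b): evidence_count}; returns {gene: score}."""
    weights = dict(zip(edges, zscores(list(edges.values()))))
    scores = {}
    for (a, b), w in weights.items():
        scores[a] = scores.get(a, 0.0) + w
        scores[b] = scores.get(b, 0.0) + w
    return scores

edges = {("APP", "PSEN1"): 120, ("APP", "APOE"): 95,
         ("APOE", "TNF"): 12, ("PSEN1", "TNF"): 8}
scores = score_genes(edges)
top = max(scores, key=scores.get)   # gene with the strongest total evidence
```

    Genes touching many heavily evidenced edges accumulate the largest scores, which is the intuition behind ranking hub genes as disease candidates.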

  9. TouchTerrain: A simple web-tool for creating 3D-printable topographic models

    NASA Astrophysics Data System (ADS)

    Hasiuk, Franciszek J.; Harding, Chris; Renner, Alex Raymond; Winer, Eliot

    2017-12-01

    An open-source web-application, TouchTerrain, was developed to simplify the production of 3D-printable terrain models. Direct Digital Manufacturing (DDM) using 3D Printers can change how geoscientists, students, and stakeholders interact with 3D data, with the potential to improve geoscience communication and environmental literacy. No other manufacturing technology can convert digital data into tangible objects quickly at relatively low cost; however, the expertise necessary to produce a 3D-printed terrain model can be a substantial burden: knowledge of geographical information systems, computer aided design (CAD) software, and 3D printers may all be required. Furthermore, printing models larger than the build volume of a 3D printer can pose further technical hurdles. The TouchTerrain web-application simplifies DDM for elevation data by generating digital 3D models customized for a specific 3D printer's capabilities. The only required user input is the selection of a region-of-interest using the provided web-application with a Google Maps-style interface. Publicly available digital elevation data is processed via the Google Earth Engine API. To allow the manufacture of 3D terrain models larger than a 3D printer's build volume, the selected area can be split into multiple tiles without third-party software. This application significantly reduces the time and effort required for a non-expert like an educator to obtain 3D terrain models for use in class. The web application is deployed at http://touchterrain.geol.iastate.edu/.
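    The tiling step the abstract mentions (splitting a model larger than the build volume) reduces to partitioning the elevation grid, sketched here on a toy 4x4 grid; the real application works on geo-referenced DEMs and converts each tile to a printable mesh.

```python
# Sketch of splitting an elevation grid into tiles so each piece fits a
# 3D printer's build volume. The 4x4 demo grid is an invented example.

def split_into_tiles(grid, tile_rows, tile_cols):
    """Split a 2D elevation grid (list of rows) into sub-grids, row-major."""
    tiles = []
    for r0 in range(0, len(grid), tile_rows):
        for c0 in range(0, len(grid[0]), tile_cols):
            tiles.append([row[c0:c0 + tile_cols]
                          for row in grid[r0:r0 + tile_rows]])
    return tiles

elevation = [[r * 10 + c for c in range(4)] for r in range(4)]  # toy DEM
tiles = split_into_tiles(elevation, 2, 2)  # four 2x2 tiles
```

    Each tile then becomes an independent print job, and the physical tiles reassemble into the full terrain model.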

  10. Design and Implementation of Surrounding Transaction Plotting and Management System Based on Google Map API

    NASA Astrophysics Data System (ADS)

    Cao, Y. B.; Hua, Y. X.; Zhao, J. X.; Guo, S. M.

    2013-11-01

    With China's rapid economic development and growing comprehensive national strength, border work has become a long-term and important task in China's diplomatic work. Implementing rapid plotting, real-time sharing, and mapping of surrounding affairs has taken on great significance for government policy makers and diplomatic staff. At present, however, existing boundary information systems suffer from several problems: updating geospatial data is labor-intensive, plotting tools are seriously lacking, and geographic events are difficult to share. These shortcomings have seriously hampered the smooth execution of border tasks. The development of geographic information system technology, especially Web GIS, offers the possibility of solving these problems. This paper adopts a four-layer B/S architecture and, with the support of the Google Maps service, uses the free API offered by Google Maps, with its openness, ease of use, sharing characteristics, and high-resolution imagery, to design and implement a surrounding-transaction plotting and management system based on the web development technologies ASP.NET, C#, and Ajax. The system can provide decision support for government policy makers, as well as real-time plotting and sharing of surrounding information for diplomatic staff. Practice has proved that the system has good usability and strong real-time performance.

  11. News trends and web search query of HIV/AIDS in Hong Kong.

    PubMed

    Chiu, Alice P Y; Lin, Qianying; He, Daihai

    2017-01-01

    The HIV epidemic in Hong Kong has worsened in recent years, with major contributions from the high-risk subgroup of men who have sex with men (MSM). Internet use is prevalent among the majority of the local population, who seek health information online. This study examines the impact of HIV/AIDS and MSM news coverage on web search queries in Hong Kong. Relevant news coverage about HIV/AIDS and MSM from January 1st, 2004 to December 31st, 2014 was obtained from the WiseNews database. News trends were created by computing the number of relevant articles by type, topic, place of origin and sub-population. We then obtained relevant search volumes from Google and analysed causality between news trends and Google Trends using the Granger causality test and orthogonal impulse response functions. We found that editorial news has an impact on Google searches for "HIV", with the search term's popularity peaking an average of two weeks after the news is published. Similarly, editorial news has an impact on the frequency of "AIDS" searches two weeks later. MSM-related news trends have a more fluctuating impact on "MSM" Google searches, with the time lag varying anywhere from one to ten weeks. This infodemiological study shows that news trends have a positive impact on online search behaviour for HIV/AIDS and MSM-related issues for up to ten weeks. Health promotion professionals could make use of this brief time window to tailor the timing of HIV awareness campaigns and public health interventions to maximise their reach and effectiveness.
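    A simplified version of the lag analysis can be sketched by scanning candidate lags between a news-coverage series and a search-volume series and keeping the one with the strongest Pearson correlation. The study itself used Granger causality tests; this lag scan is a stand-in, and the weekly series below are invented.

```python
# Hedged sketch of lag scanning between weekly news counts and weekly
# search volumes. The series are invented; the study used Granger tests.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_lag(news, searches, max_lag=4):
    """Correlate news[t] with searches[t + lag]; return the strongest lag."""
    scored = {}
    for lag in range(max_lag + 1):
        n = news[:len(news) - lag] if lag else news
        s = searches[lag:]
        scored[lag] = pearson(n, s)
    return max(scored, key=scored.get)

news     = [1, 5, 2, 8, 3, 1, 6, 2, 7, 1, 4, 2]
searches = [0, 0, 2, 9, 4, 13, 5, 2, 10, 3, 12, 2]  # echoes news ~2 weeks later
lag = best_lag(news, searches)
```

    Here the search series is built to echo the news series two weeks later, mirroring the two-week peak the study reports for "HIV" searches.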

  12. Deploying and sharing U-Compare workflows as web services.

    PubMed

    Kontonatsios, Georgios; Korkontzelos, Ioannis; Kolluru, Balakrishna; Thompson, Paul; Ananiadou, Sophia

    2013-02-18

    U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare's components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform.

  13. Deploying and sharing U-Compare workflows as web services

    PubMed Central

    2013-01-01

    Background U-Compare is a text mining platform that allows the construction, evaluation and comparison of text mining workflows. U-Compare contains a large library of components that are tuned to the biomedical domain. Users can rapidly develop biomedical text mining workflows by mixing and matching U-Compare’s components. Workflows developed using U-Compare can be exported and sent to other users who, in turn, can import and re-use them. However, the resulting workflows are standalone applications, i.e., software tools that run and are accessible only via a local machine, and that can only be run with the U-Compare platform. Results We address the above issues by extending U-Compare to convert standalone workflows into web services automatically, via a two-click process. The resulting web services can be registered on a central server and made publicly available. Alternatively, users can make web services available on their own servers, after installing the web application framework, which is part of the extension to U-Compare. We have performed a user-oriented evaluation of the proposed extension, by asking users who have tested the enhanced functionality of U-Compare to complete questionnaires that assess its functionality, reliability, usability, efficiency and maintainability. The results obtained reveal that the new functionality is well received by users. Conclusions The web services produced by U-Compare are built on top of open standards, i.e., REST and SOAP protocols, and therefore, they are decoupled from the underlying platform. Exported workflows can be integrated with any application that supports these open standards. We demonstrate how the newly extended U-Compare enhances the cross-platform interoperability of workflows, by seamlessly importing a number of text mining workflow web services exported from U-Compare into Taverna, i.e., a generic scientific workflow construction platform. PMID:23419017

  14. WebViz: A web browser based application for collaborative analysis of 3D data

    NASA Astrophysics Data System (ADS)

    Ruegg, C. S.

    2011-12-01

    In the age of high-speed Internet, where people can interact instantly, scientific tools have lacked technology that incorporates this mode of communication on the web. To address this, a web application for geological studies has been created, tentatively titled WebViz. This web application uses the Google Web Toolkit to build an AJAX application offering features normally found only in non-web-based software, so that it can act as a piece of software usable from anywhere in the globe over a reasonably fast Internet connection. One application of this technology involves data on the tsunami generated by the recent major Japan earthquakes. After the data are prepared for a volume-rendering package called HVR, WebViz can request images of the tsunami data and display them to anyone who has access to the application. This convenience alone makes WebViz a viable solution, but the option to interact with the data together with others around the world establishes WebViz as a serious computational tool. WebViz also runs on any JavaScript-enabled browser, including those on modern tablets and smartphones over a fast wireless connection. Because WebViz is currently built with the Google Web Toolkit, the application is highly portable. Many developers have been involved with the project, and each has contributed to the usability and speed of the application. The most recent version includes a dramatic speed increase as well as a more efficient user interface. The speed increase has been informally observed in recent uses of the application from China and Australia, with the hosting server located at the University of Minnesota. The user interface has been improved both visually and functionally.
The main controls of the application are buttons for rotating the 3D object. These buttons have been replaced with a new layout whose function is easier to understand and which is also easier to use on mobile devices. With these changes, WebViz is easier to control and use.

  15. Quality of Web-based Information for the 10 Most Common Fractures.

    PubMed

    Memon, Muzammil; Ginsberg, Lydia; Simunovic, Nicole; Ristevski, Bill; Bhandari, Mohit; Kleinlugtenbelt, Ydo Vincent

    2016-06-17

    In today's technologically advanced world, 75% of patients have used Google to search for health information. As a result, health care professionals fear that patients may be misinformed. Currently, there is a paucity of data on the quality and readability of Web-based health information on fractures. In this study, we assessed the quality and readability of Web-based health information related to the 10 most common fractures. Using the Google search engine, we assessed websites from the first results page for the 10 most common fractures using lay search terms. Website quality was measured using the DISCERN instrument, which scores websites as very poor (15-22.5), poor (22.5-37.5), fair (37.5-52.5), good (52.5-67.5), or excellent (67.5-75). The presence of Health on the Net code (HONcode) certification was assessed for all websites. Website readability was measured using the Flesch Reading Ease Score (0-100), where 60-69 is ideal for the general public, and the Flesch-Kincaid Grade Level (FKGL; -3.4 to ∞), where the mean FKGL of the US adult population is 8. Overall, website quality was "fair" for all fractures, with a mean (standard deviation) DISCERN score of 50.3 (5.8). The DISCERN score correlated positively with a higher website position on the search results page (r(2)=0.1, P=.002) and with HONcode certification (P=.007). The mean (standard deviation) Flesch Reading Ease Score and FKGL for all fractures were 62.2 (9.1) and 6.7 (1.6), respectively. The quality of Web-based health information on fracture care is fair, and its readability is appropriate for the general public. To obtain higher quality information, patients should select HONcode-certified websites. Furthermore, patients should select websites that are positioned higher on the results page because the Google ranking algorithms appear to rank the websites by quality.
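    The scoring bands and readability measures above are straightforward to operationalize. A minimal sketch follows; the half-open band boundaries are an assumption (the published cut-points overlap at 22.5, 37.5, 52.5, and 67.5), while the two readability functions use the standard published Flesch formulas.

```python
def discern_band(score):
    """Map a total DISCERN score (15-75) to the quality bands used above.
    Boundary values are assigned to the higher band (an assumption)."""
    if score < 22.5:
        return "very poor"
    elif score < 37.5:
        return "poor"
    elif score < 52.5:
        return "fair"
    elif score < 67.5:
        return "good"
    return "excellent"

def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch Reading Ease: higher is easier; 60-69 suits the general public."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Standard Flesch-Kincaid Grade Level, expressed as a US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

    The study's mean DISCERN score of 50.3 falls in the "fair" band under this mapping, consistent with the authors' conclusion.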

  16. Quality of Web-based Information for the 10 Most Common Fractures

    PubMed Central

    Ginsberg, Lydia; Simunovic, Nicole; Ristevski, Bill; Bhandari, Mohit; Kleinlugtenbelt, Ydo Vincent

    2016-01-01

    Background In today's technologically advanced world, 75% of patients have used Google to search for health information. As a result, health care professionals fear that patients may be misinformed. Currently, there is a paucity of data on the quality and readability of Web-based health information on fractures. Objectives In this study, we assessed the quality and readability of Web-based health information related to the 10 most common fractures. Methods Using the Google search engine, we assessed websites from the first results page for the 10 most common fractures using lay search terms. Website quality was measured using the DISCERN instrument, which scores websites as very poor (15-22.5), poor (22.5-37.5), fair (37.5-52.5), good (52.5-67.5), or excellent (67.5-75). The presence of Health on the Net code (HONcode) certification was assessed for all websites. Website readability was measured using the Flesch Reading Ease Score (0-100), where 60-69 is ideal for the general public, and the Flesch-Kincaid Grade Level (FKGL; −3.4 to ∞), where the mean FKGL of the US adult population is 8. Results Overall, website quality was “fair” for all fractures, with a mean (standard deviation) DISCERN score of 50.3 (5.8). The DISCERN score correlated positively with a higher website position on the search results page (r2=0.1, P=.002) and with HONcode certification (P=.007). The mean (standard deviation) Flesch Reading Ease Score and FKGL for all fractures were 62.2 (9.1) and 6.7 (1.6), respectively. Conclusion The quality of Web-based health information on fracture care is fair, and its readability is appropriate for the general public. To obtain higher quality information, patients should select HONcode-certified websites. Furthermore, patients should select websites that are positioned higher on the results page because the Google ranking algorithms appear to rank the websites by quality. PMID:27317159

  17. MyWEST: my Web Extraction Software Tool for effective mining of annotations from web-based databanks.

    PubMed

    Masseroli, Marco; Stella, Andrea; Meani, Natalia; Alcalay, Myriam; Pinciroli, Francesco

    2004-12-12

    High-throughput technologies create the necessity to mine large amounts of gene annotations from diverse databanks, and to integrate the resulting data. Most databanks can be interrogated only via Web, for a single gene at a time, and query results are generally available only in the HTML format. Although some databanks provide batch retrieval of data via FTP, this requires expertise and resources for locally reimplementing the databank. We developed MyWEST, a tool aimed at researchers without extensive informatics skills or resources, which exploits user-defined templates to easily mine selected annotations from different Web-interfaced databanks, and aggregates and structures results in an automatically updated database. Using microarray results from a model system of retinoic acid-induced differentiation, MyWEST effectively gathered relevant annotations from various biomolecular databanks, highlighted significant biological characteristics and supported a global approach to the understanding of complex cellular mechanisms. MyWEST is freely available for non-profit use at http://www.medinfopoli.polimi.it/MyWEST/

  18. High correlation of Middle East respiratory syndrome spread with Google search and Twitter trends in Korea.

    PubMed

    Shin, Soo-Yong; Seo, Dong-Woo; An, Jisun; Kwak, Haewoon; Kim, Sung-Han; Gwack, Jin; Jo, Min-Woo

    2016-09-06

    The Middle East respiratory syndrome coronavirus (MERS-CoV) was exported to Korea in 2015, resulting in a threat to neighboring nations. We evaluated the possibility of using a digital surveillance system based on web searches and social media data to monitor this MERS outbreak. We collected the number of daily laboratory-confirmed MERS cases and quarantined cases from May 11, 2015 to June 26, 2015 using the Korean government MERS portal. The daily trends observed via Google search and Twitter during the same time period were also ascertained using Google Trends and Topsy. Correlations among the data were then examined using Spearman correlation analysis. We found high correlations (>0.7) between Google search and Twitter results and the number of confirmed MERS cases for the previous three days using only four simple keywords: "MERS" in English and the Korean terms for "MERS", "MERS symptoms", and "MERS hospital". Additionally, we found high correlations between the Google search and Twitter results and the number of quarantined cases using the above keywords. This study demonstrates the possibility of using a digital surveillance system to monitor the outbreak of MERS.
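    The correlation measure used above needs no external libraries. A minimal sketch of Spearman's rank correlation (average ranks for ties, then the Pearson correlation of the rank vectors), applied to invented illustrative numbers rather than the study's actual counts:

```python
def _ranks(values):
    """Average 1-based ranks, with tied values sharing the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative only: daily search interest vs. confirmed cases (not the study's data).
trend = [5, 12, 30, 44, 60, 41, 20]
cases = [0, 2, 7, 13, 25, 17, 6]
rho = spearman(trend, cases)
```

    A rho above 0.7, as reported in the study, would indicate that search activity tracks the epidemic curve closely enough to be useful for surveillance.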

  19. Wikipedia mining of hidden links between political leaders

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2016-12-01

    We describe a new method of reduced Google matrix which allows one to establish direct and hidden links between a subset of nodes of a large directed network. This approach draws parallels with quantum scattering theory, developed for processes in nuclear and mesoscopic physics and quantum chaos. The method is applied to the Wikipedia networks in different language editions, analyzing several groups of political leaders of the USA, UK, Germany, France, Russia and the G20. We demonstrate that this approach reliably recovers direct and hidden links among political leaders. We argue that the reduced Google matrix method can form the mathematical basis for studies in social and political sciences analyzing Leader-Members eXchange (LMX).
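    The construction can be sketched numerically. Partitioning the Google matrix G over a selected subset r of nodes and the remaining "scattering" nodes s, the reduced matrix takes the scattering-theory form G_R = G_rr + G_rs (1 − G_ss)⁻¹ G_sr (following the published formulation). The toy network below is invented for illustration only:

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Column-stochastic Google matrix with damping factor alpha.
    adj[i, j] = 1 if node j links to node i; dangling columns become uniform."""
    n = adj.shape[0]
    s = adj.astype(float).copy()
    col = s.sum(axis=0)
    s[:, col == 0] = 1.0 / n          # dangling nodes link everywhere
    s = s / s.sum(axis=0)             # normalize each column to sum to 1
    return alpha * s + (1 - alpha) / n

def reduced_google_matrix(G, subset):
    """G_R = G_rr + G_rs (I - G_ss)^{-1} G_sr for the selected node subset."""
    n = G.shape[0]
    r = np.array(subset)
    s = np.array([i for i in range(n) if i not in set(subset)])
    Grr = G[np.ix_(r, r)]
    Grs = G[np.ix_(r, s)]
    Gsr = G[np.ix_(s, r)]
    Gss = G[np.ix_(s, s)]
    return Grr + Grs @ np.linalg.inv(np.eye(len(s)) - Gss) @ Gsr

# Toy directed network of 6 nodes; extract effective links among nodes {0, 1, 2}.
adj = np.zeros((6, 6))
for src, dst in [(0, 1), (1, 2), (2, 0), (3, 0), (3, 4), (4, 5), (5, 3), (4, 1)]:
    adj[dst, src] = 1.0
G = google_matrix(adj)
GR = reduced_google_matrix(G, [0, 1, 2])
```

    Entries of GR that exceed the corresponding direct-link entries of G_rr reveal the "hidden" links mediated by paths through the scattering nodes. A useful sanity check is that GR remains column-stochastic, as the reduced Google matrix must be.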

  20. FastLane: An Agile Congestion Signaling Mechanism for Improving Datacenter Performance

    DTIC Science & Technology

    2013-05-20

    Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Huawei , Intel, Microsoft, NetApp, Oracle, Quanta, Samsung, Splunk, VMware and Yahoo...Web Services, Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloud- era, Ericsson, Facebook, General Electric, Hortonworks, Huawei , Intel, Microsoft

  1. 76 FR 34124 - Civil Supersonic Aircraft Panel Discussion

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-10

    ... and continuing to the second line in the second column, the Web site address should read as follows: https://spreadsheets.google.com/spreadsheet/viewform?formkey=dEFEdlRnYzBiaHZtTUozTHVtbkF4d0E6MQ . [FR...

  2. 76 FR 60474 - Intent To Prepare a Draft Environmental Impact Statement (DEIS) for the Haile Gold Mine in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-29

    ...--County on January 28, 2011. The public notice is available on Charleston District's public Web site at... eight open mining pits over a twelve-year period, with pit depths ranging from 110 to 840 feet deep. The... of January 28, 2011, and are available on Charleston District's public Web site at http://www.sac...

  3. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    NASA Astrophysics Data System (ADS)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built up over the last three and a half years. The motivation behind WebViz lies primarily in the need to parse through an increasing amount of data produced by the scientific community as larger and faster multicore and massively parallel computers, including general-purpose GPU computing, come to market. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever he or she may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones.
Features in the current version include the ability for users to (1) log in securely, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.

  4. Oyster Fisheries App

    NASA Technical Reports Server (NTRS)

    Perez Guerrero, Geraldo A.; Armstrong, Duane; Underwood, Lauren

    2015-01-01

    This project is creating a cloud-enabled, HTML5 web application to help oyster fishermen and state agencies apply Earth science to improve the management of this important natural and economic resource. The Oyster Fisheries app gathers and analyzes environmental and water quality information, and alerts fishermen and resource managers about problems in oyster fishing waters. An intuitive interface based on Google Maps displays the geospatial information and provides familiar interactive controls to the users. Alerts can be tailored to notify users when conditions in specific leases or public fishing areas require attention. The app is hosted on the Amazon Web Services cloud. It is being developed and tested using some of the latest web development tools such as web components and Polymer.

  5. Matsu: An Elastic Cloud Connected to a SensorWeb for Disaster Response

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel

    2011-01-01

    This slide presentation reviews the use of cloud computing combined with the SensorWeb in aiding disaster response planning. Included is an overview of the architecture of the SensorWeb, and overviews of the phase 1 of the EO-1 system and the steps to improve it to transform it to an On-demand product cloud as part of the Open Cloud Consortium (OCC). The effectiveness of this system is demonstrated in the SensorWeb for the Namibia flood in 2010, using information blended from MODIS, TRMM, River Gauge data, and the Google Earth version of Namibia the system enabled river surge predictions and could enable planning for future disaster responses.

  6. Disease Monitoring and Health Campaign Evaluation Using Google Search Activities for HIV and AIDS, Stroke, Colorectal Cancer, and Marijuana Use in Canada: A Retrospective Observational Study.

    PubMed

    Ling, Rebecca; Lee, Joon

    2016-10-12

    Infodemiology can offer practical and feasible health research applications through the practice of studying information available on the Web. Google Trends provides publicly accessible information regarding search behaviors in a population, which may be studied and used for health campaign evaluation and disease monitoring. Additional studies examining the use and effectiveness of Google Trends for these purposes remain warranted. The objective of our study was to explore the use of infodemiology in the context of health campaign evaluation and chronic disease monitoring. It was hypothesized that following a launch of a campaign, there would be an increase in information seeking behavior on the Web. Second, increasing and decreasing disease patterns in a population would be associated with search activity patterns. This study examined 4 different diseases: human immunodeficiency virus (HIV) infection, stroke, colorectal cancer, and marijuana use. Using Google Trends, relative search volume data were collected throughout the period of February 2004 to January 2015. Campaign information and disease statistics were obtained from governmental publications. Search activity trends were graphed and assessed with disease trends and the campaign interval. Pearson product correlation statistics and joinpoint methodology analyses were used to determine significance. Disease patterns and online activity across all 4 diseases were significantly correlated: HIV infection (r=.36, P<.001), stroke (r=.40, P<.001), colorectal cancer (r= -.41, P<.001), and substance use (r=.64, P<.001). Visual inspection and the joinpoint analysis showed significant correlations for the campaigns on colorectal cancer and marijuana use in stimulating search activity. No significant correlations were observed for the campaigns on stroke and HIV regarding search activity. 
The use of infoveillance shows promise as an alternative and inexpensive solution to disease surveillance and health campaign evaluation. Further research is needed to understand Google Trends as a valid and reliable tool for health research.

  7. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    PubMed

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. 
By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
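    A flavor of the kind of check described above can be sketched in a few lines. This is not Keemei's code; it is a hypothetical validator for some basic header rules of the QIIME 1 sample metadata mapping format (first column #SampleID, followed by BarcodeSequence and LinkerPrimerSequence, with Description last):

```python
def validate_mapping_header(header):
    """Return a list of error messages for a QIIME 1 mapping file header row.
    Checks only column naming/order rules, not cell contents."""
    errors = []
    if not header or header[0] != "#SampleID":
        errors.append("first column must be #SampleID")
    if len(header) > 1 and header[1] != "BarcodeSequence":
        errors.append("second column must be BarcodeSequence")
    if len(header) > 2 and header[2] != "LinkerPrimerSequence":
        errors.append("third column must be LinkerPrimerSequence")
    if not header or header[-1] != "Description":
        errors.append("last column must be Description")
    if len(set(header)) != len(header):
        errors.append("duplicate column names are not allowed")
    return errors

good = ["#SampleID", "BarcodeSequence", "LinkerPrimerSequence",
        "Treatment", "Description"]
bad = ["SampleID", "BarcodeSequence", "Treatment", "Treatment"]
```

    Running such checks as data are entered, rather than at analysis time, is precisely the bottleneck-removal that the Add-on targets.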

  8. Disease Monitoring and Health Campaign Evaluation Using Google Search Activities for HIV and AIDS, Stroke, Colorectal Cancer, and Marijuana Use in Canada: A Retrospective Observational Study

    PubMed Central

    2016-01-01

    Background Infodemiology can offer practical and feasible health research applications through the practice of studying information available on the Web. Google Trends provides publicly accessible information regarding search behaviors in a population, which may be studied and used for health campaign evaluation and disease monitoring. Additional studies examining the use and effectiveness of Google Trends for these purposes remain warranted. Objective The objective of our study was to explore the use of infodemiology in the context of health campaign evaluation and chronic disease monitoring. It was hypothesized that following a launch of a campaign, there would be an increase in information seeking behavior on the Web. Second, increasing and decreasing disease patterns in a population would be associated with search activity patterns. This study examined 4 different diseases: human immunodeficiency virus (HIV) infection, stroke, colorectal cancer, and marijuana use. Methods Using Google Trends, relative search volume data were collected throughout the period of February 2004 to January 2015. Campaign information and disease statistics were obtained from governmental publications. Search activity trends were graphed and assessed with disease trends and the campaign interval. Pearson product correlation statistics and joinpoint methodology analyses were used to determine significance. Results Disease patterns and online activity across all 4 diseases were significantly correlated: HIV infection (r=.36, P<.001), stroke (r=.40, P<.001), colorectal cancer (r= −.41, P<.001), and substance use (r=.64, P<.001). Visual inspection and the joinpoint analysis showed significant correlations for the campaigns on colorectal cancer and marijuana use in stimulating search activity. No significant correlations were observed for the campaigns on stroke and HIV regarding search activity. 
Conclusions The use of infoveillance shows promise as an alternative and inexpensive solution to disease surveillance and health campaign evaluation. Further research is needed to understand Google Trends as a valid and reliable tool for health research. PMID:27733330

  9. Web usage mining at an academic health sciences library: an exploratory study.

    PubMed

    Bracke, Paul J

    2004-10-01

    This paper explores the potential of multinomial logistic regression analysis to perform Web usage mining for an academic health sciences library Website. Usage of database-driven resource gateway pages was logged for a six-month period, including information about users' network addresses, referring uniform resource locators (URLs), and types of resource accessed. It was found that referring URL did vary significantly by two factors: whether a user was on-campus and what type of resource was accessed. Although the data available for analysis are limited by the nature of the Web and concerns for privacy, this method demonstrates the potential for gaining insight into Web usage that supplements Web log analysis. It can be used to improve the design of static and dynamic Websites today and could be used in the design of more advanced Web systems in the future.
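    Multinomial logistic regression of the kind used in this study can be sketched with plain batch gradient descent on the softmax cross-entropy loss. The features here (an on-campus flag and a resource-type flag) and the three referrer classes are invented toy data, not the paper's log records:

```python
import math

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def train_multinomial(X, y, n_classes, lr=0.5, epochs=2000):
    """Fit softmax-regression weights (one row per class, bias included)
    by batch gradient descent on the cross-entropy loss."""
    n_feat = len(X[0]) + 1  # +1 for the bias term
    W = [[0.0] * n_feat for _ in range(n_classes)]
    data = [x + [1.0] for x in X]
    for _ in range(epochs):
        grad = [[0.0] * n_feat for _ in range(n_classes)]
        for x, label in zip(data, y):
            p = softmax([sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in W])
            for k in range(n_classes):
                err = p[k] - (1.0 if k == label else 0.0)
                for j in range(n_feat):
                    grad[k][j] += err * x[j]
        for k in range(n_classes):
            for j in range(n_feat):
                W[k][j] -= lr * grad[k][j] / len(X)
    return W

def predict(W, x):
    x = x + [1.0]
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in W]
    return scores.index(max(scores))

# Toy usage-mining data: [on_campus, resource_type] -> referrer class.
X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 5
y = [0, 1, 1, 2] * 5
W = train_multinomial(X, y, n_classes=3)
```

    In the study's setting, the fitted coefficients (rather than the predictions) are the object of interest, since they quantify how on-campus status and resource type shift the odds of each referring URL category.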

  10. 78 FR 15051 - Investigations Regarding Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-08

    ..., MN 02/19/13 02/15/13 Northshore Mining (State/ One-Stop). 82474 Ames True Temper (Workers).. Lewistown, PA 02/19/13 02/15/13 82475 Sysco Portland (State/One- Wilsonville, OR........ 02/19/13 02/13/13 Stop). 82476 SuperValu Inc. (Company).... Pleasant Prairie, WI... 02/19/13 02/15/13 82477 Google...

  11. 78 FR 41178 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... performance of the market. In May 2008, the internet portal Yahoo! began offering its Web site viewers real... products that must be obtained in tandem. For example, while Yahoo! and Google now both disseminate NASDAQ...

  12. 78 FR 19772 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-02

    ... performance of the market. In May 2008, the internet portal Yahoo! began offering its Web site viewers real... products that must be obtained in tandem. For example, while Yahoo! and Google now both disseminate NASDAQ...

  13. Effective Filtering of Query Results on Updated User Behavioral Profiles in Web Mining

    PubMed Central

    Sadesh, S.; Suganthe, R. C.

    2015-01-01

    The web, with its tremendous volume of information, retrieves results for user queries. Despite the rapid growth of web page recommendation, results retrieved with existing data mining techniques have not offered a high filtering rate, because the relationships between user profiles and queries were not analyzed extensively. At the same time, existing user-profile-based prediction in web data mining is not exhaustive in producing personalized results. To improve the query result rate under the dynamics of user behavior over time, the Hamilton Filtered Regime Switching User Query Probability (HFRS-UQP) framework is proposed. The HFRS-UQP framework is split into two processes, filtering and switching. The data-mining-based filtering in our work uses the Hamilton filtering framework to filter results for a user based on personalized information from profiles updated automatically through the search engine. A maximized result set is fetched, that is, filtered with respect to user behavior profiles. The switching step performs accurate filtering of updated profiles using regime switching: a change in a profile switches the regime, and the HFRS-UQP framework identifies second- and higher-order associations between query results and the updated profiles. Experiments were conducted on factors such as personalized information search retrieval rate, filtering efficiency, and precision ratio. PMID:26221626

  14. Creation of a Web-Based GIS Server and Custom Geoprocessing Tools for Enhanced Hydrologic Applications

    NASA Astrophysics Data System (ADS)

    Welton, B.; Chouinard, K.; Sultan, M.; Becker, D.; Milewski, A.; Becker, R.

    2010-12-01

    Rising populations in the arid and semi-arid parts of the world are increasing the demand for fresh water supplies worldwide. Many data sets needed for the assessment of hydrologic applications across vast regions of the world are expensive, unpublished, difficult to obtain, or at varying scales, which complicates their use. Fortunately, this situation is changing with the development of global remote sensing datasets and web-based platforms such as GIS Server. GIS provides a cost-effective vehicle for comparing, analyzing, and querying a variety of spatial datasets as geographically referenced layers. We have recently constructed a web-based GIS that incorporates all relevant geological, geochemical, geophysical, and remote sensing data sets that were readily used to identify reservoir types and potential well locations on local and regional scales in various tectonic settings including: (1) extensional environments (Red Sea rift), (2) transcurrent fault systems (Najd Fault in the Arabian-Nubian Shield), and (3) compressional environments (Himalayas). The web-based GIS could also be used to detect spatial and temporal trends in precipitation, recharge, and runoff in large watersheds on local, regional, and continental scales. These applications were enabled through the construction of a web-based ArcGIS Server with a Google Maps interface and the development of customized geoprocessing tools. ArcGIS Server provides out-of-the-box setups that are generic in nature. This platform includes all of the standard web-based GIS tools (e.g., pan, zoom, identify, search, data querying, and measurement). In addition to the standard suite of tools provided by ArcGIS Server, an additional set of advanced data manipulation and display tools was also developed to allow for a more complete and customizable view of the area of interest.
The most notable addition to the standard GIS Server tools is the set of custom on-demand geoprocessing tools (e.g., graph, statistical functions, custom raster creation, profile, TRMM). The generation of a wide range of derivative maps (e.g., buffer zones, contour maps, graphs, temporal rainfall distribution maps) from various map layers (e.g., geologic maps, geophysics, satellite images) allows for more user flexibility. The use of these tools, along with the Google Maps API, which enables website users to view high-quality GeoEye 2 imagery provided by Google in conjunction with our data, creates a more complete picture of the area being observed and allows custom derivative maps to be created in the field and viewed immediately on the web, processes that were previously restricted to offline databases.

  15. A cognitive evaluation of four online search engines for answering definitional questions posed by physicians.

    PubMed

    Yu, Hong; Kaufman, David

    2007-01-01

    The Internet is having a profound impact on physicians' medical decision making. One recent survey of 277 physicians showed that 72% of physicians regularly used the Internet to research medical information and 51% admitted that information from web sites influenced their clinical decisions. This paper describes the first cognitive evaluation of four state-of-the-art Internet search engines: Google (i.e., Google and Scholar.Google), MedQA, Onelook, and PubMed for answering definitional questions (i.e., questions with the format of "What is X?") posed by physicians. Onelook is a portal for online definitions, and MedQA is a question answering system that automatically generates short texts to answer specific biomedical questions. Our evaluation criteria include quality of answer, ease of use, time spent, and number of actions taken. Our results show that MedQA outperforms Onelook and PubMed in most of the criteria, and that MedQA surpasses Google in time spent and number of actions, two important efficiency criteria. Our results show that Google is the best system for quality of answer and ease of use. We conclude that Google is an effective search engine for medical definitions, and that MedQA exceeds the other search engines in that it provides users direct answers to their questions; while the users of the other search engines have to visit several sites before finding all of the pertinent information.

  16. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model

    PubMed Central

    LIU, Tongzhu; SHEN, Aizong; HU, Xiaojian; TONG, Guixian; GU, Wei

    2017-01-01

    Background: We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. Methods: We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and the recent research status. For the application of collaborative BI technology in the hospital SPD logistics management model, we achieved this by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. Results: For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Conclusion: A proper combination of the SPD model and a BI system will improve the management of logistics in hospitals. The successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including information, logistics, nursing, medical and financial; (iii) timely response of external suppliers. PMID:28828316

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raymond, David W.; Gaither, Katherine N.; Polsky, Yarom

Sandia National Laboratories (Sandia) has a long history in developing compact, mobile, very high-speed drilling systems and this technology could be applied to increasing the rate at which boreholes are drilled during a mine accident response. The present study reviews current technical approaches, primarily based on technology developed under other programs, analyzes mine rescue specific requirements to develop a conceptual mine rescue drilling approach, and finally, proposes development of a phased mine rescue drilling system (MRDS) that accomplishes (1) development of rapid drilling MRDS equipment; (2) structuring improved web communication through the Mine Safety & Health Administration (MSHA) web site; (3) development of an improved protocol for employment of existing drilling technology in emergencies; (4) deployment of advanced technologies to complement mine rescue drilling operations during emergency events; and (5) preliminary discussion of potential future technology development of specialized MRDS equipment. This phased approach allows for rapid fielding of a basic system for improved rescue drilling, with the ability to improve the system over time at a reasonable cost.

  18. The Top 50 Articles on Minimally Invasive Spine Surgery.

    PubMed

    Virk, Sohrab S; Yu, Elizabeth

    2017-04-01

Bibliometric study of current literature. To catalog the most important minimally invasive spine (MIS) surgery articles using the number of citations as a marker of relevance. MIS surgery is a relatively new tool used by spinal surgeons. There is a dynamic and evolving field of research related to MIS techniques, clinical outcomes, and basic science research. To date, there is no comprehensive review of the most cited articles related to MIS surgery. A systematic search was performed over three widely used literature databases: Web of Science, Scopus, and Google Scholar. There were four searches performed using the terms "minimally invasive spine surgery," "endoscopic spine surgery," "percutaneous spinal surgery," and "lateral interbody surgery." The number of citations was averaged across the three databases to rank each article. The query of the three databases was performed in November 2015. Fifty articles were selected based upon the number of citations each averaged across the three databases. The most cited article was titled "Extreme Lateral Interbody Fusion (XLIF): a novel surgical technique for anterior lumbar interbody fusion" by Ozgur et al and was credited with 447, 239, and 279 citations in Google Scholar, Web of Science, and Scopus, respectively. Citations ranged from 27 to 239 for Web of Science, 60 to 279 for Scopus, and 104 to 462 for Google Scholar. There was a large variety of articles spanning 14 different topics, with the majority dealing with clinical outcomes related to MIS surgery. The majority of the most cited articles were level III and level IV studies, likely due to the relatively recent nature of technological advances in the field. Furthermore, level I and level II studies will be required in MIS surgery in the years ahead. Level of Evidence: 5.
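
    The ranking approach described above, averaging each article's citation counts across the three databases, can be sketched as below; the second article's counts are illustrative placeholders, not the study's data.

```python
# Rank articles by mean citation count across the three databases.
# All counts except the XLIF article's are made-up placeholders.
articles = {
    "Ozgur et al (XLIF)": {"Google Scholar": 447, "Web of Science": 239, "Scopus": 279},
    "Hypothetical article B": {"Google Scholar": 104, "Web of Science": 27, "Scopus": 60},
}

def mean_citations(counts):
    """Average the per-database citation counts for one article."""
    return sum(counts.values()) / len(counts)

# Sort titles by descending mean citation count.
ranked = sorted(articles, key=lambda a: mean_citations(articles[a]), reverse=True)
for rank, title in enumerate(ranked, start=1):
    print(rank, title, round(mean_citations(articles[title]), 1))
```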

  19. Anthropogenic and natural sources of acidity and metals and their influence on the structure of stream food webs.

    PubMed

    Hogsden, Kristy L; Harding, Jon S

    2012-03-01

We compared food web structure in 20 streams with either anthropogenic or natural sources of acidity and metals or circumneutral water chemistry in New Zealand. Community and diet analysis indicated that mining streams receiving anthropogenic inputs of acidic and metal-rich drainage had much simpler food webs (fewer species, shorter food chains, fewer links) than those in naturally acidic, naturally high metal, and circumneutral streams. Food webs of naturally high metal streams were structurally similar to those in mining streams, lacking fish predators and having few species. In contrast, webs in naturally acidic streams differed very little from those in circumneutral streams due to strong similarities in community composition and diets of secondary and top consumers. The combined negative effects of acidity and metals on stream food webs are clear. However, elevated metal concentrations, regardless of source, appear to play a more important role than acidity in driving food web structure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Social Web mining and exploitation for serious applications: Technosocial Predictive Analytics and related technologies for public health, environmental and national security surveillance.

    PubMed

    Kamel Boulos, Maged N; Sanfilippo, Antonio P; Corley, Courtney D; Wheeler, Steve

    2010-10-01

This paper explores Technosocial Predictive Analytics (TPA) and related methods for Web "data mining" where users' posts and queries are garnered from Social Web ("Web 2.0") tools such as blogs, micro-blogging and social networking sites to form coherent representations of real-time health events. The paper includes a brief introduction to commonly used Social Web tools such as mashups and aggregators, and maps their exponential growth as an open architecture of participation for the masses and an emerging way to gain insight into the collective health status of whole populations. Several health-related tool examples are described and demonstrated as practical means through which health professionals might create clear location-specific pictures of epidemiological data such as flu outbreaks. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  1. JournalMap: Geo-semantic searching for relevant knowledge

    USDA-ARS?s Scientific Manuscript database

    Ecologists struggling to understand rapidly changing environments and evolving ecosystem threats need quick access to relevant research and documentation of natural systems. The advent of semantic and aggregation searching (e.g., Google Scholar, Web of Science) has made it easier to find useful lite...

  2. Google Search Mastery Basics

    ERIC Educational Resources Information Center

    Hill, Paul; MacArthur, Stacey; Read, Nick

    2014-01-01

    Effective Internet search skills are essential with the continually increasing amount of information available on the Web. Extension personnel are required to find information to answer client questions and to conduct research on programs. Unfortunately, many lack the skills necessary to effectively navigate the Internet and locate needed…

  3. Accelerating North American rangeland conservation with earth observation data and user driven web applications.

    NASA Astrophysics Data System (ADS)

    Allred, B. W.; Naugle, D.; Donnelly, P.; Tack, J.; Jones, M. O.

    2016-12-01

    In 2010, the USDA Natural Resources Conservation Service (NRCS) launched the Sage Grouse Initiative (SGI) to voluntarily reduce threats facing sage-grouse and rangelands on private lands. Over the past five years, SGI has matured into a primary catalyst for rangeland and wildlife conservation across the North American west, focusing on the shared vision of wildlife conservation through sustainable working landscapes and providing win-win solutions for producers, sage grouse, and 350 other sagebrush obligate species. SGI and its partners have invested a total of $750 million into rangeland and wildlife conservation. Moving forward, SGI continues to focus on rangeland conservation. Partnering with Google Earth Engine, SGI has developed outcome monitoring and conservation planning tools at continental scales. The SGI science team is currently developing assessment and monitoring algorithms of key conservation indicators. The SGI web application utilizes Google Earth Engine for user defined analysis and planning, putting the appropriate information directly into the hands of managers and conservationists.

  4. Using the Browser for Science: A Collaborative Toolkit for Astronomy

    NASA Astrophysics Data System (ADS)

    Connolly, A. J.; Smith, I.; Krughoff, K. S.; Gibson, R.

    2011-07-01

    Astronomical surveys have yielded hundreds of terabytes of catalogs and images that span many decades of the electromagnetic spectrum. Even when observatories provide user-friendly web interfaces, exploring these data resources remains a complex and daunting task. In contrast, gadgets and widgets have become popular in social networking (e.g. iGoogle, Facebook). They provide a simple way to make complex data easily accessible that can be customized based on the interest of the user. With ASCOT (an AStronomical COllaborative Toolkit) we expand on these concepts to provide a customizable and extensible gadget framework for use in science. Unlike iGoogle, where all of the gadgets are independent, the gadgets we develop communicate and share information, enabling users to visualize and interact with data through multiple, simultaneous views. With this approach, web-based applications for accessing and visualizing data can be generated easily and, by linking these tools together, integrated and powerful data analysis and discovery tools can be constructed.

  5. News trends and web search query of HIV/AIDS in Hong Kong

    PubMed Central

    Chiu, Alice P. Y.; Lin, Qianying

    2017-01-01

Background The HIV epidemic in Hong Kong has worsened in recent years, with major contributions from the high-risk subgroup of men who have sex with men (MSM). Internet use is prevalent among the majority of the local population, who seek health information online. This study examines the impacts of HIV/AIDS and MSM news coverage on web search queries in Hong Kong. Methods Relevant news coverage about HIV/AIDS and MSM from January 1st, 2004 to December 31st, 2014 was obtained from the WiseNews database. News trends were created by computing the number of relevant articles by type, topic, place of origin and sub-populations. We then obtained relevant search volumes from Google and analysed causality between news trends and Google Trends using the Granger causality test and the orthogonal impulse response function. Results We found that editorial news has an impact on “HIV” Google searches, with the search term's popularity peaking an average of two weeks after the news is published. Similarly, editorial news has an impact on the frequency of “AIDS” searches two weeks later. MSM-related news trends have a more fluctuating impact on “MSM” Google searches, with the time lag varying anywhere from one week to ten weeks. Conclusions This infodemiological study shows that news trends have a positive impact on online search behavior around HIV/AIDS or MSM-related issues for up to ten weeks afterwards. Health promotion professionals could make use of this brief time window to tailor the timing of HIV awareness campaigns and public health interventions to maximise their reach and effectiveness. PMID:28922376
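
    The lag-finding idea behind such an analysis can be sketched as a simple lagged correlation: shift the search-volume series against the news-count series and pick the lag with the strongest correlation. This is a simplified illustration, not the study's Granger-causality machinery, and the `best_lag` helper plus all data below are made-up assumptions.

```python
# Find the weekly lag at which a news-count series best predicts a
# search-volume series, via plain Pearson correlation at each lag.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def best_lag(news, searches, max_lag=10):
    """Return the lag (in weeks) at which news best correlates with searches."""
    scores = {}
    for lag in range(max_lag + 1):
        paired = list(zip(news, searches[lag:]))  # searches shifted back by `lag`
        if len(paired) > 2:
            scores[lag] = pearson([a for a, _ in paired], [b for _, b in paired])
    return max(scores, key=scores.get)

# Synthetic series: searches echo the news counts two weeks later.
news = [3, 8, 2, 9, 4, 7, 1, 6, 5, 8, 2, 7]
searches = [0, 0] + [x * 10 + 1 for x in news]
print(best_lag(news, searches))  # → 2
```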

  6. FindZebra: a search engine for rare diseases.

    PubMed

    Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole

    2013-06-01

The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose as each is rare, diverse in symptoms and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated freely available online medical information. FindZebra outperforms Google Search both in its default set-up and when customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve diagnostic quality without compromising the ease of use of the currently popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. The sources and popularity of online drug information: an analysis of top search engine results and web page views.

    PubMed

    Law, Michael R; Mintzes, Barbara; Morgan, Steven G

    2011-03-01

The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of first results on Google USA for both brand and generic name searches linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.
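
    The first-result tally described above amounts to counting the domain of each search's top hit. A minimal sketch, in which the drug names and URLs are hypothetical placeholders rather than the study's data:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical first-result URLs for three drug-name searches; the study
# tallied first results for 278 active ingredients on four engines.
first_results = {
    "drug_a": "https://en.wikipedia.org/wiki/Example_drug_A",
    "drug_b": "https://medlineplus.gov/example-drug-b",
    "drug_c": "https://en.wikipedia.org/wiki/Example_drug_C",
}

# Count how often each domain appears as the first result.
domain_counts = Counter(urlparse(url).netloc for url in first_results.values())
print(domain_counts.most_common())  # most frequent first-result domains first
```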

  8. Profile-IQ: Web-based data query system for local health department infrastructure and activities.

    PubMed

    Shah, Gulzar H; Leep, Carolyn J; Alexander, Dayna

    2014-01-01

    To demonstrate the use of National Association of County & City Health Officials' Profile-IQ, a Web-based data query system, and how policy makers, researchers, the general public, and public health professionals can use the system to generate descriptive statistics on local health departments. This article is a descriptive account of an important health informatics tool based on information from the project charter for Profile-IQ and the authors' experience and knowledge in design and use of this query system. Profile-IQ is a Web-based data query system that is based on open-source software: MySQL 5.5, Google Web Toolkit 2.2.0, Apache Commons Math library, Google Chart API, and Tomcat 6.0 Web server deployed on an Amazon EC2 server. It supports dynamic queries of National Profile of Local Health Departments data on local health department finances, workforce, and activities. Profile-IQ's customizable queries provide a variety of statistics not available in published reports and support the growing information needs of users who do not wish to work directly with data files for lack of staff skills or time, or to avoid a data use agreement. Profile-IQ also meets the growing demand of public health practitioners and policy makers for data to support quality improvement, community health assessment, and other processes associated with voluntary public health accreditation. It represents a step forward in the recent health informatics movement of data liberation and use of open source information technology solutions to promote public health.

  9. The New USGS Volcano Hazards Program Web Site

    NASA Astrophysics Data System (ADS)

    Venezky, D. Y.; Graham, S. E.; Parker, T. J.; Snedigar, S. F.

    2008-12-01

The U.S. Geological Survey's (USGS) Volcano Hazards Program (VHP) has launched a revised web site that uses a map-based interface to display hazards information for U.S. volcanoes. The web site is focused on better communication of hazards and background volcano information to our varied user groups by reorganizing content based on user needs and improving data display. The Home Page provides a synoptic view of the activity level of all volcanoes for which updates are written using a custom Google® Map. Updates are accessible by clicking on one of the map icons or clicking on the volcano of interest in the adjacent color-coded list of updates. The new navigation provides rapid access to volcanic activity information, background volcano information, images and publications, volcanic hazards, information about VHP, and the USGS volcano observatories. The Volcanic Activity section was tailored for emergency managers but provides information for all our user groups. It includes a Google® Map of the volcanoes we monitor, an Elevated Activity Page, a general status page, information about our Volcano Alert Levels and Aviation Color Codes, monitoring information, and links to monitoring data from VHP's volcano observatories: Alaska Volcano Observatory (AVO), Cascades Volcano Observatory (CVO), Long Valley Observatory (LVO), Hawaiian Volcano Observatory (HVO), and Yellowstone Volcano Observatory (YVO). The YVO web site was the first to move to the new navigation system and we are working on integrating the Long Valley Observatory web site next. We are excited to continue to implement new geospatial technologies to better display our hazards and supporting volcano information.

  10. Interfaces to PeptideAtlas: a case study of standard data access systems

    PubMed Central

    Handcock, Jeremy; Robinson, Thomas; Deutsch, Eric W.; Boyle, John

    2012-01-01

    Access to public data sets is important to the scientific community as a resource to develop new experiments or validate new data. Projects such as the PeptideAtlas, Ensembl and The Cancer Genome Atlas (TCGA) offer both access to public data and a repository to share their own data. Access to these data sets is often provided through a web page form and a web service API. Access technologies based on web protocols (e.g. http) have been in use for over a decade and are widely adopted across the industry for a variety of functions (e.g. search, commercial transactions, and social media). Each architecture adapts these technologies to provide users with tools to access and share data. Both commonly used web service technologies (e.g. REST and SOAP), and custom-built solutions over HTTP are utilized in providing access to research data. Providing multiple access points ensures that the community can access the data in the simplest and most effective manner for their particular needs. This article examines three common access mechanisms for web accessible data: BioMart, caBIG, and Google Data Sources. These are illustrated by implementing each over the PeptideAtlas repository and reviewed for their suitability based on specific usages common to research. BioMart, Google Data Sources, and caBIG are each suitable for certain uses. The tradeoffs made in the development of the technology are dependent on the uses each was designed for (e.g. security versus speed). This means that an understanding of specific requirements and tradeoffs is necessary before selecting the access technology. PMID:22941959

  11. Rule-based statistical data mining agents for an e-commerce application

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar

    2003-03-01

Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.

  12. Collaborative writing: Tools and tips.

    PubMed

    Eapen, Bell Raj

    2007-01-01

The majority of technical writing is done by groups of experts, and various web-based applications have made this collaboration easy. Email exchange of word processor documents with tracked changes used to be the standard technique for collaborative writing. However, web-based tools like Google Docs and Spreadsheets have made the process fast and efficient. Various versioning tools and synchronous editors are available for those who need additional functionality. Having a group leader who decides the scheduling, communication and conflict-resolution protocols is important for successful collaboration.

  13. Using Web and Social Media for Influenza Surveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corley, Courtney D.; Cook, Diane; Mikler, Armin R.

    2010-01-04

Analysis of Google influenza-like-illness (ILI) search queries has shown a strongly correlated pattern with Centers for Disease Control and Prevention (CDC) seasonal ILI reporting data. Web and social media provide another resource to detect increases in ILI. This paper evaluates trends in blog posts that discuss influenza. Our key finding is that from 5 October 2008 to 31 January 2009 a high correlation exists between the weekly frequency of posts containing influenza keywords and CDC influenza-like-illness surveillance data.
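
    The post-counting step behind such a correlation might look like the sketch below: bucket posts by ISO week and count those containing influenza keywords. The posts, keywords, and crude substring matching are illustrative assumptions, not the paper's pipeline.

```python
from collections import Counter
from datetime import date

KEYWORDS = ("influenza", "flu", "h1n1")  # assumed keyword list

def weekly_keyword_counts(posts):
    """posts: iterable of (date, text). Return Counter keyed by (year, ISO week)
    of posts whose text contains any keyword (simple substring match)."""
    counts = Counter()
    for day, text in posts:
        if any(k in text.lower() for k in KEYWORDS):
            year, week, _ = day.isocalendar()
            counts[(year, week)] += 1
    return counts

posts = [
    (date(2008, 10, 6), "Feeling terrible, probably the flu"),
    (date(2008, 10, 7), "New phone review"),
    (date(2008, 10, 13), "Influenza season is starting early"),
]
counts = weekly_keyword_counts(posts)
print(counts)
```

    The resulting weekly series could then be correlated against CDC surveillance data, as the abstract describes.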

  14. Research on the optimization strategy of web search engine based on data mining

    NASA Astrophysics Data System (ADS)

    Chen, Ronghua

    2018-04-01

With the wide application of search engines, websites have become an important way for people to obtain information. However, web information is growing in an increasingly explosive manner, making it very difficult for people to find the information they need, and current search engines cannot fully meet this need. There is therefore an urgent need for websites to provide personalized information services, and data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time they spend finding it. It has important practical value.

  15. Informal Learning through Expertise Mining in the Social Web

    ERIC Educational Resources Information Center

    Valencia-Garcia, Rafael; Garcia-Sanchez, Francisco; Casado-Lumbreras, Cristina; Castellanos-Nieves, Dagoberto; Fernandez-Breis, Jesualdo Tomas

    2012-01-01

    The advent of Web 2.0, also called the Social Web, has changed the way people interact with the Web. Assisted by the technologies associated with this new trend, users now play a much more active role as content providers. This Web paradigm shift has also changed how companies operate and interact with their employees, partners and customers. The…

  16. Googling endometriosis: a systematic review of information available on the Internet.

    PubMed

    Hirsch, Martin; Aggarwal, Shivani; Barker, Claire; Davis, Colin J; Duffy, James M N

    2017-05-01

The demand for health information online is increasing rapidly without clear governance. We aim to evaluate the credibility, quality, readability, and accuracy of online patient information concerning endometriosis. We searched 5 popular Internet search engines: aol.com, ask.com, bing.com, google.com, and yahoo.com. We developed a search strategy in consultation with patients with endometriosis, to identify relevant World Wide Web pages. Pages containing information related to endometriosis for women with endometriosis or the public were eligible. Two independent authors screened the search results. World Wide Web pages were evaluated using validated instruments across 3 of the 4 following domains: (1) credibility (White Paper instrument; range 0-10); (2) quality (DISCERN instrument; range 0-85); (3) readability (Flesch-Kincaid instrument; range 0-100); and (4) accuracy (assessed by prioritized criteria developed in consultation with health care professionals, researchers, and women with endometriosis based on the European Society of Human Reproduction and Embryology guidelines [range 0-30]). We summarized these data in diagrams, tables, and narrative form. We identified 750 World Wide Web pages, of which 54 were included. Over a third of Web pages did not attribute authorship, and almost half the included pages did not report the sources of information or academic references. No World Wide Web page provided information assessed as being written in plain English. A minority of web pages were assessed as high quality. A single World Wide Web page provided accurate information: evidentlycochrane.net. Available information was, in general, skewed toward the diagnosis of endometriosis. There were 16 credible World Wide Web pages; however, the content limitations were infrequently discussed. No World Wide Web page scored highly across all 4 domains. In the unlikely event that a World Wide Web page reports high-quality, accurate, and credible health information, it is typically challenging for a lay audience to comprehend. Health care professionals, and the wider community, should inform women with endometriosis of the risk of outdated, inaccurate, or even dangerous information online. The implementation of an information standard will incentivize providers of online information to establish and adhere to codes of conduct. Copyright © 2016 Elsevier Inc. All rights reserved.
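
    One way to picture the multi-domain assessment is to normalise each instrument's score to its range and flag pages that score highly across every domain. The instrument ranges come from the abstract; the page scores and the 0.7 threshold below are made-up assumptions for illustration only.

```python
# Instrument ranges from the abstract (maximum score per domain).
RANGES = {"credibility": 10, "quality": 85, "readability": 100, "accuracy": 30}

def normalised(scores):
    """Scale each domain score to the 0-1 interval using its instrument range."""
    return {domain: s / RANGES[domain] for domain, s in scores.items()}

def high_across_all(scores, threshold=0.7):
    """True if the page scores at or above the threshold in every domain."""
    return all(v >= threshold for v in normalised(scores).values())

# Hypothetical page: credible and accurate, but hard to read.
page = {"credibility": 8, "quality": 60, "readability": 45, "accuracy": 25}
print(high_across_all(page))  # → False (readability 0.45 falls below 0.7)
```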

  17. USGS Coastal and Marine Geology Survey Data in Google Earth

    NASA Astrophysics Data System (ADS)

    Reiss, C.; Steele, C.; Ma, A.; Chin, J.

    2006-12-01

The U.S. Geological Survey (USGS) Coastal and Marine Geology (CMG) program has a rich data catalog of geologic field activities and metadata called InfoBank, which has been a standard tool for researchers within and outside of the agency. Along with traditional web maps, the data are now accessible in Google Earth, which greatly expands the possible user audience. The Google Earth interface provides geographic orientation and panning/zooming capabilities to locate data relative to topography, bathymetry, and coastal areas. Viewing navigation against Google Earth's background imagery allows queries such as why areas were not surveyed (answer: the presence of islands, shorelines, cliffs, etc.). Detailed box core subsample photos from selected sampling activities, published geotechnical data, and sample descriptions are now viewable on Google Earth (for example, the M-1-95-MB, P-2-95-MB, and P-1-97-MB box core samples). One example of the use of Google Earth is CMG's surveys of San Francisco's Ocean Beach since 2004. The surveys are conducted with an all-terrain vehicle (ATV) and a shallow-water personal watercraft (PWC) equipped with Global Positioning System (GPS), elevation, and echo sounder data collectors. 3D topographic models with centimeter accuracy have been produced from these surveys to monitor beach and nearshore processes, including sand transport, sedimentation patterns, and seasonal trends. Using Google Earth, multiple track line data (examples: OB-1-05-CA and OB-2-05-CA) can be overlaid on beach imagery. The images also help explain the shape of track lines as objects are encountered.

  18. Spatiotemporal-Thematic Data Processing for the Semantic Web

    NASA Astrophysics Data System (ADS)

    Hakimpour, Farshad; Aleman-Meza, Boanerges; Perry, Matthew; Sheth, Amit

    This chapter presents practical approaches to data processing in the space, time and theme dimensions using existing Semantic Web technologies. It describes how we obtain geographic and event data from Internet sources and also how we integrate them into an RDF store. We briefly introduce a set of functionalities in space, time and semantics. These functionalities are implemented based on our existing technology for main-memory-based RDF data processing developed at the LSDIS Lab. A number of these functionalities are exposed as REST Web services. We present two sample client-side applications that are developed using a combination of our services with Google Maps service.

  19. Could we do better? Behavioural tracking on recommended consumer health websites.

    PubMed

    Burkell, Jacquelyn; Fortier, Alexandre

    2015-09-01

    This study examines behavioural tracking practices on consumer health websites, contrasting tracking on sites recommended by information professionals with tracking on sites returned by Google. Two lists of consumer health websites were constructed: sites recommended by information professionals and sites returned by Google searches. Sites were divided into three groups according to source (Recommended-Only, Google-Only or both) and type (Government, Not-for-Profit or Commercial). Behavioural tracking practices on each website were documented using a protocol that detected cookies, Web beacons and Flash cookies. The presence and the number of trackers that collect personal information were contrasted across source and type of site; a second set of analyses specifically examined Advertising trackers. Recommended-Only sites show lower levels of tracking - especially tracking by advertisers - than do Google-Only sites or sites found through both sources. Government and Not-for-Profit sites have fewer trackers, particularly from advertisers, than do Commercial sites. Recommended sites, especially those from Government or Not-for-Profit organisations, present a lower privacy threat than sites returned by Google searches. Nonetheless, most recommended websites include some trackers, and half include at least one Advertising tracker. To protect patron privacy, information professionals should examine the tracking practices of the websites they recommend. © 2015 Health Libraries Group.
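
    The group comparison described above reduces to contrasting mean tracker counts per site group, once per-site counts have been collected by inspecting cookies, Web beacons and Flash cookies. A minimal sketch with made-up counts:

```python
# Per-site tracker counts; the "trackers" values are illustrative only.
sites = [
    {"source": "recommended", "trackers": 1},
    {"source": "recommended", "trackers": 0},
    {"source": "google", "trackers": 5},
    {"source": "google", "trackers": 3},
]

def mean_trackers(group):
    """Mean number of trackers across sites from the given source group."""
    vals = [s["trackers"] for s in sites if s["source"] == group]
    return sum(vals) / len(vals)

print(mean_trackers("recommended"), mean_trackers("google"))  # → 0.5 4.0
```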

  20. An Introduction to Science Education in Rural Australia

    ERIC Educational Resources Information Center

    Lyons, Terry

    2008-01-01

    Here's a challenge. Try searching "Google" for the phrase "rural science teachers" in Australian web content. Surprisingly, my attempts returned only two hits, neither of which actually referred to Australian teachers. Searches for "rural science education" fare little better. On this evidence one could be forgiven…

  1. Rainfall erosivity in Brazil: A Review

    USDA-ARS?s Scientific Manuscript database

    In this paper, we review the erosivity studies conducted in Brazil to verify the quality and representativeness of the results generated and to provide a greater understanding of the rainfall erosivity (R-factor) in Brazil. We searched the ISI Web of Science, Scopus, SciELO, and Google Scholar datab...

  2. A web-based platform to support an evidence-based mental health intervention: lessons from the CBITS web site.

    PubMed

    Vona, Pamela; Wilmoth, Pete; Jaycox, Lisa H; McMillen, Janey S; Kataoka, Sheryl H; Wong, Marleen; DeRosier, Melissa E; Langley, Audra K; Kaufman, Joshua; Tang, Lingqi; Stein, Bradley D

    2014-11-01

    To explore the role of Web-based platforms in behavioral health, the study examined usage of a Web site for supporting training and implementation of an evidence-based intervention. Using data from an online registration survey and Google Analytics, the investigators examined user characteristics and Web site utilization. Site engagement was substantial across user groups. Visit duration differed by registrants' characteristics. Less experienced clinicians spent more time on the Web site. The training section accounted for most page views across user groups. Individuals previously trained in the Cognitive-Behavioral Intervention for Trauma in Schools intervention viewed more implementation assistance and online community pages than did other user groups. Web-based platforms have the potential to support training and implementation of evidence-based interventions for clinicians of varying levels of experience and may facilitate more rapid dissemination. Web-based platforms may be promising for trauma-related interventions, because training and implementation support should be readily available after a traumatic event.
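The kind of aggregation the investigators ran on registration and Google Analytics data can be sketched in a few lines. All group names, section labels, and durations below are invented for illustration; they are not data from the study.

```python
from statistics import mean

# Hypothetical visit log: (user_group, visit_duration_minutes, section_viewed)
visits = [
    ("novice_clinician", 14.2, "training"),
    ("novice_clinician", 9.5, "training"),
    ("experienced_clinician", 6.1, "implementation"),
    ("trained_user", 8.0, "community"),
    ("trained_user", 7.3, "implementation"),
]

def mean_duration_by_group(rows):
    """Average visit duration per registrant group."""
    groups = {}
    for group, duration, _section in rows:
        groups.setdefault(group, []).append(duration)
    return {g: mean(ds) for g, ds in groups.items()}

print(mean_duration_by_group(visits))
```

With real analytics exports the same grouping would be applied to session duration and page-view columns keyed on the registration survey's user categories.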

  3. Upper Animas Mining District

    EPA Pesticide Factsheets

    Web page provides narrative of What's New?, Site Description, Site Risk, Cleanup Progress, Community Involvement, Next Steps, Site Documents, FAQ, Contacts and Links for the Upper Animas Mining District site in San Juan County, Colorado.

  4. Visualizing Mars data and imagery with Google Earth

    NASA Astrophysics Data System (ADS)

    Beyer, R. A.; Broxton, M.; Gorelick, N.; Hancher, M.; Lundy, M.; Kolb, E.; Moratto, Z.; Nefian, A.; Scharff, T.; Weiss-Malik, M.

    2009-12-01

    There is a vast store of planetary geospatial data that has been collected by NASA but is difficult to access and visualize. Virtual globes have revolutionized the way we visualize and understand the Earth, but other planetary bodies including Mars and the Moon can be visualized in similar ways. Extraterrestrial virtual globes are poised to revolutionize planetary science, bring an exciting new dimension to science education, and allow ordinary users to explore imagery being sent back to Earth by planetary science satellites. The original Google Mars Web site allowed users to view base maps of Mars via the Web, but it did not have the full features of the 3D Google Earth client. We have previously demonstrated the use of Google Earth to display Mars imagery, but now with the launch of Mars in Google Earth, there is a base set of Mars data available for anyone to work from and add to. There are a variety of global maps to choose from and display. The Terrain layer has the MOLA gridded data topography, and where available, HRSC terrain models are mosaicked into the topography. In some locations there is also meter-scale terrain derived from HiRISE stereo imagery. There is rich information in the form of the IAU nomenclature database, data for the rovers and landers on the surface, and a Spacecraft Imagery layer which contains the image outlines for all HiRISE, CTX, CRISM, HRSC, and MOC image data released to the PDS and links back to their science data. There are also features like the Traveler's Guide to Mars, Historic Maps, Guided Tours, as well as the 'Live from Mars' feature, which shows the orbital tracks of both the Mars Odyssey and Mars Reconnaissance Orbiter for a few days in the recent past. It shows where they have acquired imagery, and also some preview image data. 
These capabilities have obvious public outreach and education benefits, but the potential benefits of allowing planetary scientists to rapidly explore these large and varied data collections—in geological context and within a single user interface—are also becoming evident. Because anyone can produce additional KML content for use in Google Earth, scientists can customize the environment to their needs as well as publish their own processed data and results for others to use. Many scientists and organizations have begun to do this already, resulting in a useful and growing collection of planetary-science-oriented Google Earth layers.
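Because KML is plain XML, producing a custom layer for Google Earth is straightforward. The sketch below builds a one-placemark KML document using only Python's standard library; the feature name and coordinates (roughly those of Gale Crater on Mars) are illustrative, not taken from any specific layer described above.

```python
import xml.etree.ElementTree as ET

def placemark_kml(name, lon, lat, description=""):
    """Build a minimal KML document containing one Placemark.
    Note: KML coordinate order is lon,lat[,alt]."""
    ns = "http://www.opengis.net/kml/2.2"
    kml = ET.Element("{%s}kml" % ns)
    doc = ET.SubElement(kml, "{%s}Document" % ns)
    pm = ET.SubElement(doc, "{%s}Placemark" % ns)
    ET.SubElement(pm, "{%s}name" % ns).text = name
    ET.SubElement(pm, "{%s}description" % ns).text = description
    pt = ET.SubElement(pm, "{%s}Point" % ns)
    ET.SubElement(pt, "{%s}coordinates" % ns).text = "%f,%f" % (lon, lat)
    return ET.tostring(kml, encoding="unicode")

# Hypothetical feature: approximate center of Gale Crater
print(placemark_kml("Gale Crater", 137.8, -5.4, "MSL landing region"))
```

Saving the returned string to a `.kml` file is enough to load the placemark into Google Earth alongside the Mars layers.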

  5. Data Mining of Extremely Large Ad-Hoc Data Sets to Produce Reverse Web-Link Graphs

    DTIC Science & Technology

    2017-03-01

    in most of the MR cases. From these studies, we also learned that compute-optimized instances should be chosen for serialized/compressed input data... Data mining can be a valuable tool, particularly in the acquisition of military intelligence. As the second study within a larger Naval...open web crawler data set Common Crawl. Similar to previous studies, this research employs MapReduce (MR) for sorting and categorizing output value
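The MapReduce pattern for producing a reverse web-link graph can be sketched without a cluster: the map phase emits one (target, source) pair per hyperlink, and the reduce phase groups sorted pairs by target. This is a toy single-process illustration of the idea, not the report's actual Common Crawl pipeline, and the URLs are made up.

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical crawl records: (source_page, [outgoing link targets])
crawl = [
    ("a.example/1", ["b.example/x", "c.example/y"]),
    ("b.example/x", ["c.example/y"]),
]

def map_phase(records):
    """Map: emit one (target, source) pair per hyperlink."""
    for source, links in records:
        for target in links:
            yield target, source

def reduce_phase(pairs):
    """Reduce: group sorted pairs by target, yielding target -> [sources]."""
    out = {}
    for target, group in groupby(sorted(pairs), key=itemgetter(0)):
        out[target] = [src for _, src in group]
    return out

print(reduce_phase(map_phase(crawl)))
```

In a real MR job the sort between the two phases is the framework's shuffle step; here `sorted()` plays that role.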

  6. Using open-source programs to create a web-based portal for hydrologic information

    NASA Astrophysics Data System (ADS)

    Kim, H.

    2013-12-01

    Some hydrologic data sets, such as basin climatology, precipitation, and terrestrial water storage, are not easily obtainable and distributable due to their size and complexity. We present a Hydrologic Information Portal (HIP) that has been implemented at the University of California Center for Hydrologic Modeling (UCCHM) and that has been organized around the large river basins of North America. The portal can be accessed through any modern web browser, enabling easy access to and visualization of such hydrologic data sets. The main features of the HIP include a set of data visualization tools that let users search, retrieve, analyze, integrate, organize, and map data within large river basins. Recent information technologies such as Google Maps, Tornado (a Python asynchronous web server), NumPy/SciPy (scientific libraries for Python), and d3.js (a JavaScript visualization library) were incorporated into the HIP to ease navigation of large data sets. With such open-source libraries, the HIP lets public users combine and explore various data sets by generating multiple chart types (line, bar, pie, scatter plot) directly from the Google Maps viewport. Every rendered object, such as a basin shape on the viewport, is clickable, providing the first step toward visualizing the underlying data sets.
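The charting workflow in a portal like this can be illustrated with a toy example: reshaping a `{date: value}` series into the list of JSON records a d3.js chart typically consumes. The basin name and precipitation values are hypothetical; in the portal itself such payloads would be served by Tornado handlers.

```python
import json

# Hypothetical monthly precipitation series for one basin (mm)
series = {"2012-01": 48.2, "2012-02": 36.5, "2012-03": 51.9}

def to_chart_records(series, basin):
    """Reshape a {date: value} mapping into the flat record list
    that a d3.js line or bar chart can bind to."""
    return [{"basin": basin, "date": d, "precip_mm": v}
            for d, v in sorted(series.items())]

# JSON payload as a browser-side chart would receive it
print(json.dumps(to_chart_records(series, "Mississippi")))
```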

  7. An Interactive Web System for Field Data Sharing and Collaboration

    NASA Astrophysics Data System (ADS)

    Weng, Y.; Sun, F.; Grigsby, J. D.

    2010-12-01

    A Web 2.0 system is designed and developed to facilitate data collection for the field studies in the Geological Sciences department at Ball State University. The system provides a student-centered learning platform that enables the users to first upload their collected data in various formats, interact and collaborate dynamically online, and ultimately create a shared digital repository of field experiences. The data types considered for the system and their corresponding format and requirements are listed in the table below. The system has six main functionalities as follows. (1) Only the registered users can access the system with confidential identification and password. (2) Each user can upload/revise/delete data in various formats such as image, audio, video, and text files to the system. (3) Interested users are allowed to co-edit the contents and join the collaboration whiteboard for further discussion. (4) The system integrates with Google, Yahoo, or Flickr to search for similar photos with same tags. (5) Users can search the web system according to the specific key words. (6) Photos with recorded GPS readings can be mashed and mapped to Google Maps/Earth for visualization. Application of the system to geology field trips at Ball State University will be demonstrated to assess the usability of the system. [Table: Data Requirements]

  8. Quality of consumer-targeted internet guidance on home firearm and ammunition storage.

    PubMed

    Freundlich, Katherine L; Skoczylas, Maria Shakour; Schmidt, John P; Keshavarzi, Nahid R; Mohr, Bethany Anne

    2016-10-01

    Four storage practices protect against unintentional and/or self-inflicted firearm injury among children and adolescents: keeping guns locked (1) and unloaded (2) and keeping ammunition locked up (3) and in a separate location from the guns (4). Our aim was to mimic common Google search strategies on firearm/ammunition storage and assess whether the resulting web pages provided recommendations consistent with those supported by the literature. We identified 87 web pages by Google search of the 10 most commonly used search terms in the USA related to firearm/ammunition storage. Two non-blinded independent reviewers analysed web page technical quality according to a 17-item checklist derived from previous studies. A single reviewer analysed readability by US grade level assigned by Flesch-Kincaid Grade Level Index. Two separate, blinded, independent reviewers analysed deidentified web page content for accuracy and completeness describing the four accepted storage practices. Reviewers resolved disagreements by consensus. The web pages described, on average, less than one of four accepted storage practices (mean 0.2 (95% CL 0.1 to 0.4)). Only two web pages (2%) identified all four practices. Two web pages (2%) made assertions inconsistent with recommendations; both implied that loaded firearms could be stored safely. Flesch-Kincaid Grade Level Index averaged 8.0 (95% CL 7.3 to 8.7). The average technical quality score was 7.1 (95% CL 6.8 to 7.4) out of an available score of 17. There was a high degree of agreement between reviewers regarding completeness (weighted κ 0.78 (95% CL 0.61 to 0.97)). The internet currently provides incomplete information about safe firearm storage. Understanding existing deficiencies may inform future strategies for improvement. Published by the BMJ Publishing Group Limited.
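The readability metric used in this study can be reproduced with a short function. The Flesch-Kincaid Grade Level is 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59; the syllable counter below is a rough vowel-group heuristic, so its scores will differ slightly from reference implementations, and the sample text is invented.

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups; drop a common silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

print(round(flesch_kincaid_grade(
    "Store guns locked and unloaded. Keep ammunition locked separately."), 1))
```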

  9. A web search on environmental topics: what is the role of ranking?

    PubMed

    Covolo, Loredana; Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto

    2013-12-01

    Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is very crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word "health": "nuclear energy," "electromagnetic waves," "air pollution," "waste," and "radon." For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that institutional Web sites were more likely to appear in the first 10 positions when searching for radon and air pollution than for nuclear power (odds ratio=3.4, 95% confidence interval 2.1-5.4 and odds ratio=2.9, 95% confidence interval 1.8-4.7, respectively), and when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9-5.1). The increasing use of online information could play an important role in forming opinions. 
Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible.
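Odds ratios of the kind reported above can be computed from a 2x2 table with the standard log-OR confidence interval. The counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
        a = exposed & outcome,     b = exposed & no outcome
        c = not exposed & outcome, d = not exposed & no outcome
    The CI uses the log-OR standard error sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: institutional sites in the top 10 for radon
# searches vs. nuclear power searches
print(odds_ratio_ci(60, 40, 30, 70))
```

A multivariate logistic regression, as used in the study, adjusts such ratios for the other covariates simultaneously; this unadjusted version shows only the core arithmetic.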

  10. Mining Social Media and Web Searches For Disease Detection

    PubMed Central

    Yang, Y. Tony; Horneffer, Michael; DiLisio, Nicole

    2013-01-01

    Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate. PMID:25170475
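A minimal version of the keyword-based signal extraction described above counts, per day, the posts mentioning at least one epidemic keyword. The posts and keyword list are invented for illustration; real systems add normalization, location data, and baseline correction.

```python
from collections import Counter
import re

KEYWORDS = {"fever", "flu", "cough"}  # hypothetical epidemic keywords

posts = [
    ("2013-01-02", "Terrible flu and fever this week"),
    ("2013-01-02", "Cough won't stop"),
    ("2013-01-03", "Feeling fine today"),
]

def daily_signal(posts, keywords):
    """Count posts per day mentioning at least one epidemic keyword,
    a crude baseline for web-based outbreak detection."""
    counts = Counter()
    for day, text in posts:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        if tokens & keywords:
            counts[day] += 1
    return dict(counts)

print(daily_signal(posts, KEYWORDS))
```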

  11. Mining social media and web searches for disease detection.

    PubMed

    Yang, Y Tony; Horneffer, Michael; DiLisio, Nicole

    2013-04-28

    Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency in the use of the Internet via computer or mobile devices provides an opportunity for social media to be the medium through which people can be provided with valuable health information quickly and directly. While traditional methods of detection relied predominately on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for a more timely and accurate spreading of information and analysis. This article aims to provide an overview and discussion of the availability of timely and accurate information. It is especially useful for the rapid identification of an outbreak of an infectious disease that is necessary to promptly and effectively develop public health responses. These web-based platforms include search queries, data mining of web and social media, process and analysis of blogs containing epidemic key words, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate.

  12. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization.

    PubMed

    Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView , a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
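The shared-state idea can be sketched as follows: each peer keeps a small JSON-serializable renderer state and merges incoming updates key by key. The field names here are hypothetical, and the actual MedView application synchronizes XTK renderer state through the Google Drive Realtime API's collaborative data model rather than this hand-rolled merge.

```python
import json

# Hypothetical renderer state shared among collaborating peers
state = {"volume": "brain.nii", "slice": 42, "camera": [0.0, 0.0, 1.0]}

def apply_remote_update(local_state, update_json):
    """Merge a remote peer's small JSON state update into a copy of the
    local state; last-writer-wins per key, as in a shared key-value model."""
    update = json.loads(update_json)
    merged = dict(local_state)
    merged.update(update)
    return merged

# A remote peer scrolls to a different slice
print(apply_remote_update(state, '{"slice": 57}'))
```

Keeping the synchronized object this small is what makes the real-time approach cheap: only the state JSON crosses the network, never the image volumes, which every client already holds.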

  13. GoWeb: a semantic search engine for the life science web.

    PubMed

    Dietze, Heiko; Schroeder, Michael

    2009-10-01

    Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large results sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves 77% success rate improving an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.

  14. InChI in the wild: an assessment of InChIKey searching in Google

    PubMed Central

    2013-01-01

    While chemical databases can be queried using the InChI string and InChIKey (IK) the latter was designed for open-web searching. It is becoming increasingly effective for this since more sources enhance crawling of their websites by the Googlebot and consequent IK indexing. Searchers who use Google as an adjunct to database access may be less familiar with the advantages of using the IK as explored in this review. As an example, the IK for atorvastatin retrieves ~200 low-redundancy links from a Google search in 0.3 of a second. These include most major databases and a very low false-positive rate. Results encompass less familiar but potentially useful sources and can be extended to isomer capture by using just the skeleton layer of the IK. Google Advanced Search can be used to filter large result sets. Image searching with the IK is also effective and complementary to open-web queries. Results can be particularly useful for less-common structures as exemplified by a major metabolite of atorvastatin giving only three hits. Testing also demonstrated document-to-document and document-to-database joins via structure matching. The necessary generation of an IK from chemical names can be accomplished using open tools and resources for patents, papers, abstracts or other text sources. Active global sharing of local IK-linked information can be accomplished via surfacing in open laboratory notebooks, blogs, Twitter, figshare and other routes. While information-rich chemistry (e.g. approved drugs) can exhibit swamping and redundancy effects, the much smaller IK result sets for link-poor structures become a transformative first-pass option. The IK indexing has therefore turned Google into a de-facto open global chemical information hub by merging links to most significant sources, including over 50 million PubChem and ChemSpider records. 
The simplicity, specificity and speed of matching make it a useful option for biologists or others less familiar with chemical searching. However, compared to rigorously maintained major databases, users need to be circumspect about the consistency of Google results and provenance of retrieved links. In addition, community engagement may be necessary to ameliorate possible future degradation of utility. PMID:23399051
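The skeleton-layer trick mentioned above is simple string handling: a standard InChIKey has the form XXXXXXXXXXXXXX-YYYYYYYYFV-P (a 14-character connectivity hash, then 10 characters covering the remaining layers plus version/flag characters, then a protonation character), and searching on the first block alone broadens a query to stereoisomers and isotopologues. The example key is atorvastatin's, as discussed in the review.

```python
def ik_skeleton(inchikey):
    """Return the first (skeleton/connectivity) block of a standard
    InChIKey, suitable for isomer-level open-web searching."""
    first, _rest = inchikey.split("-", 1)
    if len(first) != 14 or not first.isalpha():
        raise ValueError("not a standard InChIKey: %r" % inchikey)
    return first

print(ik_skeleton("XUKUURHRXDUEBC-KAYWLYCHSA-N"))  # atorvastatin
```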

  15. Intelligent Information Retrieval and Web Mining Architecture Using SOA

    ERIC Educational Resources Information Center

    El-Bathy, Naser Ibrahim

    2010-01-01

    The study of this dissertation provides a solution to a very specific problem instance in the area of data mining, data warehousing, and service-oriented architecture in publishing and newspaper industries. The research question focuses on the integration of data mining and data warehousing. The research problem focuses on the development of…

  16. Introducing Text Analytics as a Graduate Business School Course

    ERIC Educational Resources Information Center

    Edgington, Theresa M.

    2011-01-01

    Text analytics refers to the process of analyzing unstructured data from documented sources, including open-ended surveys, blogs, and other types of web dialog. Text analytics has enveloped the concept of text mining, an analysis approach influenced heavily from data mining. While text mining has been covered extensively in various computer…

  17. Learning Geomorphology Using Aerial Photography in a Web-Facilitated Class

    ERIC Educational Resources Information Center

    Palmer, R. Evan

    2013-01-01

    General education students taking freshman-level physical geography and geomorphology classes at Arizona State University completed an online laboratory whose main tool was Google Earth. Early in the semester, oblique and planimetric views introduced students to a few volcanic, tectonic, glacial, karst, and coastal landforms. Semi-quantitative…

  18. The New Digital Awareness

    ERIC Educational Resources Information Center

    Bohle, Shannon

    2008-01-01

    With all the new advances in library technology--including metadata, social networking, and Web 2.0, along with the advent of nonlibrary and for-profit digital information companies like Wikisource and Google Print--librarians have barely had time to reflect on the nontechnical implications of these innovations. They need to take a step back and…

  19. Is It Cheating if Everybody Does It?

    ERIC Educational Resources Information Center

    Gustafon, Chris

    2004-01-01

    A teacher brings you a paper he suspects is not the student's own work, and a Google search confirms it was copied right off a web page. Intellectual honesty issues are impossible to duck in the library, but plagiarism lessons are often met with yawns and eye rolls from students.

  20. Information Portals: The Next Generation Catalog

    ERIC Educational Resources Information Center

    Allison, DeeAnn

    2010-01-01

    Libraries today face an increasing challenge: to provide relevant information to diverse populations with differing needs while competing with Web search engines like Google. In 2009, a large group of libraries, including the University of Nebraska-Lincoln Libraries, joined with Innovative Interfaces as development partners to design a new type of…

  1. Analysis of Orthopaedic Research Produced During the Wars in Iraq and Afghanistan.

    PubMed

    Balazs, George C; Dickens, Jonathan F; Brelin, Alaina M; Wolfe, Jared A; Rue, John-Paul H; Potter, Benjamin K

    2015-09-01

    Military orthopaedic surgeons have published a substantial amount of original research based on our care of combat-wounded service members and related studies during the wars in Iraq and Afghanistan. However, to our knowledge, the influence of this body of work has not been evaluated bibliometrically, and doing so is important to determine the modern impact of combat casualty research in the wider medical community. We sought to identify the 20 most commonly cited works from military surgeons published during the Iraq and Afghanistan conflicts and analyze them to answer the following questions: (1) What were the subject areas of these 20 articles and what was the 2013 Impact Factor of each journal that published them? (2) How many citations did they receive and what were the characteristics of the journals that cited them? (3) Do the citation analysis results obtained from Google Scholar mirror the results obtained from Thomson Reuters' Web of Science? We searched the Web of Science Citation Index Expanded for relevant original research performed by US military orthopaedic surgeons related to Operation Iraqi Freedom and Operation Enduring Freedom between 2001 and 2014. Articles citing these studies were reviewed using both Web of Science and Google Scholar data. The 20 most cited articles meeting inclusion criteria were identified and analyzed by content domain, frequency of citation, and sources in which they were cited. Nine of these studies examined the epidemiology and outcome of combat injury. Six studies dealt with wound management, wound dehiscence, and formation of heterotopic ossification. Five studies examined infectious complications of combat trauma. The median number of citations garnered by these 20 articles was 41 (range, 28-264) in Web of Science. Other research citing these studies has appeared in 279 different journals, covering 26 different medical and surgical subspecialties, from authors in 31 different countries. 
Google Scholar contained 97% of the Web of Science citations, but also had 31 duplicate entries and 29 citations with defective links. Modern combat casualty research by military orthopaedic surgeons is widely cited by researchers in a diverse range of subspecialties and geographic locales. This suggests that the military continues to be a source of innovation that is broadly applicable to civilian medical and surgical practice and should encourage expansion of military-civilian collaboration to maximize the utility of the knowledge gained in the treatment of war trauma. Level IV, therapeutic study.

  2. [Distribution characteristics of soil nematodes in reclaimed land of copper-mine-tailings in different plant associations].

    PubMed

    Zhu, Yong-heng; Li, Ke-zhong; Zhang, Heng; Han, Fei; Zhou, Ju-hua; Gao, Ting-ting

    2015-02-01

    A survey was carried out to investigate soil nematode communities in the plant associations of gramineae (Arthraxon lanceolatus, AL; Imperata cylindrica, IC) and leguminosae (Glycine soja, GS) in reclaimed land of copper-mine-tailings and in the plant associations of gramineae (Digitaria chrysoblephara, DC-CK) of peripheral control in Fenghuang Mountain, Tongling City. A total of 1277 nematodes were extracted and sorted into 51 genera. The average individual density of the nematodes was 590 individuals · 100 g(-1) dry soil. In order to analyze the distribution characteristics of soil nematode communities in reclaimed land of copper-mine-tailings, the Shannon community diversity index and soil food web structure indices were applied in the research. The results showed that the total number of nematode genera and the Shannon community diversity index of soil nematodes in the three plant associations of AL, IC and GS were lower than those in the plant associations of DC-CK. Comparing the ecological indices of soil nematode communities among the different plant associations in reclaimed land of copper-mine-tailings and peripheral natural habitat, we found that the structure of the soil food web in the plant associations of GS was more mature, with bacterial decomposition being dominant in the soil organic matter decomposition, and that the soil ecosystem in the plant associations of GS was not stable with low interference. This indicated that the soil food web in the plant associations of leguminosae had a greater development potential to improve the ecological stability of the reclaimed land of copper-mine-tailings. On the other hand, the structure of the soil food web in the plant associations of AL and IC was relatively stable in a structured state, with fungal decomposition being dominant in the decomposition of soil organic matter. This indicated that the soil food web in the plant associations of gramineae was at a poor development level.
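The Shannon diversity index used in this survey, H' = -Σ p_i ln p_i over genus proportions, is straightforward to compute. The genus counts below are illustrative, not the survey's data.

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical genus counts for one plant association
print(round(shannon_index([50, 30, 15, 5]), 3))
```

Higher H' indicates a community whose individuals are spread more evenly across more genera; a single-genus sample gives H' = 0.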

  3. Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track

    DTIC Science & Technology

    2015-11-20

    Mining Tasks from the Web Anchor Text Graph: MSR Notebook Paper for the TREC 2015 Tasks Track Paul N. Bennett Microsoft Research Redmond, USA pauben...anchor text graph has proven useful in the general realm of query reformulation [2], we sought to quantify the value of extracting key phrases from...anchor text in the broader setting of the task understanding track. Given a query, our approach considers a simple method for identifying a relevant

  4. FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN

    NASA Astrophysics Data System (ADS)

    Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando

    2014-06-01

    The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and JQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for significant reduction in maintenance of the code in multi-platform environments compared to those currently used in C++ visualization plugins. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.

  5. Health and medication information resources on the World Wide Web.

    PubMed

    Grossman, Sara; Zerilli, Tina

    2013-04-01

    Health care practitioners have increasingly used the Internet to obtain health and medication information. The vast number of Internet Web sites providing such information and concerns with their reliability makes it essential for users to carefully select and evaluate Web sites prior to use. To this end, this article reviews the general principles to consider in this process. Moreover, as cost may limit access to subscription-based health and medication information resources with established reputability, freely accessible online resources that may serve as an invaluable addition to one's reference collection are highlighted. These include government- and organization-sponsored resources (eg, US Food and Drug Administration Web site and the American Society of Health-System Pharmacists' Drug Shortage Resource Center Web site, respectively) as well as commercial Web sites (eg, Medscape, Google Scholar). Familiarity with such online resources can assist health care professionals in their ability to efficiently navigate the Web and may potentially expedite the information gathering and decision-making process, thereby improving patient care.

  6. Googling suicide: surfing for suicide information on the Internet.

    PubMed

    Recupero, Patricia R; Harms, Samara E; Noble, Jeffrey M

    2008-06-01

    This study examined the types of resources a suicidal person might find through search engines on the Internet. We were especially interested in determining the accessibility of potentially harmful resources, such as pro-suicide forums, as such resources have been implicated in completed suicides and are known to exist on the Web. Using 5 popular search engines (Google, Yahoo!, Ask.com, Lycos, and Dogpile) and 4 suicide-related search terms (suicide, how to commit suicide, suicide methods, and how to kill yourself), we collected quantitative and qualitative data about the search results. The searches were conducted in August and September 2006. Several co-raters assigned codes and characterizations to the first 30 Web sites per search term combination (and "sponsored links" on those pages), which were then confirmed by consensus ratings. Search results were classified as being pro-suicide, anti-suicide, suicide-neutral, not a suicide site, or error (i.e., page would not load). Additional information was collected to further characterize the nature of the information on these Web sites. Suicide-neutral and anti-suicide pages occurred most frequently (of 373 unique Web pages, 115 were coded as suicide-neutral, and 109 were anti-suicide). While pro-suicide resources were less frequent (41 Web pages), they were nonetheless easily accessible. Detailed how-to instructions for unusual and lethal suicide methods were likewise easily located through the searches. Mental health professionals should ask patients about their Internet use. Depressed, suicidal, or potentially suicidal patients who use the Internet may be especially at risk. Clinicians may wish to assist patients in locating helpful, supportive resources online so that patients' Internet use may be more therapeutic than harmful.

  7. Injury surveillance in low-resource settings using Geospatial and Social Web technologies

    PubMed Central

    2010-01-01

    Background Extensive public health gains have benefited high-income countries in recent decades; however, citizens of low- and middle-income countries (LMIC) have largely not enjoyed the same advancements. This is in part due to the fact that public health data - the foundation for public health advances - are rarely collected in many LMIC. Injury data are particularly scarce in many low-resource settings, despite the huge associated burden of morbidity and mortality. Advances in freely accessible and easy-to-use information and communication technology (ICT) may provide the impetus for increased public health data collection in settings with limited financial and personnel resources. Methods and Results A pilot study was conducted at a hospital in Cape Town, South Africa to assess the utility and feasibility of using free (non-licensed), easy-to-use Social Web and GeoWeb tools for injury surveillance in low-resource settings. Data entry, geocoding, data exploration, and data visualization were successfully conducted using these technologies, including Google Spreadsheet, Mapalist, BatchGeocode, and Google Earth. Conclusion This study examined the potential for Social Web and GeoWeb technologies to contribute to public health data collection and analysis in low-resource settings through an injury surveillance pilot study conducted in Cape Town, South Africa. The success of this study illustrates the great potential for these technologies to be leveraged for public health surveillance in resource-constrained environments, given their ease of use, low cost, and the sharing and collaboration capabilities they afford. The possibilities and potential limitations of these technologies are discussed in relation to the study, and to the field of public health in general. PMID:20497570
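
    As a rough illustration of the pipeline described above, spreadsheet-style injury records with coordinates can be converted into a minimal KML file that Google Earth can display. The records, field names, and coordinates below are hypothetical; the study itself used Google Spreadsheet, BatchGeocode, and Mapalist rather than hand-written KML.

```python
# Hypothetical geocoded injury records, as they might come out of a
# spreadsheet-plus-geocoder step (illustrative values only).
records = [
    {"type": "road traffic", "lat": -33.94, "lon": 18.47},
    {"type": "fall",         "lat": -33.96, "lon": 18.48},
]

def to_kml(rows):
    """Render records as a minimal KML document for Google Earth."""
    placemarks = "".join(
        "<Placemark><name>{}</name>"
        "<Point><coordinates>{},{}</coordinates></Point></Placemark>"
        .format(r["type"], r["lon"], r["lat"])  # KML wants lon,lat order
        for r in rows
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            '<Document>{}</Document></kml>').format(placemarks)

print(to_kml(records))
```

Saving the output to a `.kml` file and opening it in Google Earth plots one point per record.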

  8. Electronic Biomedical Literature Search for Budding Researcher

    PubMed Central

    Thakre, Subhash B.; Thakre, Sushama S.; Thakre, Amol D.

    2013-01-01

    Search for specific and well-defined literature related to the subject of interest is the foremost step in research. Once we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of medical literature include Google, Google Scholar, Scirus, Yahoo, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about types of literature, levels of evidence, and details of each search engine's features, user interface, ease of access, reputable content, and period of time covered allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. A systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937

  9. Electronic biomedical literature search for budding researcher.

    PubMed

    Thakre, Subhash B; Thakre, Sushama S; Thakre, Amol D

    2013-09-01

    Search for specific and well-defined literature related to the subject of interest is the foremost step in research. Once we are familiar with the topic or subject, we can frame an appropriate research question, which is the basis for study objectives and hypotheses. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based search of medical literature include Google, Google Scholar, Scirus, Yahoo, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about types of literature, levels of evidence, and details of each search engine's features, user interface, ease of access, reputable content, and period of time covered allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. A systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research.

  10. Brave New Media World: Science Communication Voyages through the Global Seas

    NASA Astrophysics Data System (ADS)

    Clark, C. L.; Reisewitz, A.

    2010-12-01

    By leveraging online tools such as blogs, Twitter, Facebook, Google Earth, Flickr, web-based discussion boards, and a bi-monthly electronic magazine for the non-scientist, Scripps Institution of Oceanography is taking science communication beyond the static webpage to create interactive journeys that spark social dialogue and raise awareness of science-based research on global marine environmental issues. Several new initiatives are being chronicled through popular blogs and expedition web sites as researchers share interesting scientific facts and unusual findings in near real-time.

  11. BAGEL4: a user-friendly web server to thoroughly mine RiPPs and bacteriocins.

    PubMed

    van Heel, Auke J; de Jong, Anne; Song, Chunxu; Viel, Jakob H; Kok, Jan; Kuipers, Oscar P

    2018-05-21

    Interest in secondary metabolites such as RiPPs (ribosomally synthesized and posttranslationally modified peptides) is increasing worldwide. To facilitate research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent of ORF calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast BLAST against the prokaryote part of the UniRef90 database, and the improved web-BLAST feature, which dynamically loads structural data such as internal cross-linking from UniProt. Overall, BAGEL4 provides the user with more information through a user-friendly web interface, which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.

  12. Public health awareness of autoimmune diseases after the death of a celebrity.

    PubMed

    Bragazzi, Nicola Luigi; Watad, Abdulla; Brigo, Francesco; Adawi, Mohammad; Amital, Howard; Shoenfeld, Yehuda

    2017-08-01

    Autoimmune disorders impose a high burden of morbidity and mortality worldwide. Vasculitis is an autoimmune disorder that causes inflammation and destruction of blood vessels. Harold Allen Ramis, a famous American actor, director, writer, and comedian, died on February 24, 2014, of complications of an autoimmune inflammatory vasculitis. To investigate the relation between interest in and awareness of an autoimmune disease after a relevant event such as the death of a celebrity, we systematically mined Google Trends, Wikitrends, Google News, YouTube, and Twitter, in any language, from their inception until October 31, 2016. A total of 28,852 tweets, 4,133,615 accesses to Wikipedia, 6780 news items, and 11,400 YouTube videos were retrieved, processed, and analyzed. Harold Ramis's death from vasculitis resulted in an increase in vasculitis-related Google searches, Wikipedia page accesses, and tweet production, with a peak in February 2014. No trend could be detected in the uploading of YouTube videos. The usage of Big Data is promising in the fields of immunology and rheumatology. Clinical practitioners should be aware of this emerging phenomenon.
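
    The core of such a trend analysis is simple: find the month whose search interest spikes relative to the baseline of surrounding months. A minimal sketch, using made-up monthly interest values on Google Trends' 0-100 scale (the real study combined Google Trends, Wikipedia, news, YouTube, and Twitter data):

```python
from datetime import date

# Hypothetical monthly search-interest values (0-100 scale, as Google
# Trends reports them); illustrative only, not the study's data.
monthly_interest = {
    date(2013, 11, 1): 12,
    date(2013, 12, 1): 14,
    date(2014, 1, 1): 13,
    date(2014, 2, 1): 88,   # spike in the month of the celebrity's death
    date(2014, 3, 1): 35,
    date(2014, 4, 1): 18,
}

def find_peak(series):
    """Return the month with the highest interest and a simple spike
    ratio relative to the mean of all other months."""
    peak_month = max(series, key=series.get)
    others = [v for m, v in series.items() if m != peak_month]
    spike_ratio = series[peak_month] / (sum(others) / len(others))
    return peak_month, spike_ratio

peak, ratio = find_peak(monthly_interest)
print(peak, round(ratio, 1))
```

With these values the peak lands in February 2014, matching the pattern the study reports.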

  13. Asymmetric threat data mining and knowledge discovery

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Pagels, Michael A.; Palk, Justin

    2001-03-01

    Asymmetric threats differ from the conventional force-on-force military encounters that the Defense Department has historically been trained to engage. Terrorism by its nature is an operational activity that is neither easily detected nor countered, as its very existence depends on small covert attacks exploiting the element of surprise. But terrorism does have defined forms, motivations, tactics and organizational structure. Exploiting a terrorism taxonomy provides the opportunity to discover and assess knowledge of terrorist operations. This paper describes the Asymmetric Threat Terrorist Assessment, Countering, and Knowledge (ATTACK) system. ATTACK has been developed to (a) data mine open source intelligence (OSINT) information from web-based newspaper sources, video news web casts, and actual terrorist web sites, (b) evaluate this information against a terrorism taxonomy, (c) exploit country/region-specific social, economic, political, and religious knowledge, and (d) discover and predict potential terrorist activities and association links. Details of the asymmetric threat structure and the ATTACK system architecture are presented, with results of an actual terrorist data mining and knowledge discovery test case shown.
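
    The taxonomy-evaluation step (b) can be caricatured as weighted keyword scoring of a document against taxonomy dimensions. The dimensions, terms, and weights below are illustrative only, not drawn from the ATTACK system itself:

```python
# Toy taxonomy: each dimension maps keywords to weights. A real system
# would use richer linguistic features, but the scoring idea is the same.
TAXONOMY = {
    "tactics":      {"bombing": 3, "hijacking": 3, "kidnapping": 2},
    "motivation":   {"separatist": 2, "ideological": 1},
    "organization": {"cell": 1, "network": 1},
}

def score_document(text):
    """Score a document against each taxonomy dimension by summing the
    weights of matching keywords (case-insensitive, whole words)."""
    words = set(text.lower().split())
    return {
        dimension: sum(w for term, w in terms.items() if term in words)
        for dimension, terms in TAXONOMY.items()
    }

doc = "The report describes a separatist cell planning a bombing campaign"
print(score_document(doc))
```

Documents scoring high on several dimensions at once would be candidates for the association-link discovery in step (d).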

  14. Informing child welfare policy and practice: using knowledge discovery and data mining technology via a dynamic Web site.

    PubMed

    Duncan, Dean F; Kum, Hye-Chung; Weigensberg, Elizabeth Caplick; Flair, Kimberly A; Stewart, C Joy

    2008-11-01

    Proper management and implementation of an effective child welfare agency requires the constant use of information about the experiences and outcomes of children involved in the system, emphasizing the need for comprehensive, timely, and accurate data. In the past 20 years, there have been many advances in technology that can maximize the potential of administrative data to promote better evaluation and management in the field of child welfare. Specifically, this article discusses the use of knowledge discovery and data mining (KDD), which makes it possible to create longitudinal data files from administrative data sources, extract valuable knowledge, and make the information available via a user-friendly public Web site. This article demonstrates a successful project in North Carolina where knowledge discovery and data mining technology was used to develop a comprehensive set of child welfare outcomes available through a public Web site to facilitate information sharing of child welfare data to improve policy and practice.
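
    The KDD transformation described here, from event-level administrative records to per-child longitudinal histories, can be sketched as follows. The field names and records are hypothetical, not the North Carolina schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event-level administrative records (illustrative only).
events = [
    {"child_id": "A1", "event": "entry", "date": date(2007, 3, 1)},
    {"child_id": "A1", "event": "exit",  "date": date(2008, 1, 15)},
    {"child_id": "B2", "event": "entry", "date": date(2007, 6, 10)},
]

def build_histories(records):
    """Group events by child, ordered by date, yielding one
    longitudinal history per child."""
    histories = defaultdict(list)
    for rec in sorted(records, key=lambda r: r["date"]):
        histories[rec["child_id"]].append((rec["date"], rec["event"]))
    return dict(histories)

def days_in_care(history):
    """Outcome measure: length of a completed entry-to-exit spell,
    or None if the spell is still open."""
    dates = {event: day for day, event in history}
    if "entry" in dates and "exit" in dates:
        return (dates["exit"] - dates["entry"]).days
    return None

histories = build_histories(events)
print({cid: days_in_care(h) for cid, h in histories.items()})
```

Outcome measures computed this way are the kind of aggregate a public web site could then publish.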

  15. Chemotext: A Publicly Available Web Server for Mining Drug-Target-Disease Relationships in PubMed.

    PubMed

    Capuzzi, Stephen J; Thornton, Thomas E; Liu, Kammy; Baker, Nancy; Lam, Wai In; O'Banion, Colin P; Muratov, Eugene N; Pozefsky, Diane; Tropsha, Alexander

    2018-02-26

    Elucidation of the mechanistic relationships between drugs, their targets, and diseases is at the core of modern drug discovery research. Thousands of studies relevant to the drug-target-disease (DTD) triangle have been published and annotated in the Medline/PubMed database. Mining this database affords rapid identification of all published studies that confirm connections between vertices of this triangle or enable new inferences of such connections. To this end, we describe the development of Chemotext, a publicly available Web server that mines the entire compendium of published literature in PubMed annotated by Medical Subject Headings (MeSH) terms. The goal of Chemotext is to identify all known DTD relationships and infer missing links between vertices of the DTD triangle. As a proof-of-concept, we show that Chemotext could be instrumental in generating new drug repurposing hypotheses or annotating clinical outcomes pathways for known drugs. The Chemotext Web server is freely available at http://chemotext.mml.unc.edu.
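
    The missing-link inference over the DTD triangle resembles Swanson-style ABC literature-based discovery: if a drug co-occurs with a target, and that target co-occurs with a disease, a drug-disease link can be hypothesized even when no paper mentions both directly. A minimal sketch with invented co-occurrence pairs:

```python
# Invented co-occurrence sets (illustrative names, not real annotations).
drug_target = {("drugX", "kinase1"), ("drugY", "kinase2")}
target_disease = {("kinase1", "diseaseM"), ("kinase2", "diseaseN")}
known_drug_disease = {("drugX", "diseaseM")}  # already co-annotated

def infer_links(dt, td, known):
    """Infer drug-disease pairs connected through a shared target but
    not yet co-annotated in the literature."""
    inferred = set()
    for drug, target in dt:
        for target2, disease in td:
            if target == target2 and (drug, disease) not in known:
                inferred.add((drug, disease, target))
    return inferred

print(infer_links(drug_target, target_disease, known_drug_disease))
```

Each inferred triple is a repurposing hypothesis to be checked against the literature, which is the proof-of-concept use the abstract describes.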

  16. Medicine 2.0: social networking, collaboration, participation, apomediation, and openness.

    PubMed

    Eysenbach, Gunther

    2008-08-25

    In a very significant development for eHealth, broad adoption of Web 2.0 technologies and approaches coincides with the more recent emergence of Personal Health Application Platforms and Personally Controlled Health Records such as Google Health, Microsoft HealthVault, and Dossia. "Medicine 2.0" applications, services and tools are defined as Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies and/or semantic web and virtual reality approaches to enable and facilitate specifically 1) social networking, 2) participation, 3) apomediation, 4) openness and 5) collaboration, within and between these user groups. The Journal of Medical Internet Research (JMIR) publishes a Medicine 2.0 theme issue and sponsors a conference on "How Social Networking and Web 2.0 changes Health, Health Care, Medicine and Biomedical Research", to stimulate and encourage research in these five areas.

  17. Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness

    PubMed Central

    2008-01-01

    In a very significant development for eHealth, a broad adoption of Web 2.0 technologies and approaches coincides with the more recent emergence of Personal Health Application Platforms and Personally Controlled Health Records such as Google Health, Microsoft HealthVault, and Dossia. “Medicine 2.0” applications, services, and tools are defined as Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies and/or semantic web and virtual reality approaches to enable and facilitate specifically 1) social networking, 2) participation, 3) apomediation, 4) openness, and 5) collaboration, within and between these user groups. The Journal of Medical Internet Research (JMIR) publishes a Medicine 2.0 theme issue and sponsors a conference on “How Social Networking and Web 2.0 changes Health, Health Care, Medicine, and Biomedical Research”, to stimulate and encourage research in these five areas. PMID:18725354

  18. Using food-web theory to conserve ecosystems

    PubMed Central

    McDonald-Madden, E.; Sabbadin, R.; Game, E. T.; Baxter, P. W. J.; Chadès, I.; Possingham, H. P.

    2016-01-01

    Food-web theory can be a powerful guide to the management of complex ecosystems. However, we show that indices of species importance common in food-web and network theory can be a poor guide to ecosystem management, resulting in significantly more extinctions than necessary. We use Bayesian Networks and Constrained Combinatorial Optimization to find optimal management strategies for a wide range of real and hypothetical food webs. This Artificial Intelligence approach provides the ability to test the performance of any index for prioritizing species management in a network. While no single network theory index provides an appropriate guide to management for all food webs, a modified version of the Google PageRank algorithm reliably minimizes the chance and severity of negative outcomes. Our analysis shows that by prioritizing ecosystem management based on the network-wide impact of species protection rather than species loss, we can substantially improve conservation outcomes. PMID:26776253
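
    For readers unfamiliar with it, the (unmodified) Google PageRank algorithm scores each node by the rank flowing in from the nodes that link to it. A minimal power-iteration sketch on a toy three-species chain; note the paper uses a modified PageRank, which this plain version only approximates:

```python
def pagerank(graph, damping=0.85, iterations=100):
    """graph: dict mapping each node to the list of nodes it links to.
    Returns a rank per node; ranks sum to 1."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, targets in graph.items():
            if targets:
                share = damping * rank[n] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank evenly
                for t in nodes:
                    new[t] += damping * rank[n] / len(nodes)
        rank = new
    return rank

# Toy "who supports whom" chain: plant -> herbivore -> predator.
web = {"plant": ["herbivore"], "herbivore": ["predator"], "predator": []}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))
```

In this chain the predator accumulates the most rank, illustrating how a network-wide score differs from simply counting a species' direct links.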

  19. Mind your crossings: Mining GIS imagery for crosswalk localization.

    PubMed

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M; Mascetti, Sergio

    2017-04-01

    For blind travelers, finding crosswalks and remaining within their borders while traversing them is a crucial part of any trip involving street crossings. While standard Orientation & Mobility (O&M) techniques allow blind travelers to safely negotiate street crossings, additional information about crosswalks and other important features at intersections would be helpful in many situations, resulting in greater safety and/or comfort during independent travel. For instance, in planning a trip a blind pedestrian may wish to be informed of the presence of all marked crossings near a desired route. We have conducted a survey of several O&M experts from the United States and Italy to determine the role that crosswalks play in travel by blind pedestrians. The results show stark differences between survey respondents from the U.S. compared with Italy: the former group emphasized the importance of following standard O&M techniques at all legal crossings (marked or unmarked), while the latter group strongly recommended crossing at marked crossings whenever possible. These contrasting opinions reflect differences in the traffic regulations of the two countries and highlight the diversity of needs that travelers in different regions may have. To address the challenges faced by blind pedestrians in negotiating street crossings, we devised a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm can be improved by a final crowdsourcing validation. 
To this end, we developed a Pedestrian Crossing Human Validation (PCHV) web service, which supports crowdsourcing to rule out false positives and identify false negatives.

  20. A Novel Framework Based on the Improved Job Demands-Resources (JD-R) Model to Understand the Impact of Job Characteristics on Job Burnout from the View of Emotion Regulation Theory.

    PubMed

    Yang, Naiding; Lu, Jintao; Ye, Jinfu

    2018-03-01

    It has been suggested that individual job characteristics have a significant impact on job burnout, and that this process is subject to the regulation of demographic variables. However, the influence path of job characteristics on job burnout is still a "black box". On the basis of a systematic literature review employing PubMed, ScienceDirect, Web of Science, Google Scholar, CNKI and Scopus with the keywords "Job burnout", "Emotion regulation", "Personality traits", and "Psychological stress", this study puts forward an improved job demands-resources (JD-R) model oriented to mine rescue workers. A novel framework for analyzing the impact of job characteristics on job burnout from the view of emotion regulation theory is then proposed, combining personality trait theory. This study argues that job burnout is influenced by job demands through expressive suppression and by job resources through cognitive reappraisal, respectively. Furthermore, job demands and job resources have opposite effects on job burnout, through the "loss path" caused by job pressure and the "gain path" arising from job motivation, respectively. Extroverted personality traits can affect the way an individual processes information about the work environment and the emotion regulation strategies the individual then adopts, thereby indirectly affecting the influence path of mine rescue workers' job characteristics on job burnout. This study can help managers to realize the importance of employees' psychological stress and job burnout problems. The conclusions provide significant decision-making references for managers in intervening in job burnout and managing the emotional stress and mental health of employees.

  1. Mind your crossings: Mining GIS imagery for crosswalk localization

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2017-01-01

    For blind travelers, finding crosswalks and remaining within their borders while traversing them is a crucial part of any trip involving street crossings. While standard Orientation & Mobility (O&M) techniques allow blind travelers to safely negotiate street crossings, additional information about crosswalks and other important features at intersections would be helpful in many situations, resulting in greater safety and/or comfort during independent travel. For instance, in planning a trip a blind pedestrian may wish to be informed of the presence of all marked crossings near a desired route. We have conducted a survey of several O&M experts from the United States and Italy to determine the role that crosswalks play in travel by blind pedestrians. The results show stark differences between survey respondents from the U.S. compared with Italy: the former group emphasized the importance of following standard O&M techniques at all legal crossings (marked or unmarked), while the latter group strongly recommended crossing at marked crossings whenever possible. These contrasting opinions reflect differences in the traffic regulations of the two countries and highlight the diversity of needs that travelers in different regions may have. To address the challenges faced by blind pedestrians in negotiating street crossings, we devised a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm can be improved by a final crowdsourcing validation. 
To this end, we developed a Pedestrian Crossing Human Validation (PCHV) web service, which supports crowdsourcing to rule out false positives and identify false negatives. PMID:28757907

  2. Medical student appraisal: searching on smartphones.

    PubMed

    Khalifian, S; Markman, T; Sampognaro, P; Mitchell, S; Weeks, S; Dattilo, J

    2013-01-01

    The rapidly growing industry for mobile medical applications provides numerous smartphone resources designed for healthcare professionals. However, not all applications are equally useful in addressing the questions of early medical trainees. Three popular, free, mobile healthcare applications were evaluated along with a Google™ web search on both Apple™ and Android™ devices. Six medical students at a large academic hospital evaluated each application for a one-week period while on various clinical rotations. Google™ was the most frequently used search method and presented multimedia resources but was inefficient for obtaining clinical management information. Epocrates™ Pill ID feature was praised for its clinical utility. Medscape™ had the highest satisfaction of search and excelled through interactive educational features. Micromedex™ offered both FDA and off-label dosing for drugs. Google™ was the preferred search method for questions related to basic disease processes and multimedia resources, but was inadequate for clinical management. Caution should also be exercised when using Google™ in front of patients. Medscape™ was the most appealing application due to a broad scope of content and educational features relevant to medical trainees. Students should also be cognizant of how mobile technology may be perceived by their evaluators to avoid false impressions.

  3. Data mining for personal navigation

    NASA Astrophysics Data System (ADS)

    Hariharan, Gurushyam; Franti, Pasi; Mehta, Sandeep

    2002-03-01

    Relevance is the key in defining what data is to be extracted from the Internet. Traditionally, relevance has been defined mainly by keywords and user profiles. In this paper we discuss a fairly untouched dimension of relevance: location. Any navigational information sought by a user at large on earth is evidently governed by his location. We believe that task-oriented data mining of the web amalgamated with location information is the key to providing relevant information for personal navigation. We explore the existential hurdles and propose novel approaches to tackle them. We also present naive, task-oriented data mining approaches and their implementations in Java, to extract location-based information. Ad-hoc pairing of data with coordinates (x, y) is very rare on the web. But if the same coordinates are converted to a logical address (state/city/street), a wide spectrum of location-based information opens up. Hence, given the coordinates (x, y) on the earth, the scheme points to the logical address of the user. Location-based information could either be picked up from fixed and known service providers (e.g. Yellow Pages) or from any arbitrary website on the Web. Once the web servers providing information relevant to the logical address are located, task-oriented data mining is performed over these sites keeping in mind what information is interesting to the contemporary user. After all this, a simple data stream is provided to the user with information scaled to his convenience. The scheme has been implemented for cities of Finland.
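
    The coordinate-to-logical-address step amounts to reverse geocoding. A toy sketch using a nearest-city lookup over a hand-picked gazetteer (the city coordinates are approximate and illustrative; a real system would query a proper gazetteer service):

```python
import math

# Tiny hand-picked gazetteer of Finnish cities (approximate lat/lon).
CITIES = {
    "Helsinki": (60.17, 24.94),
    "Joensuu":  (62.60, 29.76),
    "Tampere":  (61.50, 23.79),
}

def nearest_city(lat, lon):
    """Map raw coordinates to the nearest known city, a crude stand-in
    for the (x, y) -> logical-address step described above."""
    def dist(city):
        clat, clon = CITIES[city]
        # Euclidean distance in degrees is adequate at city scale.
        return math.hypot(lat - clat, lon - clon)
    return min(CITIES, key=dist)

print(nearest_city(62.2, 29.5))
```

Once the coordinates resolve to a city name, that name can seed keyword queries against location-aware services such as Yellow Pages.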

  4. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigapixel panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering frame rates; its power and flexibility enables it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.
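
    A common way to obtain the prefix-matching property described here is a quadtree-style key: objects in the same region of the sky share a key prefix, so a panned or zoomed viewport query reduces to a prefix lookup. This is a simplified stand-in for the paper's index, not its actual implementation:

```python
def quadkey(x, y, depth=8):
    """x, y in [0, 1); returns a string key where each character picks
    one quadrant (0-3) at successively finer subdivision levels."""
    key = []
    for _ in range(depth):
        x, y = x * 2, y * 2
        qx, qy = int(x), int(y)
        key.append(str(qx + 2 * qy))
        x, y = x - qx, y - qy  # keep only the fractional part
    return "".join(key)

def same_region(a, b, level):
    """Two objects lie in the same cell at `level` iff their quadkeys
    share a prefix of that length."""
    return quadkey(*a)[:level] == quadkey(*b)[:level]

# Two nearby objects share a prefix; a distant one diverges immediately.
print(quadkey(0.12, 0.34))
print(same_region((0.12, 0.34), (0.13, 0.35), 2))
print(same_region((0.12, 0.34), (0.90, 0.90), 1))
```

Sorting objects by such keys clusters spatial neighbors together, which is what makes prefix-based fetching of a viewport efficient.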

  5. Evaluation of the Content and Accessibility of Web Sites for Accredited Orthopaedic Trauma Surgery Fellowships.

    PubMed

    Shaath, M Kareem; Yeranosian, Michael G; Ippolito, Joseph A; Adams, Mark R; Sirkin, Michael S; Reilly, Mark C

    2018-05-02

    Orthopaedic trauma fellowship applicants use online-based resources when researching information on potential U.S. fellowship programs. The 2 primary sources for identifying programs are the Orthopaedic Trauma Association (OTA) database and the San Francisco Match (SF Match) database. Previous studies in other orthopaedic subspecialty areas have demonstrated considerable discrepancies among fellowship programs. The purpose of this study was to analyze content and availability of information on orthopaedic trauma surgery fellowship web sites. The online databases of the OTA and SF Match were reviewed to determine the availability of embedded program links or external links for the included programs. Thereafter, a Google search was performed for each program individually by typing the program's name, followed by the term "orthopaedic trauma fellowship." All identified fellowship web sites were analyzed for accessibility and content. Web sites were evaluated for comprehensiveness in mentioning key components of the orthopaedic trauma surgery curriculum. By consensus, we refined the final list of variables utilizing the methodology of previous studies on the topic. We identified 54 OTA-accredited fellowship programs, offering 87 positions. The majority (94%) of programs had web sites accessible through a Google search. Of the 51 web sites found, all (100%) described their program. Most commonly, hospital affiliation (88%), operative experiences (76%), and rotation overview (65%) were listed, and, least commonly, interview dates (6%), selection criteria (16%), on-call requirements (20%), and fellow evaluation criteria (20%) were listed. Programs with ≥2 fellows provided more information with regard to education content (p = 0.0001) and recruitment content (p = 0.013). Programs with Accreditation Council for Graduate Medical Education (ACGME) accreditation status also provided greater information with regard to education content (odds ratio, 4.0; p = 0.0001). 
Otherwise, no differences were seen by region, residency affiliation, medical school affiliation, or hospital affiliation. The SF Match and OTA databases provide few direct links to fellowship web sites. Individual program web sites do not effectively and completely convey information about the programs. The Internet is an underused resource for fellow recruitment. The lack of information on these sites allows for future opportunity to optimize this resource.

  6. Accessibility and quality of online information for pediatric orthopaedic surgery fellowships.

    PubMed

    Davidson, Austin R; Murphy, Robert F; Spence, David D; Kelly, Derek M; Warner, William C; Sawyer, Jeffrey R

    2014-12-01

    Pediatric orthopaedic fellowship applicants commonly use online-based resources for information on potential programs. Two primary sources are the San Francisco Match (SF Match) database and the Pediatric Orthopaedic Society of North America (POSNA) database. We sought to determine the accessibility and quality of information that could be obtained by using these 2 sources. The online databases of the SF Match and POSNA were reviewed to determine the availability of embedded program links or external links for the included programs. If not available in the SF Match or POSNA data, Web sites for listed programs were located with a Google search. All identified Web sites were analyzed for accessibility, content volume, and content quality. At the time of online review, 50 programs, offering 68 positions, were listed in the SF Match database. Although 46 programs had links included with their information, 36 (72%) of them simply listed http://www.sfmatch.org as their unique Web site. Ten programs (20%) had external links listed, but only 2 (4%) linked directly to the fellowship web page. The POSNA database does not list any links to the 47 programs it lists, which offer 70 positions. On the basis of a Google search of the 50 programs listed in the SF Match database, web pages were found for 35. Of programs with independent web pages, all had a description of the program and 26 (74%) described their application process. Twenty-nine (83%) listed research requirements, 22 (63%) described the rotation schedule, and 12 (34%) discussed the on-call expectations. A contact telephone number and/or email address was provided by 97% of programs. Twenty (57%) listed both the coordinator and fellowship director, 9 (26%) listed the coordinator only, 5 (14%) listed the fellowship director only, and 1 (3%) had no contact information given. 
The SF Match and POSNA databases provide few direct links to fellowship Web sites, and individual program Web sites either do not exist or do not effectively convey information about the programs. Improved accessibility and accurate information online would allow potential applicants to obtain information about pediatric fellowships in a more efficient manner.

  7. 78 FR 60876 - Advisory Committee to the Director (ACD), Centers for Disease Control and Prevention (CDC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-02

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Advisory... by teleconference. Please dial (877) 930-8819 and enter code 1579739. Web links: Windows Connection-2: http://wm.onlinevideoservice.com/CDC2 Flash Connection-4 (For Safari and Google Chrome Users): http...

  9. Paying Your Way to the Top: Search Engine Advertising.

    ERIC Educational Resources Information Center

    Scott, David M.

    2003-01-01

    Explains how organizations can buy listings on major Web search engines, making it the fastest growing form of advertising. Highlights include two network models, Google and Overture; bidding on phrases to buy as links to use with ads; ad ranking; benefits for small businesses; and paid listings versus regular search results. (LRW)

  10. Prevalence of purulent vaginal discharge in dairy herds depends on timing but not method of detection

    USDA-ARS?s Scientific Manuscript database

    A review of existing literature was conducted to determine the prevalence of purulent vaginal discharge (PVD) in dairy herds around the world and detection methodologies that influence prevalence estimates. Four databases (PubMed, Google Scholar, Web of Science, and Scopus) were queried with the sea...

  11. 76 FR 74776 - Forum-Trends in Extreme Winds, Waves, and Extratropical Storms Along the Coasts

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... Winds, Waves, and Extratropical Storms Along the Coasts AGENCY: National Environmental Satellite, Data... information, please check the forum Web site at https://sites.google.com/a/noaa.gov/extreme-winds-waves.../noaa.gov/extreme-winds-waves-extratropical-storms/home . Topics To Be Addressed This forum will address...

  12. Make Your Own Mashup Maps

    ERIC Educational Resources Information Center

    Lucking, Robert A.; Christmann, Edwin P.; Whiting, Mervyn J.

    2008-01-01

    "Mashup" is a new technology term used to describe a web application that combines data or technology from several different sources. You can apply this concept in your classroom by having students create their own mashup maps. Google Maps provides you with the simple tools, map databases, and online help you'll need to quickly master this…

  13. Concordancers and Dictionaries as Problem-Solving Tools for ESL Academic Writing

    ERIC Educational Resources Information Center

    Yoon, Choongil

    2016-01-01

    The present study investigated how 6 Korean ESL graduate students in Canada used a suite of freely available reference resources, consisting of Web-based corpus tools, Google search engines, and dictionaries, for solving linguistic problems while completing an authentic academic writing assignment in English. Using a mixed methods design, the…

  14. Voice-Recognition Augmented Performance Tools in Performance Poetry Pedagogy

    ERIC Educational Resources Information Center

    Devanny, David; McGowan, Jack

    2016-01-01

    This provocation shares findings from the use of bespoke voice-recognition performance software in a number of seminars (which took place in the 2014-2016 academic years at Glasgow School of Art, University of Warwick, and Falmouth University). The software, made available through this publication, is a web-app which uses Google Chrome's native…

  15. Factors Influencing Consent to Having Videotaped Mental Health Sessions

    ERIC Educational Resources Information Center

    Ko, Kenton; Goebert, Deborah

    2011-01-01

    Objective: The authors critically reviewed the literature regarding factors influencing consent to having videotaped mental health sessions. Methods: The authors searched the literature in PubMed, PsycINFO, Google Scholar, and Web of Science from the mid-1950s through February 2009. Results: The authors identified 27 studies, of which 19 (73%)…

  16. Teaching in Educational Leadership Using Web 2.0 Applications: Perspectives on What Works

    ERIC Educational Resources Information Center

    Shinsky, E. John; Stevens, Hans A.

    2011-01-01

    To prepare 21st Century school leaders, educational leadership professors need to learn and teach the utilization of increasingly sophisticated technologies in their courses. The co-authors, a professor and an educational specialist degree candidate, describe how the use of advanced technologies--such as Wikis, Google Docs, Wimba Classroom, and…

  17. Ready for Their Close-Ups

    ERIC Educational Resources Information Center

    Foster, Andrea L.

    2006-01-01

    American college students are increasingly posting videos of their lives online, due to Web sites like Vimeo and Google Video that host video material free and the ubiquity of camera phones and other devices that can take video-clips. However, the growing popularity of online socializing has many safety experts worried that students could be…

  18. Teaching Undergraduate Software Engineering Using Open Source Development Tools

    DTIC Science & Technology

    2012-01-01

    ware. Some example appliances are: a LAMP stack, Redmine, MySQL database, Moodle, Tomcat on Apache, and Bugzilla. Some of the important features...Ada, C, C++, PHP, Python, etc., and also supports a wide range of SDKs such as Google’s Android SDK and the Google Web Toolkit SDK. Additionally

  19. Assessing Journal Quality in Mathematics Education

    ERIC Educational Resources Information Center

    Nivens, Ryan Andrew; Otten, Samuel

    2017-01-01

    In this Research Commentary, we describe 3 journal metrics--the Web of Science's Impact Factor, Scopus's SCImago Journal Rank, and Google Scholar Metrics' h5-index--and compile the rankings (if they exist) for 69 mathematics education journals. We then discuss 2 paths that the mathematics education community should consider with regard to these…

  20. Web Searching: A Process-Oriented Experimental Study of Three Interactive Search Paradigms.

    ERIC Educational Resources Information Center

    Dennis, Simon; Bruza, Peter; McArthur, Robert

    2002-01-01

    Compares search effectiveness when using query-based Internet search via the Google search engine, directory-based search via Yahoo, and phrase-based query reformulation-assisted search via the Hyperindex browser by means of a controlled, user-based experimental study of undergraduates at the University of Queensland. Discusses cognitive load,…

  1. Where Do I Find It?--An Internet Glossary.

    ERIC Educational Resources Information Center

    Del Monte, Erin; Manso, Angela

    2001-01-01

    Lists 13 different Internet search engines that might be of interest to educators, including: AOL Search, Alta Vista, Google, Lycos, Northern Light, and Yahoo. Gives a brief description of each search engine's capabilities, strengths, and weaknesses and includes Web addresses of U.S. government offices, including the U.S. Department of Education.…

  2. School Librarians: Vital Educational Leaders

    ERIC Educational Resources Information Center

    Martineau, Pamela

    2010-01-01

    In the new millennium, school librarians are more likely to be found sitting behind a computer as they update the library web page or create a wiki on genetically modified organisms. Or they might be seen in the library computer lab as they lead students through tutorials on annotated bibliographies or Google docs. If adequately supported, school…

  3. The Physlet Approach to Simulation Design

    ERIC Educational Resources Information Center

    Christian, Wolfgang; Belloni, Mario; Esquembre, Francisco; Mason, Bruce A.; Barbato, Lyle; Riggsbee, Matt

    2015-01-01

    Over the past two years, the AAPT/ComPADRE staff and the Open Source Physics group have published the second edition of "Physlet Physics" and "Physlet Quantum Physics," delivered as interactive web pages on AAPT/ComPADRE and as free eBooks available through iTunes and Google Play. These two websites, and their associated books,…

  4. Modern Amphibious Operations: Why the United States Must Maintain a Joint Amphibious Forcible Entry Capability

    DTIC Science & Technology

    2012-03-23

    be reminded that the aforementioned movies depicted events that happened nearly 70 years ago. These films neither represent modern amphibious...contemporary sources. Few are more contemporary than those in this genre. Nothing can substitute a simple Google web search to get ideas about where

  5. 78 FR 62820 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-22

    ... on Web sites operated by Google, Interactive Data, and Dow Jones, among others. The text of the... and various forms of alternative trading systems (``ATSs''), including dark pools and electronic..., BATS Trading and Direct Edge. A proliferation of dark pools and other ATSs operate profitably with...

  6. Predicting Ambulance Time of Arrival to the Emergency Department Using Global Positioning System and Google Maps

    PubMed Central

    Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig

    2014-01-01

    Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. 
A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
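    The correction step described above fits a linear regression that maps raw street-network estimates, together with covariates such as lights-and-sirens use, onto actual transport times. The idea can be sketched with ordinary least squares solved via the normal equations; the data, variable names, and coefficients below are synthetic illustrations, not the study's actual model:

    ```python
    def fit_ols(X, y):
        """Ordinary least squares via the normal equations (X^T X) b = X^T y,
        solved with Gaussian elimination. Each row of X includes a leading 1
        for the intercept term."""
        k = len(X[0])
        A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
        b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
        for col in range(k):  # forward elimination with partial pivoting
            piv = max(range(col, k), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, k):
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
        coef = [0.0] * k
        for r in reversed(range(k)):
            coef[r] = (b[r] - sum(A[r][j] * coef[j] for j in range(r + 1, k))) / A[r][r]
        return coef

    # Synthetic data: actual time = 2 + 1.3 * street_estimate - 1.5 * lights_sirens.
    # (street_estimate in minutes, lights_sirens as a 0/1 flag -- made up for illustration)
    data = [(5, 0), (10, 0), (8, 1), (15, 1), (12, 0), (20, 1)]
    X = [[1.0, s, ls] for s, ls in data]
    y = [2 + 1.3 * s - 1.5 * ls for s, ls in data]
    intercept, slope, sirens_effect = fit_ols(X, y)
    ```

    On noiseless synthetic data the fit recovers the generating coefficients exactly; the study's model additionally included daylight, rush-hour, and weather covariates and was fitted on a 50% derivation sample.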

  7. A Generic Framework for Extraction of Knowledge from Social Web Sources (Social Networking Websites) for an Online Recommendation System

    ERIC Educational Resources Information Center

    Sathick, Javubar; Venkat, Jaya

    2015-01-01

    Mining social web data is a challenging task and finding user interest for personalized and non-personalized recommendation systems is another important task. Knowledge sharing among web users has become crucial in determining usage of web data and personalizing content in various social websites as per the user's wish. This paper aims to design a…

  8. Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.

    ERIC Educational Resources Information Center

    Carlson, Randal D.; Repman, Judi

    2002-01-01

    Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)

  9. Opinion Integration and Summarization

    ERIC Educational Resources Information Center

    Lu, Yue

    2011-01-01

    As Web 2.0 applications become increasingly popular, more and more people express their opinions on the Web in various ways in real time. Such wide coverage of topics and abundance of users make the Web an extremely valuable source for mining people's opinions about all kinds of topics. However, since the opinions are usually expressed as…

  10. Mining Formative Evaluation Rules Using Web-Based Learning Portfolios for Web-Based Learning Systems

    ERIC Educational Resources Information Center

    Chen, Chih-Ming; Hong, Chin-Ming; Chen, Shyuan-Yi; Liu, Chao-Yu

    2006-01-01

    Learning performance assessment aims to evaluate what knowledge learners have acquired from teaching activities. Objective technical measures of learning performance are difficult to develop, but are extremely important for both teachers and learners. Learning performance assessment using learning portfolios or web server log data is becoming an…

  11. 78 FR 77706 - Notice of Intent To Prepare an Environmental Impact Statement for the Proposed Gemfield Mine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... gold mine and associated processing and ancillary facilities. The project would be located on public... media, newspapers and the BLM Web site at: http://www.blm.gov/nv/st/en/fo/battle_mountain_field.html... to construct, operate, reclaim, and close an open pit, heap leach, gold mining operation known as the...

  12. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization

    PubMed Central

    Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515

  13. Survey of publications and the H-index of Academic Emergency Medicine Professors.

    PubMed

    Babineau, Matthew; Fischer, Christopher; Volz, Kathryn; Sanchez, Leon D

    2014-05-01

    The number of publications and how often these have been cited play a role in academic promotion. Bibliometrics that attempt to quantify the relative impact of scholarly work have been proposed. The h-index is defined as the number (h) of publications for an individual that have been cited at least h times. We calculated the h-index and number of publications for academic emergency physicians at the rank of professor. We accessed the Society for Academic Emergency Medicine professor list in January of 2012. We calculated the number of publications through Web of Science and PubMed and the h-index using Google Scholar and Web of Science. We identified 299 professors of emergency medicine. The number of professors per institution ranged from 1 to 13. The median h-index in Web of Science was 11 (interquartile range [IQR] 6-17; range 0-51); in Google Scholar, the median h-index was 14 (IQR 9-22; range 0-63). The median number of publications reported in Web of Science was 36 (IQR 18-73; range 0-359). The total number of publications had a high correlation with the h-index (r = 0.884). The h-index is only a partial measure of academic productivity. As a measure of the impact of an individual's publications, it can provide a simple way to compare and measure academic progress and a metric that can be used when evaluating a person for academic promotion. Calculation of the h-index can provide a way to track academic progress and impact. [West J Emerg Med. 2014;15(3):290-292.]
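    The h-index definition above (the largest h such that h publications each have at least h citations) translates directly into code. This is a generic illustration of the metric, not the authors' calculation pipeline; the citation counts are invented:

    ```python
    def h_index(citations):
        """Largest h such that the author has h papers with >= h citations each."""
        cites = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(cites, start=1):
            if c >= rank:  # the rank-th most cited paper still has >= rank citations
                h = rank
            else:
                break
        return h

    # Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4 citations,
    # but not five papers with >= 5, so h = 4.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4
    ```

    Databases such as Web of Science and Google Scholar apply this same definition to different underlying citation counts, which is why the two medians reported above differ.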

  14. Breast reconstruction post mastectomy- Let's Google it. Accessibility, readability and quality of online information.

    PubMed

    Lynch, Noel P; Lang, Bronagh; Angelov, Sophia; McGarrigle, Sarah A; Boyle, Terence J; Al-Azawi, Dhafir; Connolly, Elizabeth M

    2017-04-01

    This study evaluated the readability, accessibility and quality of information pertaining to breast reconstruction post mastectomy on the Internet in the English language. Using the Google© search engine, the keywords "Breast reconstruction post mastectomy" were searched for. We analyzed the top 75 sites. The Flesch Reading Ease Score and Gunning Fog Index were calculated to assess readability. Web site quality was assessed objectively using the University of Michigan Consumer Health Web site Evaluation Checklist. Accessibility was determined using an automated accessibility tool. In addition, the country of origin, the type of organisation producing the site, and the presence of Health on the Net (HoN) certification were recorded. The Web sites were difficult to read and comprehend. The mean Flesch Reading Ease score was 55.5. The mean Gunning Fog Index score was 8.6. The mean Michigan score was 34.8, indicating weak website quality. Websites with HoN certification ranked higher in the search results (p = 0.007). Website quality was influenced by organisation type (p < 0.0001), with academic/healthcare, not-for-profit and government sites having higher Michigan scores. Only 20% of sites met the minimum accessibility criteria. Internet information on breast reconstruction post mastectomy and related procedures is poorly written, and we suggest that web pages providing this information must be made more readable and accessible. We suggest that health professionals should recommend Web sites that are easy to read and contain high-quality surgical information. Medical information on the Internet should be readable, accessible, reliable and of a consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
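    The two readability measures used above follow standard published formulas: Flesch Reading Ease = 206.835 − 1.015 (words/sentence) − 84.6 (syllables/word), and Gunning Fog = 0.4 × [(words/sentence) + 100 × (complex words/words)], where a complex word has three or more syllables. A minimal sketch with a naive vowel-group syllable counter (the study's exact tooling is not specified, and real scorers use more careful syllabification):

    ```python
    import re

    def count_syllables(word):
        """Naive syllable estimate: count runs of vowels. A rough heuristic only."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """206.835 - 1.015*(words/sentence) - 84.6*(syllables/word); higher = easier."""
        words = re.findall(r"[A-Za-z']+", text)
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

    def gunning_fog(text):
        """0.4 * [(words/sentence) + 100*(complex words/words)]; approximates the
        years of schooling needed to follow the text on first reading."""
        words = re.findall(r"[A-Za-z']+", text)
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        complex_words = sum(1 for w in words if count_syllables(w) >= 3)
        return 0.4 * (len(words) / sentences + 100 * complex_words / len(words))
    ```

    On these scales, the study's mean Flesch score of 55.5 falls in the "fairly difficult" band, and a Fog index of 8.6 corresponds to roughly a ninth-grade reading level.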

  15. QuadBase2: web server for multiplexed guanine quadruplex mining and visualization

    PubMed Central

    Dhapola, Parashar; Chowdhury, Shantanu

    2016-01-01

    DNA guanine quadruplexes or G4s are non-canonical DNA secondary structures which affect genomic processes like replication, transcription and recombination. G4s are computationally identified by specific nucleotide motifs, also called putative G4 (PG4) motifs. Despite the general relevance of these structures, there has previously been no tool that allows batch queries and genome-wide analysis of these motifs in a user-friendly interface. QuadBase2 (quadbase.igib.res.in) presents a completely reinvented web server version of the previously published QuadBase database. QuadBase2 enables users to mine PG4 motifs in up to 178 eukaryotes through the EuQuad module. This module interfaces with the Ensembl Compara database to allow users to mine PG4 motifs in the orthologues of genes of interest across eukaryotes. PG4 motifs can be mined across genes and their promoter sequences in 1719 prokaryotes through the ProQuad module. This module includes a feature that allows genome-wide mining of PG4 motifs and their visualization as circular histograms. TetraplexFinder, the module for mining PG4 motifs in user-provided sequences, is now capable of handling up to 20 MB of data. QuadBase2 is a comprehensive PG4 motif mining tool that further expands the configurations and algorithms for mining PG4 motifs in a user-friendly way. PMID:27185890
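    PG4 motifs are conventionally matched with the folding-rule pattern of four runs of three or more guanines separated by loops of one to seven nucleotides. The regex sketch below illustrates that canonical pattern in general terms; it is not QuadBase2's exact motif configuration, which is user-adjustable:

    ```python
    import re

    # Canonical putative G-quadruplex (PG4) pattern: four G-runs of >= 3 guanines
    # separated by loops of 1-7 nucleotides. An illustration of the widely used
    # folding rule, not necessarily the configuration QuadBase2 applies by default.
    PG4 = re.compile(r"G{3,}[ATGC]{1,7}G{3,}[ATGC]{1,7}G{3,}[ATGC]{1,7}G{3,}")

    def find_pg4(seq):
        """Return (start, end, motif) for each non-overlapping PG4 match."""
        return [(m.start(), m.end(), m.group()) for m in PG4.finditer(seq.upper())]

    hits = find_pg4("aaGGGttGGGttGGGttGGGcc")
    print(hits)  # one PG4 motif found, spanning positions 2-20
    ```

    Genome-wide tools extend this idea with configurable G-run lengths and loop sizes, scanning both strands (the complementary C-rich pattern on the other strand).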

  16. Using ant-behavior-based simulation model AntWeb to improve website organization

    NASA Astrophysics Data System (ADS)

    Li, Weigang; Pinheiro Dib, Marcos V.; Teles, Wesley M.; Morais de Andrade, Vlaudemir; Alves de Melo, Alba C. M.; Cariolano, Judas T.

    2002-03-01

    Some web usage mining algorithms have shown the potential to find differences between a website's organization and the organization its visitors expect. However, there is still no efficient method or criterion for a web administrator to measure the performance of a modification. In this paper, we developed AntWeb, a model inspired by ants' behaviour, to simulate the sequence of visits to a website in order to measure the efficiency of the web structure. We implemented a web usage mining algorithm using backtracking on the intranet website of Politec Informatic Ltd., Brazil. We defined throughput (the number of visitors who reach their target pages per time unit, relative to the total number of visitors) as an index to measure the website's performance. We also used the links in a web page to represent the effect of visitors' pheromone trails. For every modification in the website organization, for example putting a link from the expected location to the target object, the simulation reported the value of throughput as a quick answer about the modification. The experiment showed the stability of our simulation model and a positive effect of the modification on the intranet website of Politec.
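    The throughput index defined above can be illustrated with a much-simplified simulation: random-walk visitors follow links with a limited click budget, and throughput is the fraction that reach their target page. This is a crude stand-in for AntWeb's pheromone-guided walkers, with an invented toy site structure:

    ```python
    import random

    def simulate_throughput(site, start, target, visitors=1000, max_clicks=10, seed=1):
        """Fraction of random-walk visitors that reach `target` from `start`
        within `max_clicks` link follow-ups. `site` maps page -> outgoing links."""
        rng = random.Random(seed)
        reached = 0
        for _ in range(visitors):
            page = start
            for _ in range(max_clicks):
                if page == target:
                    reached += 1
                    break
                outs = site.get(page, [])
                if not outs:  # dead end: visitor gives up
                    break
                page = rng.choice(outs)
        return reached / visitors

    before = simulate_throughput(
        {"home": ["news", "products"], "products": ["item"], "news": []},
        "home", "item")
    after = simulate_throughput(  # add a direct link from the expected location
        {"home": ["news", "products", "item"], "products": ["item"], "news": []},
        "home", "item")
    ```

    Comparing `before` and `after` mirrors the paper's use of throughput as a quick answer about a structural modification: adding the direct link raises the fraction of visitors who find the target.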

  17. An initial log analysis of usage patterns on a research networking system.

    PubMed

    Boland, Mary Regina; Trembowelski, Sylvia; Bakken, Suzanne; Weng, Chunhua

    2012-08-01

    Usage data for research networking systems (RNSs) are valuable but generally unavailable for understanding scientific professionals' information needs and online collaborator-seeking behaviors. This study contributes a method for evaluating RNSs and initial usage knowledge of one RNS obtained using this method. We designed a log for an institutional RNS, defined categories of users and tasks, and analyzed correlations between usage patterns and user and query types. Our results show that scientific professionals spend more time performing deep Web searching on RNSs than generic Google users. We also show that retrieving scientist profiles is faster on an RNS than on Google (3.5 seconds vs. 34.2 seconds), whereas organization-specific browsing on an RNS takes longer than on Google (117.0 seconds vs. 34.2 seconds). Usage patterns vary by user role; e.g., faculty performed more informational queries than administrators, which implies role-specific user support is needed for RNSs. © 2012 Wiley Periodicals, Inc.

  18. An Initial Log Analysis of Usage Patterns on a Research Networking System

    PubMed Central

    Boland, Mary Regina; Trembowelski, Sylvia; Bakken, Suzanne; Weng, Chunhua

    2012-01-01

    Usage data for research networking systems (RNSs) are valuable but generally unavailable for understanding scientific professionals' information needs and online collaborator-seeking behaviors. This study contributes a method for evaluating RNSs and initial usage knowledge of one RNS obtained using this method. We designed a log for an institutional RNS, defined categories of users and tasks, and analyzed correlations between usage patterns and user and query types. Our results show that scientific professionals spend more time performing deep Web searching on RNSs than generic Google users. We also show that retrieving scientist profiles is faster on an RNS than on Google (3.5 seconds vs. 34.2 seconds), whereas organization-specific browsing on an RNS takes longer than on Google (117.0 seconds vs. 34.2 seconds). Usage patterns vary by user role; e.g., faculty performed more informational queries than administrators, which implies role-specific user support is needed for RNSs. Clin Trans Sci 2012; Volume 5: 340–347 PMID:22883612

  19. Google matrix of business process management

    NASA Astrophysics Data System (ADS)

    Abel, M. W.; Shepelyansky, D. L.

    2011-12-01

    Development of efficient business process models and determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler, and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank. The results show that this two-dimensional ranking gives significant information about the influence and communication properties of business model units. We argue that the Google matrix method described here provides a new, efficient tool to help companies decide how to evolve in an exceedingly dynamic global market.
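    The Google matrix construction referred to above, G = αS + (1 − α)E/N with S the column-stochastic link matrix and α the damping factor, has PageRank as its leading eigenvector; CheiRank is the same computation on the graph with all links reversed. A plain power-iteration sketch on an invented toy process graph (node names are illustrative only, not from the paper):

    ```python
    def pagerank(links, alpha=0.85, iters=100):
        """Power iteration on the Google matrix G = alpha*S + (1-alpha)/N.
        `links` maps node -> list of out-neighbours; dangling nodes have
        their weight redistributed uniformly, as in the standard construction."""
        nodes = sorted(set(links) | {v for outs in links.values() for v in outs})
        n = len(nodes)
        rank = {u: 1.0 / n for u in nodes}
        for _ in range(iters):
            nxt = {u: (1 - alpha) / n for u in nodes}
            for u in nodes:
                outs = links.get(u, [])
                if outs:
                    share = alpha * rank[u] / len(outs)
                    for v in outs:
                        nxt[v] += share
                else:  # dangling node: spread its weight uniformly
                    for v in nodes:
                        nxt[v] += alpha * rank[u] / n
            rank = nxt
        return rank

    # Toy business-process cycle; for CheiRank, run pagerank on the reversed links.
    r = pagerank({"order": ["invoice"], "invoice": ["ship"], "ship": ["order"]})
    ```

    On a symmetric cycle like this the stationary flow is uniform; on a real process graph, plotting each node's PageRank against its CheiRank gives the two-dimensional ranking the paper analyzes.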

  20. Mining of the social network extraction

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Hardi, M.; Syah, R.

    2017-01-01

    The use of the Web as social media is steadily gaining ground in the study of social actor behaviour. However, information on the Web can only be interpreted according to the ability of the method used, such as superficial methods for extracting social networks. Each method has its features and drawbacks: it cannot reveal the behaviour of social actors directly, but it holds hidden information about them. Therefore, this paper aims to reveal such information through social network mining. Social behaviour can be expressed through a set of words extracted from a list of snippets.

  1. Analysing Customer Opinions with Text Mining Algorithms

    NASA Astrophysics Data System (ADS)

    Consoli, Domenico

    2009-08-01

    Knowing what customers think of a particular product or service helps top management to introduce improvements in processes and products, thus differentiating the company from its competitors and gaining competitive advantage. Customers, with their preferences, determine the success or failure of a company. In order to learn customers' opinions, we can use Web 2.0 technologies (blogs, wikis, forums, chat, social networking, social commerce). From these web sites, useful information must be extracted for strategic purposes, using techniques of sentiment analysis or opinion mining.

  2. Accessing Biomedical Literature in the Current Information Landscape

    PubMed Central

    Khare, Ritu; Leaman, Robert; Lu, Zhiyong

    2015-01-01

    Summary: Biomedical and life sciences literature is unique because of its exponentially increasing volume and interdisciplinary nature. Biomedical literature access is essential for several types of users, including biomedical researchers, clinicians, database curators, and bibliometricians. In the past few decades, several online search tools and literature archives, generic as well as biomedicine-specific, have been developed. We present this chapter in the light of three consecutive steps of literature access: searching for citations, retrieving full text, and viewing the article. The first section presents the current state of practice of biomedical literature access, including an analysis of the search tools most frequently used by users, such as PubMed, Google Scholar, Web of Science, Scopus, and Embase, and a study of biomedical literature archives such as PubMed Central. The next section describes current research and state-of-the-art systems motivated by the challenges a user faces during query formulation and interpretation of search results. The research solutions are classified into key areas related to text and data mining, text similarity search, semantic search, query support, relevance ranking, and clustering of results. Finally, the last section describes some predicted future trends for improving biomedical literature access, such as searching and reading articles on portable devices and adoption of the open access policy. PMID:24788259

  3. A Web-based Google-Earth Coincident Imaging Tool for Satellite Calibration and Validation

    NASA Astrophysics Data System (ADS)

    Killough, B. D.; Chander, G.; Gowda, S.

    2009-12-01

    The Group on Earth Observations (GEO) is coordinating international efforts to build a Global Earth Observation System of Systems (GEOSS) to meet the needs of its nine “Societal Benefit Areas”, of which the most demanding, in terms of accuracy, is climate. To accomplish this vision, on-orbit and ground-based calibration and validation (Cal/Val) of Earth observation measurements are critical to our scientific understanding of the Earth system. Existing tools supporting space mission Cal/Val are often developed for specific campaigns or events, with little intent toward broad application. This paper describes a web-based Google Earth tool for calculating coincident satellite observations, intended to support a diverse international group of satellite missions and improve data continuity, interoperability and data fusion. The Committee on Earth Observation Satellites (CEOS), which includes 28 space agencies and 20 other national and international organizations, is currently operating or planning over 240 Earth observation satellites for the next 15 years. The technology described here will better enable the use of multiple sensors and promote increased coordination toward a GEOSS. The CEOS Systems Engineering Office (SEO) and the Working Group on Calibration and Validation (WGCV) support the development of the CEOS Visualization Environment (COVE) tool to enhance international coordination of data exchange, mission planning and Cal/Val events. The objective is to develop a simple and intuitive application that leverages the capabilities of Google Earth on the web to display satellite sensor coverage areas and identify coincident scene locations, with dynamic menus for flexibility and content display. Key features and capabilities include user-defined evaluation periods (start and end dates), regions of interest (rectangular areas), and multi-user collaboration. 
Users can select two or more CEOS missions from a database of Satellite Tool Kit (STK)-generated orbit information and perform rapid calculations to identify coincident scenes where the ground tracks of the CEOS mission instrument fields-of-view intersect. Calculated results are displayed on a customized Google Earth web interface showing location and time information, with optional output to Excel table format. In addition, multiple viewports can be used for comparisons. COVE was first introduced to the CEOS WGCV community in May 2009; since that time, development of a prototype version has progressed. It is anticipated that the capabilities and applications of COVE can support a variety of international Cal/Val activities as well as provide general information on Earth observation coverage for education and societal benefit. This project demonstrates the utility of a systems engineering tool with broad international appeal for enhanced communication and data evaluation among international CEOS agencies. The COVE tool is publicly accessible via NASA servers.
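The core computation the abstract describes, finding where two missions' ground tracks coincide in space and time, can be sketched as a simple pairwise coincidence search. This is a minimal illustration only: the distance and time thresholds, field names, and toy ground tracks below are assumptions for the example, not values or structures from the COVE tool.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two lat/lon points.
    R = 6371.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def coincident_scenes(track_a, track_b, max_km=50.0, max_hours=3.0):
    # A scene pair is "coincident" when two ground-track points fall within
    # max_km of each other and within max_hours of each other in time.
    return [(a, b)
            for a in track_a for b in track_b
            if abs(a["t"] - b["t"]) <= max_hours
            and haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_km]

# Toy ground tracks; "t" is hours since some common epoch.
mission_a = [{"lat": 40.0, "lon": -105.0, "t": 10.0}, {"lat": 41.4, "lon": -105.3, "t": 10.1}]
mission_b = [{"lat": 40.2, "lon": -105.1, "t": 11.5}, {"lat": -10.0, "lon": 30.0, "t": 11.6}]
print(len(coincident_scenes(mission_a, mission_b)))  # → 1
```

A production tool would propagate full orbits and intersect instrument swath polygons rather than point tracks, but the same space-and-time windowing idea applies.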

  4. Keywords to Recruit Spanish- and English-Speaking Participants: Evidence From an Online Postpartum Depression Randomized Controlled Trial

    PubMed Central

    Kelman, Alex R; Muñoz, Ricardo F

    2014-01-01

    Background One of the advantages of Internet-based research is the ability to efficiently recruit large, diverse samples of international participants. Currently, there is a dearth of information on the behind-the-scenes process to setting up successful online recruitment tools. Objective The objective of the study was to examine the comparative impact of Spanish- and English-language keywords for a Google AdWords campaign to recruit pregnant women to an Internet intervention and to describe the characteristics of those who enrolled in the trial. Methods Spanish- and English-language Google AdWords campaigns were created to advertise and recruit pregnant women to a Web-based randomized controlled trial for the prevention of postpartum depression, the Mothers and Babies/Mamás y Bebés Internet Project. Search engine users who clicked on the ads in response to keyword queries (eg, pregnancy, depression and pregnancy) were directed to the fully automated study website. Data on the performance of keywords associated with each Google ad reflect Web user queries from February 2009 to June 2012. Demographic information, self-reported depression symptom scores, major depressive episode status, and Internet use data were collected from enrolled participants before randomization in the intervention study. Results The Google ads received high exposure (12,983,196 impressions) and interest (176,295 clicks) from a global sample of Web users; 6745 pregnant women consented to participate and 2575 completed enrollment in the intervention study. Keywords that were descriptive of pregnancy and distress or pregnancy and health resulted in higher consent and enrollment rates (ie, high-performing ads). In both languages, broad keywords (eg, pregnancy) had the highest exposure, more consented participants, and greatest cost per consent (up to US $25.77 per consent). The online ads recruited a predominantly Spanish-speaking sample from Latin America of Mestizo racial identity. 
The English-speaking sample was also diverse, with most participants residing in regions of Asia and Africa. Spanish-speaking participants were significantly more likely to be of Latino ethnic background, to be unmarried, to have completed fewer years of formal education, and to have accessed the Internet for depression information (P<.001). Conclusions The Internet is an effective method for reaching an international sample of pregnant women interested in online interventions to manage changes in their mood during the perinatal period. To increase efficiency, Internet advertisements need to be monitored and tailored to reflect the target population’s conceptualization of the health issues being studied. Trial Registration ClinicalTrials.gov NCT00816725; http://clinicaltrials.gov/show/NCT00816725 (Archived by WebCite at http://www.webcitation.org/6LumonjZP). PMID:24407163
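The funnel arithmetic behind figures like "cost per consent" is straightforward and can be sketched in a few lines. The impressions, clicks, and consents below are the campaign-wide figures from the abstract, but the total cost is a hypothetical number for illustration; the study reports only per-keyword costs (up to US $25.77 per consent).

```python
def ad_metrics(impressions, clicks, consents, total_cost_usd):
    # Funnel metrics for a recruitment ad campaign.
    return {
        "ctr_pct": 100.0 * clicks / impressions,          # click-through rate
        "consent_rate_pct": 100.0 * consents / clicks,    # clicks that became consents
        "cost_per_consent_usd": total_cost_usd / consents,
    }

# impressions/clicks/consents from the abstract; total_cost_usd is hypothetical.
m = ad_metrics(impressions=12_983_196, clicks=176_295, consents=6745,
               total_cost_usd=50_000.0)
print(round(m["ctr_pct"], 2), round(m["cost_per_consent_usd"], 2))  # → 1.36 7.41
```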

  5. Online palliative care and oncology patient education resources through Google: Do they meet national health literacy recommendations?

    PubMed

    Prabhu, Arpan V; Crihalmeanu, Tudor; Hansberry, David R; Agarwal, Nitin; Glaser, Christine; Clump, David A; Heron, Dwight E; Beriwal, Sushil

    The Google search engine is a resource commonly used by patients to access health-related patient education information. The American Medical Association and National Institutes of Health recommend that patient education resources be written at a level between the third and seventh grade reading levels. We assessed the readability levels of online palliative care patient education resources using 10 readability algorithms widely accepted in the medical literature. In October 2016, searches were conducted for 10 individual terms pertaining to palliative care and oncology using the Google search engine; the first 10 articles written for the public for each term were downloaded for a total of 100 articles. The terms included palliative care, hospice, advance directive, cancer pain management, treatment of metastatic disease, treatment of brain metastasis, treatment of bone metastasis, palliative radiation therapy, palliative chemotherapy, and end-of-life care. We determined the average reading level of the articles by readability scale and Web site domain. Nine readability assessments with scores equivalent to academic grade level found that the 100 palliative care education articles were collectively written at a 12.1 reading level (standard deviation, 2.1; range, 7.6-17.3). Zero articles were written below a seventh grade level. Forty-nine (49%) articles were written above a high school graduate reading level. The Flesch Reading Ease scale classified the articles as "difficult" to read with a score of 45.6 of 100. The articles were collected from 62 Web site domains. Seven domains were accessed 3 or more times; among these, www.mskcc.org had the highest average reading level at a 14.5 grade level (standard deviation, 1.4; range, 13.4-16.1). Most palliative care education articles readily available on Google are written above national health literacy recommendations. 
There is a need to revise these resources to allow patients and their families to derive the most benefit from these materials. Copyright © 2017. Published by Elsevier Inc. All rights reserved.
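One of the readability measures the study cites, the Flesch Reading Ease score, is computed as 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word); higher scores mean easier text. The sketch below is a minimal illustration: the syllable counter is a crude vowel-group heuristic, whereas published readability tools use pronunciation dictionaries and more careful tokenization.

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups, dropping a typical silent final "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat. The dog ran."), 2))  # → 119.19
```

The study's Flesch score of 45.6 for the collected articles falls in the range conventionally labeled "difficult", consistent with its grade-level findings.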

  6. Assessing species habitat using Google Street View: a case study of cliff-nesting vultures.

    PubMed

    Olea, Pedro P; Mateo-Tomás, Patricia

    2013-01-01

    The assessment of a species' habitat is a crucial issue in ecology and conservation. While the collection of habitat data has been boosted by the availability of remote sensing technologies, data on certain habitat types must still be collected through costly, on-ground surveys, limiting study over large areas. Cliffs are ecosystems that provide habitat for a rich biodiversity, especially raptors. Because of their principally vertical structure, however, cliffs are not easy to study by remote sensing technologies, posing a challenge for many researchers and managers working with cliff-related biodiversity. We explore the feasibility of Google Street View, a freely available online tool, to remotely identify and assess the nesting habitat of two cliff-nesting vultures (the griffon vulture and the globally endangered Egyptian vulture) in northwestern Spain. Two main uses of Google Street View for ecologists and conservation biologists were evaluated: i) remotely identifying a species' potential habitat and ii) extracting fine-scale habitat information. Google Street View imagery covered 49% (1,907 km) of the roads of our study area (7,000 km²). The potential visibility achieved by on-ground surveys was significantly greater (mean: 97.4%) than that of Google Street View (48.1%). However, incorporating Google Street View into the vulture habitat survey would save, on average, 36% in time and 49.5% in funds with respect to an on-ground survey alone. The ability of Google Street View to identify cliffs (overall accuracy = 100%) outperformed the classification maps derived from digital elevation models (DEMs) (62-95%). Nonetheless, high-performance DEM maps may be useful to compensate for Google Street View's coverage limitations. Through Google Street View we could examine 66% of the vultures' nesting cliffs existing in the study area (n = 148): 64% for griffon vultures and 65% for Egyptian vultures. It also allowed us to extract fine-scale features of cliffs. 
This World Wide Web-based methodology may be a useful, complementary tool to remotely map and assess the potential habitat of cliff-dependent biodiversity over large geographic areas, saving survey-related costs.

  7. Assessing Species Habitat Using Google Street View: A Case Study of Cliff-Nesting Vultures

    PubMed Central

    Olea, Pedro P.; Mateo-Tomás, Patricia

    2013-01-01

    The assessment of a species’ habitat is a crucial issue in ecology and conservation. While the collection of habitat data has been boosted by the availability of remote sensing technologies, data on certain habitat types must still be collected through costly, on-ground surveys, limiting study over large areas. Cliffs are ecosystems that provide habitat for a rich biodiversity, especially raptors. Because of their principally vertical structure, however, cliffs are not easy to study by remote sensing technologies, posing a challenge for many researchers and managers working with cliff-related biodiversity. We explore the feasibility of Google Street View, a freely available online tool, to remotely identify and assess the nesting habitat of two cliff-nesting vultures (the griffon vulture and the globally endangered Egyptian vulture) in northwestern Spain. Two main uses of Google Street View for ecologists and conservation biologists were evaluated: i) remotely identifying a species’ potential habitat and ii) extracting fine-scale habitat information. Google Street View imagery covered 49% (1,907 km) of the roads of our study area (7,000 km2). The potential visibility achieved by on-ground surveys was significantly greater (mean: 97.4%) than that of Google Street View (48.1%). However, incorporating Google Street View into the vulture habitat survey would save, on average, 36% in time and 49.5% in funds with respect to an on-ground survey alone. The ability of Google Street View to identify cliffs (overall accuracy = 100%) outperformed the classification maps derived from digital elevation models (DEMs) (62–95%). Nonetheless, high-performance DEM maps may be useful to compensate for Google Street View coverage limitations. Through Google Street View we could examine 66% of the vultures’ nesting cliffs existing in the study area (n = 148): 64% for griffon vultures and 65% for Egyptian vultures. It also allowed us to extract fine-scale features of cliffs. 
This World Wide Web-based methodology may be a useful, complementary tool to remotely map and assess the potential habitat of cliff-dependent biodiversity over large geographic areas, saving survey-related costs. PMID:23355880

  8. Evaluating Social Media Networks in Medicines Safety Surveillance: Two Case Studies.

    PubMed

    Coloma, Preciosa M; Becker, Benedikt; Sturkenboom, Miriam C J M; van Mulligen, Erik M; Kors, Jan A

    2015-10-01

    There is growing interest in whether social media can capture patient-generated information relevant for medicines safety surveillance that cannot be found in traditional sources. The aim of this study was to evaluate the potential contribution of mining social media networks for medicines safety surveillance using the following associations as case studies: (1) rosiglitazone and cardiovascular events (i.e. stroke and myocardial infarction); and (2) human papilloma virus (HPV) vaccine and infertility. We collected publicly accessible, English-language posts on Facebook, Google+, and Twitter until September 2014. Data were queried for co-occurrence of keywords related to the drug/vaccine and event of interest within a post. Messages were analysed with respect to geographical distribution, context, linking to other web content, and author's assertion regarding the supposed association. A total of 2537 posts related to rosiglitazone/cardiovascular events and 2236 posts related to HPV vaccine/infertility were retrieved, with the majority of posts representing data from Twitter (98 and 85%, respectively) and originating from users in the US. Approximately 21% of rosiglitazone-related posts and 84% of HPV vaccine-related posts referenced other web pages, mostly news items, law firms' websites, or blogs. Assertion analysis predominantly showed affirmation of the association of rosiglitazone/cardiovascular events (72%; n = 1821) and of HPV vaccine/infertility (79%; n = 1758). Only ten posts described personal accounts of rosiglitazone/cardiovascular adverse event experiences, and nine posts described HPV vaccine problems related to infertility. Publicly available data from the considered social media networks were sparse and largely untrackable for the purpose of providing early clues of safety concerns regarding the prespecified case studies. 
Further research investigating other case studies and exploring other social media platforms is necessary to further characterise the usefulness of social media for safety surveillance.
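The querying step the abstract describes, keeping only posts in which a drug/vaccine term and an event term co-occur, can be sketched as a simple filter. The example posts and term lists below are invented for illustration; a real pipeline would also normalize spelling variants and drug synonyms.

```python
def cooccurring_posts(posts, drug_terms, event_terms):
    # Keep posts that mention at least one drug term AND at least one event term.
    hits = []
    for post in posts:
        text = post.lower()
        if any(d in text for d in drug_terms) and any(e in text for e in event_terms):
            hits.append(post)
    return hits

posts = [
    "Rosiglitazone linked to myocardial infarction, says new report",
    "Taking rosiglitazone for a year now, feeling fine",
    "Stroke awareness week starts today",
]
drug_terms = ["rosiglitazone", "avandia"]
event_terms = ["stroke", "myocardial infarction", "heart attack"]
print(len(cooccurring_posts(posts, drug_terms, event_terms)))  # → 1
```

As the study's assertion analysis shows, co-occurrence alone does not distinguish a personal adverse-event account from a shared news item, so downstream context analysis remains essential.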

  9. Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers

    PubMed Central

    Ahmetovic, Dragan; Manduchi, Roberto; Coughlan, James M.; Mascetti, Sergio

    2016-01-01

    In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy. PMID:26824080

  10. WWW Motivation Mining: Finding Treasures for Teaching Evaluation Skills, Grades 1-6. Professional Growth Series.

    ERIC Educational Resources Information Center

    Arnone, Marilyn P.; Small, Ruth V.

    Designed for elementary or middle school teachers and library media specialists, this book provides educators with practical, easy-to-use ways of applying motivation assessment techniques when selecting World Wide Web sites for inclusion in their lessons and offers concrete examples of how to use Web evaluation with young learners. WebMAC…

  11. Astrophysical data mining with GPU. A case study: Genetic classification of globular clusters

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Garofalo, M.; Brescia, M.; Paolillo, M.; Pescapé, A.; Longo, G.; Ventre, G.

    2014-01-01

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from our CPU serial implementation, named GAME (Genetic Algorithm Model Experiment). It was successfully tested and validated on the detection of candidate Globular Clusters in deep, wide-field, single band HST images. The GPU version of GAME will be made available to the community by integrating it into the web application DAMEWARE (DAta Mining Web Application REsource, http://dame.dsf.unina.it/beta_info.html), a public data mining service specialized on massive astrophysical data. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm leads to a speedup of a factor of 200× in the training phase with respect to the CPU based version.
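As a rough sketch of the kind of algorithm GAME implements (not its actual operators, encoding, or parameters, which the abstract does not specify), here is a minimal generational genetic algorithm with tournament selection, one-point crossover, and bit-flip mutation, applied to the toy "one-max" problem of maximizing the number of 1-bits:

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60, p_mut=0.02, seed=1):
    # Minimal generational GA: tournament selection, one-point crossover,
    # per-bit mutation. Returns the fittest individual in the final population.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(lambda bits: sum(bits))
print(sum(best))
```

Each fitness evaluation is independent, which is why, as the abstract notes, the algorithm parallelizes so naturally onto GPUs: the training-phase speedup comes from evaluating the population in parallel.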

  12. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification.

    PubMed

    Wiegers, Thomas C; Davis, Allan Peter; Mattingly, Carolyn J

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer (REST)/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. 
Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions of a second to over a minute per article. We present a description of the challenge and a summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/ © The Author(s) 2014. Published by Oxford University Press.
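The evaluation metrics reported here (recall, precision, and the balanced F-score, i.e. F1) follow the standard definitions over true-positive, false-positive, and false-negative counts. A minimal sketch, using hypothetical counts rather than any group's actual results on the 510-article test set:

```python
def balanced_f(tp, fp, fn):
    # Precision, recall, and balanced (F1) score from raw NER match counts.
    precision = tp / (tp + fp)   # fraction of predicted entities that are correct
    recall = tp / (tp + fn)      # fraction of gold entities that were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one system on one entity type.
p, r, f = balanced_f(tp=820, fp=290, fn=520)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.74 0.61 0.67
```

The "balanced" in balanced F-score refers to weighting precision and recall equally; the harmonic mean penalizes systems that trade one heavily for the other.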

  13. Web services-based text-mining demonstrates broad impacts for interoperability and process simplification

    PubMed Central

    Wiegers, Thomas C.; Davis, Allan Peter; Mattingly, Carolyn J.

    2014-01-01

    The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. 
Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/ PMID:24919658

  14. Operating System Support for Shared Hardware Data Structures

    DTIC Science & Technology

    2013-01-31

    Carbon [73] uses hardware queues to improve fine-grained multitasking for Recognition, Mining, and Synthesis. Compared to software approaches...web transaction processing, data mining, and multimedia. Early work in database processors [114, 96, 79, 111] reduces the costs of relational database...assignment can be solved statically or dynamically. Static assignment determines offline which data structures are assigned to use HWDS resources and at

  15. 77 FR 4360 - Notice of Availability of the Draft Environmental Impact Statement for the Hycroft Mine Expansion...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-27

    ... comments related to the Hycroft Mine Expansion Draft EIS by any of the following methods: Web site: www.blm..., Nevada 89445, Attn. Kathleen Rehberg. Copies of the Hycroft Mine Expansion Draft EIS are available in the... hours. The FIRS is available 24 hours a day, 7 days a week, to leave a message or question with the...

  16. Using Blogging to Enhance the Initiation of Students into Academic Research

    ERIC Educational Resources Information Center

    Chong, Eddy K. M.

    2010-01-01

    For the net-generation students learning in a Web 2.0 world, research is often equated with Googling and approached with a mindset accustomed to cut-and-paste practices. Recognizing educators' concern over such students' learning dispositions on the one hand, and the educational affordances of blogging on the other, this study examines the use of…

  17. A Mathematical and Sociological Analysis of Google Search Algorithm

    DTIC Science & Technology

    2013-01-16

    through the collective intelligence of the web to determine a page's importance. Let v be a vector of R^N with N ≥ 8 billion. Any unit vector in R^N is...scrolled up by some artificial hits. Acknowledgment: The authors would like to thank Dr. John Lavery for his encouragement and support, which enabled them to

  18. The Effect of Creative Drama as a Method on Skills: A Meta-Analysis Study

    ERIC Educational Resources Information Center

    Ulubey, Özgür

    2018-01-01

    The aim of the current study was to synthesize the findings of experimental studies addressing the effect of the creative drama method on the skills of students. Research data were derived from ProQuest Citations, Web of Science, Google Academic, National Thesis Center, EBSCO, ERIC, Taylor & Francis Online, and ScienceDirect databases using…

  19. Machine Translation-Assisted Language Learning: Writing for Beginners

    ERIC Educational Resources Information Center

    Garcia, Ignacio; Pena, Maria Isabel

    2011-01-01

    The few studies that deal with machine translation (MT) as a language learning tool focus on its use by advanced learners, never by beginners. Yet, freely available MT engines (i.e. Google Translate) and MT-related web initiatives (i.e. Gabble-on.com) position themselves to cater precisely to the needs of learners with a limited command of a…

  20. Opening Up Access to Open Access

    ERIC Educational Resources Information Center

    Singer, Ross

    2008-01-01

    As the corpus of gray literature grows and the price of serials rises, it becomes increasingly important to explore ways to integrate the free and open Web seamlessly into one's collections. Users, after all, are discovering these materials all the time via sites such as Google Scholar and Scirus or by searching arXiv.org or CiteSeer directly.…

  1. Cyber Exercise Playbook

    DTIC Science & Technology

    2014-11-01

    unclassified tools and techniques that can be shared with PNs, to include social engineering, spear phishing, fake web sites, physical access attempts, and...and instead rely on commercial services such as Yahoo or Google. Some nations have quite advanced cyber security practices, but may take vastly...unauthorized access to data/systems Inject external network scanning, email phishing, malicious website access, social engineering Sample

  2. Media Use in Higher Education from a Cross-National Perspective

    ERIC Educational Resources Information Center

    Grosch, Michael

    2013-01-01

    The web 2.0 has already penetrated the learning environment of students ubiquitously. This dissemination of online services into tertiary education has led to constant changes in students' learning and study behaviour. Students use services such as Google and Wikipedia most often not only during free time but also for learning. At the same…

  3. Overview of the TREC 2014 Federated Web Search Track

    DTIC Science & Technology

    2014-11-01

    Pictures e021 Dailymotion Video e123 Picsearch Photo/Pictures e022 YouTube Video e124 Wikimedia Photo/Pictures e023 Google Blogs Blogs e126 Funny or...song of ice and fire 7045 Natural Parks America 7072 price gibson howard roberts custom 7092 How much was a gallon of gas during depression 7111 what is

  4. So, You Want to Be a Leader

    ERIC Educational Resources Information Center

    Wager, J. James

    2012-01-01

    Thousands--if not tens of thousands--of books, monographs, and articles have been written on the subject of leadership. A Google search of the word returns nearly a half-billion Web sites. As a professional who has spent nearly 40 years in the higher education sector, the author has been blessed with opportunities to view and practice leadership…

  5. Leveraging Learning Technologies for Collaborative Writing in an Online Pharmacotherapy Course

    ERIC Educational Resources Information Center

    Pittenger, Amy L.; Olson-Kellogg, Becky

    2012-01-01

    The purpose of this project was to evaluate the development and delivery of a hypertext case scenario document to be used as the capstone assessment tool for doctoral-level physical therapy students. The integration of Web-based collaborative tools (PBworks[TM] and Google Sites[TM]) allowed students in this all-online course to apply their…

  6. Spaces for Interactive Engagement or Technology for Differential Academic Participation? Google Groups for Collaborative Learning at a South African University

    ERIC Educational Resources Information Center

    Rambe, Patient

    2017-01-01

    The rhetoric on the potential of Web 2.0 technologies to democratize online engagement of students often overlooks the discomforting, differential participation and asymmetrical engagement that accompanies student adoption of emerging technologies. This paper, therefore, constitutes a critical reality check for student adoption of technology to…

  7. A Content Analysis of Online HPV Immunization Information

    ERIC Educational Resources Information Center

    Pappa, Sara T.

    2016-01-01

    The Human Papillomavirus (HPV) can cause some types of cancer and is the most common sexually transmitted infection in the US. Because most people turn to the internet for health information, this study analyzed HPV information found online. A content analysis was conducted on 69 web search results (URLs) from Google, Yahoo, Bing and Ask. The…

  8. Exploring Writing Individually and Collaboratively Using Google Docs in EFL Contexts

    ERIC Educational Resources Information Center

    Alsubaie, Jawaher; Ashuraidah, Ali

    2017-01-01

    Online teaching and learning have become popular with the evolution of the World Wide Web. Implementing online learning tools within EFL contexts will help better address the multitude of teaching and learning styles. Difficulty with academic writing is one of the common problems that students face in and outside their classrooms.…

  9. Web-Based Interactive Steel Sculpture for the Google Generation

    ERIC Educational Resources Information Center

    Chou, Karen C.; Moaveni, Saeed

    2009-01-01

    In almost all the civil engineering programs in the United States, a student is required to take at least one design course in either steel or reinforced concrete. One of the topics covered in an introductory steel design course is the design of connections. Steel connections play important roles in the integrity of a structure, and many…

  10. Evidence-Based Intervention for Individuals with Acquired Apraxia of Speech. EBP Briefs. Volume 11, Issue 2

    ERIC Educational Resources Information Center

    Van Sickle, Angela

    2016-01-01

    Clinical Question: Would individuals with acquired apraxia of speech (AOS) demonstrate greater improvements for speech production with an articulatory kinematic approach or a rate/rhythm approach? Method: EBP Intervention Comparison Review. Study Sources: ASHA journal, Google Scholar, PubMed, CINAHL Plus with Full Text, Web of Science, Ovid, and…

  11. Accessibility and content of individualized adult reconstructive hip and knee/musculoskeletal oncology fellowship web sites.

    PubMed

    Young, Bradley L; Cantrell, Colin K; Patt, Joshua C; Ponce, Brent A

    2018-06-01

Accessible, adequate online information is important to fellowship applicants. Program web sites can affect which programs applicants apply to, subsequently altering interview costs incurred by both parties and ultimately impacting rank lists. Web site analyses have been performed for all orthopaedic subspecialties other than those involved in the combined adult reconstruction and musculoskeletal (MSK) oncology fellowship match. A complete list of active programs was obtained from the official adult reconstruction and MSK oncology society web sites. Web site accessibility was assessed using a structured Google search. Accessible web sites were evaluated based on 21 previously reported content criteria. Seventy-four adult reconstruction programs and 11 MSK oncology programs were listed on the official society web sites. Web sites were identified and accessible for 58 (78%) adult reconstruction and 9 (82%) MSK oncology fellowship programs. No web site contained all content criteria, and more than half of both adult reconstruction and MSK oncology web sites failed to include 12 of the 21 criteria. Several programs participating in the combined Adult Reconstructive Hip and Knee/Musculoskeletal Oncology Fellowship Match did not have accessible web sites. Of the web sites that were accessible, none contained comprehensive information, and the majority lacked information that has been previously identified as being important to prospective applicants.

  12. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows building a virtual reality 360° video player with standard OpenGL ES rendering methods. It provides network methods to connect to a web server acting as the application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods for an event-driven process of rendering additional content based on the video timestamp and the virtual reality head point of view.

  13. Effective Web and Desktop Retrieval with Enhanced Semantic Spaces

    NASA Astrophysics Data System (ADS)

    Daoud, Amjad M.

We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure the scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].

  14. A Web Search on Environmental Topics: What Is the Role of Ranking?

    PubMed Central

    Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto

    2013-01-01

Abstract Background: Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Materials and Methods: Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word “health”: “nuclear energy,” “electromagnetic waves,” “air pollution,” “waste,” and “radon.” For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Results: Variable results were obtained when searching the Internet for different environmental health topics. Multivariate logistic regression analysis showed that, when searching for the radon and air pollution topics, institutional Web sites were more likely to appear in the first 10 positions than for nuclear power (odds ratio=3.4, 95% confidence interval 2.1–5.4 and odds ratio=2.9, 95% confidence interval 1.8–4.7, respectively), and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9–5.1). Conclusions: The increasing use of online information could play an important role in forming opinions. 
Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible. PMID:24083368

  15. Alkemio: association of chemicals with biomedical topics by text and data mining

    PubMed Central

    Gijón-Correas, José A.; Andrade-Navarro, Miguel A.; Fontaine, Jean F.

    2014-01-01

The PubMed® database of biomedical citations allows the retrieval of scientific articles studying the function of chemicals in biology and medicine. Mining millions of available citations to find reported associations between chemicals and topics of interest would require substantial human time. We have implemented the Alkemio text mining web tool and SOAP web service to help in this task. The tool uses biomedical articles discussing chemicals (including drugs), predicts their relatedness to the query topic with a naïve Bayesian classifier and ranks all chemicals by P-values computed from random simulations. Benchmarks on seven human pathways showed good retrieval performance (areas under the receiver operating characteristic curves ranged from 73.6 to 94.5%). Comparison with existing tools to retrieve chemicals associated with eight diseases showed the higher precision and recall of Alkemio when considering the top 10 candidate chemicals. Alkemio is a high-performing web tool for ranking chemicals for any biomedical topic, and it is free to non-commercial users. Availability: http://cbdm.mdc-berlin.de/∼medlineranker/cms/alkemio. PMID:24838570
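The ranking step described here relies on P-values computed from random simulations. A minimal sketch of such an empirical (permutation-style) P-value, using an entirely hypothetical scoring function rather than Alkemio's actual classifier, might look like:

```python
import random

def empirical_p_value(observed_score, simulate_score, n_simulations=1000, seed=42):
    """Empirical P-value: fraction of random simulations that score at least
    as high as the observed chemical-topic association score, with add-one
    smoothing so the P-value is never exactly zero."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_simulations)
               if simulate_score(rng) >= observed_score)
    return (hits + 1) / (n_simulations + 1)

# Hypothetical example: the observed association score is 0.9, while a
# random simulation draws a score uniformly from [0, 1).
p = empirical_p_value(0.9, lambda rng: rng.random())
```

With a uniform null, an observed score of 0.9 yields a P-value near 0.1; Alkemio's actual null model and scorer are not described in enough detail here to reproduce.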

  16. Geovisualization of Local and Regional Migration Using Web-mined Demographics

    NASA Astrophysics Data System (ADS)

    Schuermann, R. T.; Chow, T. E.

    2014-11-01

    The intent of this research was to augment and facilitate analyses, which gauges the feasibility of web-mined demographics to study spatio-temporal dynamics of migration. As a case study, we explored the spatio-temporal dynamics of Vietnamese Americans (VA) in Texas through geovisualization of mined demographic microdata from the World Wide Web. Based on string matching across all demographic attributes, including full name, address, date of birth, age and phone number, multiple records of the same entity (i.e. person) over time were resolved and reconciled into a database. Migration trajectories were geovisualized through animated sprites by connecting the different addresses associated with the same person and segmenting the trajectory into small fragments. Intra-metropolitan migration patterns appeared at the local scale within many metropolitan areas. At the scale of metropolitan area, varying degrees of immigration and emigration manifest different types of migration clusters. This paper presents a methodology incorporating GIS methods and cartographic design to produce geovisualization animation, enabling the cognitive identification of migration patterns at multiple scales. Identification of spatio-temporal patterns often stimulates further research to better understand the phenomenon and enhance subsequent modeling.
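The record-reconciliation step described above can be sketched as grouping web-mined records on a normalized match key and ordering each person's addresses by time. The field names and the simple name-plus-date-of-birth key below are illustrative assumptions, not the study's actual matching logic, which compared more attributes:

```python
from collections import defaultdict

def entity_key(record):
    """Hypothetical blocking key: whitespace-normalized lowercase name plus
    date of birth. The study matched across more attributes (address, age,
    phone number) than this sketch does."""
    name = " ".join(record["name"].lower().split())
    return (name, record["dob"])

def build_trajectories(records):
    """Group records resolved to the same person and order their addresses
    by year, yielding one migration trajectory per entity."""
    groups = defaultdict(list)
    for rec in records:
        groups[entity_key(rec)].append(rec)
    return {
        key: [r["address"] for r in sorted(recs, key=lambda r: r["year"])]
        for key, recs in groups.items()
    }

# Invented sample records: the same (fictional) person at two addresses.
records = [
    {"name": "Nguyen Tran", "dob": "1980-01-01", "address": "Houston, TX", "year": 2005},
    {"name": "nguyen  tran", "dob": "1980-01-01", "address": "Austin, TX", "year": 2010},
]
trajectories = build_trajectories(records)
```

Each trajectory (an ordered address list) is what the animated sprites in the geovisualization would then trace.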

  17. Accessibility and usability OCW data: The UTPL OCW.

    PubMed

    Rodríguez, Germania; Perez, Jennifer; Cueva, Samanta; Torres, Rommel

    2017-08-01

This data article provides a data description for the article entitled "A framework for improving web accessibility and usability of Open Course Ware sites" [3]. This Data in Brief presents the data obtained from the accessibility and usability evaluation of the UTPL OCW. The data obtained from the framework evaluation consist of the manual evaluation of the standards criteria and the automatic evaluation with the tools Google PageSpeed and Google Analytics. In addition, this article presents the synthesized tables of the standards used by the framework to evaluate the accessibility and usability of OCW, and the questionnaires required to extract the data. As a result, the article also provides the data required to reproduce the evaluation of other OCW sites.

  18. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon on our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  19. Mapping of the dilemma of mining against forest and conservation in the Lom and Djérem Division, Cameroon

    NASA Astrophysics Data System (ADS)

    Tchindjang, Mesmin; Voundi, Eric; Mbevo Fendoung, Philippes; Haman, Unusa; Saha, Frédéric; Casimir Njombissie Petcheu, Igor

    2018-05-01

Mining practices in Cameroon date back to the colonial period. Before independence, the artisanal mining sector contributed 11-20% of GDP. From 2000, the rich potential of the Cameroonian subsoil attracted many foreign investors, with over 600 research and mining permits granted during the last decade. Cameroonian forests also have a long history from the colonial period to the present. However, mining activities in forest environments are governed by two different legal frameworks: the mining code, i.e. Law No. 001 of 16 April 2001 organizing the mining industry, and Law No. 94-01 of 20 January 1994 governing forests, wildlife and fisheries. In the absence of detailed studies of these laws, there are overlapping conflicts of interests, rights and obligations, requiring research and appropriate decisions. The objective of this research in the Lom and Djérem division is to study, beyond the proliferation of mining licenses and actors, the dilemma and the impact of the extension of mining activities on the degradation of forest cover. Using geospatial tools, multi-temporal and multisensor satellite images (Landsat from 1976 to 2015, IKONOS, GEOEYE, Google Earth) coupled with field investigations, we mapped the dynamics of different forms of land use (mining permits, FMUs and protected areas of the permanent forest estate) and highlighted the paradoxical conflicts of land use. We conclude that the rhythm of issuing mining permits and authorizations in this forest zone is so fast that one can wonder whether a patch of forest will remain within 50 years.

  20. Syndromic Surveillance Models Using Web Data: The Case of Influenza in Greece and Italy Using Google Trends.

    PubMed

    Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel

    2017-11-20

Extensive discussion and research have been conducted in recent years using data collected through search queries submitted via the Internet. It has been shown that overall activity on the Internet is related to the number of cases in an infectious disease outbreak. The aim of the study was to establish a similar correlation between data from Google Trends and data collected by the official authorities of Greece and Europe by examining the development and spread of seasonal influenza in Greece and Italy. We used multiple regressions of the influenza-related terms submitted to the Google search engine for the period from 2011 to 2012 in Greece and Italy (sample data for 104 weeks for each country). We then used the autoregressive integrated moving average (ARIMA) statistical model to determine the correlation between the Google search data and the real influenza cases confirmed by the aforementioned authorities. Two methods were used: (1) a flu score was created for the case of Greece, and (2) data were compared with a neighboring country of Greece, namely Italy. The results showed that there is a significant correlation that can help predict the spread and the peak of seasonal influenza using data from Google searches. The correlation for Greece for 2011 and 2012 was .909 and .831, respectively, and the correlation for Italy for 2011 and 2012 was .979 and .933, respectively. The prediction of the peak was quite precise, providing a forecast before it reaches the population. We can create an Internet surveillance system based on Google searches to track influenza in Greece and Italy. ©Loukas Samaras, Elena García-Barriocanal, Miguel-Angel Sicilia. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 20.11.2017.
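The core of such a study is the correlation between weekly search volumes and confirmed cases. A minimal stand-in for the paper's analysis (which used multiple regression and ARIMA) is a plain Pearson coefficient; the weekly data below are invented:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. weekly Google Trends volumes vs. officially confirmed flu cases."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly data: search volume for a flu-related term and
# confirmed cases over the same eight weeks (a single epidemic peak).
trends = [10, 12, 30, 80, 95, 60, 25, 11]
cases  = [ 5,  7, 20, 60, 70, 45, 15,  6]
r = pearson(trends, cases)
```

A coefficient near the .83-.98 range reported in the abstract would indicate that the search series tracks the epidemic curve closely.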

  1. Googling in anatomy education: Can google trends inform educators of national online search patterns of anatomical syllabi?

    PubMed

    Phelan, Nigel; Davy, Shane; O'Keeffe, Gerard W; Barry, Denis S

    2017-03-01

The role of e-learning platforms in anatomy education continues to expand as self-directed learning is promoted in higher education. Although a wide range of e-learning resources are available, determining student use of non-academic internet resources requires novel approaches. One such approach is the Google Trends© web application. To determine the feasibility of using Google Trends to gain insights into anatomy-related online searches, Google Trends data from the United States from January 2010 to December 2015 were analyzed. The data collected were based on the recurrence of keywords related to head and neck anatomy generated from the American Association of Clinical Anatomists and Anatomical Society suggested anatomy syllabi. Relative search volume (RSV) data were analyzed for seasonal periodicity and overall temporal trends. Following exclusions due to insufficient search volume data, 29 of 36 search terms were analyzed. Significant seasonal patterns occurred in 23 search terms. Thirty-nine seasonal peaks were identified, mainly in October and April, coinciding with teaching periods in anatomy curricula. A positive correlation of RSV with time over the 6-year study period occurred in 25 of 29 search terms. These data demonstrate how Google Trends may offer insights into the nature and timing of online search patterns of anatomical syllabi and may potentially inform the development and timing of targeted online supports to ensure that students of anatomy have the opportunity to engage with online content that is both accurate and fit for purpose. Anat Sci Educ 10: 152-159. © 2016 American Association of Anatomists.

  2. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

"Mars @ ASDC" is a project born with the goal of using new web technologies to assist researchers involved in the study of Mars. The project employs the Mars map and JavaScript APIs provided by Google to visualize data acquired by space missions to the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for creating this kind of tool is the difficulty of handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty of finding observations related to a particular region. This led to the development of a tool which allows searching for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map with a few mouse clicks. The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user-defined region. MARSIS tracks can be visualized both in Mercator and polar projections, while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google; the polar projections are provided by NASA and have been developed to be used in combination with the APIs provided by Google. The whole project has been developed following the "open source" philosophy: the client-side code which handles the web page is written in JavaScript, the server-side code which executes the searches for tracks or regions is written in PHP, and the database underlying the system is MySQL.

  3. [An evaluation of the quality of health web pages using a validated questionnaire].

    PubMed

    Conesa Fuentes, Maria del Carmen; Aguinaga Ontoso, Enrique; Hernández Morante, Juan José

    2011-01-01

The objective of the present study was to evaluate the quality of general health information in Spanish-language web pages, including the official web pages of the Health Services of the different Autonomous Regions. It is a cross-sectional study. We used a previously validated questionnaire to study the present state of health information on the Internet from a lay user's point of view. By means of PageRank (Google®), we obtained a group of webs comprising a total of 65 health web pages. We applied exclusion criteria and finally obtained a total of 36 webs. We also analyzed the official web pages of the different Health Services in Spain (19 webs), making a total of 54 health web pages. In light of our data, we observed that the quality of the general health web pages was rather low, especially regarding information quality. Not one page reached the maximum score (19 points). The mean score of the web pages was 9.8±2.8. In conclusion, to avoid the problems arising from this lack of quality, health professionals should design advertising campaigns and other media to teach the lay user how to evaluate information quality. Copyright © 2009 Elsevier España, S.L. All rights reserved.

  4. A Method to Assess Seasonality of Urinary Tract Infections Based on Medication Sales and Google Trends

    PubMed Central

    Lambert, Bruno; Flahault, Antoine; Chartier-Kastler, Emmanuel; Hanslik, Thomas

    2013-01-01

Background Despite the fact that urinary tract infection (UTI) is a very frequent disease, little is known about its seasonality in the community. Methods and Findings The objective was to estimate the seasonality of UTI using multiple time series constructed from available proxies of UTI. Eight time series based on two databases were used: sales of urinary antibacterial medications reported by a panel of pharmacy stores in France between 2000 and 2012, and search trends on the Google search engine for UTI-related terms between 2004 and 2012 in France, Germany, Italy, the USA, China, Australia and Brazil. Differences between summers and winters were statistically assessed with the Mann-Whitney test. We evaluated seasonality by applying the Harmonic Product Spectrum to the Fast Fourier Transform. Seven of the eight time series displayed a significant increase in medication sales or web searches in the summer compared to the winter, ranging from 8% to 20%. All eight time series displayed a periodicity of one year. Annual increases were seen in the summer for UTI drug sales in France and Google searches in France, the USA, Germany, Italy, and China. Increases occurred in the austral summer for Google searches in Brazil and Australia. Conclusions An annual seasonality of UTIs was evidenced in seven different countries, with peaks during the summer. PMID:24204587
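The periodicity check (FFT plus Harmonic Product Spectrum in the paper) can be illustrated with a naive discrete Fourier transform that recovers the dominant period of a synthetic weekly series. This is a simplified stand-in for the authors' pipeline, and the series below is invented:

```python
import cmath
from math import pi, sin

def dominant_period(series):
    """Return the dominant period (in samples) of a time series by finding
    the frequency bin with the largest DFT magnitude. A naive O(n^2) DFT
    is used here for clarity instead of an FFT."""
    n = len(series)
    mean = sum(series) / n
    centered = [x - mean for x in series]  # remove the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * pi * k * t / n)
                    for t, x in enumerate(centered))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return n / best_k  # period in samples

# Hypothetical proxy: 104 weeks (two years) of UTI-related search volume
# with one summer peak per year, i.e. a period of 52 weeks.
weeks = [100 + 20 * sin(2 * pi * t / 52) for t in range(104)]
period = dominant_period(weeks)
```

For a two-year weekly series with one peak per year, the dominant period comes out at 52 weeks, matching the one-year periodicity the study reports.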

  5. Literature Mining Methods for Toxicology and Construction of ...

    EPA Pesticide Factsheets

Webinar presentation on the text-mining methodologies in use at NCCT and how they can be used to assist with the OECD Retinoid Project. Presentation to the 1st Workshop/Scientific Expert Group meeting on the OECD Retinoid Project - April 26, 2016 - Brussels. Presented remotely via web.

  6. Google Scholar is not enough to be used alone for systematic reviews.

    PubMed

    Giustini, Dean; Boulos, Maged N Kamel

    2013-01-01

Google Scholar (GS) has been noted for its ability to search broadly for important references in the literature. Gehanno et al. recently examined GS in their study: 'Is Google scholar enough to be used alone for systematic reviews?' In this paper, we revisit this important question, and some of Gehanno et al.'s other findings in evaluating the academic search engine. We searched for a recent systematic review (SR) of comparable size to run search tests similar to those in Gehanno et al., selecting Chou et al. (2013) and contacting the authors for the list of publications they found in their SR on social media in health. We queried GS for each of those 506 titles (in quotation marks), one by one. When GS failed to retrieve a paper, or produced too many results, we used the allintitle: command to find papers with the same title. Google Scholar produced records for ~95% of the papers cited by Chou et al. (n=476/506). A few of the 30 papers that were not in GS were later retrieved via PubMed and even regular Google Search. But due to its different structure, we could not run the searches in GS that were originally performed by Chou et al. in PubMed, Web of Science, Scopus and PsycINFO®. Identifying 506 papers in GS was an inefficient process, especially for papers with similar titles. Has Google Scholar improved enough to be used alone in searching for systematic reviews? No. GS's constantly changing content, algorithms and database structure make it a poor choice for systematic reviews. Looking for papers when you know their titles is a far different issue from discovering them initially. Further research is needed to determine when and how (and for what purposes) GS can be used alone. Google should provide details about GS's database coverage and improve its interface (e.g., with semantic search filters, stored searching, etc.). Perhaps then it will be an appropriate choice for systematic reviews.

  7. EntrezAJAX: direct web browser access to the Entrez Programming Utilities.

    PubMed

    Loman, Nicholas J; Pallen, Mark J

    2010-06-21

Web applications for biology and medicine often need to integrate data from the Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace, and the ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
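EntrezAJAX proxies calls to the standard Entrez eUtils endpoints, which are plain HTTP requests. Constructing such an eUtils request directly (shown here in Python rather than browser JavaScript, and without actually issuing the request) is straightforward; the query term is only an example:

```python
from urllib.parse import urlencode

# Public base URL of the NCBI Entrez Programming Utilities.
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Build an eUtils esearch request URL. Services like EntrezAJAX wrap
    calls of this shape behind a browser-friendly proxy so client-side
    scripts can use them despite same-origin restrictions."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return "{}/esearch.fcgi?{}".format(EUTILS_BASE, params)

url = esearch_url("pubmed", "apraxia of speech")
```

A server-side proxy would fetch this URL and relay the XML (or JSON) response to the browser.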

  8. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next edition codec AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.

  9. Monitoring food safety violation reports from internet forums.

    PubMed

    Kate, Kiran; Negi, Sumit; Kalagnanam, Jayant

    2014-01-01

Food-borne illness is a growing public health concern in the world. Government bodies, which regulate and monitor the state of food safety, solicit citizen feedback about the food hygiene practices followed by food establishments. They use traditional channels like call centers and e-mail for such feedback collection. With the growing popularity of Web 2.0 and social media, citizens often post such feedback on internet forums, message boards, etc. The system proposed in this paper applies text mining techniques to identify and mine food safety complaints posted by citizens on web data sources, thereby enabling government agencies to gather more information about the state of food safety. In this paper, we discuss the architecture of our system and the text mining methods used. We also present results which demonstrate the effectiveness of this system in a real-world deployment.
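The complaint-identification step can be caricatured as keyword spotting over forum posts. The deployed system used substantially richer text mining than this, and the keyword list below is invented purely for illustration:

```python
def is_food_safety_complaint(post, keywords=None):
    """Flag a forum post as a possible food safety complaint if it contains
    any phrase from a (hypothetical) watch list. A real system would use
    trained classifiers rather than a fixed keyword list."""
    if keywords is None:
        keywords = {"food poisoning", "stomach", "hygiene", "expired",
                    "undercooked", "sick after eating"}
    text = post.lower()
    return any(k in text for k in keywords)

post = "Felt sick after eating at the new cafe; I suspect undercooked chicken."
flag = is_food_safety_complaint(post)
```

Posts flagged this way would then be routed to the agency's review queue instead of relying solely on call-center reports.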

  10. TCGA4U: A Web-Based Genomic Analysis Platform To Explore And Mine TCGA Genomic Data For Translational Research.

    PubMed

    Huang, Zhenzhen; Duan, Huilong; Li, Haomin

    2015-01-01

Large-scale human cancer genomics projects, such as TCGA, have generated large genomic datasets for further study. Exploring and mining these data to obtain meaningful analysis results can help researchers find potential genomic alterations that intervene in the development and metastasis of tumors. We developed a web-based gene analysis platform, named TCGA4U, which uses statistical methods and models to help translational investigators explore, mine and visualize human cancer genomic characteristics from the TCGA datasets. Furthermore, through Gene Ontology (GO) annotation and clinical data integration, the genomic data were transformed into biological process, molecular function and cellular component annotations and survival curves to help researchers identify potential driver genes. Clinical researchers without expertise in data analysis will benefit from such a user-friendly genomic analysis platform.

  11. Web-Based Learning for Cultural Heritage: First Experienced with Students of the Private University of Technology in Northern Taiwan

    NASA Astrophysics Data System (ADS)

    Yen, Y.-N.; Wu, Y.-W.; Weng, K.-H.

    2013-07-01

E-learning assisted teaching and learning is the trend of the 21st century and has many advantages - freedom from the constraints of time and space, and hypertext and multimedia-rich resources - enhancing the interaction between students and the teaching materials. The purpose of this study is to explore how rich Internet resources assisted students with the Western Architectural History course. First, we explored the Internet resources which could assist teaching and learning activities. Second, according to the course objectives, we built a web-based platform which integrated Google spreadsheet forms, the SIMILE widget, Wikipedia and Google Maps, and applied it to the course of Western Architectural History. Finally, action research was applied to understanding the effectiveness of this teaching/learning mode. Participants were the students of the Department of Architecture in the Private University of Technology in northern Taiwan. Results showed that students were willing to use the web-based platform to assist their learning. They found the platform useful in understanding the relationship between buildings of different periods. Through the map view mode, the platform also helped students expand their international perspective. However, we found that the information shared by students via the Internet was not completely correct. One possible reason was that students could easily acquire information on the Internet but could not determine its correctness. To conclude, this study found some useful and rich resources that could be well integrated, from which we built a web-based platform to collect information and present it in diverse modes to stimulate students' learning motivation. We recommend that future studies consider hiring teaching assistants in order to ease the burden on teachers and to assist in maintaining information quality.

  12. Data Visualization of Lunar Orbiter KAGUYA (SELENE) using web-based GIS

    NASA Astrophysics Data System (ADS)

    Okumura, H.; Sobue, S.; Yamamoto, A.; Fujita, T.

    2008-12-01

The Japanese Lunar Orbiter KAGUYA (SELENE) was launched on Sep. 14, 2007, and started nominal observation on Dec. 21, 2007. KAGUYA has 15 ongoing observation missions and is obtaining various physical quantity data of the moon, such as elemental abundance, mineralogical composition, geological features, magnetic field and gravity field. We are working on the visualization of these data and their application to web-based GIS. Our purpose in data visualization is the promotion of science and of education and public outreach (EPO). For scientific usage and public outreach, we have already constructed the KAGUYA Web Map Server (WMS) at JAXA Sagamihara Campus and begun testing it within the KAGUYA project. The KAGUYA science team plans integrated science using the data of multiple instruments, with the aim of obtaining new findings on the origin and evolution of the moon. In the study of the integrated science, scientists have to access, compare and analyze various types of data with different resolutions. Web-based GIS will allow users to map, overlay and share the data and information easily, so it will be the best way to progress such a study, and we are developing the KAGUYA WMS as a platform for the KAGUYA integrated science. For the purpose of EPO, we are customizing NASA World Wind (NWW) JAVA for KAGUYA, supported by the NWW project. Users will be able to search and view many images and movies of KAGUYA on NWW JAVA in an easy and attractive way. In addition, we are considering applying KAGUYA images to Google Moon in KML format and adding KAGUYA movies to Google/YouTube.

  13. 78 FR 2398 - Motorola Mobility LLC and Google Inc.; Analysis of Proposed Consent Order to Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-11

    ... responsible for making sure that your comment does not include any sensitive health information, like medical records or other individually identifiable health information. In addition, do not include any ``[t]rade... overnight service. Visit the Commission Web site at http://www.ftc.gov to read this Notice and the news...

  14. Phraseology and Frequency of Occurrence on the Web: Native Speakers' Perceptions of Google-Informed Second Language Writing

    ERIC Educational Resources Information Center

    Geluso, Joe

    2013-01-01

    Usage-based theories of language learning suggest that native speakers of a language are acutely aware of formulaic language due in large part to frequency effects. Corpora and data-driven learning can offer useful insights into frequent patterns of naturally occurring language to second/foreign language learners who, unlike native speakers, are…

  15. Brief report: trends in US National autism awareness from 2004 to 2014: the impact of national autism awareness month.

    PubMed

    DeVilbiss, Elizabeth A; Lee, Brian K

    2014-12-01

    We sought to evaluate the potential for using historical web search data on autism spectrum disorders (ASD)-related topics as an indicator of ASD awareness. Analysis of Google Trends data suggested that National Autism Awareness Month and televised reports concerning autism are effective methods of promoting online search interest in autism.

  16. The Google Online Marketing Challenge: Real Clients, Real Money, Real Ads and Authentic Learning

    ERIC Educational Resources Information Center

    Miko, John S.

    2014-01-01

    Search marketing is the process of utilizing search engines to drive traffic to a Web site through both paid and unpaid efforts. One potential paid component of a search marketing strategy is the use of a pay-per-click (PPC) advertising campaign in which advertisers pay search engine hosts only when their advertisement is clicked. This paper…

  17. Brief Report: Consistency of Search Engine Rankings for Autism Websites

    ERIC Educational Resources Information Center

    Reichow, Brian; Naples, Adam; Steinhoff, Timothy; Halpern, Jason; Volkmar, Fred R.

    2012-01-01

    The World Wide Web is one of the most common methods used by parents to find information on autism spectrum disorders and most consumers find information through search engines such as Google or Bing. However, little is known about how the search engines operate or the consistency of the results that are returned over time. This study presents the…

  18. Googling NDIS: Evaluating the Quality of Online Information about the National Disability Insurance Scheme for Caregivers of Deaf Children

    ERIC Educational Resources Information Center

    Simpson, Andrea; Baldwin, Elizabeth Margaret

    2017-01-01

    This study sought to analyze and evaluate the accessibility, availability and quality of online information regarding the National Disability Insurance Scheme (NDIS) and hearing loss. The most common search engine keyword terms a caregiver may enter when conducting a web search were determined using a keyword search tool. The top websites linked…

  19. Education Students' Use of Collaborative Writing Tools in Collectively Reflective Essay Papers

    ERIC Educational Resources Information Center

    Brodahl, Cornelia; Hansen, Nils Kristian

    2014-01-01

    Google Docs and EtherPad are Web 2.0 tools providing opportunity for multiple users to work online on the same document consecutively or simultaneously. Over the last few years a number of research papers on the use of these collaborative tools in a teaching and learning environment have been published. This work builds on that of Brodahl,…

  20. The Effects of Online Homework on First Year Pre-Service Science Teachers' Learning Achievements of Introductory Organic Chemistry

    ERIC Educational Resources Information Center

    Ratniyom, Jadsada; Boonphadung, Suttipong; Unnanantn, Thassanant

    2016-01-01

    This study examined the effects of the introductory organic chemistry online homework on first year pre-service science teachers' learning achievements. The online homework was created using a web-based Google form in order to enhance the pre-service science teachers' learning achievements. The steps for constructing online homework were…

  1. Using Buttons to Better Manage Online Presence: How One Academic Institution Harnessed the Power of Flair

    ERIC Educational Resources Information Center

    Dority Baker, Marcia L.

    2013-01-01

    This article provides a case study of how the University of Nebraska College of Law and Schmid Law Library use "buttons" to manage Law College faculty members' and librarians' online presence. Since Google is the primary search engine used to find information, it is important that librarians and libraries assist Web site visitors in…

  2. E-Learning for Elementary Students: The Web 2.0 Tool Google Drive as Teaching and Learning Practice

    ERIC Educational Resources Information Center

    Apergi, Angeliki; Anagnostopoulou, Angeliki; Athanasiou, Alexandra

    2015-01-01

    It is a well-known fact that during recent years, the new economic and technological environment, which has emerged from the dynamic impacts of globalization, has given rise to the increased development of information and communication technologies that have immensely influenced education and training all over Europe. Within this framework, there…

  3. A systematic review of patient inflammatory bowel disease information resources on the World Wide Web.

    PubMed

    Bernard, André; Langille, Morgan; Hughes, Stephanie; Rose, Caren; Leddin, Desmond; Veldhuyzen van Zanten, Sander

    2007-09-01

    The Internet is a widely used information resource for patients with inflammatory bowel disease, but the quality of Web sites with patient information on Crohn's disease and ulcerative colitis varies. The purpose of the current study was to systematically evaluate the quality of these Web sites. The top 50 Web sites appearing in Google for the terms "Crohn's disease" or "ulcerative colitis" were included in the study. Web sites were evaluated using (a) a Quality Evaluation Instrument (QEI) that awarded Web sites points (0-107) for specific information on various aspects of inflammatory bowel disease, (b) a five-point Global Quality Score (GQS), (c) two reading grade level scores, and (d) a six-point integrity score. Thirty-four Web sites met the inclusion criteria; 16 were excluded because they were portals or not IBD-oriented. The median QEI score was 57, with five Web sites scoring higher than 75 points. The median GQS was 2.0, with five Web sites achieving scores of 4 or 5. The average reading grade level was 11.2. The median integrity score was 3.0. There is marked variation in the quality of Web sites containing information on Crohn's disease and ulcerative colitis. Many suffered from poor quality, but there were five high-scoring Web sites.

  4. Using the internet to understand angler behavior in the information age

    USGS Publications Warehouse

    Martin, Dustin R.; Pracheil, Brenda M.; DeBoer, Jason A.; Wilde, Gene R.; Pope, Kevin L.

    2012-01-01

    Declining participation in recreational angling is of great concern to fishery managers because fishing license sales are an important revenue source for protection of aquatic resources. This decline is frequently attributed, in part, to increased societal reliance on electronics. Internet use by anglers is increasing and fishery managers may use the Internet as a unique means to increase angler participation. We examined Internet search behavior using Google Insights for Search, a free online tool that summarizes Google searches from 2004 to 2011 to determine (1) trends in Internet search volume for general fishing related terms and (2) the relative usefulness of terms related to angler recruitment programs across the United States. Though search volume declined for general fishing terms (e.g., fishing, fishing guide), search volume increased for social media and recruitment terms (e.g., fishing forum, family fishing) over the 7-year period. We encourage coordinators of recruitment programs to capitalize on anglers’ Internet usage by considering Internet search patterns when creating web-based information. Careful selection of terms used in web-based information to match those currently searched by potential anglers may help to direct traffic to state agency websites that support recruitment efforts.

  5. Timely Reporting and Interactive Visualization of Animal Health and Slaughterhouse Surveillance Data in Switzerland.

    PubMed

    Muellner, Ulrich J; Vial, Flavie; Wohlfender, Franziska; Hadorn, Daniela; Reist, Martin; Muellner, Petra

    2015-01-01

    The reporting of outputs from health surveillance systems should be done in a near real-time and interactive manner in order to provide decision makers with powerful means to identify, assess, and manage health hazards as early and efficiently as possible. While this is currently rare in veterinary public health surveillance, reporting tools do exist for the visual exploration and interactive interrogation of health data. In this work, we used tools freely available from the Google Maps and Charts libraries to develop a web application reporting health-related data derived from slaughterhouse surveillance and from a newly established web-based equine surveillance system in Switzerland. Both sets of tools allowed entry-level usage with minimal or no programming skills, while being flexible enough to cater for more complex scenarios for users with greater programming skills. In particular, interfaces linking statistical software and Google tools provide additional analytical functionality (such as algorithms for the detection of unusually high case occurrences) for inclusion in the reporting process. We show that such approaches could improve the timely dissemination and communication of technical information to decision makers and other stakeholders and could foster the early-warning capacity of animal health surveillance systems.
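The abstract does not specify which detection algorithm is used; as a generic stand-in for the "unusually high case occurrence" checks such a reporting pipeline can pull in from statistical software, here is a minimal exceedance test (the weekly case counts are invented):

```python
from statistics import mean, stdev

def flag_exceedance(history, current, z=2.0):
    """Flag a current case count that exceeds the historical mean by more
    than z standard deviations. A deliberately simple stand-in for the
    aberration-detection algorithms mentioned in the abstract."""
    mu, sd = mean(history), stdev(history)
    return current > mu + z * sd

# Invented weekly case counts from a hypothetical slaughterhouse stream
weekly_cases = [3, 5, 4, 6, 2, 4, 5, 3]
print(flag_exceedance(weekly_cases, 14))  # a clearly unusual week
print(flag_exceedance(weekly_cases, 5))   # within the normal range
```

In a real system the flag would drive a highlighted marker or chart annotation rather than a printed boolean.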

  6. Web application and database modeling of traffic impact analysis using Google Maps

    NASA Astrophysics Data System (ADS)

    Yulianto, Budi; Setiono

    2017-06-01

    Traffic impact analysis (TIA) is a traffic study that aims to identify the impact of traffic generated by development or changes in land use. In addition to identifying the traffic impact, a TIA also includes mitigation measures to minimize the impact that arises. TIA has become increasingly important since it was defined in the act as one of the requirements in a Building Permit proposal. The act has encouraged a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in implementing the TIA standard document and multimodal modeling. This includes standardizing TIA technical guidelines, the database, and inspection by providing TIA checklists, monitoring, and evaluation. The research was undertaken by collecting historical data on junctions, modeling the data as a relational database, and building a user interface for CRUD (Create, Read, Update, and Delete) operations on the TIA data as a web application using the Google Maps libraries. The result of the research is a system that provides information to help improve and repair today's TIA documents, making them more transparent, reliable, and credible.

  7. Citation Analysis of Iranian Journal of Basic Medical Sciences in ISI Web of Knowledge, Scopus, and Google Scholar.

    PubMed

    Zarifmahmoudi, Leili; Kianifar, Hamid Reza; Sadeghi, Ramin

    2013-10-01

    Citation tracking is an important method for analyzing the scientific impact of journal articles and can be done through Scopus (SC), Google Scholar (GS), or ISI Web of Knowledge (WOS). In the current study, we analyzed the citations to 2011-2012 articles of the Iranian Journal of Basic Medical Sciences (IJBMS) in these three resources. The relevant data were gathered from the SC, GS, and WOS official websites. The total number of citations, their overlap, and the unique citations in each of the three resources were evaluated. WOS and SC covered 100% and GS covered 97% of the IJBMS items. In total, 37 articles were cited at least once in at least one of the studied resources. The total numbers of citations were 20, 30, and 59 in WOS, SC, and GS, respectively. Forty citations in GS, 6 citations in SC, and 2 citations in WOS were unique. Every scientific resource has its own inaccuracies in citation analysis information, so citation analysis studies are best repeated each year to correct any inaccuracies as soon as possible. IJBMS has gained considerable scientific attention from a wide range of high-impact journals, and through citation tracking this visibility can be traced more thoroughly.
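The overlap and unique-citation counts reported in studies like this reduce to simple set arithmetic. A toy sketch with made-up citing-article IDs (not the study's data):

```python
# Hypothetical sets of citing-article IDs found in each index
wos = {"a1", "a2", "a3"}
scopus = {"a2", "a3", "a4", "a5"}
gscholar = {"a2", "a3", "a4", "a6", "a7"}

# A citation is "unique" to an index if neither other index found it
unique_wos = wos - scopus - gscholar
unique_gs = gscholar - wos - scopus
# Citations all three indexes agree on
covered_by_all = wos & scopus & gscholar

print(len(unique_wos), len(unique_gs), sorted(covered_by_all))
```

The same subtraction/intersection pattern scales to any number of indexes.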

  8. Using GeoRSS feeds to distribute house renting and selling information based on Google map

    NASA Astrophysics Data System (ADS)

    Nong, Yu; Wang, Kun; Miao, Lei; Chen, Fei

    2007-06-01

    Geographically Encoded Objects RSS (GeoRSS) is a way to encode location in RSS feeds. RSS is a widely supported format for syndicating news and weblogs, and it can be extended to publish any sort of itemized data. As weblogs have exploded and RSS has become a new kind of portal, geo-tagged feeds are needed to show the locations the stories describe. GeoRSS adopts the core RSS framework, expressing map annotations in the RSS XML format. The case studied shows that GeoRSS can be maximally concise in representation and conception, so it is simple to generate GeoRSS feeds and then mash them up with Google Maps through the API, showing real estate information together with other attributes in the information window. After subscribing to feeds on subjects of interest, users can easily check for new bulletins shown on the map through syndication. The primary design goal of GeoRSS is to make spatial data creation as easy as regular Web content development. Thanks to its simplicity and effectiveness, however, it does more: it successfully bridges the gap between traditional GIS professionals and amateurs, Web map hackers, and the numerous services that enable location-based content.
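A GeoRSS-Simple item is just an ordinary RSS `<item>` carrying a `georss:point` element. A minimal sketch of generating one (the listing title, URL, and coordinates are invented for illustration):

```python
import xml.etree.ElementTree as ET

# GeoRSS-Simple encodes location as a single <georss:point> "lat lon"
# element inside an ordinary RSS <item>.
GEORSS_NS = "http://www.georss.org/georss"
ET.register_namespace("georss", GEORSS_NS)

def make_listing_item(title, link, lat, lon):
    """Build one RSS <item> carrying a GeoRSS-Simple point location."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link
    # GeoRSS-Simple order: latitude first, then longitude, space-separated
    ET.SubElement(item, f"{{{GEORSS_NS}}}point").text = f"{lat} {lon}"
    return item

# Hypothetical rental listing; coordinates are made up for the sketch
item = make_listing_item(
    "2-bedroom apartment for rent", "http://example.com/listing/42",
    32.0603, 118.7969,
)
xml_text = ET.tostring(item, encoding="unicode")
print(xml_text)
```

Items like this, wrapped in a normal RSS `<channel>`, are what a map client can subscribe to and plot as markers.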

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargupta, H.; Stafford, B.; Hamzaoglu, I.

    This paper describes an experimental parallel/distributed data mining system, PADMA (PArallel Data Mining Agents), that uses software agents for local data access and analysis and a web-based interface for interactive data visualization. It also presents the results of applying PADMA to detect patterns in unstructured texts of postmortem reports and laboratory test data for hepatitis C patients.

  10. Data warehousing as a basis for web-based documentation of data mining and analysis.

    PubMed

    Karlsson, J; Eklund, P; Hallgren, C G; Sjödin, J G

    1999-01-01

    In this paper we present a case study for data warehousing intended to support data mining and analysis. We also describe a prototype for data retrieval. Further we discuss some technical issues related to a particular choice of a patient record environment.

  11. On-Board Mining in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. These plans provide capabilities for autonomous data mining, classification and feature extraction using both streaming and buffered data sources. A ground-based testbed provides a heterogeneous, embedded hardware and software environment representing both space-based and ground-based sensor platforms, including wireless sensor mesh architectures. The AODP project explores the EVE concepts in the world of sensor-networks, including ad-hoc networks of small sensor platforms.
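The single-pass, fixed-memory constraint described above is the defining property of streaming algorithms. The sketch below is a generic illustration (Welford's online mean/variance), not code from the EVE or AODP projects:

```python
class StreamingStats:
    """Single-pass running mean and variance (Welford's algorithm).

    Uses O(1) memory regardless of stream length: the kind of constraint
    on-board mining must satisfy, since the sensor stream cannot be
    stored or revisited for a second pass.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

# Invented sensor readings; statistics update as each value arrives,
# without buffering the stream
stats = StreamingStats()
for reading in [10.1, 9.9, 10.0, 10.2, 9.8, 25.0]:
    stats.update(reading)
    print(f"n={stats.n} mean={stats.mean:.2f} var={stats.variance():.2f}")
```

A running mean/variance like this is a typical building block for on-board anomaly flags (e.g., readings far outside the running spread).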

  12. Designing and Managing Your Digital Library.

    ERIC Educational Resources Information Center

    Guenther, Kim

    2000-01-01

    Discusses digital libraries and Web site design issues. Highlights include accessibility issues, including standards, markup languages like HTML and XML, and metadata; building virtual communities; the use of Web portals for customized delivery of information; quality assurance tools, including data mining; and determining user needs, including…

  13. A Dynamic Recommender System for Improved Web Usage Mining and CRM Using Swarm Intelligence.

    PubMed

    Alphy, Anna; Prabakaran, S

    2015-01-01

    In modern days, to enrich e-business, websites are personalized for each user by understanding the user's interests and behavior. The main challenges of online usage data are information overload and its dynamic nature. In this paper, to address these issues, we propose WebBluegillRecom-annealing, a dynamic recommender system that uses web usage mining techniques in tandem with software agents to provide dynamic recommendations that can be used for customizing a website. The proposed WebBluegillRecom-annealing dynamic recommender uses swarm intelligence derived from the foraging behavior of the bluegill fish. It overcomes information overload by handling the dynamic behaviors of users. Our dynamic recommender system was compared against traditional collaborative filtering systems. The results show that the proposed system has higher precision, coverage, F1 measure, and scalability than the traditional collaborative filtering systems. Moreover, the recommendations given by our system overcome the overspecialization problem by including variety in recommendations.
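The precision and F1 comparisons mentioned above follow the standard definitions for recommender evaluation; a small sketch with invented item IDs:

```python
def precision_recall_f1(recommended, relevant):
    """Standard metrics for comparing recommenders: precision is the
    fraction of recommended items that were relevant, recall the fraction
    of relevant items that were recommended, and F1 their harmonic mean.
    Item IDs here are purely illustrative."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)
    precision = hits / len(rec)
    recall = hits / len(rel)
    f1 = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1(["p1", "p2", "p3", "p4"], ["p2", "p4", "p5"])
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

The same computation, averaged over many users, is how competing recommenders are typically ranked.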

  15. Evaluating IPv6 Adoption in the Internet

    NASA Astrophysics Data System (ADS)

    Colitti, Lorenzo; Gunderson, Steinar H.; Kline, Erik; Refice, Tiziana

    As IPv4 address space approaches exhaustion, large networks are deploying IPv6 or preparing for deployment. However, there is little data available about the quantity and quality of IPv6 connectivity. We describe a methodology to measure IPv6 adoption from the perspective of a Web site operator and to evaluate the impact that adding IPv6 to a Web site will have on its users. We apply our methodology to the Google Web site and present results collected over the last year. Our data show that IPv6 adoption, while growing significantly, is still low, varies considerably by country, and is heavily influenced by a small number of large deployments. We find that native IPv6 latency is comparable to IPv4 and provide statistics on IPv6 transition mechanisms used.

  16. The protein information and property explorer: an easy-to-use, rich-client web application for the management and functional analysis of proteomic data

    PubMed Central

    Ramos, H.; Shannon, P.; Aebersold, R.

    2008-01-01

    Motivation: Mass spectrometry experiments in the field of proteomics produce lists containing tens to thousands of identified proteins. With the protein information and property explorer (PIPE), the biologist can acquire functional annotations for these proteins and explore the enrichment of the list, or fraction thereof, with respect to functional classes. These protein lists may be saved for access at a later time or different location. The PIPE is interoperable with the Firegoose and the Gaggle, permitting wide-ranging data exploration and analysis. The PIPE is a rich-client web application which uses AJAX capabilities provided by the Google Web Toolkit, and server-side data storage using Hibernate. Availability: http://pipe.systemsbiology.net Contact: pshannon@systemsbiology.org PMID:18635572

  17. HippDB: a database of readily targeted helical protein-protein interactions.

    PubMed

    Bergey, Christina M; Watkins, Andrew M; Arora, Paramjit S

    2013-11-01

    HippDB catalogs every protein-protein interaction whose structure is available in the Protein Data Bank and which exhibits one or more helices at the interface. The Web site accepts queries on variables such as helix length and sequence, and it provides computational alanine scanning and change in solvent-accessible surface area values for every interfacial residue. HippDB is intended to serve as a starting point for structure-based small molecule and peptidomimetic drug development. HippDB is freely available on the web at http://www.nyu.edu/projects/arora/hippdb. The Web site is implemented in PHP, MySQL and Apache. Source code freely available for download at http://code.google.com/p/helidb, implemented in Perl and supported on Linux. arora@nyu.edu.

  18. Innovative Instructional Tools from the AMS

    NASA Astrophysics Data System (ADS)

    Abshire, W. E.; Geer, I. W.; Mills, E. W.; Nugnes, K. A.; Stimach, A. E.

    2016-12-01

    Since 1996, the American Meteorological Society (AMS) has been developing online educational materials with dynamic features that engage students and encourage additional exploration of various concepts. Most recently, AMS transitioned its etextbooks to webBooks. Now accessible anywhere with internet access, webBooks can be read with any web browser. Prior versions of AMS etextbooks were difficult to use in a lab setting; webBooks, however, are much easier to use and no longer a hurdle to learning. Additionally, AMS eInvestigations Manuals, also in webBook format, include labs with innovative features and educational tools. One such example is the AMS Climate at a Glance (CAG) app, which draws data from NOAA's Climate at a Glance website. The user selects historical data for a given parameter and the app calculates various statistics, revealing whether or not the results are consistent with climate change. These results allow users to distinguish between climate variability and climate change. This can be done for hundreds of locations across the U.S. and on multiple time scales. Another innovative educational tool used in AMS eInvestigations Manuals is the AMS Conceptual Climate Energy Model (CCEM). The CCEM is a computer simulation designed to enable users to track the paths that units of energy might follow as they enter, move through, and exit an imaginary system according to simple rules applied to different scenarios. The purpose is to provide insight into the impacts of physical processes that operate in the real world. Finally, AMS educational materials take advantage of Google Earth imagery to reproduce the physical aspects of globes, allowing users to investigate spatial relationships in three dimensions. Google Earth imagery is used to explore tides, ocean bottom bathymetry, and El Niño and La Niña. AMS will continue to develop innovative educational materials and tools as technology advances, to attract more students to the Earth sciences.

  19. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database: one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using open source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map; for example, only ASTER scenes or glacier outlines from 2002, or from autumn of any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language), and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g., a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
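The WMS interface described above answers standard OGC GetMap requests, which any compliant client can assemble as a URL. A sketch with a hypothetical endpoint and layer name (the real GLIMS service parameters may differ):

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=512, height=512):
    """Assemble an OGC WMS 1.1.1 GetMap request URL.

    The query parameter names (SERVICE, REQUEST, LAYERS, BBOX, ...) are
    fixed by the OGC WMS specification; any WMS client or web page can
    fetch map layers this way.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Endpoint and layer name are hypothetical, for illustration only
url = wms_getmap_url("http://glims.example.org/wms", "glacier_outlines",
                     (-180, -90, 180, 90))
print(url)
```

Fetching that URL from a WMS server would return a rendered PNG of the requested layer for the given bounding box.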

  20. Assessing and Minimizing Adversarial Risk in a Nuclear Material Transportation Network

    DTIC Science & Technology

    2013-09-01

    Master's Thesis, dated 09-27-2013. Only report-documentation-page fragments and figure captions survive in this excerpt, including "Google Earth routing from Areva to Arkansas Nuclear…" and the opening nuclear fuel cycle steps: 1. Uranium ore is mined or removed from the earth in a leaching process. 2. Conversion: triuranium octoxide (U3O8, "yellowcake") is converted into ura…
