A Map Mash-Up Application: Investigating the Temporal Effects of Climate Change on the Salt Lake Basin
NASA Astrophysics Data System (ADS)
Kirtiloglu, O. S.; Orhan, O.; Ekercin, S.
2016-06-01
The main purpose of this paper is to investigate the climate change effects that have occurred at the beginning of the twenty-first century in the Konya Closed Basin (KCB), located in the semi-arid central Anatolian region of Turkey, and particularly in the Salt Lake region, where many of the major wetlands of the KCB are situated, and to share the analysis results online in a Web Geographical Information System (GIS) environment. 71 Landsat 5-TM, 7-ETM+ and 8-OLI images and meteorological data obtained from 10 meteorological stations have been used within the scope of this work. 56 of the Landsat images have been used to extract the Salt Lake surface area from multi-temporal Landsat imagery collected from 2000 to 2014 in the Salt Lake basin. 15 of the Landsat images have been used to produce thematic maps of the Normalised Difference Vegetation Index (NDVI) in the KCB, and data from the 10 meteorological stations have been used to generate the Standardized Precipitation Index (SPI), which is widely used in drought studies. For visualizing and sharing the results, a Web GIS-like environment has been established using Google Maps and its data storage and manipulation product Fusion Tables, both free-of-charge Google Web services. The infrastructure of the web application includes HTML5, CSS3, JavaScript, Google Maps API V3 and Google Fusion Tables API technologies. These technologies make it possible to build effective "Map Mash-Ups" by embedding a Google Map in a Web page, storing the spatial or tabular data in Fusion Tables, and adding these data as a map layer on the embedded map. The analysis process and the map mash-up application are discussed in detail in the main sections of this paper.
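For orientation only (not code from the paper above), a minimal Python sketch of the NDVI calculation that the thematic maps rely on; the Landsat band pairing and the synthetic reflectance values are assumptions for illustration.

```python
# Minimal NDVI sketch (not from the paper): NDVI = (NIR - Red) / (NIR + Red).
# Band pairing is an assumption: red = band 3, NIR = band 4 for Landsat 5/7;
# red = band 4, NIR = band 5 for Landsat 8-OLI.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Return NDVI in [-1, 1], leaving NaN where both bands are zero."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    out = np.full(red.shape, np.nan)
    valid = denom != 0
    out[valid] = (nir[valid] - red[valid]) / denom[valid]
    return out

# Example with synthetic reflectance values:
red_band = np.array([[0.10, 0.20], [0.05, 0.30]])
nir_band = np.array([[0.40, 0.25], [0.50, 0.30]])
print(ndvi(red_band, nir_band))
```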
Being There is Only the Beginning: Toward More Effective Web 2.0 Use in Academic Libraries
2010-01-02
Bachrach, Hanna C. (Pratt Institute)
ERIC Educational Resources Information Center
Williams, Lesley
2006-01-01
In a survey of a representative sample of over 3,300 online information consumers and their information-seeking behavior, findings indicate that 84 percent of information searches begin with a search engine. Library web sites were selected by just one percent of respondents as the source used to begin an information search, and 72 percent had…
ERIC Educational Resources Information Center
Wang, Kening; Mulvenon, Sean W.; Stegman, Charles; Anderson, Travis
2008-01-01
Google Maps API (Application Programming Interface), released in late June 2005 by Google, is an amazing technology that allows users to embed Google Maps in their own Web pages with JavaScript. Google Maps API has accelerated the development of new Google Maps based applications. This article reports a Web-based interactive mapping system…
Croatian Medical Journal citation score in Web of Science, Scopus, and Google Scholar.
Sember, Marijan; Utrobicić, Ana; Petrak, Jelka
2010-04-01
To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006 based on data from the Web of Science, Scopus, and Google Scholar. Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlap and unique citations were compared and analyzed. Proportions were compared using the chi-square test. Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%) and Web of Science (38%). Almost half (49%) of the articles had no citations, and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found by Google Scholar (n=86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Google Scholar shares a total of 42% of the citations returned by the two other, more influential, bibliographic resources. The list of unique citations in Google Scholar is predominantly journal based, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals, but Google Scholar may serve as an alternative bibliometric tool for an approximate citation overview.
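As a hedged illustration of the kind of overlap statistic quoted above (the abstract does not give the authors' exact computation), the pairwise overlap of citing-document sets between two databases can be expressed as a share of their union; the document identifiers below are invented.

```python
# Sketch only (an assumption, not the authors' code): pairwise overlap of citing
# documents between two databases, as a percentage of their combined set.
def overlap_pct(citations_a: set, citations_b: set) -> float:
    """Share of the combined citing documents found in both databases."""
    union = citations_a | citations_b
    if not union:
        return 0.0
    return 100.0 * len(citations_a & citations_b) / len(union)

wos    = {"doc1", "doc2", "doc3", "doc5"}   # hypothetical citing documents
scopus = {"doc2", "doc3", "doc4", "doc5"}
print(f"WoS/Scopus overlap: {overlap_pct(wos, scopus):.0f}%")  # 60% for this toy data
```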
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
Goto, Yasushi; Sekine, Ikuo; Sekiguchi, Hiroshi; Yamada, Kazuhiko; Nokihara, Hiroshi; Yamamoto, Noboru; Kunitoh, Hideo; Ohe, Yuichiro; Tamura, Tomohide
2009-07-01
Quality of information available over the Internet has been a cause for concern. Our goal was to evaluate the quality of information available on lung cancer in the United States and Japan and assess the differences between the two. We conducted a prospective, observational Web review by searching the word "lung cancer" in Japanese and English, using Google Japan (Google-J), Google United States (Google-U), and Yahoo Japan (Yahoo-J). The first 50 Web sites displayed were evaluated from the ethical perspective and for the validity of the information. The administrator of each Web site was also investigated. Ethical policies were generally well described in the Web sites displayed by Google-U but less so in the sites displayed by Google-J and Yahoo-J. The differences in the validity of the information available were more striking, in that 80% of the Web sites generated by Google-U described the most appropriate treatment methods, whereas less than 50% of the Web sites displayed by Google-J and Yahoo-J recommended the standard therapy, and more than 10% advertised alternative therapy. Nonprofit organizations and public institutions were the primary Web site administrators in the United States, whereas commercial or personal Web sites were more frequent in Japan. Differences in the quality of information on lung cancer available over the Internet were apparent between Japan and the United States. The reasons for such differences might be traced to the administrators of the Web sites. Nonprofit organizations and public institutions are the up-and-coming Web site administrators for relaying reliable medical information.
Google Scholar Goes to School: The Presence of Google Scholar on College and University Web Sites
ERIC Educational Resources Information Center
Neuhaus, Chris; Neuhaus, Ellen; Asher, Alan
2008-01-01
This study measured the degree of Google Scholar adoption within academia by analyzing the frequency of Google Scholar appearances on 948 campus and library Web sites, and by ascertaining the establishment of link resolution between Google Scholar and library resources. Results indicate a positive correlation between the implementation of Google…
ERIC Educational Resources Information Center
Dysart, Joe
2008-01-01
Given Google's growing market share--69% of all searches by the close of 2007--it's absolutely critical for any school on the Web to ensure its site is Google-friendly. A Google-optimized site ensures that students and parents can quickly find one's district on the Web even if they don't know the address. Plus, good search optimization simply…
Flipping the Online Classroom with Web 2.0: The Asynchronous Workshop
ERIC Educational Resources Information Center
Cummings, Lance
2016-01-01
This article examines how Web 2.0 technologies can be used to "flip" the online classroom by creating asynchronous workshops in social environments where immediacy and social presence can be maximized. Using experience teaching several communication and writing classes in Google Apps (Google+, Google Hangouts, Google Drive, etc.), I…
Huh, Sun
2013-04-01
The Korean Journal of Urology began to be published exclusively in English in 2010 and is indexed in PubMed Central/PubMed. This study analyzed a variety of citation indicators of the Korean Journal of Urology before and after 2010 to clarify the present position of the journal among the urology category journals. The impact factor, SCImago Journal Rank (SJR), impact index, Z-impact factor (ZIF, impact factor excluding self-citation), and Hirsch Index (H-index) were referenced or calculated from Web of Science, Scopus, SCImago Journal & Country Ranking, Korean Medical Citation Index (KoMCI), KoreaMed Synapse, and Google Scholar. Both the impact factor and the total citations rose rapidly beginning in 2011. The 2012 impact factor corresponded to the upper 84.9% in the nephrology-urology category, whereas the 2011 SJR was in the upper 58.5%. The ZIF in KoMCI was one fifth of the impact factor because there are only two other urology journals in KoMCI. Up to 2009, more than half of the citations in the Web of Science were from Korean researchers, but from 2010 to 2012, more than 85% of the citations were from international researchers. The H-indexes from Web of Science, Scopus, KoMCI, KoreaMed Synapse, and Google Scholar were 8, 10, 12, 9, and 18, respectively. The strategy of the language change in 2010 was successful from the perspective of citation indicators. The values of the citation indicators will continue to increase rapidly and consistently as the research achievement of authors of the Korean Journal of Urology increases.
Taking advantage of Google's Web-based applications and services.
Brigham, Tara J
2014-01-01
Google is a company that is constantly expanding and growing its services and products. While most librarians have a "love/hate" relationship with Google, there are a number of reasons to consider exploring some of the tools Google has created and made freely available. Applications and services such as Google Docs, Slides, and Google+ are functional and dynamic without the cost of comparable products. This column addresses some of the issues users should be aware of before signing up to use Google's tools, describes some of Google's Web applications and services, and explains how they can be useful to librarians in health care.
Beryllium Toxicity Patient Education Care Instruction Sheet
Side by Side: What a Comparative Usability Study Told Us about a Web Site Redesign
ERIC Educational Resources Information Center
Dougan, Kirstin; Fulton, Camilla
2009-01-01
Library Web sites must compete against easy-to-use sites, such as Google Scholar, Google Books, and Wikipedia, for students' time and attention. Library Web sites must therefore be designed with aesthetics and user perceptions at the forefront. The Music and Performing Arts Library at Urbana-Champaign's Web site was overcrowded and in much need of…
Total Petroleum Hydrocarbons (TPH): ToxFAQs
Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.
Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory
2016-06-13
Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
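A hedged sketch of the kind of header validation described above; Keemei itself is a Google Sheets Add-on rather than Python, and the required-column list below follows the classic QIIME 1 mapping-file convention, not necessarily Keemei's actual rule set.

```python
# Illustrative only: a minimal header check in the spirit of the validation Keemei
# performs (Keemei runs inside Google Sheets; this Python sketch is an assumption).
import csv

REQUIRED_FIRST = "#SampleID"
REQUIRED_COLUMNS = {"BarcodeSequence", "LinkerPrimerSequence", "Description"}

def validate_mapping_header(path: str) -> list[str]:
    """Return a list of human-readable problems found in the header row."""
    problems = []
    with open(path, newline="") as f:
        header = next(csv.reader(f, delimiter="\t"))
    if not header or header[0] != REQUIRED_FIRST:
        problems.append(f"first column must be {REQUIRED_FIRST!r}")
    missing = REQUIRED_COLUMNS - set(header)
    if missing:
        problems.append(f"missing required columns: {sorted(missing)}")
    return problems

# Example (hypothetical file name):
# print(validate_mapping_header("sample_metadata.tsv"))
```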
ToxGuides: Quick Reference Pocket Guide for Toxicological Profiles
Kulkarni, Abhaya V; Aziz, Brittany; Shams, Iffat; Busse, Jason W
2009-09-09
Until recently, Web of Science was the only database available to track citation counts for published articles. Other databases are now available, but their relative performance has not been established. To compare the citation count profiles of articles published in general medical journals among the citation databases of Web of Science, Scopus, and Google Scholar. Cohort study of 328 articles published in JAMA, Lancet, or the New England Journal of Medicine between October 1, 1999, and March 31, 2000. Total citation counts for each article up to June 2008 were retrieved from Web of Science, Scopus, and Google Scholar. Article characteristics were analyzed in linear regression models to determine interaction with the databases. Number of citations received by an article since publication and article characteristics associated with citation in databases. Google Scholar and Scopus retrieved more citations per article with a median of 160 (interquartile range [IQR], 83 to 324) and 149 (IQR, 78 to 289), respectively, than Web of Science (median, 122; IQR, 66 to 241) (P < .001 for both comparisons). Compared with Web of Science, Scopus retrieved more citations from non-English-language sources (median, 10.2% vs 4.1%) and reviews (30.8% vs 18.2%), and fewer citations from articles (57.2% vs 70.5%), editorials (2.1% vs 5.9%), and letters (0.8% vs 2.6%) (all P < .001). On a log(10)-transformed scale, fewer citations were found in Google Scholar to articles with declared industry funding (nonstandardized regression coefficient, -0.09; 95% confidence interval [CI], -0.15 to -0.03), reporting a study of a drug or medical device (-0.05; 95% CI, -0.11 to 0.01), or with group authorship (-0.29; 95% CI, -0.35 to -0.23). In multivariable analysis, group authorship was the only characteristic that differed among the databases; Google Scholar had significantly fewer citations to group-authored articles (-0.30; 95% CI, -0.36 to -0.23) compared with Web of Science. Web of Science, Scopus, and Google Scholar produced quantitatively and qualitatively different citation counts for articles published in 3 general medical journals.
Going beyond Google for Faster and Smarter Web Searching
ERIC Educational Resources Information Center
Vine, Rita
2004-01-01
With more than 4 billion web pages in its database, Google is suitable for many different kinds of searches. When you know what you are looking for, Google can be a pretty good first choice, as long as you want to search a word pattern that can be expected to appear on any results pages. The problem starts when you don't know exactly what you're…
The quality of patient-orientated Internet information on oral lichen planus: a pilot study.
López-Jornet, Pía; Camacho-Alonso, Fabio
2010-10-01
This study examines the accessibility and quality of Web pages related to oral lichen planus. Sites were identified using two search engines (Google and Yahoo!) and the search terms 'oral lichen planus' and 'oral lesion lichenoid'. The first 100 sites in each search were visited and classified. The web sites were evaluated for content quality using the validated DISCERN rating instrument, the JAMA benchmarks, and the 'Health on the Net' (HON) seal. A total of 109,000 sites were recorded in Google using the search terms and 520,000 in Yahoo! A total of 19 Web pages considered relevant were examined on Google and 20 on Yahoo! As regards the JAMA benchmarks, only two pages satisfied the four criteria in Google (10%), and only three (15%) in Yahoo! As regards DISCERN, the overall quality of web site information was poor, with no site reaching the maximum score. In Google, 78.94% of sites had important deficiencies, compared with 50% in Yahoo!, the difference between the two search engines being statistically significant (P = 0.031). Only five pages (17.2%) on Google and eight (40%) on Yahoo! displayed the HON code. Based on our review, doctors must assume primary responsibility for educating and counselling their patients. © 2010 Blackwell Publishing Ltd.
Minasny, Budiman; Hartemink, Alfred E; McBratney, Alex; Jang, Ho-Jun
2013-01-01
Citation metrics and h indices differ using different bibliometric databases. We compiled the number of publications, number of citations, h index and year since the first publication from 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher and the number of citations is 1.9 times higher compared to the data in the Web of Science. Scopus metrics are slightly higher than that of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than Web of Science, and the h index in Scopus is on average 1.1 times larger than Web of Science. Over time, the metrics increase in all three databases but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his/her first publication. There is a large difference between the number of citations, number of publications and the h index using the three databases. From this analysis it can be concluded that the choice of the database affects widely-used citation and evaluation metrics but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between journal's impact factor and Google Scholar's h5-index. The h5-index is a better measure of a journal's citation than the 2 or 5 year window impact factor.
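For reference, the h index compared across databases above is the largest h such that at least h of a researcher's papers have at least h citations each; a minimal sketch with toy citation counts follows.

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# A researcher whose papers are cited 10, 8, 5, 4, 3, and 0 times has h = 4.
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```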
Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.
Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios
2008-02-01
The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.
PhyloGeoViz: a web-based program that visualizes genetic data on maps.
Tsai, Yi-Hsin E
2011-05-01
The first step of many population genetic studies is the simple visualization of allele frequencies on a landscape. This basic data exploration can be challenging without proprietary software, and the manual plotting of data is cumbersome and unfeasible at large sample sizes. I present an open source, web-based program that plots any kind of frequency or count data as pie charts in Google Maps (Google Inc., Mountain View, CA). Pie polygons are then exportable to Google Earth (Google Inc.), a free Geographic Information Systems platform. Import of genetic data into Google Earth allows phylogeographers access to a wealth of spatial information layers integral to forming hypotheses and understanding patterns in the data. © 2010 Blackwell Publishing Ltd.
The Adversarial Route Analysis Tool: A Web Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casson, William H. Jr.
2012-08-02
The Adversarial Route Analysis Tool is, in effect, a Google Maps for adversaries: a web-based geospatial application similar to Google Maps. It helps the U.S. government plan operations by predicting where an adversary might be. It is easily accessible and maintainable, and it is simple to use without much training.
The Privilege of Ranking: Google Plays Ball.
ERIC Educational Resources Information Center
Wiggins, Richard
2003-01-01
Discussion of ranking systems used in various settings, including college football and academic admissions, focuses on the Google search engine. Explains the PageRank mathematical formula that scores Web pages by connecting the number of links; limitations, including authenticity and accuracy of ranked Web pages; relevancy; adjusting algorithms;…
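As background to the PageRank discussion above, a minimal power-iteration sketch of the published PageRank formula; the link graph is a toy example and this is not Google's production ranking system.

```python
# Minimal PageRank sketch (the published formula, not Google's production system).
# PR(p) = (1 - d)/N + d * sum(PR(q)/outdegree(q) for pages q linking to p)
def pagerank(links: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}
        for src, targets in links.items():
            if targets:
                share = pr[src] / len(targets)
                for t in targets:
                    new[t] += d * share
            else:  # dangling page: spread its rank uniformly
                for p in pages:
                    new[p] += d * pr[src] / n
        pr = new
    return pr

# Toy three-page web: A links to B and C, B links to C, C links back to A.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```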
Through the Google Goggles: Sociopolitical Bias in Search Engine Design
NASA Astrophysics Data System (ADS)
Diaz, A.
Search engines like Google are essential to navigating the Web's endless supply of news, political information, and citizen discourse. The mechanisms and conditions under which search results are selected should therefore be of considerable interest to media scholars, political theorists, and citizens alike. In this chapter, I adopt a "deliberative" ideal for search engines and examine whether Google exhibits the "same old" media biases of mainstreaming, hypercommercialism, and industry consolidation. In the end, serious objections to Google are raised: Google may favor popularity over richness; it provides advertising that competes directly with "editorial" content; it so overwhelmingly dominates the industry that users seldom get a second opinion, and this is unlikely to change. Ultimately, however, the results of this analysis may speak less about Google than about contradictions in the deliberative ideal and the so-called "inherently democratic" nature of the Web.
Finding research information on the web: how to make the most of Google and other free search tools.
Blakeman, Karen
2013-01-01
The Internet and the World Wide Web have had a major impact on the accessibility of research information. The move towards open access and the development of institutional repositories have resulted in increasing amounts of information being made available free of charge. Many of these resources are not included in conventional subscription databases, and Google is not always the best way to ensure that one is picking up all relevant material on a topic. This article looks at how Google's search engine works, how to use Google more effectively for identifying research information, and alternatives to Google, and it reviews some of the specialist tools that have evolved to cope with the diverse forms of information that now exist in electronic form.
Supporting our scientists with Google Earth-based UIs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Janine
2010-10-01
Google Earth and Google Maps are incredibly useful for researchers looking for easily-digestible displays of data. This presentation will provide a step-by-step tutorial on how to begin using Google Earth to create tools that further the mission of the DOE national lab complex.
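As a hedged illustration of the kind of Google Earth-based tooling described above (not the presenter's code), a minimal sketch that writes a single KML placemark Google Earth can open; the site name, coordinates, and reading are invented.

```python
# Illustrative sketch only: emit a minimal KML placemark for Google Earth.
# The site name, coordinates, and sensor value are made up for the example.
def kml_placemark(name: str, lon: float, lat: float, description: str = "") -> str:
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description>{description}</description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

with open("site.kml", "w") as f:
    f.write(kml_placemark("Sample site", -106.32, 35.86, "Hypothetical sensor reading: 42"))
```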
Confessions of a Librarian or: How I Learned to Stop Worrying and Love Google
ERIC Educational Resources Information Center
Gunnels, Claire B.; Sisson, Amy
2009-01-01
Have you ever stopped to think about life before Google? We will make the argument that Google is the first manifestation of Web 2.0, of the power and promise of social networking and the ubiquitous wiki. We will discuss the positive influence of Google and how Google and other social networking tools afford librarians leading-edge technologies…
MaRGEE: Move and Rotate Google Earth Elements
NASA Astrophysics Data System (ADS)
Dordevic, Mladen M.; Whitmeyer, Steven J.
2015-12-01
Google Earth is recognized as a highly effective visualization tool for geospatial information. However, there remain serious limitations that have hindered its acceptance as a tool for research and education in the geosciences. One significant limitation is the inability to translate or rotate geometrical elements on the Google Earth virtual globe. Here we present a new JavaScript web application to "Move and Rotate Google Earth Elements" (MaRGEE). MaRGEE includes tools to simplify, translate, and rotate elements, add intermediate steps to a transposition, and batch process multiple transpositions. The transposition algorithm uses spherical geometry calculations, such as the haversine formula, to accurately reposition groups of points, paths, and polygons on the Google Earth globe without distortion. Due to the imminent deprecation of the Google Earth API and browser plugin, MaRGEE uses a Google Maps interface to facilitate and illustrate the transpositions. However, the inherent spatial distortions that result from the Google Maps Web Mercator projection are not apparent once the transposed elements are saved as a KML file and opened in Google Earth. Potential applications of the MaRGEE toolkit include tectonic reconstructions, the movements of glaciers or thrust sheets, and time-based animations of other large- and small-scale geologic processes.
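The transposition algorithm above cites the haversine formula; for reference, a standard haversine great-circle distance in Python (MaRGEE itself is a JavaScript web application, and the coordinates below are illustrative).

```python
# Standard haversine great-circle distance, the spherical-geometry formula the
# MaRGEE transpositions rely on; a Python sketch, not the tool's JavaScript code.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float,
                 earth_radius_km: float = 6371.0) -> float:
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Distance between two illustrative points on the globe:
print(round(haversine_km(38.43, -78.87, 44.65, -110.50), 1))
```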
Google Analytics: Single Page Traffic Reports
These are pages that live outside of Google Analytics (GA) but allow you to view GA data for any individual page on either the public EPA web or EPA intranet. You do need to log in to Google Analytics to view them.
100 Colleges Sign Up with Google to Speed Access to Library Resources
ERIC Educational Resources Information Center
Young, Jeffrey R.
2005-01-01
More than 100 colleges and universities have arranged to give people using the Google Scholar search engine on their campuses more-direct access to library materials. Google Scholar is a free tool that searches scholarly materials on the Web and in academic databases. The new arrangements essentially let Google know which online databases the…
Googling DNA sequences on the World Wide Web.
Hajibabaei, Mehrdad; Singer, Gregory A C
2009-11-10
New web-based technologies provide an excellent opportunity for sharing and accessing information and for using the web as a platform for interaction and collaboration. Although several specialized tools are available for analyzing DNA sequence information, conventional web-based tools have not been utilized for bioinformatics applications. We have developed a novel algorithm, and implemented it, for searching species-specific genomic sequences (DNA barcodes) using popular web-based methods such as Google. We developed an alignment-independent, character-based algorithm based on dividing a sequence library (DNA barcodes) and a query sequence into words. The actual search is conducted by conventional search tools such as the freely available Google Desktop Search. We implemented our algorithm in two exemplar packages. We developed pre- and post-processing software to provide customized input and output services, respectively. Our analysis of all publicly available DNA barcode sequences shows high accuracy as well as rapid results. Our method makes use of conventional web-based technologies for specialized genetic data. It provides a robust and efficient solution for sequence search on the web. The integration of our search method for large-scale sequence libraries such as DNA barcodes provides an excellent web-based tool for accessing this information and linking it to other available categories of information on the web.
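A hedged sketch of the word-splitting idea described above; the word size and scoring rule are assumptions, and in the published method the actual indexing and search are performed by a conventional engine such as Google Desktop Search rather than by this code.

```python
# Sketch of the word-splitting idea above (parameters are assumptions; the real
# search is delegated to a conventional engine, not performed by this Python code).
def to_words(seq: str, word_size: int = 10) -> list[str]:
    """Split a DNA sequence into non-overlapping fixed-length words."""
    seq = seq.upper().replace("\n", "")
    return [seq[i:i + word_size] for i in range(0, len(seq) - word_size + 1, word_size)]

def shared_word_score(query: str, reference: str, word_size: int = 10) -> float:
    """Fraction of query words that also occur in the reference barcode."""
    q = to_words(query, word_size)
    r = set(to_words(reference, word_size))
    return sum(w in r for w in q) / len(q) if q else 0.0

barcode = "ACTGGTACCTTAGGCATCGATCGGATCCAATGCTAGCTAGGATCCGTAGG"
query   = "ACTGGTACCTTAGGCATCGATCGGATCCAATG"
print(shared_word_score(query, barcode))  # 1.0 for this toy pair
```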
Utility of Web search query data in testing theoretical assumptions about mephedrone.
Kapitány-Fövény, Máté; Demetrovics, Zsolt
2017-05-01
With growing access to the Internet, people who use drugs and traffickers have started to obtain information about novel psychoactive substances (NPS) via online platforms. This paper aims to analyze whether decreasing Web interest in formerly banned substances (cocaine, heroin, and MDMA) and the legislative status of mephedrone predict Web interest in this NPS. Google Trends was used to measure changes in Web interest in cocaine, heroin, MDMA, and mephedrone. Google search results for mephedrone within the same time frame were analyzed and categorized. Web interest in the classic drugs was found to be more persistent. Regarding geographical distribution, the location of Web searches for heroin and cocaine was less centralized. The illicit status of mephedrone was a negative predictor of its Web search query rates. The connection between mephedrone-related Web search rates and the legislative status of this substance was significantly mediated by ecstasy-related Web search queries, the number of documentaries, and forum/blog entries about mephedrone. The results might support the hypothesis that mephedrone's popularity was highly correlated with its legal status and that it functioned as a potential substitute for MDMA. Google Trends was found to be a useful tool for testing theoretical assumptions about NPS. Copyright © 2017 John Wiley & Sons, Ltd.
Visualization of Client-Side Web Browsing and Email Activity
2009-06-01
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Kurniasih, Nuning; Irawan, Dasapta Erwin; Utami Sutiksno, Dian; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Simarmata, Janner; Hidayat, Rahmat; Busro; Abdullah, Dahlan; Rahim, Robbi; Abraham, Juneman
2018-01-01
The Ministry of Research, Technology and Higher Education of Indonesia has introduced several national and international indexers of scientific works. This policy has become a guideline for lecturers and researchers in choosing reputable publications. This study aimed to describe the level of understanding of Indonesian lecturers regarding indexing databases, i.e. SINTA, DOAJ, Scopus, Web of Science, and Google Scholar. The research used a descriptive design and survey method. The population of this study consisted of Indonesian lecturers and researchers. The primary data were obtained from a questionnaire completed by 316 lecturers and researchers from 33 provinces in Indonesia, recruited with a convenience sampling technique in October-November 2017. The data analysis was performed using frequency distribution tables, cross tabulation and descriptive analysis. The results showed that, on average, 66.5% of Indonesian lecturers and researchers were familiar with publication in the indexing databases SINTA, DOAJ, Scopus, Web of Science and Google Scholar. However, based on the empirical frequencies, 76% of them had never published in journals or proceedings indexed in Scopus.
NASA Technical Reports Server (NTRS)
Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.
2008-01-01
One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of terabytes of data are available for distribution to scientists, students, and the general public. The single biggest and most time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly subsetted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.
Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers
Alsaleh, Mansour; Alarifi, Abdulrahman
2016-01-01
Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents. PMID:27855179
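As a generic illustration of a language-aware spam-page classifier like the one described above; the feature set and training data here are hypothetical and are not the authors' labeled Arabic corpus or their chosen features.

```python
# Generic sketch only: a feature-based spam-page classifier in scikit-learn.
# Feature names and training rows are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-page features:
# [word_count, avg_word_length, keyword_repetition_ratio, external_link_ratio]
X = np.array([
    [1200, 5.1, 0.02, 0.10],   # benign
    [  90, 4.8, 0.35, 0.70],   # spam
    [ 800, 5.4, 0.04, 0.15],   # benign
    [ 120, 4.2, 0.40, 0.80],   # spam
    [1500, 5.0, 0.03, 0.12],   # benign
    [ 150, 4.5, 0.30, 0.65],   # spam
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = spam

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```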
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Google is shaking out to be the leading Web search engine, with recent research from Nielsen NetRatings reporting about 40 percent of all U.S. households using the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.
ERIC Educational Resources Information Center
Bergman, Elaine M. Lasda
2012-01-01
Past studies of citation coverage of "Web of Science," "Scopus," and "Google Scholar" do not demonstrate a consistent pattern that can be applied to the interdisciplinary mix of resources used in social work research. To determine the utility of these tools to social work researchers, an analysis of citing references to well-known social work…
ERIC Educational Resources Information Center
Levay, Paul; Ainsworth, Nicola; Kettle, Rachel; Morgan, Antony
2016-01-01
Aim: To examine how effectively forwards citation searching with Web of Science (WOS) or Google Scholar (GS) identified evidence to support public health guidance published by the National Institute for Health and Care Excellence. Method: Forwards citation searching was performed using GS on a base set of 46 publications and replicated using WOS.…
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
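A minimal sketch of the extrapolation described above: a word's document frequency in a known static corpus is used to scale an engine's reported hit count up to an index-size estimate. All counts below are invented.

```python
# Minimal sketch of index-size extrapolation: if a word appears in a known fraction
# of documents in a static reference corpus, an engine's reported hit count for that
# word can be scaled up to an index-size estimate. Numbers are invented.
corpus_doc_count = 1_000_000          # documents in the static reference corpus
corpus_doc_freq  = {"the": 920_000,   # corpus documents containing each pivot word
                    "recipe": 41_000,
                    "telescope": 9_500}
engine_hit_count = {"the": 24_100_000_000,   # hypothetical reported hit counts
                    "recipe": 1_020_000_000,
                    "telescope": 232_000_000}

estimates = [engine_hit_count[w] * corpus_doc_count / corpus_doc_freq[w]
             for w in corpus_doc_freq]
index_size_estimate = sum(estimates) / len(estimates)   # average over pivot words
print(f"estimated index size: {index_size_estimate:,.0f} documents")
```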
Leveraging Google Geo Tools for Interactive STEM Education: Insights from the GEODE Project
NASA Astrophysics Data System (ADS)
Dordevic, M.; Whitmeyer, S. J.; De Paor, D. G.; Karabinos, P.; Burgin, S.; Coba, F.; Bentley, C.; St John, K. K.
2016-12-01
Web-based imagery and geospatial tools have transformed our ability to immerse students in global virtual environments. Google's suite of geospatial tools, such as Google Earth (± Engine), Google Maps, and Street View, allow developers and instructors to create interactive and immersive environments, where students can investigate and resolve common misconceptions in STEM concepts and natural processes. The GEODE (.net) project is developing digital resources to enhance STEM education. These include virtual field experiences (VFEs), such as an interactive visualization of the breakup of the Pangaea supercontinent, a "Grand Tour of the Terrestrial Planets," and GigaPan-based VFEs of sites like the Canadian Rockies. Web-based challenges, such as EarthQuiz (.net) and the "Fold Analysis Challenge," incorporate scaffolded investigations of geoscience concepts. EarthQuiz features web-hosted imagery, such as Street View, Photo Spheres, GigaPans, and Satellite View, as the basis for guided inquiry. In the Fold Analysis Challenge, upper-level undergraduates use Google Earth to evaluate a doubly-plunging fold at Sheep Mountain, WY. GEODE.net also features: "Reasons for the Seasons"—a Google Earth-based visualization that addresses misconceptions that abound amongst students, teachers, and the public, many of whom believe that seasonality is caused by large variations in Earth's distance from the Sun; "Plate Euler Pole Finder," which helps students understand rotational motion of tectonic plates on the globe; and "Exploring Marine Sediments Using Google Earth," an exercise that uses empirical data to explore the surficial distribution of marine sediments in the modern ocean. The GEODE research team includes the authors and: Heather Almquist, Cinzia Cervato, Gene Cooper, Helen Crompton, Terry Pavlis, Jen Piatek, Bill Richards, Jeff Ryan, Ron Schott, Barb Tewksbury, and their students and collaborating colleagues. We are supported by NSF DUE 1323419 and a Google Geo Curriculum Award.
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.
Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz
2012-09-24
The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.
Using Mobile App Development Tools to Build a GIS Application
NASA Astrophysics Data System (ADS)
Mital, A.; Catchen, M.; Mital, K.
2014-12-01
Our group designed and built working web, Android, and iOS applications using different mapping libraries as bases on which to overlay fire data from NASA. The group originally planned to make app versions for Google Maps, Leaflet, and OpenLayers. However, because the Leaflet library did not load properly on Android, the group focused its efforts on the other two mapping libraries. For Google Maps, the group first designed a UI for the web app and made a working version of the app. After updating the source of fire data to one that also provided historical fire data, the design had to be modified to include the extra data. After completing a working version of the web app, the group used WebView in Android, a built-in resource that allowed porting the web app to Android without rewriting the code for Android. Upon completing this, the group found that Apple iOS devices had a similar capability, and so decided to add an iOS app to the project using a function similar to WebView. Alongside this effort, the group began implementing an OpenLayers fire map using a simpler UI. This web app was completed fairly quickly relative to the Google Maps version; however, it did not include functionality such as satellite imagery or searchable locations. The group finished the project with a working Android version of the Google Maps-based app supporting API levels 14-19 and an OpenLayers-based app supporting API levels 8-19, as well as a Google Maps-based iOS app supporting both old and new screen formats. This project was implemented by high school and college students under an SGT Inc. STEM internship program.
Defrosting the digital library: bibliographic tools for the next generation web.
Hull, Duncan; Pettifer, Steve R; Kell, Douglas B
2008-10-01
Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as "thought in cold storage," and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines.
Defrosting the Digital Library: Bibliographic Tools for the Next Generation Web
Hull, Duncan; Pettifer, Steve R.; Kell, Douglas B.
2008-01-01
Many scientists now manage the bulk of their bibliographic information electronically, thereby organizing their publications and citation material from digital libraries. However, a library has been described as “thought in cold storage,” and unfortunately many digital libraries can be cold, impersonal, isolated, and inaccessible places. In this Review, we discuss the current chilly state of digital libraries for the computational biologist, including PubMed, IEEE Xplore, the ACM digital library, ISI Web of Knowledge, Scopus, Citeseer, arXiv, DBLP, and Google Scholar. We illustrate the current process of using these libraries with a typical workflow, and highlight problems with managing data and metadata using URIs. We then examine a range of new applications such as Zotero, Mendeley, Mekentosj Papers, MyNCBI, CiteULike, Connotea, and HubMed that exploit the Web to make these digital libraries more personal, sociable, integrated, and accessible places. We conclude with how these applications may begin to help achieve a digital defrost, and discuss some of the issues that will help or hinder this in terms of making libraries on the Web warmer places in the future, becoming resources that are considerably more useful to both humans and machines. PMID:18974831
Sharpe, J Danielle; Hopkins, Richard S; Cook, Robert L; Striley, Catherine W
2016-10-20
Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or dually without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best in order to be considered to complement traditional methods. The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia will best correspond with CDC ILI data as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC's change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package "bcp" version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%. Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed.
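The matching rule described in this abstract (a Web-source change point counts as detecting a CDC change point if it falls in the same week or within one week of it) reduces to a simple comparison of week indices, from which sensitivity and PPV follow. The Python sketch below illustrates that calculation with made-up week numbers; the change point detection itself was performed in the study with the R package "bcp", which is not reproduced here.

def evaluate_source(source_cps, cdc_cps, tolerance=1):
    """Sensitivity and PPV of a Web source's change points against CDC change points.

    A source change point is a true positive if it lies within `tolerance`
    weeks of some CDC change point; a CDC change point counts as detected if
    some source change point lies within `tolerance` weeks of it.
    """
    true_pos = [cp for cp in source_cps
                if any(abs(cp - ref) <= tolerance for ref in cdc_cps)]
    detected = [ref for ref in cdc_cps
                if any(abs(cp - ref) <= tolerance for cp in source_cps)]
    sensitivity = len(detected) / len(cdc_cps) if cdc_cps else 0.0
    ppv = len(true_pos) / len(source_cps) if source_cps else 0.0
    return sensitivity, ppv

# Hypothetical week indices of detected change points (not the study's data).
cdc_weeks = [5, 20, 47]
google_weeks = [5, 21, 46]
print(evaluate_source(google_weeks, cdc_weeks))  # -> (1.0, 1.0)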
[Google Scholar and the h-index in biomedicine: the popularization of bibliometric assessment].
Cabezas-Clavijo, A; Delgado-López-Cózar, E
2013-01-01
The aim of this study is to review the features, benefits and limitations of the new scientific evaluation products derived from Google Scholar, such as Google Scholar Metrics and Google Scholar Citations, as well as the h-index, which is the standard bibliometric indicator adopted by these services. The study also outlines the potential of this new database as a source for studies in Biomedicine, and compares the h-index obtained by the most relevant journals and researchers in the field of intensive care medicine, based on data extracted from the Web of Science, Scopus and Google Scholar. Results show that although the average h-index values in Google Scholar are almost 30% higher than those obtained in Web of Science, and about 15% higher than those collected by Scopus, there are no substantial changes in the rankings generated from one data source or the other. Despite some technical problems, it is concluded that Google Scholar is a valid tool for researchers in Health Sciences, both for purposes of information retrieval and for the computation of bibliometric indicators. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.
ERIC Educational Resources Information Center
Albanese, Andrew Richard
2006-01-01
This article observes that it's not hard to understand why Google creates such unease among librarians. The profession, however, can't afford to be myopic when it comes to Google. As inescapable as it is, Google is not the Internet. And as the web evolves, new opportunities and challenges loom larger for libraries than who's capturing the bulk of…
Using Google AdWords in the MBA MIS Course
ERIC Educational Resources Information Center
Rosso, Mark A.; McClelland, Marilyn K.; Jansen, Bernard J.; Fleming, Sundar W.
2009-01-01
From February to June 2008, Google ran its first ever student competition in sponsored Web search, the 2008 Google Online Marketing Challenge (GOMC). The 2008 GOMC was based on registrations from 61 countries: 629 course sections from 468 universities participated, fielding over 4000 student teams of approximately 21,000 students. Working with a…
Google Scholar Usage: An Academic Library's Experience
ERIC Educational Resources Information Center
Wang, Ya; Howard, Pamela
2012-01-01
Google Scholar is a free service that provides a simple way to broadly search for scholarly works and to connect patrons with the resources libraries provide. The researchers in this study analyzed Google Scholar usage data from 2006 for three library tools at San Francisco State University: SFX link resolver, Web Access Management proxy server,…
ERIC Educational Resources Information Center
Ashwell, Tim; Elam, Jesse R.
2017-01-01
The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…
ERIC Educational Resources Information Center
Hightower, Christy; Caldwell, Christy
2010-01-01
Science researchers at the University of California Santa Cruz were surveyed about their article database use and preferences in order to inform collection budget choices. Web of Science was the single most used database, selected by 41.6%. Statistically there was no difference between PubMed (21.5%) and Google Scholar (18.7%) as the second most…
Pías-Peleteiro, Leticia; Cortés-Bordoy, Javier; Martinón-Torres, Federico
2013-01-01
Objectives: To assess and analyze the information and recommendations provided by Google Web Search™ (Google) in relation to web searches on the HPV vaccine, indications for females and males and possible adverse effects. Materials and Methods: Descriptive cross-sectional study of the results of 14 web searches. Comprehensive analysis of results based on general recommendation given (favorable/dissuasive), as well as compliance with pre-established criteria, namely design, content and credibility. Sub-analysis of results according to site category: general information, blog / forum and press. Results: In the comprehensive analysis of results, 72.2% of websites offer information favorable to HPV vaccination, with varying degrees of content detail, vs. 27.8% with highly dissuasive content in relation to HPV vaccination. The most frequent type of site is the blog or forum. The information found is frequently incomplete, poorly structured, and often lacking in updates, bibliography and adequate citations, as well as sound credibility criteria (scientific association accreditation and/or trust mark system). Conclusions: Google, as a tool which users employ to locate medical information and advice, is not specialized in providing information that is necessarily rigorous or valid from a scientific perspective. Search results and ranking based on Google's generalized algorithms can lead users to poorly grounded opinions and statements, which may impact HPV vaccination perception and subsequent decision making. PMID:23744505
Google Analytics – Index of Resources
Find how-to and best practice resources and training for accessing and understanding EPA's Google Analytics (GA) tools, including how to create reports that will help you improve and maintain the web areas you manage.
2016-07-21
Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't
Teaching Google Search Techniques in an L2 Academic Writing Context
ERIC Educational Resources Information Center
Han, Sumi; Shin, Jeong-Ah
2017-01-01
This mixed-method study examines the effectiveness of teaching Google search techniques (GSTs) to Korean EFL college students in an intermediate-level academic English writing course. 18 students participated in a 4-day GST workshop consisting of an overview session of the web as corpus and Google as a concordancer, and three training sessions…
The Effects of Collaborative Writing Activity Using Google Docs on Students' Writing Abilities
ERIC Educational Resources Information Center
Suwantarathip, Ornprapat; Wichadee, Saovapa
2014-01-01
Google Docs, a free web-based version of Microsoft Word, offers collaborative features which can be used to facilitate collaborative writing in a foreign language classroom. The current study compared writing abilities of students who collaborated on writing assignments using Google Docs with those working in groups in a face-to-face classroom.…
None Available
2018-02-06
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
ERIC Educational Resources Information Center
Gross, Liz
2012-01-01
As a new year begins, higher education professionals who manage social media are getting to know the latest social network, Google+, and how they can best use Google+ Pages to advance their institutions. When Google+ first came on the scene in late June 2011, several institutions signed up and began using the service. Given the popularity of other…
Integrating web 2.0 in clinical research education in a developing country.
Amgad, Mohamed; AlFaar, Ahmad Samir
2014-09-01
The use of Web 2.0 tools in education and health care has received heavy attention over the past years. Over two consecutive years, Children's Cancer Hospital - Egypt 57357 (CCHE 57357), in collaboration with Egyptian universities, student bodies, and NGOs, conducted a summer course that supports undergraduate medical students to cross the gap between clinical practice and clinical research. This time, there was a greater emphasis on reaching out to the students using social media and other Web 2.0 tools, which were heavily used in the course, including Google Drive, Facebook, Twitter, YouTube, Mendeley, Google Hangout, Live Streaming, Research Electronic Data Capture (REDCap), and Dropbox. We wanted to investigate the usefulness of integrating Web 2.0 technologies into formal educational courses and modules. The evaluation survey was filled in by 156 respondents, 134 of whom were course candidates (response rate = 94.4 %) and 22 of whom were course coordinators (response rate = 81.5 %). The course participants came from 14 different universities throughout Egypt. Students' feedback was positive and supported the integration of Web 2.0 tools in academic courses and modules. Google Drive, Facebook, and Dropbox were found to be most useful.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None Available
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Feature Positioning on Google Street View Panoramas
NASA Astrophysics Data System (ADS)
Tsai, V. J. D.; Chang, C.-T.
2012-07-01
Location-based services (LBS) on web-based maps and images have moved toward real-time delivery since Google launched its Street View imaging services in 2007. This research employs Google Maps API and Web Service, GAE for Java, AJAX, Proj4js, CSS and HTML in developing an internet platform for accessing the orientation parameters of Google Street View (GSV) panoramas in order to determine the three-dimensional position of features of interest that appear on two overlapping panoramas by geometric intersection. A pair of GSV panoramas was examined using known points located on the Library Building of National Chung Hsing University (NCHU) with root-mean-squared errors of ±0.522m, ±1.230m, and ±5.779m for intersection and ±0.142m, ±1.558m, and ±5.733m for resection in X, Y, and h (elevation), respectively. Potential error sources in GSV positioning were analyzed, showing that the errors in the Google-provided GSV positional parameters dominate the errors in geometric intersection. The developed system is suitable for data collection in establishing LBS applications integrated with Google Maps and Google Earth in traffic sign and infrastructure inventory by adding automatic extraction and matching techniques for points of interest (POI) from GSV panoramas.
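The geometric intersection step can be illustrated in a local planar coordinate system: each panorama contributes a ray from its known position along the azimuth at which the feature appears, and the feature's horizontal position is where the two rays cross. The Python sketch below uses made-up positions and azimuths and ignores elevation; it is an illustration of the principle, not the authors' GSV-based implementation.

import math

def intersect_rays(p1, az1_deg, p2, az2_deg):
    """Intersect two horizontal rays given start points (x, y) in metres
    and azimuths in degrees (clockwise from north)."""
    # Direction vectors: azimuth 0 = +y (north), 90 = +x (east).
    d1 = (math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg)))
    d2 = (math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t (2x2 cross-product form).
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no unique intersection")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Hypothetical panorama positions and feature azimuths (illustrative only).
print(intersect_rays((0.0, 0.0), 45.0, (30.0, 0.0), 315.0))  # ~(15.0, 15.0)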
Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G
2009-09-16
Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter their data into a database for further analysis. The recent introduction of mobile phones that utilise the open source Android operating system, and which include (among other features) both GPS and Google Maps, provide new opportunities for developing mobile phone applications, which in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework, consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by the individual field workers or, for example, those data within certain values of a measured variable or a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phone.
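At its core, the framework described above moves GPS-tagged field records between phones and a central web database. The sketch below shows, in Python, what submitting one such record over HTTP might look like; the endpoint URL and field names are hypothetical and are not EpiCollect's actual API.

import json
import urllib.request

# Hypothetical endpoint and record structure, for illustration only.
ENDPOINT = "https://example.org/project/records"

record = {
    "project": "demo_outbreak_survey",
    "collector": "fieldworker_01",
    "lat": 51.4988,       # GPS latitude from the phone
    "lon": -0.1749,       # GPS longitude from the phone
    "timestamp": "2009-09-16T10:30:00Z",
    "observation": "suspected case, sample taken",
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to submit against a real endpoint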
Hopkins, Richard S; Cook, Robert L; Striley, Catherine W
2016-01-01
Background Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or dually without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best in order to be considered to complement traditional methods. Objective The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia will best correspond with CDC ILI data as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Methods Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC’s change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package “bcp” version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. Results During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%. Conclusions Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed. PMID:27765731
Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.
Systems and Methods for Decoy Routing and Covert Channel Bonding
2013-11-26
34 Proc. R. Soc. A, vol. 463, Jan. 12, 2007, pp. 1-16. "Stupid censorship Web Proxy," http://www.stupidcensorship.com/, retrieved from the internet on...services such as those offered by Google or Skype, web or microblogs such as Twitter, various social media services such as Facebook, and file...device (e.g., Skype, Google, Jabber, Firefox) to be directed to the proprietary software for processing. For instance, the proprietary software of
Return of the Google Game: More Fun Ideas to Transform Students into Skilled Researchers
ERIC Educational Resources Information Center
Watkins, Katrine
2008-01-01
Teens are impatient and unsophisticated online researchers who are often limited by their poor reading skills. Because they are attracted to clean and simple Web interfaces, they often turn to Google--and now Wikipedia--to help meet their research needs. The Google Game, co-authored by this author, teaches kids that there is a well-thought-out…
Boverhof's App Earns Honorable Mention in Amazon's Web Services Competition
Boverhof's app earned an honorable mention in a competition run by Amazon Web Services (AWS). Amazon officially announced the winners of its EC2 Spotathon on Monday
Moving beyond a Google Search: Google Earth, SketchUp, Spreadsheet, and More
ERIC Educational Resources Information Center
Siegle, Del
2007-01-01
Google has been the search engine of choice for most Web surfers for the past half decade. More recently, the creative founders of the popular search engine have been busily creating and testing a variety of useful products that will appeal to gifted learners of varying ages. The purpose of this paper is to share information about three of these…
Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T
2016-08-01
The Internet is a common resource for applicants of hand surgery fellowships; however, the quality and accessibility of online fellowship information are unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) fellowship directory on November 16, 2014, and the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory linking directly to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of assessable programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality on hand surgery fellowship program Web sites.
Creating Web Area Segments with Google Analytics
Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.
[Health information on the Internet and trust marks as quality indicators: vaccines case study].
Mayer, Miguel Angel; Leis, Angela; Sanz, Ferran
2009-10-01
To find out the prevalence of quality trust marks present in websites and to analyse the quality of these websites displaying trust marks compared with those that do not display them, in order to put forward these trust marks as a quality indicator. Cross-sectional study. Internet. Websites on vaccines. Using "vacunas OR vaccines" as key words, the features of 40 web pages were analysed. These web pages were selected from the page results of two search engines, Google and Yahoo! Based on a total of 9 criteria, the average score of criteria fulfilled was 7 (95% CI 3.96-10.04) points for the web pages offered by Yahoo! and 7.3 (95% CI 3.86-10.74) offered by Google. Amongst web pages offered by Yahoo!, there were three with clearly inaccurate information, while there were four in the pages offered by Google. Trust marks were displayed in 20% and 30% medical web pages, respectively, and their presence reached statistical significance (P=0.033) when fulfilling the quality criteria compared with web pages where trust marks were not displayed. A wide variety of web pages was obtained by search engines and a large number of them with useless information. Although the websites analysed had a good quality, between 15% and 20% showed inaccurate information. Websites where trust marks were displayed had more quality than those that did not display one and none of them were included amongst those where inaccurate information was found.
Ajax Architecture Implementation Techniques
NASA Astrophysics Data System (ADS)
Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, Tabassum, M. Khader
2012-03-01
Today's rich Web applications use a mix of JavaScript and asynchronous communication with the application server. This mechanism is also known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server, and in doing so, use partial page refresh instead of reloading the entire Web page. AJAX (Asynchronous JavaScript and XML) is a powerful Web development model for browser-based Web applications. Technologies that form the AJAX model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known. However, AJAX combines these technologies to let Web pages retrieve small amounts of data from the server without having to reload the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0, enabled by the Ajax architecture, has given rise to a new level of user interactivity through web browsers. Many new and extremely popular Web applications have been introduced, such as Google Maps, Google Docs, Flickr, and so on. Ajax toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.
Robotic Prostatectomy on the Web: A Cross-Sectional Qualitative Assessment.
Borgmann, Hendrik; Mager, René; Salem, Johannes; Bründl, Johannes; Kunath, Frank; Thomas, Christian; Haferkamp, Axel; Tsaur, Igor
2016-08-01
Many patients diagnosed with prostate cancer search for information on robotic prostatectomy (RobP) on the Web. We aimed to evaluate the qualitative characteristics of the most frequently visited Web sites on RobP with a particular emphasis on provider-dependent issues. Google was searched for the term "robotic prostatectomy" in Europe and North America. The most frequently visited Web sites were selected and classified as physician-provided and publicly-provided. Quality was measured using Journal of the American Medical Association (JAMA) benchmark criteria, DISCERN score, and addressing of Trifecta surgical outcomes. Popularity was analyzed using Google PageRank and the Alexa tool. Accessibility, usability, and reliability were investigated using the LIDA tool, and readability was assessed using readability indices. Twenty-eight Web sites were physician-provided and 15 publicly-provided. For all Web sites, 88% of JAMA benchmark criteria were fulfilled, DISCERN quality score was high, and 81% of Trifecta outcome measurements were addressed. Popularity was average according to Google PageRank (mean 2.9 ± 1.5) and Alexa Traffic Rank (median, 49,109; minimum, 7; maximum, 8,582,295). Accessibility (85 ± 7%), usability (92 ± 3%), and reliability scores (88 ± 8%) were moderate to high. Automated Readability Index was 7.2 ± 2.1 and Flesch-Kincaid Grade Level was 9 ± 2, rating the Web sites as difficult to read. Physician-provided Web sites had higher quality scores and lower readability compared with publicly-provided Web sites. Websites providing information on RobP obtained medium to high ratings in all domains of quality in the current assessment. In contrast, readability needs to be significantly improved so that this content can become available for the populace. Copyright © 2015 Elsevier Inc. All rights reserved.
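Both readability measures reported in this abstract are closed-form functions of character, word, sentence, and syllable counts. The Python sketch below implements the standard published formulas with a crude vowel-group syllable counter; it is an approximation for illustration, not the specific readability tool the authors used.

import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    # Automated Readability Index and Flesch-Kincaid Grade Level (standard formulas).
    ari = 4.71 * (chars / n) + 0.5 * (n / sentences) - 21.43
    fk_grade = 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
    return round(ari, 1), round(fk_grade, 1)

sample = ("Robotic prostatectomy is a minimally invasive operation. "
          "Recovery of urinary continence and erectile function varies.")
print(readability(sample))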
EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.
Google it: obtaining information about local STD/HIV testing services online.
Habel, Melissa A; Hood, Julia; Desai, Sheila; Kachur, Rachel; Buhi, Eric R; Liddon, Nicole
2011-04-01
Although the Internet is one of the most commonly accessed resources for health information, finding information on local sexual health services, such as sexually transmitted disease (STD) testing, can be challenging. Recognizing that most quests for online health information begin with search engines, the purpose of this exploratory study was to examine the extent to which online information about local STD/HIV testing services can be found using Google. Queries on STD and HIV testing services were executed in Google for 6 geographically unique locations across the United States. The first 3 websites that resulted from each query were coded for the following characteristics: (1) relevancy to the search topic, (2) domain and purpose, (3) rank in Google results, and (4) content. Websites hosted at .com (57.3%), .org (25.7%), and .gov (10.5%) domains were retrieved most frequently. Roughly half of all websites (n = 376) provided information relevant to the query, and about three-quarters (77.0%) of all queries yielded at least 1 relevant website within the first 3 results. Searches for larger cities were more likely to yield relevant results compared with smaller cities (odds ratio [OR] = 10.0, 95% confidence interval [CI] = 5.6, 17.9). On comparison with .com domains, .gov (OR = 2.9, 95% CI = 1.4, 5.6) and .org domains (OR = 2.9, 95% CI = 1.7, 4.8) were more likely to provide information of the location to get tested. Ease of online access to information about sexual health services varies by search topic and locale. Sexual health service providers must optimize their website placement so as to reach a greater proportion of the sexually active population who use web search engines.
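The comparisons above are reported as odds ratios with 95% confidence intervals, which can be computed from a 2x2 table of relevant versus non-relevant results. A minimal Python sketch with hypothetical counts (not the study's data):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        group 1: a relevant, b not relevant
        group 2: c relevant, d not relevant
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: relevant / non-relevant results for larger vs. smaller cities.
print(odds_ratio_ci(a=120, b=60, c=40, d=156))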
Accredited hand surgery fellowship Web sites: analysis of content and accessibility.
Trehan, Samir K; Morrell, Nathan T; Akelman, Edward
2015-04-01
To assess the accessibility and content of accredited hand surgery fellowship Web sites. A list of all accredited hand surgery fellowships was obtained from the online database of the American Society for Surgery of the Hand (ASSH). Fellowship program information on the ASSH Web site was recorded. All fellowship program Web sites were located via Google search. Fellowship program Web sites were analyzed for accessibility and content in 3 domains: program overview, application information/recruitment, and education. At the time of this study, there were 81 accredited hand surgery fellowships with 169 available positions. Thirty of 81 programs (37%) had a functional link on the ASSH online hand surgery fellowship directory; however, Google search identified 78 Web sites. Three programs did not have a Web site. Analysis of content revealed that most Web sites contained contact information, whereas information regarding the anticipated clinical, research, and educational experiences during fellowship was less often present. Furthermore, information regarding past and present fellows, salary, application process/requirements, call responsibilities, and case volume was frequently lacking. Overall, 52 of 81 programs (64%) had the minimal online information required for residents to independently complete the fellowship application process. Hand fellowship program Web sites could be accessed either via the ASSH online directory or Google search, except for 3 programs that did not have Web sites. Although most fellowship program Web sites contained contact information, other content such as application information/recruitment and education, was less frequently present. This study provides comparative data regarding the clinical and educational experiences outlined on hand fellowship program Web sites that are of relevance to residents, fellows, and academic hand surgeons. This study also draws attention to various ways in which the hand surgery fellowship application process can be made more user-friendly and efficient. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Getting to the top of Google: search engine optimization.
Maley, Catherine; Baum, Neil
2010-01-01
Search engine optimization is the process of making your Web site appear at or near the top of popular search engines such as Google, Yahoo, and MSN. This is not done by luck or knowing someone working for the search engines but by understanding the process of how search engines select Web sites for placement on top or on the first page. This article will review the process and provide methods and techniques to use to have your site rated at the top or very near the top.
NASA Astrophysics Data System (ADS)
Minnett, R. C.; Koppers, A. A.; Staudigel, D.; Staudigel, H.
2008-12-01
EarthRef.org is a comprehensive and convenient resource for Earth Science reference data and models. It encompasses four main portals: the Geochemical Earth Reference Model (GERM), the Magnetics Information Consortium (MagIC), the Seamount Biogeosciences Network (SBN), and the Enduring Resources for Earth Science Education (ERESE). Their underlying databases are publicly available, and the scientific community has contributed widely and is urged to continue to do so. However, the net result is a vast and largely heterogeneous warehouse of geospatial data ranging from carefully prepared maps of seamounts to geochemical data/metadata, daily reports from seagoing expeditions, large volumes of raw and processed multibeam data, images of paleomagnetic sampling sites, etc. This presents a considerable obstacle for integrating other rich media content, such as videos, images, data files, cruise tracks, and interoperable database results, without overwhelming the web user. The four EarthRef.org portals clearly lend themselves to a more intuitive user interface and have, therefore, been an invaluable test bed for the design and implementation of FlashMap, a versatile KML-driven geospatial browser written for reliability and speed in Adobe Flash. FlashMap allows layers of content to be loaded and displayed over a streaming high-resolution map which can be zoomed and panned similarly to Google Maps and Google Earth. Many organizations, from National Geographic to the USGS, have begun using Google Earth software to display geospatial content. However, Google Earth, as a desktop application, does not integrate cleanly with existing websites, requiring the user to navigate away from the browser and focus on a separate application, and Google Maps, written in JavaScript, does not scale up reliably to large datasets. FlashMap remedies these problems as a web-based application that allows for seamless integration of the real-time display power of Google Earth and the flexibility of the web without losing scalability and control of the base maps. Our Flash-based application is fully compatible with KML (Keyhole Markup Language) 2.2, the most recent iteration of KML, allowing users with existing Google Earth KML files to effortlessly display their geospatial content embedded in a web page. As a test case for FlashMap, the annual Iron-Oxidizing Microbial Observatory (FeMO) dive cruise to the Loihi Seamount, in conjunction with data available from ongoing and published FeMO laboratory studies, showcases the flexibility of this single web-based application. With a KML 2.2-compatible web service providing the content, any database can display results in FlashMap. The user can then hide and show multiple layers of content, potentially from several data sources, and rapidly digest a vast quantity of information to narrow the search results. This flexibility gives experienced users the ability to drill down to exactly the record they are looking for (SERC at Carleton College's educational application of FlashMap at http://serc.carleton.edu/sp/erese/activities/22223.html) and allows users familiar with Google Earth the ability to load and view geospatial data content within a browser from any computer with an internet connection.
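Because FlashMap consumes standard KML 2.2, any database back end can feed it simply by emitting KML documents. The Python sketch below generates a single-placemark KML file; the feature name and coordinates are placeholders rather than EarthRef data, and the same file would also load in Google Earth.

# Minimal KML 2.2 document with one Placemark; coordinates are lon,lat,alt.
def placemark_kml(name, lon, lat, description=""):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description>{description}</description>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
"""

# Placeholder feature, for illustration only.
with open("example_seamount.kml", "w", encoding="utf-8") as f:
    f.write(placemark_kml("Example Seamount", -155.27, 18.92,
                          "Illustrative placemark served to a KML viewer"))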
How good is Google? The quality of otolaryngology information on the internet.
Pusz, Max D; Brietzke, Scott E
2012-09-01
To assess the quality of the information a patient (parent) may encounter using a Google search for typical otolaryngology ailments. Cross-sectional study. Tertiary care center. A Google keyword search was performed for 10 common otolaryngology problems including ear infection, hearing loss, tonsillitis, and so on. The top 10 search results for each were critically examined using the 16-item (1-5 scale) standardized DISCERN instrument. The DISCERN instrument was developed to assess the quality and comprehensiveness of patient treatment choice literature. A total of 100 Web sites were assessed. Of these, 19 (19%) were primarily advertisements for products and were excluded from DISCERN scoring. Searches for more typically chronic otolaryngic problems (eg, tinnitus, sleep apnea, etc) resulted in more biased, advertisement-type results than those for typically acute problems (eg, ear infection, sinus infection, P = .03). The search for "sleep apnea treatment" produced the highest scoring results (mean overall DISCERN score = 3.49, range = 1.81-4.56), and the search for "hoarseness treatment" produced the lowest scores (mean = 2.49, range = 1.56-3.56). Results from major comprehensive Web sites (WebMD, EMedicinehealth.com, Wikipedia, etc.) scored higher than other Web sites (mean DISCERN score = 3.46 vs 2.48, P < .001). There is marked variability in the quality of Web site information for the treatment of common otolaryngologic problems. Searches on more chronic problems resulted in a higher proportion of biased advertisement Web sites. Larger, comprehensive Web sites generally provided better information but were less than perfect in presenting complete information on treatment options.
Multi-Resource Fair Queueing for Packet Processing
2012-06-19
Huawei, Intel, MarkLogic, Microsoft, NetApp, Oracle, Quanta, Splunk, VMware and by DARPA (contract #FA8650-11-C-7136). Multi-Resource Fair Queueing for...Google PhD Fellowship, gifts from Amazon Web Services, Google, SAP, Blue Goji, Cisco, Cloudera, Ericsson, General Electric, Hewlett Packard, Huawei
Google Earth as a (Not Just) Geography Education Tool
ERIC Educational Resources Information Center
Patterson, Todd C.
2007-01-01
The implementation of Geographic Information Science (GIScience) applications and discussion of GIScience-related themes are useful for teaching fundamental geographic and technological concepts. As one of the newest geographic information tools available on the World Wide Web, Google Earth has considerable potential to enhance methods for…
Beyond Google: The Invisible Web in the Academic Library
ERIC Educational Resources Information Center
Devine, Jane; Egger-Sider, Francine
2004-01-01
This article analyzes the concept of the Invisible Web and its implication for academic librarianship. It offers a guide to tools that can be used to mine the Invisible Web and discusses the benefits of using the Invisible Web to promote interest in library services. In addition, the article includes an expanded definition, a literature review,…
Web-based surveillance of public information needs for informing preconception interventions.
D'Ambrosio, Angelo; Agricola, Eleonora; Russo, Luisa; Gesualdo, Francesco; Pandolfi, Elisabetta; Bortolus, Renata; Castellani, Carlo; Lalatta, Faustina; Mastroiacovo, Pierpaolo; Tozzi, Alberto Eugenio
2015-01-01
The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks, regarding preconception health. Based on the American College of Obstetricians and Gynecologists guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable over time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.
Web-Based Surveillance of Public Information Needs for Informing Preconception Interventions
D’Ambrosio, Angelo; Agricola, Eleonora; Russo, Luisa; Gesualdo, Francesco; Pandolfi, Elisabetta; Bortolus, Renata; Castellani, Carlo; Lalatta, Faustina; Mastroiacovo, Pierpaolo; Tozzi, Alberto Eugenio
2015-01-01
Background The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks, regarding preconception health. Methods Based on the American College of Obstetricians and Gynecologists guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. Results We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable over time. Conclusion Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations. PMID:25879682
Fazeli Dehkordy, Soudabeh; Carlos, Ruth C; Hall, Kelli S; Dalton, Vanessa K
2014-09-01
Millions of people use online search engines every day to find health-related information and voluntarily share their personal health status and behaviors on various Web sites. Thus, data from tracking online information seekers' behavior offer potential opportunities for use in public health surveillance and research. Google Trends is a feature of Google which allows Internet users to graph the frequency of searches for a single term or phrase over time or by geographic region. We used Google Trends to describe patterns of information-seeking behavior on the subject of dense breasts and to examine their correlation with the passage or introduction of dense breast notification legislation. To capture the temporal variations of information seeking about dense breasts, the Web search query "dense breast" was entered in the Google Trends tool. We then mapped the dates of legislative actions regarding dense breasts that received widespread coverage in the lay media to information-seeking trends about dense breasts over time. Newsworthy events and legislative actions appear to correlate well with peaks in search volume of "dense breast". Geographic regions with the highest search volumes have passed, denied, or are currently considering dense breast legislation. Our study demonstrated that legislative action and respective news coverage correlate with an increase in information seeking for "dense breast" on Google, suggesting that Google Trends has the potential to serve as a data source for policy-relevant research. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In
NASA Technical Reports Server (NTRS)
Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.
2013-01-01
Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus, unfeasible for a large number of images. The standard Google Earth program allows for the importing of KML (keyhole markup language) files that previously were created. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have the control to discard any single correction that was made and validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of similar techniques is being investigated to place ground-based observations in a Google Mars environment, allowing the MSL (Mars Science Laboratory) Science Team a means to visualize the rover and its environment.
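A geo-referenced image overlay in KML is a GroundOverlay whose LatLonBox holds the bounding coordinates, so correcting an image's position amounts to rewriting those values (plus an optional rotation). The Python sketch below emits such a fragment with placeholder values; it is a generic illustration, not the EarthKAM system's actual KML.

def ground_overlay_kml(name, image_url, north, south, east, west, rotation=0.0):
    """KML 2.2 GroundOverlay: drapes an image over the LatLonBox bounds.
    Correcting the geo-referencing means updating these bounds and rotation."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image_url}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
      <rotation>{rotation}</rotation>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""

# Placeholder image and bounds, for illustration only.
print(ground_overlay_kml("Sample frame", "https://example.org/frame_0001.jpg",
                         34.20, 33.95, -118.10, -118.45, rotation=3.5))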
Detecting Runtime Anomalies in AJAX Applications through Trace Analysis
2011-08-10
statements by adding the instrumentation to the GWT UI classes, leaving the user code untouched. Some content management frameworks such as Drupal [12...Google web toolkit.” http://code.google.com/webtoolkit/. [12] “Form generation – drupal api.” http://api.drupal.org/api/group/form_api/6.
ERIC Educational Resources Information Center
Albanese, Andrew Richard
2007-01-01
In this article, the author presents an interview with Brewster Kahle, leader of the Open Content Alliance (OCA). OCA book scan program is an alternative to Google's library project that aims to make books accessible online. In this interview, Kahle discusses his views on the challenges of getting books on the Web, on Google's library…
ERIC Educational Resources Information Center
Rochkind, Jonathan
2007-01-01
The ability to search and receive results in more than one database through a single interface--or metasearch--is something many users want. Google Scholar--the search engine of specifically scholarly content--and library metasearch products like Ex Libris's MetaLib, Serials Solution's Central Search, WebFeat, and products based on MuseGlobal used…
NASA Astrophysics Data System (ADS)
Howard, K. L.; Lee, S. S.
2015-12-01
Open-source, web-based forums and online resources can be used to develop a collaborative, active-learning approach for engaging and training students in the scientific process. We used the Diatoms of the United States website as an online resource for diatom taxonomy and developed a Google+ class community to serve as a platform for high school students to learn about research in diatom taxonomy, community ecology and diatom applications to the earth sciences. Ecology and Systematics of Diatoms is a field course that has been taught at the undergraduate and graduate levels at the Iowa Lakeside Lab field station for 52 years, beginning with the Diatom Clinic in 1963. Freshwater diatom education at Lakeside Lab has since evolved into a foundational training course attracting budding diatomists from all over the world, and has grown to include a week-long course for high school students. Successful since 2012, the high school course is now offered for college credit (University of Iowa), and covers methods of diatom specimen collection and preparation, microscopy, identification of diatom genera, diatom ecology, applications of diatom research, and an introduction to data analysis incorporating multivariate statistics (ordination) using the R statistical program, as well as primary scientific literature. During the 2015 course, students contributed to a Google+ class community where they posted images, data, and questions. The web-based platform allowed students to easily share information and to give and receive feedback from both peers and instructors. Students collaborated via the Google+ community and used the Diatoms of the United States website to develop a taxonomic reference for a field-based group research project, simulating how an actual diatom research program would develop a region or project-specific flora harmonized across analysts. Students investigated the taxonomy and ecology of diatom epiphytes on the green alga Cladophora from the littoral zone of West Lake Okoboji, Iowa. They found the epiphyte community went through a seasonal succession and developed hypotheses for the observed patterns by researching the ecology of diatoms in primary literature. These course activities may be used as a model for other field-based courses or educational programs in earth and environmental sciences.
Beyond Description: Converting Web Site Usage Statistics into Concrete Site Improvement Ideas
ERIC Educational Resources Information Center
Arendt, Julie; Wagner, Cassie
2010-01-01
Web site usage statistics are a widely used tool for Web site development, but libraries are still learning how to use them successfully. This case study summarizes how Morris Library at Southern Illinois University Carbondale implemented Google Analytics on its Web site and used the reports to inform a site redesign. As the main campus library at…
Head Lice Surveillance on a Deregulated OTC-Sales Market: A Study Using Web Query Data
Lindh, Johan; Magnusson, Måns; Grünewald, Maria; Hulth, Anette
2012-01-01
The head louse, Pediculus humanus capitis, is an obligate ectoparasite that causes infestations of humans. Studies have demonstrated a correlation between sales figures for over-the-counter (OTC) treatment products and the number of humans with head lice. The deregulation of the Swedish pharmacy market on July 1, 2009, decreased the possibility of obtaining complete sales figures and thereby of obtaining yearly trends of head lice infestations. In the presented study we wanted to investigate whether web queries on head lice can be used as a substitute for OTC sales figures. Via Google Insights for Search and the Vårdguiden medical web site, the number of queries on “huvudlöss” (head lice) and “hårlöss” (lice in hair) were obtained. The analysis showed that both the Vårdguiden series and the Google series were statistically significant (p<0.001) when added separately, but if the Google series were already included in the model, the Vårdguiden series were not statistically significant (p = 0.5689). In conclusion, web queries can detect whether there is an increase or decrease in the number of head lice-infested humans in Sweden over a period of years, and can be as reliable a proxy as the OTC sales figures. PMID:23144923
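The statistical question in this abstract, whether the Vårdguiden series adds information once the Google series is already in the model, is a nested-model comparison. The Python sketch below illustrates the idea with ordinary least squares on made-up series; it does not reproduce the study's data or its significance test.

import numpy as np

rng = np.random.default_rng(0)
weeks = 60
google = rng.poisson(100, weeks).astype(float)        # made-up Google query series
vardguiden = google * 0.5 + rng.normal(0, 5, weeks)   # correlated second query series
otc_sales = 2.0 * google + rng.normal(0, 10, weeks)   # made-up OTC sales target

def rss(predictors, y):
    """Residual sum of squares from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

rss_google_only = rss([google], otc_sales)
rss_both = rss([google, vardguiden], otc_sales)
# If the second series adds little once Google is included, the RSS barely drops
# (the study formalises this with a significance test; p = 0.5689 for Vårdguiden).
print(rss_google_only, rss_both)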
Head lice surveillance on a deregulated OTC-sales market: a study using web query data.
Lindh, Johan; Magnusson, Måns; Grünewald, Maria; Hulth, Anette
2012-01-01
The head louse, Pediculus humanus capitis, is an obligate ectoparasite that causes infestations of humans. Studies have demonstrated a correlation between sales figures for over-the-counter (OTC) treatment products and the number of humans with head lice. The deregulation of the Swedish pharmacy market on July 1, 2009, decreased the possibility of obtaining complete sales figures and thereby of obtaining yearly trends of head lice infestations. In the presented study we wanted to investigate whether web queries on head lice can be used as a substitute for OTC sales figures. Via Google Insights for Search and the Vårdguiden medical web site, the number of queries on "huvudlöss" (head lice) and "hårlöss" (lice in hair) were obtained. The analysis showed that both the Vårdguiden series and the Google series were statistically significant (p<0.001) when added separately, but if the Google series were already included in the model, the Vårdguiden series were not statistically significant (p = 0.5689). In conclusion, web queries can detect whether there is an increase or decrease in the number of head lice-infested humans in Sweden over a period of years, and can be as reliable a proxy as the OTC sales figures.
An overview of new video coding tools under consideration for VP10: the successor to VP9
NASA Astrophysics Data System (ADS)
Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu
2015-09-01
Google started an open-source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec VP9. Although the project is still in early stages, a set of new experimental coding tools have already been added to baseline VP9 to achieve modest coding gains over a large enough test set. This paper provides a technical overview of these coding tools.
The Number of Scholarly Documents on the Public Web
Khabsa, Madian; Giles, C. Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
The number of scholarly documents on the public web.
Khabsa, Madian; Giles, C Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.
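The capture/recapture idea behind these estimates can be shown with the classic Lincoln-Petersen estimator: if two engines index overlapping samples of the same population, the size of the overlap lets one extrapolate the total. The numbers below are placeholders, not the counts used in the paper:

    def lincoln_petersen(n_a, n_b, n_overlap):
        """Capture/recapture estimate of the total population size from two samples."""
        return n_a * n_b / n_overlap

    # Toy counts: documents found in engine A, in engine B, and in both.
    estimated_total = lincoln_petersen(n_a=1000, n_b=800, n_overlap=650)
    print(round(estimated_total))  # about 1231 documents in this toy example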
HCLS 2.0/3.0: health care and life sciences data mashup using Web 2.0/3.0.
Cheung, Kei-Hoi; Yip, Kevin Y; Townsend, Jeffrey P; Scotch, Matthew
2008-10-01
We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies.
HCLS 2.0/3.0: Health Care and Life Sciences Data Mashup Using Web 2.0/3.0
Cheung, Kei-Hoi; Yip, Kevin Y.; Townsend, Jeffrey P.; Scotch, Matthew
2010-01-01
We describe the potential of current Web 2.0 technologies to achieve data mashup in the health care and life sciences (HCLS) domains, and compare that potential to the nascent trend of performing semantic mashup. After providing an overview of Web 2.0, we demonstrate two scenarios of data mashup, facilitated by the following Web 2.0 tools and sites: Yahoo! Pipes, Dapper, Google Maps and GeoCommons. In the first scenario, we exploited Dapper and Yahoo! Pipes to implement a challenging data integration task in the context of DNA microarray research. In the second scenario, we exploited Yahoo! Pipes, Google Maps, and GeoCommons to create a geographic information system (GIS) interface that allows visualization and integration of diverse categories of public health data, including cancer incidence and pollution prevalence data. Based on these two scenarios, we discuss the strengths and weaknesses of these Web 2.0 mashup technologies. We then describe Semantic Web, the mainstream Web 3.0 technology that enables more powerful data integration over the Web. We discuss the areas of intersection of Web 2.0 and Semantic Web, and describe the potential benefits that can be brought to HCLS research by combining these two sets of technologies. PMID:18487092
Exploring the Relationship between Self-Regulated Vocabulary Learning and Web-Based Collaboration
ERIC Educational Resources Information Center
Liu, Sarah Hsueh-Jui; Lan, Yu-Ju; Ho, Cloudia Ya-Yu
2014-01-01
Collaborative learning has placed an emphasis on co-constructing knowledge by sharing and negotiating meaning for problem-solving activities, and this cannot be accomplished without governing the self-regulatory processes of students. This study employed a Web-based tool, Google Docs, to determine the effects of Web-based collaboration on…
Google Wave: Collaboration Reworked
ERIC Educational Resources Information Center
Rethlefsen, Melissa L.
2010-01-01
Over the past several years, Internet users have become accustomed to Web 2.0 and cloud computing-style applications. It's commonplace and even intuitive to drag and drop gadgets on personalized start pages, to comment on a Facebook post without reloading the page, and to compose and save documents through a web browser. The web paradigm has…
Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles
ERIC Educational Resources Information Center
Carlson, Scott
2006-01-01
Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…
Preservation in the Age of Google: Digitization, Digital Preservation, and Dilemmas
ERIC Educational Resources Information Center
Conway, Paul
2010-01-01
The cultural heritage preservation community now functions largely within the environment of digital technologies. This article begins by juxtaposing definitions of the terms "digitization for preservation" and "digital preservation" within a sociotechnical environment for which Google serves as a relevant metaphor. It then reviews two reports…
Dehkordy, Soudabeh Fazeli; Carlos, Ruth C.; Hall, Kelli S.; Dalton, Vanessa K.
2015-01-01
Rationale and Objectives: Millions of people use online search engines every day to find health-related information and voluntarily share their personal health status and behaviors in various Web sites. Thus, data from tracking of online information seekers' behavior offer potential opportunities for use in public health surveillance and research. Google Trends is a feature of Google which allows internet users to graph the frequency of searches for a single term or phrase over time or by geographic region. We used Google Trends to describe patterns of information seeking behavior in the subject of dense breasts and to examine their correlation with the passage or introduction of dense breast notification legislation. Materials and Methods: In order to capture the temporal variations of information seeking about dense breasts, the web search query “dense breast” was entered in the Google Trends tool. We then mapped the dates of legislative actions regarding dense breasts that received widespread coverage in the lay media to information seeking trends about dense breasts over time. Results: Newsworthy events and legislative actions appear to correlate well with peaks in search volume of “dense breast”. Geographic regions with the highest search volumes have either passed, denied, or are currently considering the dense breast legislation. Conclusions: Our study demonstrated that legislative actions and the respective news coverage correlate with an increase in information seeking for “dense breast” on Google, suggesting that Google Trends has the potential to serve as a data source for policy-relevant research. PMID:24998689
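Search-interest series like the one used above can also be pulled programmatically. The sketch below uses the unofficial, third-party pytrends package (an assumption; the authors worked with the Google Trends web interface, and the timeframe shown is illustrative):

    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(["dense breast"], timeframe="2009-01-01 2014-06-30", geo="US")

    interest = pytrends.interest_over_time()                     # search-volume index over time
    by_state = pytrends.interest_by_region(resolution="REGION")  # relative volume by US state

    # Peaks in `interest` can then be lined up with dates of legislative actions.
    print(interest.sort_values("dense breast", ascending=False).head())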
NASA Astrophysics Data System (ADS)
Thau, D.
2017-12-01
For the past seven years, Google has made petabytes of Earth observation data, and the tools to analyze it, freely available to researchers around the world via cloud computing. These data and tools were initially available via Google Earth Engine and are increasingly available on the Google Cloud Platform. We have introduced a number of APIs for both the analysis and presentation of geospatial data that have been successfully used to create impactful datasets and web applications, including studies of global surface water availability, global tree cover change, and crop yield estimation. Each of these projects used the cloud to analyze thousands to millions of Landsat scenes. The APIs support a range of publishing options, from outputting imagery and data for inclusion in papers, to providing tools for full-scale web applications that provide analysis tools of their own. Over the course of developing these tools, we have learned a number of lessons about how to build a publicly available cloud platform for geospatial analysis, and about how the characteristics of an API can affect the kinds of impacts a platform can enable. This study will present an overview of how Google Earth Engine works and how Google's geospatial capabilities are extending to Google Cloud Platform. We will provide a number of case studies describing how these platforms, and the data they host, have been leveraged to build impactful decision support tools used by governments, researchers, and other institutions, and we will describe how the available APIs have shaped (or constrained) those tools. [Image Credit: Tyler A. Erickson]
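A minimal Earth Engine Python API sketch of the kind of per-pixel analysis the abstract describes (computing NDVI from a Landsat composite) is given below; the collection ID, band names, and location are assumptions, and reflectance scale factors are omitted for brevity:

    import ee

    ee.Initialize()

    point = ee.Geometry.Point([-88.2, 40.1])   # arbitrary example location

    # Median Landsat 8 surface-reflectance composite for one year
    # (collection ID is an assumption; check the Earth Engine data catalog).
    composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                 .filterBounds(point)
                 .filterDate("2016-01-01", "2016-12-31")
                 .median())

    # NDVI from the near-infrared and red bands; reflectance scaling omitted for brevity.
    ndvi = composite.normalizedDifference(["SR_B5", "SR_B4"])

    mean_ndvi = ndvi.reduceRegion(
        reducer=ee.Reducer.mean(),
        geometry=point.buffer(5000),   # 5 km buffer around the point
        scale=30)                      # Landsat pixel size in metres

    print(mean_ndvi.getInfo())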
Googling trends in conservation biology.
Proulx, Raphaël; Massicotte, Philippe; Pépino, Marc
2014-02-01
Web-crawling approaches, that is, automated programs data mining the internet to obtain information about a particular process, have recently been proposed for monitoring early signs of ecosystem degradation or for establishing crop calendars. However, lack of a clear conceptual and methodological framework has prevented the development of such approaches within the field of conservation biology. Our objective was to illustrate how Google Trends, a freely accessible web-crawling engine, can be used to track changes in timing of biological processes, spatial distribution of invasive species, and level of public awareness about key conservation issues. Google Trends returns the number of internet searches that were made for a keyword in a given region of the world over a defined period. Using data retrieved online for 13 countries, we exemplify how Google Trends can be used to study the timing of biological processes, such as the seasonal recurrence of pollen release or mosquito outbreaks across a latitudinal gradient. We mapped the spatial extent of results from Google Trends for 5 invasive species in the United States and found geographic patterns in invasions that are consistent with their coarse-grained distribution at state levels. From 2004 through 2012, Google Trends showed that the level of public interest and awareness about conservation issues related to ecosystem services, biodiversity, and climate change increased, decreased, and followed both trends, respectively. Finally, to further the development of research approaches at the interface of conservation biology, collective knowledge, and environmental management, we developed an algorithm that allows the rapid retrieval of Google Trends data. © 2013 Society for Conservation Biology.
Coverage of Google Scholar, Scopus, and Web of Science: a case study of the h-index in nursing.
De Groote, Sandra L; Raszewski, Rebecca
2012-01-01
This study compares the articles cited in CINAHL, Scopus, Web of Science (WOS), and Google Scholar and the h-index ratings provided by Scopus, WOS, and Google Scholar. The publications of 30 College of Nursing faculty at a large urban university were examined. Searches by author name were executed in Scopus, WOS, and POP (Publish or Perish, which searches Google Scholar), and the h-index for each author from each database was recorded. In addition, the citing articles of their published articles were imported into a bibliographic management program. This data was used to determine an aggregated h-index for each author. Scopus, WOS, and Google Scholar provided different h-index ratings for authors and each database found unique and duplicate citing references. More than one tool should be used to calculate the h-index for nursing faculty because one tool alone cannot be relied on to provide a thorough assessment of a researcher's impact. If researchers are interested in a comprehensive h-index, they should aggregate the citing references located by WOS and Scopus. Because h-index rankings differ among databases, comparisons between researchers should be done only within a specified database. Copyright © 2012 Elsevier Inc. All rights reserved.
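The aggregation step recommended above amounts to taking the union of citing references per paper across databases and then computing the h-index on the merged counts. A sketch with made-up citing-article identifiers:

    def h_index(citation_counts):
        """Largest h such that at least h papers have h or more citations."""
        h = 0
        for rank, count in enumerate(sorted(citation_counts, reverse=True), start=1):
            if count >= rank:
                h = rank
        return h

    # Citing-article identifiers per paper, as retrieved from each database (made up).
    wos    = {"paper1": {"c1", "c2", "c3"}, "paper2": {"c4"}}
    scopus = {"paper1": {"c2", "c5"},       "paper2": {"c4", "c6"}}

    # Union the citing sets so references found by both databases count only once.
    merged = {pid: wos.get(pid, set()) | scopus.get(pid, set())
              for pid in set(wos) | set(scopus)}
    print(h_index([len(refs) for refs in merged.values()]))   # aggregated h-index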
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
ERIC Educational Resources Information Center
Griffin, Teresa; Cohen, Deb
2012-01-01
The ubiquity and familiarity of the world wide web means that students regularly turn to it as a source of information. In doing so, they "are said to rely heavily on simple search engines, such as Google to find what they want." Researchers have also investigated how students use search engines, concluding that "the young web users tended to…
Microsoft or Google Web 2.0 Tools for Course Management
ERIC Educational Resources Information Center
Rienzo, Thomas; Han, Bernard
2009-01-01
While Web 2.0 has no universal definition, it always refers to online interactions in which user groups both provide and receive content with the aim of collective intelligence. Since 2005, online software has provided Web 2.0 collaboration technologies, for little or no charge, that were formerly available only to wealthy organizations. Academic…
Using Google Scholar to Estimate the Impact of Journal Articles in Education
ERIC Educational Resources Information Center
van Aalst, Jan
2010-01-01
This article discusses the potential of Google Scholar as an alternative or complement to the Web of Science and Scopus for measuring the impact of journal articles in education. Three handbooks on research in science education, language education, and educational technology were used to identify a sample of 112 accomplished scholars. Google…
ERIC Educational Resources Information Center
Leibiger, Carol A.
2011-01-01
Googlitis, the overreliance on search engines for research and the resulting development of poor searching skills, is a recognized problem among today's students. Google is not an effective research tool because, in addition to encouraging keyword searching at the expense of more powerful subject searching, it only accesses the Surface Web and is…
Google's Evolution Leads to Library Revolution
ERIC Educational Resources Information Center
Jaworski, Susan; Sullivan, Roberta
2011-01-01
Do library catalogs compete with Google or is it the other way around? We know which came first but which will finish in the end? Only trained library professionals were considered qualified to develop reliable catalog records. However, with the increased sophistication of search engines, we are beginning to realize that a collaborative effort may…
Three options for citation tracking: Google Scholar, Scopus and Web of Science.
Bakkalbasi, Nisa; Bauer, Kathleen; Glover, Janis; Wang, Lei
2006-06-29
Researchers turn to citation tracking to find the most influential articles for a particular topic and to see how often their own published papers are cited. For years researchers looking for this type of information had only one resource to consult: the Web of Science from Thomson Scientific. In 2004 two competitors emerged--Scopus from Elsevier and Google Scholar from Google. The research reported here uses citation analysis in an observational study examining these three databases; comparing citation counts for articles from two disciplines (oncology and condensed matter physics) and two years (1993 and 2003) to test the hypothesis that the different scholarly publication coverage provided by the three search tools will lead to different citation counts from each. Eleven journal titles with varying impact factors were selected from each discipline (oncology and condensed matter physics) using the Journal Citation Reports (JCR). All articles published in the selected titles were retrieved for the years 1993 and 2003, and a stratified random sample of articles was chosen, resulting in four sets of articles. During the week of November 7-12, 2005, the citation counts for each research article were extracted from the three sources. The actual citing references for a subset of the articles published in 2003 were also gathered from each of the three sources. For oncology 1993 Web of Science returned the highest average number of citations, 45.3. Scopus returned the highest average number of citations (8.9) for oncology 2003. Web of Science returned the highest number of citations for condensed matter physics 1993 and 2003 (22.5 and 3.9 respectively). The data showed a significant difference in the mean citation rates between all pairs of resources except between Google Scholar and Scopus for condensed matter physics 2003. For articles published in 2003 Google Scholar returned the largest amount of unique citing material for oncology and Web of Science returned the most for condensed matter physics. This study did not identify any one of these three resources as the answer to all citation tracking needs. Scopus showed strength in providing citing literature for current (2003) oncology articles, while Web of Science produced more citing material for 2003 and 1993 condensed matter physics, and 1993 oncology articles. All three tools returned some unique material. Our data indicate that the question of which tool provides the most complete set of citing literature may depend on the subject and publication year of a given article.
Three options for citation tracking: Google Scholar, Scopus and Web of Science
Bakkalbasi, Nisa; Bauer, Kathleen; Glover, Janis; Wang, Lei
2006-01-01
Background Researchers turn to citation tracking to find the most influential articles for a particular topic and to see how often their own published papers are cited. For years researchers looking for this type of information had only one resource to consult: the Web of Science from Thomson Scientific. In 2004 two competitors emerged – Scopus from Elsevier and Google Scholar from Google. The research reported here uses citation analysis in an observational study examining these three databases; comparing citation counts for articles from two disciplines (oncology and condensed matter physics) and two years (1993 and 2003) to test the hypothesis that the different scholarly publication coverage provided by the three search tools will lead to different citation counts from each. Methods Eleven journal titles with varying impact factors were selected from each discipline (oncology and condensed matter physics) using the Journal Citation Reports (JCR). All articles published in the selected titles were retrieved for the years 1993 and 2003, and a stratified random sample of articles was chosen, resulting in four sets of articles. During the week of November 7–12, 2005, the citation counts for each research article were extracted from the three sources. The actual citing references for a subset of the articles published in 2003 were also gathered from each of the three sources. Results For oncology 1993 Web of Science returned the highest average number of citations, 45.3. Scopus returned the highest average number of citations (8.9) for oncology 2003. Web of Science returned the highest number of citations for condensed matter physics 1993 and 2003 (22.5 and 3.9 respectively). The data showed a significant difference in the mean citation rates between all pairs of resources except between Google Scholar and Scopus for condensed matter physics 2003. For articles published in 2003 Google Scholar returned the largest amount of unique citing material for oncology and Web of Science returned the most for condensed matter physics. Conclusion This study did not identify any one of these three resources as the answer to all citation tracking needs. Scopus showed strength in providing citing literature for current (2003) oncology articles, while Web of Science produced more citing material for 2003 and 1993 condensed matter physics, and 1993 oncology articles. All three tools returned some unique material. Our data indicate that the question of which tool provides the most complete set of citing literature may depend on the subject and publication year of a given article. PMID:16805916
Boulos, Maged N Kamel
2005-01-01
This eye-opener article aims at introducing the health GIS community to the emerging online consumer geoinformatics services from Google and Microsoft (MSN), and their potential utility in creating custom online interactive health maps. Using the programmable interfaces provided by Google and MSN, we created three interactive demonstrator maps of England's Strategic Health Authorities. These can be browsed online at – Google Maps API (Application Programming Interface) version, – Google Earth KML (Keyhole Markup Language) version, and – MSN Virtual Earth Map Control version. Google and MSN's worldwide distribution of "free" geospatial tools, imagery, and maps is to be commended as a significant step towards the ultimate "wikification" of maps and GIS. A discussion is provided of these emerging online mapping trends, their expected future implications and development directions, and associated individual privacy, national security and copyrights issues. Although ESRI have announced their planned response to Google (and MSN), it remains to be seen how their envisaged plans will materialize and compare to the offerings from Google and MSN, and also how Google and MSN mapping tools will further evolve in the near future. PMID:16176577
Trapp, Jamie
2016-12-01
There are often differences in a publication's citation count, depending on the database accessed. Here, aspects of citation counts for medical physics and biomedical engineering papers are studied using papers published in the journal Australasian physical and engineering sciences in medicine. Comparison is made between the Web of Science, Scopus, and Google Scholar. Papers are categorised into subject matter, and citation trends are examined. It is shown that review papers as a group tend to receive more citations on average; however the highest cited individual papers are more likely to be research papers.
Trends in access of plant biodiversity data revealed by Google Analytics
Baxter, David G.; Hagedorn, Gregor; Legler, Ben; Gilbert, Edward; Thiele, Kevin; Vargas-Rodriguez, Yalma; Urbatsch, Lowell E.
2014-01-01
The amount of plant biodiversity data available via the web has exploded in the last decade, but making these data available requires a considerable investment of time and work, both vital considerations for organizations and institutions looking to validate the impact factors of these online works. Here we used Google Analytics (GA) to measure the value of this digital presence. In this paper we examine usage trends using 15 different GA accounts, spread across 451 institutions or botanical projects that comprise over five percent of the world's herbaria. They were studied at both one year and total years. User data from the sample reveal: 1) over 17 million web sessions, 2) on five primary operating systems, 3) search and direct traffic dominates with minimal impact from social media, 4) mobile and new device types have doubled each year for the past three years, 5) and web browsers, the tools we use to interact with the web, are changing. Server-side analytics differ from site to site making the comparison of their data sets difficult. However, use of Google Analytics erases the reporting heterogeneity of unique server-side analytics, as they can now be examined with a standard that provides a clarity for data-driven decisions. The knowledge gained here empowers any collection-based environment regardless of size, with metrics about usability, design, and possible directions for future development. PMID:25425933
Trends in access of plant biodiversity data revealed by Google Analytics.
Jones, Timothy Mark; Baxter, David G; Hagedorn, Gregor; Legler, Ben; Gilbert, Edward; Thiele, Kevin; Vargas-Rodriguez, Yalma; Urbatsch, Lowell E
2014-01-01
The amount of plant biodiversity data available via the web has exploded in the last decade, but making these data available requires a considerable investment of time and work, both vital considerations for organizations and institutions looking to validate the impact factors of these online works. Here we used Google Analytics (GA) to measure the value of this digital presence. In this paper we examine usage trends using 15 different GA accounts, spread across 451 institutions or botanical projects that comprise over five percent of the world's herbaria. They were studied at both one year and total years. User data from the sample reveal: 1) over 17 million web sessions, 2) on five primary operating systems, 3) search and direct traffic dominates with minimal impact from social media, 4) mobile and new device types have doubled each year for the past three years, 5) and web browsers, the tools we use to interact with the web, are changing. Server-side analytics differ from site to site making the comparison of their data sets difficult. However, use of Google Analytics erases the reporting heterogeneity of unique server-side analytics, as they can now be examined with a standard that provides a clarity for data-driven decisions. The knowledge gained here empowers any collection-based environment regardless of size, with metrics about usability, design, and possible directions for future development.
Are Google or Yahoo a good portal for getting quality healthcare web information?
Chang, Polun; Hou, I-Ching; Hsu, Chiao-Ling; Lai, Hsiang-Fen
2006-01-01
We examined the rankings of 50 award-winning health websites in Taiwan against the search results of two popular portals for 6 common diseases. The results showed that the portals' search results do not rank the quality web sites appropriately.
Werts, Joshua D; Mikhailova, Elena A; Post, Christopher J; Sharp, Julia L
2012-04-01
Volunteered geographic information and social networking in a WebGIS has the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.
NASA Astrophysics Data System (ADS)
Werts, Joshua D.; Mikhailova, Elena A.; Post, Christopher J.; Sharp, Julia L.
2012-04-01
Volunteered geographic information and social networking in a WebGIS has the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.
ERIC Educational Resources Information Center
Shen, Siu-Tsen
2016-01-01
This paper presents an ongoing study of the development of a customizable web browser information organization and management system, which the author has named Lexicon Sextant (LS). LS is a user friendly, graphical web based add-on to the latest generation of web browsers, such as Google Chrome, making it easier and more intuitive to store and…
Why We Are Not Google: Lessons from a Library Web Site Usability Study
ERIC Educational Resources Information Center
Swanson, Troy A.; Green, Jeremy
2011-01-01
In the Fall of 2009, the Moraine Valley Community College Library, using guidelines developed by Jakob Nielsen, conducted a usability study to determine how students were using the library Web site and to inform the redesign of the Web site. The authors found that Moraine Valley's current gateway design was a more effective access point to library…
There's An App For That: Planning Ahead for the Solar Eclipse in August 2017
NASA Astrophysics Data System (ADS)
Chizek Frouard, Malynda R.; Lesniak, Michael V.; Bell, Steve
2017-01-01
With the total solar eclipse of 2017 August 21 over the continental United States approaching, the U.S. Naval Observatory (USNO) on-line Solar Eclipse Computer can now be accessed via an Android application, available on Google Play. Over the course of the eclipse, as viewed from a specific site, several events may be visible: the beginning and ending of the eclipse (first and fourth contacts), the beginning and ending of totality (second and third contacts), the moment of maximum eclipse, sunrise, or sunset. For each of these events, the USNO Solar Eclipse 2017 Android application reports the time, Sun's altitude and azimuth, and the event's position and vertex angles. The app also lists the duration of the total phase, the duration of the eclipse, the magnitude of the eclipse, and the percent of the Sun obscured for a particular eclipse site. All of the data available in the app comes from the flexible USNO Solar Eclipse Computer Application Programming Interface (API), which produces JavaScript Object Notation (JSON) that can be incorporated into third-party Web sites or custom applications. Additional information is available in the on-line documentation (http://aa.usno.navy.mil/data/docs/api.php). For those who prefer using a traditional data input form, the local circumstances can still be requested at http://aa.usno.navy.mil/data/docs/SolarEclipses.php. In addition, the 2017 August 21 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2017.php) consolidates all of the USNO resources for this event, including a Google Map view of the eclipse track designed by Her Majesty's Nautical Almanac Office (HMNAO). Looking further ahead, a 2024 April 8 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2024.php) is also available.
Moving Forward: The Next-Gen Catalog and the New Discovery Tools
ERIC Educational Resources Information Center
Weare, William H., Jr.; Toms, Sue; Breeding, Marshall
2011-01-01
Do students prefer to use Google instead of the library catalog? Ever wondered why? Google is easier to use and delivers plenty of "good enough" resources to meet their needs. The current generation of online catalogs has two main problems. First, the look and feel of the interface doesn't reflect the conventions adhered to elsewhere on the web,…
ERIC Educational Resources Information Center
Mumba, Frackson; Zhu, Mengxia
2013-01-01
This paper presents a Simulation-based interactive Virtual ClassRoom web system (SVCR: www.vclasie.com) powered by the state-of-the-art cloud computing technology from Google SVCR integrates popular free open-source math, science and engineering simulations and provides functions such as secure user access control and management of courses,…
ERIC Educational Resources Information Center
Liu, Sarah Hsueh-Jui; Lan, Yu-Ju
2016-01-01
This study reports on the differences in motivation, vocabulary gain, and perceptions on using or the Google Docs between individual and collaborative learning at a tertiary level. Two classes of English-as-a-Foreign Language (EFL) students were recruited and each class was randomly assigned into one of the two groups--individuals or…
Why do people google movement disorders? An infodemiological study of information seeking behaviors.
Brigo, Francesco; Erro, Roberto
2016-05-01
Millions of people worldwide search Google or Wikipedia every day to look for health-related information. The aim of this study was to evaluate and interpret web search queries for terms related to movement disorders (MD) in English-speaking countries and their changes over time. We analyzed information regarding the volume of online searches in Google and Wikipedia for the most common MD and their treatments. We determined the highest search volume peaks to identify possible relations with online news headlines. The volume of searches for some queries related to MD entered in Google increased enormously over time. Most queries were related to definition, subtypes, symptoms and treatment (mostly to adverse effects, or alternatively, to possible alternative treatments). The highest peaks of MD search queries were temporally related to news about celebrities suffering from MD, to specific mass-media events or to news concerning pharmaceutical companies or scientific discoveries on MD. An increasing number of people use Google and Wikipedia to look for terms related to MD to obtain information on definitions, causes and symptoms, possibly to aid initial self-diagnosis. MD information demand and the actual prevalence of different MDs do not travel together: web search volume may mirror patients' fears and worries about some particular disorders perceived as more serious than others, or may be driven by the release of news about celebrities suffering from MD, "breaking news" or specific mass-media events regarding MD.
Baneyx, Audrey
2008-01-01
Traditionally, the most commonly used source of bibliometric data is the Thomson ISI Web of Knowledge, in particular the (Social) Science Citation Index and the Journal Citation Reports, which provide the yearly Journal Impact Factors. This database used for the evaluation of researchers is not advantageous in the humanities, mainly because books, conference papers, and non-English journals, which are an important part of scientific activity, are not (well) covered. This paper presents the use of an alternative source of data, Google Scholar, and its benefits in calculating citation metrics in the humanities. Because of its broader range of data sources, the use of Google Scholar generally results in more comprehensive citation coverage in the humanities. This presentation compares and analyzes some international case studies with ISI Web of Knowledge and Google Scholar. The fields of economics, geography, social sciences, philosophy, and history are focused on to illustrate the differences of results between these two databases. To search for relevant publications in the Google Scholar database, the use of "Publish or Perish" and of CleanPoP, which the author developed to clean the results, are compared.
Lombardi, C; Griffiths, E; McLeod, B; Caviglia, A; Penagos, M
2009-07-01
Web search engines are an important tool in communication and diffusion of knowledge. Among these, Google appears to be the most popular one: in August 2008, it accounted for 87% of all web searches in the UK, compared with Yahoo's 3.3%. Google's value as a diagnostic guide in general medicine was recently reported. The aim of this comparative cross-sectional study was to evaluate whether searching Google with disease-related terms was effective in the identification and diagnosis of complex immunological and allergic cases. Forty-five case reports were randomly selected by an independent observer from peer-reviewed medical journals. Clinical data were presented separately to three investigators, blinded to the final diagnoses. Investigator A was a Consultant with an expert knowledge in Internal Medicine and Allergy (IM&A) and basic computing skills. Investigator B was a Registrar in IM&A. Investigator C was a Research Nurse. Both Investigators B and C were familiar with computers and search engines. For every clinical case presented, each investigator independently carried out an Internet search using Google to provide a final diagnosis. Their results were then compared with the published diagnoses. Correct diagnoses were provided in 30/45 (66%) cases, 39/45 (86%) cases, and in 29/45 (64%) cases by investigator A, B, and C, respectively. All of the three investigators achieved the correct diagnosis in 19 cases (42%), and all of them failed in two cases. This Google-based search was useful to identify an appropriate diagnosis in complex immunological and allergic cases. Computing skills may help to get better results.
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P; Darmoni, S J; Grabar, N; Douyère, M; Benichou, J
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF.
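The visibility measure described here can be sketched as the fraction of MeSH-term queries whose first results page links to a given catalog. The helper top_result_urls below is hypothetical (each search engine would need its own retrieval code), and the domain in the usage note is only an example:

    from urllib.parse import urlparse

    def visibility(catalog_domain, mesh_terms, top_result_urls):
        """Fraction of MeSH-term queries whose first results page links to the catalog."""
        hits = 0
        for term in mesh_terms:
            urls = top_result_urls(term)   # hypothetical helper: first-page result URLs
            if any(urlparse(u).netloc.endswith(catalog_domain) for u in urls):
                hits += 1
        return hits / len(mesh_terms)

    # Usage, with a stubbed helper and an example catalog domain:
    # score = visibility("chu-rouen.fr", french_mesh_terms, google_first_page)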
[Electronic poison information management system].
Kabata, Piotr; Waldman, Wojciech; Kaletha, Krystian; Sein Anand, Jacek
2013-01-01
We describe the deployment of an electronic toxicological information database in the poison control center of the Pomeranian Center of Toxicology. The system was based on Google Apps technology from Google Inc., using electronic, web-based forms and data tables. During the first 6 months after system deployment, we used it to archive 1471 poisoning cases, prepare monthly poisoning reports, and facilitate statistical analysis of the data. Use of the electronic database made the work of the poison center much easier.
ERIC Educational Resources Information Center
Gupta, Amardeep
2005-01-01
Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…
Can people find patient decision aids on the Internet?
Morris, Debra; Drake, Elizabeth; Saarimaki, Anton; Bennett, Carol; O'Connor, Annette
2008-12-01
To determine if people could find patient decision aids (PtDAs) on the Internet using the most popular general search engines. We chose five medical conditions for which English language PtDAs were available from at least three different developers. The search engines used were: Google (www.google.com), Yahoo! (www.yahoo.com), and MSN (www.msn.com). For each condition and search engine we ran six searches using a combination of search terms. We coded all non-sponsored Web pages that were linked from the first page of the search results. Most first page results linked to informational Web pages about the condition, only 16% linked to PtDAs. PtDAs were more readily found for the breast cancer surgery decision (our searches found seven of the nine developers). The searches using Yahoo and Google search engines were more likely to find PtDAs. The following combination of search terms: condition, treatment, decision (e.g. breast cancer surgery decision) was most successful across all search engines (29%). While some terms and search engines were more successful, few resulted in direct links to PtDAs. Finding PtDAs would be improved with use of standardized labelling, providing patients with specific Web site addresses or access to an independent PtDA clearinghouse.
Quinn, Gregory B; Bi, Chunxiao; Christie, Cole H; Pang, Kyle; Prlić, Andreas; Nakane, Takanori; Zardecki, Christine; Voigt, Maria; Berman, Helen M; Bourne, Philip E; Rose, Peter W
2015-01-01
The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB) resource provides tools for query, analysis and visualization of the 3D structures in the PDB archive. As the mobile Web is starting to surpass desktop and laptop usage, scientists and educators are beginning to integrate mobile devices into their research and teaching. In response, we have developed the RCSB PDB Mobile app for the iOS and Android mobile platforms to enable fast and convenient access to RCSB PDB data and services. Using the app, users from the general public to expert researchers can quickly search and visualize biomolecules, and add personal annotations via the RCSB PDB's integrated MyPDB service. RCSB PDB Mobile is freely available from the Apple App Store and Google Play (http://www.rcsb.org). © The Author 2014. Published by Oxford University Press.
Quinn, Gregory B.; Bi, Chunxiao; Christie, Cole H.; Pang, Kyle; Prlić, Andreas; Nakane, Takanori; Zardecki, Christine; Voigt, Maria; Berman, Helen M.; Rose, Peter W.
2015-01-01
Summary: The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB) resource provides tools for query, analysis and visualization of the 3D structures in the PDB archive. As the mobile Web is starting to surpass desktop and laptop usage, scientists and educators are beginning to integrate mobile devices into their research and teaching. In response, we have developed the RCSB PDB Mobile app for the iOS and Android mobile platforms to enable fast and convenient access to RCSB PDB data and services. Using the app, users from the general public to expert researchers can quickly search and visualize biomolecules, and add personal annotations via the RCSB PDB’s integrated MyPDB service. Availability and implementation: RCSB PDB Mobile is freely available from the Apple App Store and Google Play (http://www.rcsb.org). Contact: pwrose@ucsd.edu PMID:25183487
The History of the Internet Search Engine: Navigational Media and the Traffic Commodity
NASA Astrophysics Data System (ADS)
van Couvering, E.
This chapter traces the economic development of the search engine industry over time, beginning with the earliest Web search engines and ending with the domination of the market by Google, Yahoo! and MSN. Specifically, it focuses on the ways in which search engines are similar to and different from traditional media institutions, and how the relations between traditional and Internet media have changed over time. In addition to its historical overview, a core contribution of this chapter is the analysis of the industry using a media value chain based on audiences rather than on content, and the development of traffic as the core unit of exchange. It shows that traditional media companies failed when they attempted to create vertically integrated portals in the late 1990s, based on the idea of controlling Internet content, while search engines succeeded in creating huge "virtually integrated" networks based on control of Internet traffic rather than Internet content.
Google Mercury: The Launch of a New Planet
NASA Astrophysics Data System (ADS)
Hirshon, B.; Chapman, C. R.; Edmonds, J.; Goldstein, J.; Hallau, K. G.; Solomon, S. C.; Vanhala, H.; Weir, H. M.; Messenger Education; Public Outreach Epo Team
2010-12-01
The NASA MESSENGER mission’s Education and Public Outreach (EPO) Team, in cooperation with Google, Inc., has launched Google Mercury, an immersive new environment on the Google Earth platform. Google Mercury features hundreds of surface features, most of them newly revealed by the three flybys of the innermost planet by the MESSENGER spacecraft. As with Google Earth, Google Mercury is available online at no cost. This presentation will demonstrate how our team worked with Google staff, features we incorporated, how games can be developed within the Google Earth platform, and how others can add tours, games, and other educational features. Finally, we will detail new enhancements to be added once MESSENGER enters into orbit about Mercury in March 2011 and begins sending back compelling images and other global data sets on a daily basis. The MESSENGER EPO Team comprises individuals from the American Association for the Advancement of Science (AAAS); Carnegie Academy for Science Education (CASE); Center for Educational Resources (CERES) at Montana State University (MSU) - Bozeman; National Center for Earth and Space Science Education (NCESSE); Johns Hopkins University Applied Physics Laboratory (JHU/APL); National Air and Space Museum (NASM); Science Systems and Applications, Inc. (SSAI); and Southwest Research Institute (SwRI). [Figure: Screen shot of Google Mercury as a work in progress]
Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches
ERIC Educational Resources Information Center
Adam, Anna; Mowers, Helen
2008-01-01
There are a lot of useful tools on the Web, all those social applications, and the like. Still most people go online for one thing--to perform a basic search. For most fact-finding missions, the Web is there. But--as media specialists well know--the sheer wealth of online information can hamper efforts to focus on a few reliable references.…
ERIC Educational Resources Information Center
Fluke, Christopher J.
2009-01-01
I report on a pilot study on the use of Google Maps to provide virtual field trips as a component of a wholly online graduate course on the history of astronomy. The Astronomical Tourist Web site (http://astronomy.swin.edu.au/sao/tourist), themed around the role that specific locations on Earth have contributed to the development of astronomical…
ERIC Educational Resources Information Center
Griffey, Jason
2007-01-01
The University of Tennessee at Chattanooga (UTC) offers student workshops that range from Cool New Web Stuff (what is on the web that can help make research or just plain life easier) to How To Use Google Scholar. These workshops are brilliant fodder for podcasting. In fact, the initial idea for its podcast project came from a student plagiarism…
Using Web Speech Technology with Language Learning Applications
ERIC Educational Resources Information Center
Daniels, Paul
2015-01-01
In this article, the author presents the history of human-to-computer interaction based upon the design of sophisticated computerized speech recognition algorithms. Advancements such as the arrival of cloud-based computing and software like Google's Web Speech API allows anyone with an Internet connection and Chrome browser to take advantage of…
Tags Help Make Libraries Del.icio.us: Social Bookmarking and Tagging Boost Participation
ERIC Educational Resources Information Center
Rethlefsen, Melissa L.
2007-01-01
Traditional library web products, whether online public access catalogs, library databases, or even library web sites, have long been rigidly controlled and difficult to use. Patrons regularly prefer Google's simple interface. Now social bookmarking and tagging tools help librarians bridge the gap between the library's need to offer authoritative,…
Web Analytics Reveal User Behavior: TTU Libraries' Experience with Google Analytics
ERIC Educational Resources Information Center
Barba, Ian; Cassidy, Ryan; De Leon, Esther; Williams, B. Justin
2013-01-01
Proper planning and assessment surveys of projects for academic library Web sites will not always be predictive of real world use, no matter how many responses they might receive. In this case, multiple-phase development, librarian focus groups, and patron surveys performed before implementation of such a project inaccurately overrated utility and…
Humans Do It Better: Inside the Open Directory Project.
ERIC Educational Resources Information Center
Sherman, Chris
2000-01-01
Explains the Open Directory Project (ODP), an attempt to catalog the World Wide Web by creating a human-compiled Web directory. Discusses the history of the project; open source models; the use of volunteer editors; quality control; problems and complaints; and use of ODP data by commercial services such as Google. (LRW)
Teaching Lab Science Courses Online: Resources for Best Practices, Tools, and Technology
ERIC Educational Resources Information Center
Jeschofnig, Linda; Jeschofnig, Peter
2011-01-01
"Teaching Lab Science Courses Online" is a practical resource for educators developing and teaching fully online lab science courses. First, it provides guidance for using learning management systems and other web 2.0 technologies such as video presentations, discussion boards, Google apps, Skype, video/web conferencing, and social media…
NASA Astrophysics Data System (ADS)
Bao, X.; Cai, X.; Liu, Y.
2009-12-01
Understanding spatiotemporal dynamics of hydrological events such as storms and droughts is highly valuable for decision making on disaster mitigation and recovery. Virtual Globe-based technologies such as Google Earth and Open Geospatial Consortium KML standards show great promise for collaborative exploration of such events using visual analytical approaches. However, there are currently two barriers to wider usage of such approaches. First, there is no easy way to use open-source tools to convert legacy or existing data formats such as shapefiles, GeoTIFF, or web services-based data sources to KML and to produce time-aware KML files. Second, an integrated web portal-based time-aware animation tool is currently not available. Thus users usually share their files in the portal but have no means to visually explore them without leaving the portal environment with which they are familiar. We develop a web portal-based time-aware KML animation tool for viewing extreme hydrologic events. The tool is based on the Google Earth JavaScript API and the Java Portlet standard 2.0 (JSR-286), and it is currently deployable in one of the most popular open-source portal frameworks, namely Liferay. We have also developed an open-source toolkit kml-soc-ncsa (http://code.google.com/p/kml-soc-ncsa/) to facilitate the conversion of multiple formats into KML and the creation of time-aware KML files. We illustrate our tool using some example cases, in which drought and storm events with both time and space dimensions can be explored in this web-based KML animation portlet. The tool provides an easy-to-use web browser-based portal environment for multiple users to collaboratively share and explore their time-aware KML files and to improve their understanding of the spatiotemporal dynamics of hydrological events.
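Producing the time-aware KML that such a portlet animates mainly means attaching TimeSpan elements to each feature. The sketch below uses the third-party simplekml package rather than the authors' kml-soc-ncsa toolkit, and the station names, coordinates, and dates are made up:

    import simplekml

    kml = simplekml.Kml()

    # One placemark per observation; the TimeSpan lets Google Earth's time slider
    # animate the event (names, coordinates, and dates below are illustrative).
    drought_obs = [
        ("Station A", -88.2, 40.1, "2009-06-01", "2009-06-30"),
        ("Station B", -89.4, 39.8, "2009-07-01", "2009-07-31"),
    ]
    for name, lon, lat, begin, end in drought_obs:
        pnt = kml.newpoint(name=name, coords=[(lon, lat)])
        pnt.timespan.begin = begin
        pnt.timespan.end = end

    kml.save("drought_events.kml")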
Google searches help with diagnosis in dermatology.
Amri, Montassar; Feroz, Kaliyadan
2014-01-01
Several previous studies have tried to assess the usefulness of Google search as a diagnostic aid. The results were discordant and have led to controversies. To investigate how often Google search is helpful to reach correct diagnoses in dermatology. Two fifth-year students (A and B) and one demonstrator (C) participated as investigators in this study. Twenty-five diagnostic dermatological cases were selected from all the clinical cases published in the Web-only Images in Clinical Medicine series from March 2005 to November 2009. The main outcome measure of our study was to compare the number of correct diagnoses provided by the investigators without, and with Google search. Investigator A gave correct diagnoses in 9/25 (36%) cases without Google search; his diagnostic success after Google search was 18/25 (72%). Investigator B's results were 11/25 (44%) correct diagnoses without Google search, and 19/25 (76%) after this search. For investigator C, the results were 12/25 (48%) without Google search, and 18/25 (72%) after the use of this tool. Thus, the total correct diagnoses provided by the three investigators were 32 (42.6%) without Google search, and 55 (73.3%) when using this facility. The difference was statistically significant between the total number of correct diagnoses given by the three investigators without, and with Google search (p = 0.0002). In the light of our results, Google search appears to be an interesting diagnostic aid in dermatology. However, we emphasize that diagnosis is primarily an art based on clinical skills and experience.
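The aggregated comparison (32/75 correct diagnoses without Google versus 55/75 with) can be illustrated with a contingency-table test. An unpaired chi-square is assumed here purely for illustration; the paired design of the study may call for a different test, such as McNemar's:

    from scipy.stats import chi2_contingency

    table = [[32, 75 - 32],   # without Google search: correct, incorrect
             [55, 75 - 55]]   # with Google search:    correct, incorrect
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")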
Bootstrapping and Maintaining Trust in the Cloud
2016-03-16
of infrastructure-as-a- service (IaaS) cloud computing services such as Ama- zon Web Services, Google Compute Engine, Rackspace, et. al. means that...Implementation We implemented keylime in ∼3.2k lines of Python in four components: registrar, node, CV, and tenant. The registrar offers a REST-based web ...bootstrap key K. It provides an unencrypted REST-based web service for these two functions. As described earlier, the pro- tocols for exchanging data
Successful participant recruitment strategies for an online smokeless tobacco cessation program.
Gordon, Judith S; Akers, Laura; Severson, Herbert H; Danaher, Brian G; Boles, Shawn M
2006-12-01
An estimated 22% of Americans currently use smokeless tobacco (ST). Most live in small towns and rural areas that offer few ST cessation resources. Approximately 94 million Americans use the Internet for health-related information, and on-line access is growing among lower-income and less-educated groups. As part of a randomized clinical trial to assess the reach and effectiveness of Web-based programs for delivering an ST cessation intervention, the authors developed and evaluated several methods for overcoming the recruitment challenges associated with Web-based research. This report describes and evaluates these methods. Participants were recruited through: (a) Thematic promotional "releases" to print and broadcast media, (b) Google ads, (c) placement of a link on other Web sites, (d) limited purchase of paid advertising, (e) direct mailings to ST users, and (f) targeted mailings to health care and tobacco control professionals. Combined recruitment activities resulted in more than 23,500 hits on our recruitment website from distinct IP addresses over 15 months, which yielded 2,523 eligible ST users who completed the registration process and enrolled in the study. Self-reports revealed that at least 1,276 (50.6%) of these participants were recruited via mailings, 874 (34.6%) from Google ads or via search engines or links on another Web site, and 373 (14.8%) from all other methods combined. The use of thematic mailings is novel in research settings. Recruitment of study participants went quickly and smoothly. Google ads and mailings to media outlets were the methods that recruited the highest number of participants.
Bert, Fabrizio; Passi, Stefano; Scaioli, Giacomo; Gualano, Maria R; Siliquini, Roberta
2016-09-01
Our article aims to give an overview of the most frequently mentioned smartphone pregnancy-related applications (apps). A search string with selected keywords was entered both in a general search engine (Google(®)) and in PubMed. While PubMed returned no pertinent results, a total of 370 web pages were found on Google(®), and 146 of them were selected. All pregnancy-related apps cited at least eight times were included. Information about each app's producer, price, contents, privacy policy, and presence of a scientific board was collected. Finally, nine apps were considered. The majority of them were free and available in the two main online markets (the Apple(®) App Store and Android(®) Google Play). Five apps presented a privacy policy statement, while a scientific board was mentioned in only three. Further studies are needed to deepen knowledge of the main risks of these tools, such as loss of privacy, content control concerns, the digital divide, and a potential reduction in the humanization of care. © The Author(s) 2015.
Implementing Web 2.0 Tools in the Classroom: Four Teachers' Accounts
ERIC Educational Resources Information Center
Kovalik, Cindy; Kuo, Chia-Ling; Cummins, Megan; Dipzinski, Erin; Joseph, Paula; Laskey, Stephanie
2014-01-01
In this paper, four teachers shared their experiences using the following free Web 2.0 tools with their students: Jing, Wix, Google Sites, and Blogger. The teachers found that students reacted positively to lessons in which these tools were used, and also noted improvements they could make when using them in the future.
Development of Web-Based Learning Application for Generation Z
ERIC Educational Resources Information Center
Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati
2016-01-01
This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…
Collaborative Writing with Web 2.0 Technologies: Education Students' Perceptions
ERIC Educational Resources Information Center
Brodahl, Cornelia; Hadjerrouit, Said; Hansen, Nils Kristian
2011-01-01
Web 2.0 technologies are becoming popular in teaching and learning environments. Among them several online collaborative writing tools, like wikis and blogs, have been integrated into educational settings. Research has been carried out on a wide range of subjects related to wikis, while other, comparable tools like Google Docs and EtherPad remain…
Stopping Web Plagiarists from Stealing Your Content
ERIC Educational Resources Information Center
Goldsborough, Reid
2004-01-01
This article gives tips on how to avoid having content stolen by plagiarists. Suggestions include: using a Web search service such as Google to search for unique strings of text from the individual's site to uncover other sites with the same content; buying an infringement-detection program; or hiring a public relations firm to do the work. There are…
With Free Google Alert Services
ERIC Educational Resources Information Center
Gunn, Holly
2005-01-01
Alert services are a great way of keeping abreast of topics that interest you. Rather than searching the Web regularly to find new content about your areas of interest, an alert service keeps you informed by sending you notices when new material is added to the Web that matches your registered search criteria. Alert services are examples of push…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-11
... over 50,000,000 investors on Web sites operated by Google, Interactive Data, and Dow Jones, among... systems ("ATSs"), including dark pools and electronic communication networks ("ECNs"). Each SRO market..., Attain, TracECN, BATS Trading and Direct Edge. Today, BATS publishes its data at no charge on its Web...
A profile of anti-vaccination lobbying on the South African internet, 2011-2013.
Burnett, Rosemary Joyce; von Gogh, Lauren Jennifer; Moloi, Molelekeng H; François, Guido
2015-11-01
The South African Vaccination and Immunisation Centre receives many requests to explain the validity of internet-based anti-vaccination claims. Previous global studies on internet-based anti-vaccination lobbying had not identified anti-vaccination web pages originating in South Africa (SA). To characterise SA internet-based anti-vaccination lobbying. In 2011, searches for anti-vaccination content were performed using Google, Yahoo and MSN-Bing, limited to English-language SA web pages. Content analysis was performed on web pages expressing anti-vaccination sentiment about infant vaccination. This was repeated in 2012 and 2013 using Google, with the first 700 web pages per search being analysed. Blogs/forums, articles and e-shops constituted 40.3%, 55.2% and 4.5% of web pages, respectively. Authors were lay people (63.5%), complementary/alternative medicine (CAM) practitioners (23.1%), medical professionals practising CAM (7.7%) and medical professionals practising only allopathic medicine (5.8%). Advertisements appeared on 55.2% of web pages. Of these, 67.6% were sponsored by or linked to organisations with financial interests in discrediting vaccines, with 80.0% and 24.0% of web pages sponsored by these organisations claiming respectively that vaccines are ineffective and that vaccination is profit driven. The vast majority of web pages (92.5%) claimed that vaccines are not safe, and 77.6% of anti-vaccination claims originated from the USA. South Africans are creating web pages or blogs for local anti-vaccination lobbying. Research is needed to understand what influence internet-based anti-vaccination lobbying has on the uptake of infant vaccination in SA.
An assessment of the visibility of MeSH-indexed medical web catalogs through search engines.
Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.; Douyère, M.; Benichou, J.
2002-01-01
Manually indexed Internet health catalogs such as CliniWeb or CISMeF provide resources for retrieving high-quality health information. Users of these quality-controlled subject gateways are most often referred to them by general search engines such as Google, AltaVista, etc. This raises several questions, among which the following: what is the relative visibility of medical Internet catalogs through search engines? This study addresses this issue by measuring and comparing the visibility of six major, MeSH-indexed health catalogs through four different search engines (AltaVista, Google, Lycos, Northern Light) in two languages (English and French). Over half a million queries were sent to the search engines; for most of these search engines, according to our measures at the time the queries were sent, the most visible catalog for English MeSH terms was CliniWeb and the most visible one for French MeSH terms was CISMeF. PMID:12463965
Greater freedom of speech on Web 2.0 correlates with dominance of views linking vaccines to autism.
Venkatraman, Anand; Garg, Neetika; Kumar, Nilay
2015-03-17
It is suspected that Web 2.0 web sites, with a lot of user-generated content, often support viewpoints that link autism to vaccines. We assessed the prevalence of the views supporting a link between vaccines and autism online by comparing YouTube, Google and Wikipedia with PubMed. Freedom of speech is highest on YouTube and progressively decreases for the others. Support for a link between vaccines and autism is most prominent on YouTube, followed by Google search results. It is far lower on Wikipedia and PubMed. Anti-vaccine activists use scientific arguments, certified physicians and official-sounding titles to gain credibility, while also leaning on celebrity endorsement and personalized stories. Online communities with greater freedom of speech lead to a dominance of anti-vaccine voices. Moderation of content by editors can offer balance between free expression and factual accuracy. Health communicators and medical institutions need to step up their activity on the Internet. Copyright © 2015 Elsevier Ltd. All rights reserved.
Paparo, G. D.; Martin-Delgado, M. A.
2012-01-01
We introduce the characterization of a class of quantum PageRank algorithms in a scenario in which some kind of quantum network is realizable out of the current classical internet web, but no quantum computer is yet available. This class represents a quantization of the PageRank protocol currently employed to list web pages according to their importance. We have found an instance of this class of quantum protocols that outperforms its classical counterpart and may break the classical hierarchy of web pages depending on the topology of the web. PMID:22685626
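For readers unfamiliar with the classical protocol being quantized, the sketch below computes ordinary PageRank by power iteration with NumPy. The four-node link structure and the damping factor of 0.85 are illustrative only; the abstract's quantum walk construction is not reproduced here.

# Classical PageRank by power iteration (the protocol the quantum version quantizes).
# The 4-node link structure and damping factor alpha are illustrative only.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n, alpha = 4, 0.85

# Column-stochastic hyperlink matrix S (dangling nodes would get uniform columns).
S = np.zeros((n, n))
for src, dsts in links.items():
    S[dsts, src] = 1.0 / len(dsts)

G = alpha * S + (1 - alpha) / n * np.ones((n, n))   # Google matrix

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = G @ rank
print(rank / rank.sum())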
On transform coding tools under development for VP10
NASA Astrophysics Data System (ADS)
Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao
2016-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies, the Alliance for Open Media, to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of the available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
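As a generic illustration of what transform coding of a prediction residue means, the sketch below applies a separable 2-D DCT to a residue block and quantizes the coefficients with SciPy. The block size, residue values, and quantization step are made up, and this is not VP10's actual transform set, which the paper describes.

# Generic illustration of transform coding of a prediction residue block:
# a separable 2-D DCT followed by coarse scalar quantization.
import numpy as np
from scipy.fft import dctn, idctn

residue = np.random.default_rng(0).integers(-20, 20, size=(8, 8)).astype(float)

coeffs = dctn(residue, norm="ortho")           # forward transform
step = 8.0                                     # made-up quantization step
quantized = np.round(coeffs / step) * step     # coarse scalar quantization
reconstructed = idctn(quantized, norm="ortho") # inverse transform

print("max reconstruction error:", np.abs(residue - reconstructed).max())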
Bragazzi, Nicola Luigi; Bacigaluppi, Susanna; Robba, Chiara; Nardone, Raffaele; Trinka, Eugen; Brigo, Francesco
2016-02-01
People increasingly use Google looking for health-related information. We previously demonstrated that in English-speaking countries most people use this search engine to obtain information on status epilepticus (SE) definition, types/subtypes, and treatment. Now, we aimed at providing a quantitative analysis of SE-related web queries. This analysis represents an advancement, with respect to what was already previously discussed, in that the Google Trends (GT) algorithm has been further refined and correlational analyses have been carried out to validate the GT-based query volumes. Google Trends-based SE-related query volumes were well correlated with information concerning causes and pharmacological and nonpharmacological treatments. Google Trends can provide both researchers and clinicians with data on realities and contexts that are generally overlooked and underexplored by classic epidemiology. In this way, GT can foster new epidemiological studies in the field and can complement traditional epidemiological tools. Copyright © 2015 Elsevier Inc. All rights reserved.
An overview of the web-based Google Earth coincident imaging tool
Chander, Gyanesh; Kilough, B.; Gowda, S.
2010-01-01
The Committee on Earth Observing Satellites (CEOS) Visualization Environment (COVE) tool is a browser-based application that leverages Google Earth web to display satellite sensor coverage areas. The analysis tool can also be used to identify near simultaneous surface observation locations for two or more satellites. The National Aeronautics and Space Administration (NASA) CEOS System Engineering Office (SEO) worked with the CEOS Working Group on Calibration and Validation (WGCV) to develop the COVE tool. The CEOS member organizations are currently operating and planning hundreds of Earth Observation (EO) satellites. Standard cross-comparison exercises between multiple sensors to compare near-simultaneous surface observations and to identify corresponding image pairs are time-consuming and labor-intensive. COVE is a suite of tools that have been developed to make such tasks easier.
ETDEWEB versus the World-Wide-Web: a specific database/web comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, Debbie
2010-06-28
A study was performed comparing user search results from the specialized scientific database on energy-related information, ETDEWEB, with search results from the internet search engines Google and Google Scholar. The primary objective of the study was to determine if ETDEWEB (the Energy Technology Data Exchange – World Energy Base) continues to bring the user search results that are not being found by Google and Google Scholar. As a multilateral information exchange initiative, ETDE’s member countries and partners contribute cost- and task-sharing resources to build the largest database of energy-related information in the world. As of early 2010, the ETDEWEB database has 4.3 million citations to world-wide energy literature. One of ETDEWEB’s strengths is its focused scientific content and direct access to full text for its grey literature (over 300,000 documents in PDF available for viewing from the ETDE site and over a million additional links to where the documents can be found at research organizations and major publishers globally). Google and Google Scholar are well-known for the wide breadth of the information they search, with Google bringing in news, factual and opinion-related information, and Google Scholar also emphasizing scientific content across many disciplines. The analysis compared the results of 15 energy-related queries performed on all three systems using identical words/phrases. A variety of subjects was chosen, although the topics were mostly in renewable energy areas due to broad international interest. Over 40,000 search result records from the three sources were evaluated. The study concluded that ETDEWEB is a significant resource to energy experts for discovering relevant energy information. For the 15 topics in this study, ETDEWEB was shown to bring the user unique results not shown by Google or Google Scholar 86.7% of the time. Much was learned from the study beyond just metric comparisons. Observations about the strengths of each system and factors impacting the search results are also shared along with background information and summary tables of the results. If a user knows a very specific title of a document, all three systems are helpful in finding the user a source for the document. But if the user is looking to discover relevant documents on a specific topic, each of the three systems will bring back a considerable volume of data, but quite different in focus. Google is certainly a highly-used and valuable tool to find significant ‘non-specialist’ information, and Google Scholar does help the user focus on scientific disciplines. But if a user’s interest is scientific and energy-specific, ETDEWEB continues to hold a strong position in the energy research, technology and development (RTD) information field and adds considerable value in knowledge discovery. (auth)
21st Century Senior Leader Education: Ubiquitous Open Access Learning Environment
2011-02-22
Failures: It's the content, stupid"22 because agencies focus on systems rather than substance and access to the content is critical. The access to Army...Resource Capabilities. 18 As an example to demonstrate how a civilian capability provides learning value to the PLE, the "Google Alerts"® web...technology pushed content to the author for review in the development of this paper. The technology consists of a user creating a Google account, logging
Web GIS in practice V: 3-D interactive and real-time mapping in Second Life
Boulos, Maged N Kamel; Burden, David
2007-01-01
This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275
Scales, David; Zelenev, Alexei; Brownstein, John S.
2013-01-01
Background This is the first study quantitatively evaluating the effect that media-related limitations have on data from an automated epidemic intelligence system. Methods We modeled time series of HealthMap's two main data feeds, Google News and Moreover, to test for evidence of two potential limitations: first, human resources constraints, and second, high-profile outbreaks “crowding out” coverage of other infectious diseases. Results Google News events declined by 58.3%, 65.9%, and 14.7% on Saturday, Sunday and Monday, respectively, relative to other weekdays. Events were reduced by 27.4% during Christmas/New Years weeks and 33.6% lower during American Thanksgiving week than during an average week for Google News. Moreover data yielded similar results with the addition of Memorial Day (US) being associated with a 36.2% reduction in events. Other holiday effects were not statistically significant. We found evidence for a crowd out phenomenon for influenza/H1N1, where a 50% increase in influenza events corresponded with a 4% decline in other disease events for Google News only. Other prominent diseases in this database – avian influenza (H5N1), cholera, or foodborne illness – were not associated with a crowd out phenomenon. Conclusions These results provide quantitative evidence for the limited impact of editorial biases on HealthMap's web-crawling epidemic intelligence. PMID:24206612
Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.
Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan
2018-05-02
The purpose of this study was to utilize sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". Sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance. Tukey tests evaluated differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories (P < 0.001). Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical," "reference," and "patient generated" had slightly positive mean sentiments. Tukey tests showed differences between all category pairs except "medical" versus "reference," with the largest mean difference (-0.13) seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall mean neutral sentiment, and Web sites categorized as "medical," "reference," and "patient generated" have significantly higher sentiment scores than related Web sites in the "legal" and "news/media" categories.
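The abstract does not name the sentiment engine used, so the sketch below stands in TextBlob's polarity score (also on a -1 to +1 scale) and SciPy's one-way ANOVA to illustrate the analysis pipeline; the category labels match the paper, but the page snippets are invented.

# Sketch of the analysis pipeline: score each page's text on a -1..+1 scale and
# compare mean sentiment across categories with a one-way ANOVA.
# TextBlob's polarity is a stand-in; the snippets below are invented examples.
from textblob import TextBlob
from scipy.stats import f_oneway

pages = {
    "legal": ["Thousands of women harmed by defective mesh implants.",
              "File a claim now over your mesh injury."],
    "medical": ["Mesh-augmented repair can improve anatomic outcomes.",
                "Most patients recover well after mesh placement."],
    "reference": ["Vaginal mesh is a synthetic implant used in prolapse surgery.",
                  "Mesh kits were introduced in the early 2000s."],
}

scores = {cat: [TextBlob(text).sentiment.polarity for text in texts]
          for cat, texts in pages.items()}

stat, p = f_oneway(*scores.values())
print(scores)
print("one-way ANOVA:", stat, p)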
OntoMaton: a bioportal powered ontology widget for Google Spreadsheets.
Maguire, Eamonn; González-Beltrán, Alejandra; Whetzel, Patricia L; Sansone, Susanna-Assunta; Rocca-Serra, Philippe
2013-02-15
Data collection in spreadsheets is ubiquitous, but current solutions lack support for collaborative semantic annotation that would promote shared and interdisciplinary annotation practices, supporting geographically distributed players. OntoMaton is an open source solution that brings ontology lookup and tagging capabilities into a cloud-based collaborative editing environment, harnessing Google Spreadsheets and the NCBO Web services. It is a general purpose, format-agnostic tool that may serve as a component of the ISA software suite. OntoMaton can also be used to assist the ontology development process. OntoMaton is freely available from Google widgets under the CPAL open source license; documentation and examples at: https://github.com/ISA-tools/OntoMaton.
ERIC Educational Resources Information Center
Cochrane, Thomas; Antonczak, Laurent; Wagner, Daniel
2013-01-01
The advent of web 2.0 has enabled new forms of collaboration centred upon user-generated content, however, mobile social media is enabling a new wave of social collaboration. Mobile devices have disrupted and reinvented traditional media markets and distribution: iTunes, Google Play and Amazon now dominate music industry distribution channels,…
What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?
ERIC Educational Resources Information Center
Seyedarabi, Faezeh
2014-01-01
This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…
Some Features of "Alt" Texts Associated with Images in Web Pages
ERIC Educational Resources Information Center
Craven, Timothy C.
2006-01-01
Introduction: This paper extends a series on summaries of Web objects, in this case, the alt attribute of image files. Method: Data were logged from 1894 pages from Yahoo!'s random page service and 4703 pages from the Google directory; an img tag was extracted randomly from each where present; its alt attribute, if any, was recorded; and the…
Measuring Link-Resolver Success: Comparing 360 Link with a Local Implementation of WebBridge
ERIC Educational Resources Information Center
Herrera, Gail
2011-01-01
This study reviewed link resolver success comparing 360 Link and a local implementation of WebBridge. Two methods were used: (1) comparing article-level access and (2) examining technical issues for 384 randomly sampled OpenURLs. Google Analytics was used to collect user-generated OpenURLs. For both methods, 360 Link out-performed the local…
Fast segmentation of satellite images using SLIC, WebGL and Google Earth Engine
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Baart, Fedor; Gorelick, Noel; Eisemann, Elmar; van de Giesen, Nick
2017-04-01
Google Earth Engine (GEE) is a parallel geospatial processing platform, which harmonizes access to petabytes of freely available satellite images. It provides a very rich API, allowing development of dedicated algorithms to extract useful geospatial information from these images. At the same time, modern GPUs provide thousands of computing cores, which are mostly not utilized in this context. In the last years, WebGL became a popular and well-supported API, allowing fast image processing directly in web browsers. In this work, we will evaluate the applicability of WebGL to enable fast segmentation of satellite images. A new implementation of a Simple Linear Iterative Clustering (SLIC) algorithm using GPU shaders will be presented. SLIC is a simple and efficient method to decompose an image in visually homogeneous regions. It adapts a k-means clustering approach to generate superpixels efficiently. While this approach will be hard to scale, due to a significant amount of data to be transferred to the client, it should significantly improve exploratory possibilities and simplify development of dedicated algorithms for geoscience applications. Our prototype implementation will be used to improve surface water detection of the reservoirs using multispectral satellite imagery.
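For comparison with the WebGL implementation described above, the sketch below runs the CPU reference implementation of SLIC from scikit-image on a sample RGB image; the segment count and compactness value are arbitrary, and a real satellite tile would simply replace the sample image.

# CPU reference for the same segmentation idea: SLIC superpixels via scikit-image.
# The paper's contribution is a WebGL/GPU implementation for Earth Engine tiles;
# this sketch only shows what SLIC produces on an ordinary sample image.
from skimage import data, segmentation
from skimage.util import img_as_float

image = img_as_float(data.astronaut())   # stand-in for a satellite tile
labels = segmentation.slic(image, n_segments=250, compactness=10, start_label=1)
boundaries = segmentation.mark_boundaries(image, labels)

print("number of superpixels:", labels.max())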
A Web-Based Information System for Field Data Management
NASA Astrophysics Data System (ADS)
Weng, Y. H.; Sun, F. S.
2014-12-01
A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were first analyzed and clearly defined regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs the PHP scripts of the middle tier, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text data. It allows users to select data and plot them on either Google Earth or Google Maps to examine spatial relations. It also makes the sharing of field data easy by converting them into an XML format that is both human-readable and machine-readable, and thus ready for reuse.
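The XML export step described above can be illustrated with a short Python sketch using only the standard library; the record fields and element names are hypothetical rather than the system's actual schema (the production system serves this through its PHP middle tier).

# Minimal sketch of exporting a field record to XML; the fields and schema
# are hypothetical, not the system's actual format.
import xml.etree.ElementTree as ET

record = {
    "station": "Outcrop-12",
    "latitude": "41.005",
    "longitude": "-81.305",
    "lithology": "sandstone",
    "notes": "cross-bedding dipping SE",
}

root = ET.Element("fieldRecord")
for key, value in record.items():
    ET.SubElement(root, key).text = value

xml_bytes = ET.tostring(root, encoding="utf-8", xml_declaration=True)
print(xml_bytes.decode("utf-8"))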
IRIS Earthquake Browser with Integration to the GEON IDV for 3-D Visualization of Hypocenters.
NASA Astrophysics Data System (ADS)
Weertman, B. R.
2007-12-01
We present a new generation of web based earthquake query tool - the IRIS Earthquake Browser (IEB). The IEB combines the DMC's large set of earthquake catalogs (provided by USGS/NEIC, ISC and the ANF) with the popular Google Maps web interface. With the IEB you can quickly and easily find earthquakes in any region of the globe. Using Google's detailed satellite images, earthquakes can be easily co-located with natural geographic features such as volcanoes as well as man made features such as commercial mines. A set of controls allow earthquakes to be filtered by time, magnitude, and depth range as well as catalog name, contributor name and magnitude type. Displayed events can be easily exported in NetCDF format into the GEON Integrated Data Viewer (IDV) where hypocenters may be visualized in three dimensions. Looking "under the hood", the IEB is based on AJAX technology and utilizes REST style web services hosted at the IRIS DMC. The IEB is part of a broader effort at the DMC aimed at making our data holdings available via web services. The IEB is useful both educationally and as a research tool.
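The abstract does not document the IEB's internal service endpoints, so the sketch below instead queries the public IRIS fdsnws-event web service, which supports the same kind of time, magnitude, and depth filtering; the query parameters are illustrative.

# Illustrative query against the public IRIS FDSN event web service (not the
# IEB's own undocumented backend); parameters are made-up examples.
import requests

params = {
    "starttime": "2007-01-01",
    "endtime": "2007-12-31",
    "minmagnitude": 6.5,
    "maxdepth": 100,          # km
    "format": "text",
    "orderby": "time",
}
resp = requests.get("https://service.iris.edu/fdsnws/event/1/query", params=params)
resp.raise_for_status()
for line in resp.text.splitlines()[:5]:   # print the first few catalog rows
    print(line)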
The ethics of Google Earth: crossing thresholds from spatial data to landscape visualisation.
Sheppard, Stephen R J; Cizek, Petr
2009-05-01
'Virtual globe' software systems such as Google Earth are growing rapidly in popularity as a way to visualise and share 3D environmental data. Scientists and environmental professionals, many of whom are new to 3D modeling and visual communications, are beginning routinely to use such techniques in their work. While the appeal of these techniques is evident, with unprecedented opportunities for public access to data and collaborative engagement over the web, are there nonetheless risks in their widespread usage when applied in areas of the public interest such as planning and policy-making? This paper argues that the Google Earth phenomenon, which features realistic imagery of places, cannot be dealt with only as a question of spatial data and geographic information science. The virtual globe type of visualisation crosses several key thresholds in communicating scientific and environmental information, taking it well beyond the realm of conventional spatial data and geographic information science, and engaging more complex dimensions of human perception and aesthetic preference. The realism, perspective views, and social meanings of the landscape visualisations embedded in virtual globes invoke not only cognition but also emotional and intuitive responses, with associated issues of uncertainty, credibility, and bias in interpreting the imagery. This paper considers the types of risks as well as benefits that may exist with participatory uses of virtual globes by experts and lay-people. It is illustrated with early examples from practice and relevant themes from the literature in landscape visualisation and related disciplines such as environmental psychology and landscape planning. Existing frameworks and principles for the appropriate use of environmental visualisation methods are applied to the special case of widely accessible, realistic 3D and 4D visualisation systems such as Google Earth, in the context of public awareness-building and agency decision-making on environmental issues. Relevant principles are suggested which lend themselves to much-needed evaluation of risks and benefits of virtual globe systems. Possible approaches for balancing these benefits and risks include codes of ethics, software design, and metadata templates.
NASA Astrophysics Data System (ADS)
Dimitrova, L. L.; Haines, M.; Holt, W. E.; Schultz, R. A.; Richard, G.; Haines, A. J.
2006-12-01
Interactive maps of surface-breaking faults and stress models on Mars provide important tools to engage undergraduate students, educators, and scientists with current geological and geophysical research. We have developed a map based on the Google Maps API, an Internet-based tool combining DHTML and AJAX, which allows very large maps to be viewed over the World Wide Web. Typically, small portions of the maps are downloaded as needed, rather than the entire image at once. This set-up enables relatively fast access for users with low bandwidth. Furthermore, Google Maps provides an extensible interactive interface, making it ideal for visualizing multiple data sets of the user's choice. The Google Maps API works primarily with data referenced to latitudes and longitudes, which is then mapped in Mercator projection only. We have developed utilities for general cylindrical coordinate systems by converting these coordinates into the equivalent Mercator projection before including them on the map. The MARTIAN project is available at http://rock.geo.sunysb.edu/~holt/Mars/MARTIAN/. We begin with an introduction to the Martian surface using a topography model. Faults from several datasets are classified by type (extension vs. compression) and by time epoch. Deviatoric stresses due to gravitational potential energy differences, calculated from the topography and crustal thickness, can be overlain. Several quantitative measures for the fit of the stress field to the faults are also included. We provide introductory text and exercises spanning a range of topics: how faults are identified, what stress is and how it relates to faults, what gravitational potential energy is and how variations in it produce stress, how the models are created, and how these models can be evaluated and interpreted. The MARTIAN tool is used at Stony Brook University in GEO 310: Introduction to Geophysics, a class geared towards junior and senior geosciences majors. Although this project is in its early stages, high school and college teachers, as well as researchers, have expressed interest in using and extending these tools for visualizing and interacting with data on Earth and other planetary bodies.
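The coordinate handling mentioned above amounts to remapping latitudes onto the Mercator vertical axis that Google Maps uses; the small function below shows the standard formula and is an illustration rather than the project's actual utility code.

# Standard Mercator y-coordinate for a given latitude, the kind of conversion
# needed to line overlays up with a Mercator base map. Illustrative only.
import math

def mercator_y(lat_deg):
    """Mercator y (in longitude-equivalent radians) for a latitude in degrees."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

for lat in (0, 30, 60, 80):
    print(lat, round(mercator_y(lat), 4))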
A Case Study in Web 2.0 Application Development
NASA Astrophysics Data System (ADS)
Marganian, P.; Clark, M.; Shelton, A.; McCarty, M.; Sessoms, E.
2010-12-01
Recent web technologies focusing on languages, frameworks, and tools are discussed, using the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System as the primary example. Within that example, we use a popular Python web framework, Django, to build the extensive web services for our users. We also use a second, complementary server, written in Haskell, to incorporate the core scheduling algorithms. We provide a desktop-quality experience across all the popular browsers for our users with the Google Web Toolkit and judicious use of jQuery in Django templates. Single sign-on and authentication throughout all NRAO web services are accomplished via the Central Authentication Service (CAS) protocol.
Saint: a lightweight integration environment for model annotation.
Lister, Allyson L; Pocock, Matthew; Taschuk, Morgan; Wipat, Anil
2009-11-15
Saint is a web application which provides a lightweight annotation integration environment for quantitative biological models. The system enables modellers to rapidly mark up models with biological information derived from a range of data sources. Saint is freely available for use on the web at http://www.cisban.ac.uk/saint. The web application is implemented in Google Web Toolkit and Tomcat, with all major browsers supported. The Java source code is freely available for download at http://saint-annotate.sourceforge.net. The Saint web server requires an installation of libSBML and has been tested on Linux (32-bit Ubuntu 8.10 and 9.04).
Spectral properties of Google matrix of Wikipedia and other networks
NASA Astrophysics Data System (ADS)
Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.
2013-05-01
We study the properties of eigenvalues and eigenvectors of the Google matrix of the Wikipedia articles hyperlink network and other real networks. With the help of the Arnoldi method, we analyze the distribution of eigenvalues in the complex plane and show that eigenstates with significant eigenvalue modulus are located on well defined network communities. We also show that the correlator between PageRank and CheiRank vectors distinguishes different organizations of information flow on BBC and Le Monde web sites.
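A toy version of this computation is sketched below: the Google matrix of a small directed network is formed and its leading eigenvalues are obtained with SciPy's ARPACK wrapper, which implements an implicitly restarted Arnoldi method. The five-node network and damping factor are made up, not the Wikipedia or news-site networks studied in the paper.

# Toy Google-matrix spectrum: build G for a small directed network and compute
# leading eigenvalues with ARPACK (implicitly restarted Arnoldi). Made-up network.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigs

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)]   # directed links
n, alpha = 5, 0.85

A = np.zeros((n, n))
for src, dst in edges:
    A[dst, src] = 1.0
S = A / A.sum(axis=0, keepdims=True)     # column-stochastic hyperlink matrix
G = alpha * S + (1 - alpha) / n          # Google matrix

vals, vecs = eigs(csr_matrix(G), k=3, which="LM")   # leading eigenvalues
print(np.sort_complex(vals)[::-1])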
Content and Accessibility of Shoulder and Elbow Fellowship Web Sites in the United States.
Young, Bradley L; Oladeji, Lasun O; Cichos, Kyle; Ponce, Brent
2016-01-01
Increasing numbers of training physicians are using the Internet to gather information about graduate medical education programs. The content and accessibility of web sites that provide this information have been demonstrated to influence applicants' decisions. Assessments of orthopedic fellowship web sites, including those in sports medicine, pediatrics, hand, and spine, have found varying degrees of accessibility and material. The purpose of this study was to evaluate the accessibility and content of the American Shoulder and Elbow Surgeons (ASES) fellowship web sites (SEFWs). A complete list of ASES programs was obtained from a database on the ASES web site. The accessibility of each SEFW was assessed by the existence of a functioning link found in the database and through Google®. Then, the following content areas of each SEFW were evaluated: fellow education, faculty/previous fellow information, and recruitment. At the time of the study, 17 of the 28 (60.7%) ASES programs had web sites accessible through Google®, and only five (17.9%) had functioning links in the ASES database. Nine programs lacked a web site. Concerning web site content, the majority of SEFWs contained information regarding research opportunities, research requirements, case descriptions, meetings and conferences, teaching responsibilities, attending faculty, the application process, and a program description. Fewer than half of the SEFWs provided information regarding rotation schedules, current fellows, previous fellows, on-call expectations, journal clubs, medical school of current fellows, residency of current fellows, employment of previous fellows, current research, and previous research. A large portion of ASES fellowship programs lacked functioning web sites, and even fewer provided functioning links through the ASES database.
Duran-Nelson, Alisa; Gladding, Sophia; Beattie, Jim; Nixon, L James
2013-06-01
To determine which resources residents use at the point-of-care (POC) for decision making, the drivers for selection of these resources, and how residents use Google/Google Scholar to answer clinical questions at the POC. In January 2012, 299 residents from three internal medicine residencies were sent an electronic survey regarding resources used for POC decision making. Resource use frequency and factors influencing choice were determined using descriptive statistics. Binary logistic regression analysis was performed to determine relationships between the independent variables. A total of 167 residents (56%) responded; similar numbers responded at each level of training. Residents most frequently reported using UpToDate and Google at the POC at least daily (85% and 63%, respectively), with speed and trust in the quality of information being the primary drivers of selection. Google, used by 68% of residents, was used primarily to locate Web sites and general information about diseases, whereas Google Scholar, used by 30% of residents, tended to be used for treatment and management decisions or locating a journal article. The findings suggest that internal medicine residents use UpToDate most frequently, followed by consultation with faculty and the search engines Google and Google Scholar; speed, trust, and portability are the biggest drivers for resource selection; and time and information overload appear to be the biggest barriers to resources such as Ovid MEDLINE. Residents frequently used Google and may benefit from further training in information management skills.
Meric, Funda; Bernstam, Elmer V; Mirza, Nadeem Q; Hunt, Kelly K; Ames, Frederick C; Ross, Merrick I; Kuerer, Henry M; Pollock, Raphael E; Musen, Mark A; Singletary, S Eva
2002-01-01
Objectives To determine the characteristics of popular breast cancer related websites and whether more popular sites are of higher quality. Design The search engine Google was used to generate a list of websites about breast cancer. Google ranks search results by measures of link popularity (the number of links to a site from other sites). The top 200 sites returned in response to the query “breast cancer” were divided into “more popular” and “less popular” subgroups by three different measures of link popularity: Google rank and number of links reported independently by Google and by AltaVista (another search engine). Main outcome measures Type and quality of content. Results More popular sites according to Google rank were more likely than less popular ones to contain information on ongoing clinical trials (27% v 12%, P=0.01), results of trials (12% v 3%, P=0.02), and opportunities for psychosocial adjustment (48% v 23%, P<0.01). These characteristics were also associated with a higher number of links as reported by Google and AltaVista. More popular sites by number of linking sites were also more likely to provide updates on other breast cancer research, information on legislation and advocacy, and a message board service. Measures of quality such as display of authorship, attribution or references, currency of information, and disclosure did not differ between groups. Conclusions Popularity of websites is associated with type rather than quality of content. Sites that include content correlated with popularity may best meet the public's desire for information about breast cancer. What is already known on this topic: Patients are using the world wide web to search for health information. Breast cancer is one of the most popular search topics. Characteristics of popular websites may reflect the information needs of patients. What this study adds: Type rather than quality of content correlates with popularity of websites. Measures of quality correlate with accuracy of medical information. PMID:11884322
Adawi, Mohammad; Watad, Abdulla; Sharif, Kassem; Amital, Howard; Mahroum, Naim
2017-01-01
Background Mayaro virus (MAYV), first discovered in Trinidad in 1954, is spread by the Haemagogus mosquito. Small outbreaks have been described in the past in the Amazon jungles of Brazil and other parts of South America. Recently, a case was reported in rural Haiti. Objective Given the emerging importance of MAYV, we aimed to explore the feasibility of exploiting a Web-based tool for monitoring and tracking MAYV cases. Methods Google Trends is an online tracking system. A Google-based approach is particularly useful for monitoring infectious disease epidemics. We searched Google Trends from its inception (from January 2004 through to May 2017) for MAYV-related Web searches worldwide. Results We noted a burst in search volumes in the period from July 2016 (relative search volume [RSV]=13%) to December 2016 (RSV=18%), with a peak in September 2016 (RSV=100%). Before this burst, the average search activity related to MAYV was very low (median 1%). MAYV-related queries were concentrated in the Caribbean. Scientific interest from the research community and media coverage affected digital seeking behavior. Conclusions MAYV has always circulated in South America. Its recent appearance in the Caribbean has been a source of concern, which resulted in a burst of Internet queries. While Google Trends cannot be used to perform real-time epidemiological surveillance of MAYV, it can be exploited to capture the public’s reaction to outbreaks. Public health workers should be aware of this, in that information and communication technologies could be used to communicate with users, reassure them about their concerns, and empower them in making decisions affecting their health. PMID:29196278
Adawi, Mohammad; Bragazzi, Nicola Luigi; Watad, Abdulla; Sharif, Kassem; Amital, Howard; Mahroum, Naim
2017-12-01
Mayaro virus (MAYV), first discovered in Trinidad in 1954, is spread by the Haemagogus mosquito. Small outbreaks have been described in the past in the Amazon jungles of Brazil and other parts of South America. Recently, a case was reported in rural Haiti. Given the emerging importance of MAYV, we aimed to explore the feasibility of exploiting a Web-based tool for monitoring and tracking MAYV cases. Google Trends is an online tracking system. A Google-based approach is particularly useful for monitoring infectious disease epidemics. We searched Google Trends from its inception (from January 2004 through to May 2017) for MAYV-related Web searches worldwide. We noted a burst in search volumes in the period from July 2016 (relative search volume [RSV]=13%) to December 2016 (RSV=18%), with a peak in September 2016 (RSV=100%). Before this burst, the average search activity related to MAYV was very low (median 1%). MAYV-related queries were concentrated in the Caribbean. Scientific interest from the research community and media coverage affected digital seeking behavior. MAYV has always circulated in South America. Its recent appearance in the Caribbean has been a source of concern, which resulted in a burst of Internet queries. While Google Trends cannot be used to perform real-time epidemiological surveillance of MAYV, it can be exploited to capture the public's reaction to outbreaks. Public health workers should be aware of this, in that information and communication technologies could be used to communicate with users, reassure them about their concerns, and empower them in making decisions affecting their health. ©Mohammad Adawi, Nicola Luigi Bragazzi, Abdulla Watad, Kassem Sharif, Howard Amital, Naim Mahroum. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 01.12.2017.
Wáng, Yì-Xiáng J; Arora, Richa; Choi, Yongdoo; Chung, Hsiao-Wen; Egorov, Vyacheslav I; Frahm, Jens; Kudo, Hiroyuki; Kuyumcu, Suleyman; Laurent, Sophie; Loffroy, Romaric; Maurea, Simone; Morcos, Sameh K; Ni, Yicheng; Oei, Edwin H G; Sabarudin, Akmal; Yu, Xin
2014-12-01
Journal-based metrics are known not to be ideal for measuring the quality of an individual researcher's scientific output. In the current report, 16 contributors from Hong Kong SAR, India, Korea, Taiwan, Russia, Germany, Japan, Turkey, Belgium, France, Italy, the UK, The Netherlands, Malaysia, and the USA were invited. The following six questions were asked: (I) Are the Web of Science journal impact factor (IF) and Institute for Scientific Information (ISI) citation counts the main academic output performance evaluation tools in your institution and your country? (II) How much does Google citation count in your institution and your country? (III) If a paper is published in a non-SCI journal but is included in PubMed and searchable by Google Scholar, how is it valued compared with a paper published in a journal with an IF? (IV) Do you value publishing a piece of your work in a non-SCI journal as much as a paper published in a journal with an IF? (V) What is your personal view on the metric measurement of scientific output? (VI) Overall, do you think the Web of Science journal IF is beneficial, or is it actually doing more harm? The results show that the IF and ISI citations heavily affect academic life in most of the institutions. Google citation counts, while widely used, convenient, and speedy, have not gained wide 'official' recognition as a tool for scientific output evaluation.
Tapir: A web interface for transit/eclipse observability
NASA Astrophysics Data System (ADS)
Jensen, Eric
2013-06-01
Tapir is a set of tools, written in Perl, that provides a web interface for showing the observability of periodic astronomical events, such as exoplanet transits or eclipsing binaries. The package provides tools for creating finding charts for each target and airmass plots for each event. The code can access target lists that are stored on-line in a Google spreadsheet or in a local text file.
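Tapir itself is written in Perl, so the Python/astropy sketch below only illustrates the airmass computation behind the plots it produces; the observatory location, target coordinates, and observing window are invented.

# Airmass (sec z) of a target over a hypothetical transit window, the quantity
# plotted by tools like Tapir. Location, target, and times are made up.
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation, AltAz

site = EarthLocation(lat=31.96 * u.deg, lon=-111.60 * u.deg, height=2096 * u.m)
target = SkyCoord(ra=300.18 * u.deg, dec=22.71 * u.deg)   # hypothetical host star
times = Time("2024-07-01 03:00:00") + np.linspace(0, 6, 25) * u.hour

altaz = target.transform_to(AltAz(obstime=times, location=site))
for t, am in zip(times.iso, altaz.secz):
    print(t, float(am))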
Using Google Analytics to evaluate the impact of the CyberTraining project.
McGuckin, Conor; Crowley, Niall
2012-11-01
A focus on results and impact should be at the heart of every project's approach to research and dissemination. This article discusses the potential of Google Analytics (GA: http://google.com/analytics ) as an effective resource for measuring the impact of academic research output and understanding the geodemographics of users of specific Web 2.0 content (e.g., intervention and prevention materials, health promotion and advice). This article presents the results of GA analyses as a resource used in measuring the impact of the EU-funded CyberTraining project, which provided a well-grounded, research-based training manual on cyberbullying for trainers through the medium of a Web-based eBook ( www.cybertraining-project.org ). The training manual includes review information on cyberbullying, its nature and extent across Europe, analyses of current projects, and provides resources for trainers working with the target groups of pupils, parents, teachers, and other professionals. Results illustrate the promise of GA as an effective tool for measuring the impact of academic research and project output with real potential for tracking and understanding intra- and intercountry regional variations in the uptake of prevention and intervention materials, thus enabling precision focusing of attention to those regions.
Stahl, J-P; Cohen, R; Denis, F; Gaudelus, J; Martinot, A; Lery, T; Lepetit, H
2016-05-01
Vaccine hesitancy is a growing and threatening trend, increasing the risk of disease outbreaks and potentially defeating health authorities' strategies. We aimed to describe the significant role of social networks and the Internet on vaccine hesitancy, and more generally on vaccine attitudes and behaviors. Presentation and discussion of lessons learnt from: (i) the monitoring and analysis of web and social network contents on vaccination; (ii) the tracking of Google search terms used by web users; (iii) the analysis of Google search suggestions related to vaccination; (iv) results from the Vaccinoscopie(©) study, online annual surveys of representative samples of 6500 to 10,000 French mothers, monitoring vaccine behaviors and attitude of French parents as well as vaccination coverage of their children, since 2008; and (v) various studies published in the scientific literature. Social networks and the web play a major role in disseminating information about vaccination. They have modified the vaccination decision-making process and, more generally, the doctor/patient relationship. The Internet may fuel controversial issues related to vaccination and durably impact public opinion, but it may also provide new tools to fight against vaccine hesitancy. Vaccine hesitancy should be fought on the Internet battlefield, and for this purpose, communication strategies should take into account new threats and opportunities offered by the web and social networks. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
An Evaluation of Web- and Print-Based Methods to Attract People to a Physical Activity Intervention
Jennings, Cally; Plotnikoff, Ronald C; Vandelanotte, Corneel
2016-01-01
Background Cost-effective and efficient methods to attract people to Web-based health behavior interventions need to be identified. Traditional print methods including leaflets, posters, and newspaper advertisements remain popular despite the expanding range of Web-based advertising options that have the potential to reach larger numbers at lower cost. Objective This study evaluated the effectiveness of multiple Web-based and print-based methods to attract people to a Web-based physical activity intervention. Methods A range of print-based (newspaper advertisements, newspaper articles, letterboxing, leaflets, and posters) and Web-based (Facebook advertisements, Google AdWords, and community calendars) methods were applied to attract participants to a Web-based physical activity intervention in Australia. The time investment, cost, number of first time website visits, the number of completed sign-up questionnaires, and the demographics of participants were recorded for each advertising method. Results A total of 278 people signed up to participate in the physical activity program. Of the print-based methods, newspaper advertisements totaled AUD $145, letterboxing AUD $135, leaflets AUD $66, posters AUD $52, and newspaper article AUD $3 per sign-up. Of the Web-based methods, Google AdWords totaled AUD $495, non-targeted Facebook advertisements AUD $68, targeted Facebook advertisements AUD $42, and community calendars AUD $12 per sign-up. Although the newspaper article and community calendars cost the least per sign-up, they resulted in only 17 and 6 sign-ups respectively. The targeted Facebook advertisements were the next most cost-effective method and reached a large number of sign-ups (n=184). The newspaper article and the targeted Facebook advertisements required the lowest time investment per sign-up (5 and 7 minutes respectively). People reached through the targeted Facebook advertisements were on average older (60 years vs 50 years, P<.001) and had a higher body mass index (32 vs 30, P<.05) than people reached through the other methods. Conclusions Overall, our results demonstrate that targeted Facebook advertising is the most cost-effective and efficient method at attracting moderate numbers to physical activity interventions in comparison to the other methods tested. Newspaper advertisements, letterboxing, and Google AdWords were not effective. The community calendars and newspaper articles may be effective for small community interventions. ClinicalTrial Australian New Zealand Clinical Trials Registry: ACTRN12614000339651; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=363570&isReview=true (Archived by WebCite at http://www.webcitation.org/6hMnFTvBt) PMID:27235075
An Evaluation of Web- and Print-Based Methods to Attract People to a Physical Activity Intervention.
Alley, Stephanie; Jennings, Cally; Plotnikoff, Ronald C; Vandelanotte, Corneel
2016-05-27
Cost-effective and efficient methods to attract people to Web-based health behavior interventions need to be identified. Traditional print methods including leaflets, posters, and newspaper advertisements remain popular despite the expanding range of Web-based advertising options that have the potential to reach larger numbers at lower cost. This study evaluated the effectiveness of multiple Web-based and print-based methods to attract people to a Web-based physical activity intervention. A range of print-based (newspaper advertisements, newspaper articles, letterboxing, leaflets, and posters) and Web-based (Facebook advertisements, Google AdWords, and community calendars) methods were applied to attract participants to a Web-based physical activity intervention in Australia. The time investment, cost, number of first time website visits, the number of completed sign-up questionnaires, and the demographics of participants were recorded for each advertising method. A total of 278 people signed up to participate in the physical activity program. Of the print-based methods, newspaper advertisements totaled AUD $145, letterboxing AUD $135, leaflets AUD $66, posters AUD $52, and newspaper article AUD $3 per sign-up. Of the Web-based methods, Google AdWords totaled AUD $495, non-targeted Facebook advertisements AUD $68, targeted Facebook advertisements AUD $42, and community calendars AUD $12 per sign-up. Although the newspaper article and community calendars cost the least per sign-up, they resulted in only 17 and 6 sign-ups respectively. The targeted Facebook advertisements were the next most cost-effective method and reached a large number of sign-ups (n=184). The newspaper article and the targeted Facebook advertisements required the lowest time investment per sign-up (5 and 7 minutes respectively). People reached through the targeted Facebook advertisements were on average older (60 years vs 50 years, P<.001) and had a higher body mass index (32 vs 30, P<.05) than people reached through the other methods. Overall, our results demonstrate that targeted Facebook advertising is the most cost-effective and efficient method at attracting moderate numbers to physical activity interventions in comparison to the other methods tested. Newspaper advertisements, letterboxing, and Google AdWords were not effective. The community calendars and newspaper articles may be effective for small community interventions. Australian New Zealand Clinical Trials Registry: ACTRN12614000339651; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=363570&isReview=true (Archived by WebCite at http://www.webcitation.org/6hMnFTvBt).
Moors, Amy C
2017-01-01
Finding romance, love, and sexual intimacy is a central part of our life experience. Although people engage in romance in a variety of ways, alternatives to "the couple" are largely overlooked in relationship research. Scholars and the media have recently argued that the rules of romance are changing, suggesting that interest in consensual departures from monogamy may become popular as people navigate their long-term coupling. This study utilizes Google Trends to assess Americans' interest in seeking out information related to consensual nonmonogamous relationships across a 10-year period (2006-2015). Using anonymous Web queries from hundreds of thousands of Google search engine users, results show that searches for words related to polyamory and open relationships (but not swinging) have significantly increased over time. Moreover, the magnitude of the correlation between consensual nonmonogamy Web queries and time was significantly higher than popular Web queries over the same time period, indicating this pattern of increased interest in polyamory and open relationships is unique. Future research avenues for incorporating consensual nonmonogamous relationships into relationship science are discussed.
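A rough sketch of how such a query-volume trend could be pulled and tested, using the unofficial pytrends client for Google Trends (a third-party library, not a tool named in the abstract); the keywords, timeframe, and region are assumptions for illustration only.

```python
# Sketch only: pytrends is an unofficial Google Trends client, and the keyword
# list, timeframe, and geography here are assumptions, not the study's protocol.
from pytrends.request import TrendReq
from scipy.stats import linregress

pytrends = TrendReq(hl="en-US", tz=360)
terms = ["polyamory", "open relationship", "swinging"]
pytrends.build_payload(terms, timeframe="2006-01-01 2015-12-31", geo="US")
rsv = pytrends.interest_over_time()          # monthly relative search volume, 0-100

for term in terms:
    t = list(range(len(rsv)))                # months since the start of the window
    slope, _, r, p, _ = linregress(t, rsv[term])
    print(f"{term:18s} slope={slope:+.3f}/month  r={r:+.2f}  p={p:.3g}")
```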
The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching
Haddaway, Neal Robert; Collins, Alexandra Mary; Coughlin, Deborah; Kirk, Stuart
2015-01-01
Google Scholar (GS), a commonly used web-based academic search engine, catalogues between 2 and 100 million records of both academic and grey literature (articles not formally published by commercial academic publishers). Google Scholar collates results from across the internet and is free to use. As a result it has received considerable attention as a method for searching for literature, particularly in searches for grey literature, as required by systematic reviews. The reliance on GS as a standalone resource has been greatly debated, however, and its efficacy in grey literature searching has not yet been investigated. Using systematic review case studies from environmental science, we investigated the utility of GS in systematic reviews and in searches for grey literature. Our findings show that GS results contain moderate amounts of grey literature, with the majority found on average at page 80. We also found that, when searched for specifically, the majority of literature identified using Web of Science was also found using GS. However, our findings showed moderate/poor overlap in results when similar search strings were used in Web of Science and GS (10–67%), and that GS missed some important literature in five of six case studies. Furthermore, a general GS search failed to find any grey literature from a case study that involved manual searching of organisations’ websites. If used in systematic reviews for grey literature, we recommend that searches of article titles focus on the first 200 to 300 results. We conclude that whilst Google Scholar can find much grey literature and specific, known studies, it should not be used alone for systematic review searches. Rather, it forms a powerful addition to other traditional search methods. In addition, we advocate the use of tools to transparently document and catalogue GS search results to maintain high levels of transparency and the ability to be updated, critical to systematic reviews. PMID:26379270
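The overlap percentages reported above reduce to straightforward set arithmetic once the two result lists are exported. A minimal sketch, assuming two plain-text files of titles (hypothetical file names) and a crude normalisation rule:

```python
# Share of Web of Science hits also retrieved by Google Scholar, from two
# exported title lists (file names and the normalisation rule are assumptions).
def normalise(title: str) -> str:
    return "".join(ch.lower() for ch in title if ch.isalnum())

wos_hits = {normalise(t) for t in open("wos_titles.txt", encoding="utf-8")}
gs_hits  = {normalise(t) for t in open("gs_titles.txt", encoding="utf-8")}

shared = wos_hits & gs_hits
print(f"Overlap: {len(shared)} of {len(wos_hits)} WoS records "
      f"({100 * len(shared) / len(wos_hits):.0f}%) also found in Google Scholar")
```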
Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D
2013-06-19
The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.
Engaging the YouTube Google-Eyed Generation: Strategies for Using Web 2.0 in Teaching and Learning
ERIC Educational Resources Information Center
Duffy, Peter
2008-01-01
YouTube, Podcasting, Blogs, Wikis and RSS are buzz words currently associated with the term Web 2.0 and represent a shifting pedagogical paradigm for the use of a new set of tools within education. The implication here is a possible shift from the basic archetypical vehicles used for (e)learning today (lecture notes, printed material, PowerPoint,…
Extensible Probabilistic Repository Technology (XPRT)
2004-10-01
projects, such as, Centaurus , Evidence Data Base (EDB), etc., others were fabricated, such as INS and FED, while others contain data from the open...Google Web Report Unlimited SOAP API News BBC News Unlimited WEB RSS 1.0 Centaurus Person Demographics 204,402 people from 240 countries...objects of the domain ontology map to the various simulated data-sources. For example, the PersonDemographics are stored in the Centaurus database, while
Development of a Web-Based Visualization Platform for Climate Research Using Google Earth
NASA Technical Reports Server (NTRS)
Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue
2011-01-01
Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open data-sharing services, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE dramatically extends the awareness and visibility of scientific results. Using the embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.
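As an illustration of the kind of interoperability described, a standard WMS GetMap request can be wrapped in a KML GroundOverlay so that Google Earth drapes the returned image over the region of interest. The endpoint, layer name, and bounding box below are placeholders, not details of the system in the abstract.

```python
# Wrap a WMS GetMap URL in a KML GroundOverlay for display in Google Earth.
from xml.sax.saxutils import escape

WMS = "https://example.org/wms"        # hypothetical WMS endpoint
LAYER = "surface_air_temperature"      # hypothetical layer name
north, south, east, west = 50.0, 20.0, 40.0, -10.0

getmap = (f"{WMS}?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS={LAYER}"
          f"&STYLES=&SRS=EPSG:4326&BBOX={west},{south},{east},{north}"
          f"&WIDTH=1024&HEIGHT=614&FORMAT=image/png&TRANSPARENT=true")

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{LAYER}</name>
    <Icon><href>{escape(getmap)}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

with open("overlay.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```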
Integrating Radar Image Data with Google Maps
NASA Technical Reports Server (NTRS)
Chapman, Bruce D.; Gibas, Sarah
2010-01-01
A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically, three Perl scripts that query the database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
Global trends in the awareness of sepsis: insights from search engine data between 2012 and 2017.
Jabaley, Craig S; Blum, James M; Groff, Robert F; O'Reilly-Shah, Vikas N
2018-01-17
Sepsis is an established global health priority with high mortality that can be curtailed through early recognition and intervention; as such, efforts to raise awareness are potentially impactful and increasingly common. We sought to characterize trends in the awareness of sepsis by examining temporal, geographic, and other changes in search engine utilization for sepsis information-seeking online. Using time series analyses and mixed descriptive methods, we retrospectively analyzed publicly available global usage data reported by Google Trends (Google, Palo Alto, CA, USA) concerning web searches for the topic of sepsis between 24 June 2012 and 24 June 2017. Google Trends reports aggregated and de-identified usage data for its search products, including interest over time, interest by region, and details concerning the popularity of related queries where applicable. Outlying epochs of search activity were identified using autoregressive integrated moving average modeling with transfer functions. We then identified awareness campaigns and news media coverage that correlated with epochs of significantly heightened search activity. A second-order autoregressive model with transfer functions was specified following preliminary outlier analysis. Nineteen significant outlying epochs above the modeled baseline were identified in the final analysis that correlated with 14 awareness and news media events. Our model demonstrated that the baseline level of search activity increased in a nonlinear fashion. A recurrent cyclic increase in search volume beginning in 2012 was observed that correlates with World Sepsis Day. Numerous other awareness and media events were correlated with outlying epochs. The average worldwide search volume for sepsis was less than that of influenza, myocardial infarction, and stroke. Analyzing aggregate search engine utilization data has promise as a mechanism to measure the impact of awareness efforts. Heightened information-seeking about sepsis occurs in close proximity to awareness events and relevant news media coverage. Future work should focus on validating this approach in other contexts and comparing its results to traditional methods of awareness campaign evaluation.
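A heavily simplified sketch of the outlier-epoch idea: fit a low-order autoregressive model to a weekly search-volume series and flag weeks whose residuals rise well above the model. The published analysis adds transfer functions for media events, which are omitted here; the file and column names are assumptions.

```python
# Flag candidate "awareness epochs" as weeks whose ARIMA residuals exceed 3 SD.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rsv = pd.read_csv("sepsis_trends.csv", parse_dates=["week"],
                  index_col="week")["sepsis"]        # weekly relative search volume

fit = ARIMA(rsv, order=(2, 0, 0)).fit()              # second-order autoregressive model
resid = fit.resid
threshold = resid.mean() + 3 * resid.std()

outlying_weeks = resid[resid > threshold].index
print("Candidate awareness/media epochs:",
      list(outlying_weeks.strftime("%Y-%m-%d")))
```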
NASA Astrophysics Data System (ADS)
Palla, Gergely; Farkas, Illés J.; Pollner, Péter; Derényi, Imre; Vicsek, Tamás
2007-06-01
A search technique locating network modules, i.e. internally densely connected groups of nodes in directed networks is introduced by extending the clique percolation method originally proposed for undirected networks. After giving a suitable definition for directed modules we investigate their percolation transition in the Erdos-Rényi graph both analytically and numerically. We also analyse four real-world directed networks, including Google's own web-pages, an email network, a word association graph and the transcriptional regulatory network of the yeast Saccharomyces cerevisiae. The obtained directed modules are validated by additional information available for the nodes. We find that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules. Accordingly, in the word-association network and Google's web-pages, overlaps are likely to contain in-hubs, whereas the modules in the email and transcriptional regulatory network tend to overlap via out-hubs.
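For orientation, the undirected clique percolation method that this work extends is available in networkx; the directed variant introduced in the paper is not, so the sketch below only illustrates the base technique on a stand-in graph.

```python
# Undirected clique percolation (the method the paper generalises to directed graphs).
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.karate_club_graph()                              # stand-in for a real network
modules = [set(m) for m in k_clique_communities(G, 4)]  # unions of adjacent 4-cliques

for i, m in enumerate(modules):
    print(f"module {i}: {sorted(m)}")

# The quantity the paper examines: nodes shared by more than one module.
for (i, a), (j, b) in combinations(enumerate(modules), 2):
    shared = a & b
    if shared:
        print(f"modules {i} and {j} overlap via nodes {sorted(shared)}")
```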
Google matrix analysis of directed networks
NASA Astrophysics Data System (ADS)
Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.
2015-10-01
In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing has become a formidable task for the society. Because of the rapid growth of the World Wide Web, and social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
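A minimal numerical sketch of the object at the centre of this review: the Google matrix G = αS + (1 − α)/N built from a toy adjacency matrix, with the PageRank vector obtained by power iteration (α = 0.85 is the conventional damping factor).

```python
# Build the Google matrix of a 4-node toy network and power-iterate to PageRank.
import numpy as np

A = np.array([[0, 1, 1, 0],        # A[i, j] = 1 if page j links to page i
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)

N = A.shape[0]
col_sums = A.sum(axis=0)
# S: column-stochastic link matrix; dangling columns become uniform (1/N).
S = np.where(col_sums > 0, A / np.where(col_sums == 0, 1, col_sums), 1.0 / N)

alpha = 0.85
G = alpha * S + (1 - alpha) / N * np.ones((N, N))

p = np.full(N, 1.0 / N)             # power iteration for the PageRank vector
for _ in range(100):
    p = G @ p
p /= p.sum()
print("PageRank:", np.round(p, 3))
```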
Participating in the Geospatial Web: Collaborative Mapping, Social Networks and Participatory GIS
NASA Astrophysics Data System (ADS)
Rouse, L. Jesse; Bergeron, Susan J.; Harris, Trevor M.
In 2005, Google, Microsoft and Yahoo! released free Web mapping applications that opened up digital mapping to mainstream Internet users. Importantly, these companies also released free APIs for their platforms, allowing users to geo-locate and map their own data. These initiatives have spurred the growth of the Geospatial Web and represent spatially aware online communities and new ways of enabling communities to share information from the bottom up. This chapter explores how the emerging Geospatial Web can meet some of the fundamental needs of Participatory GIS projects to incorporate local knowledge into GIS, as well as promote public access and collaborative mapping.
Use of Web 2.0 tools by hospital pharmacists.
Bonaga Serrano, B; Aldaz Francés, R; Garrigues Sebastiá, M R; Hernández San Salvador, M
2014-04-01
Web 2.0 tools are transforming the pathways health professionals use to communicate among themselves and with their patients, and this situation calls for a change of mindset to implement them. The aim of our study is to assess the state of knowledge of the main Web 2.0 applications and how they are used in a sample of hospital pharmacists. The study was carried out through an anonymous survey of all members of the Spanish Society of Hospital Pharmacy (SEFH) by means of a questionnaire sent by the Google Drive® application. After the 3-month study period was completed, collected data were compiled and then analyzed using SPSS v15.0. The response rate was 7.3%, with 70.5% of respondents being female and 76.3% specialists. The majority of respondents (54.2%) were aged 20 to 35. PubMed was the main way of accessing published articles. 65.2% of pharmacists knew the term "Web 2.0". 45.3% of pharmacists were Twitter users, and over 58.9% of these used it mainly for professional purposes. Most pharmacists believed that Twitter was a good tool to interact with professionals and patients. 78.7% did not use an aggregator, but when one was used, Google Reader was the most common. Although Web 2.0 applications are gaining mainstream popularity, some health professionals may resist using them. In fact, more than half of the surveyed pharmacists reported a lack of knowledge about Web 2.0 tools. It would be positive for pharmacists to use these tools properly during their professional practice to get the best out of them. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Basic GA Tools to Evaluate Your Web Area
Learn steps and tips for creating these Google Analytics (GA) reports, so you can learn which pages are popular or unpopular, which PDFs are getting looked at, who is using your pages, what search terms they used, and more.
From the Director: Surfing the Web for Health Information
... Reliable Results Most Internet users first visit a search engine — like Google or Yahoo! — when seeking health information. ... medical terms like "cancer" or "diabetes" into a search engine, the top-ten results will likely include authoritative ...
NASA Astrophysics Data System (ADS)
Yamagishi, Y.; Yanaka, H.; Tsuboi, S.
2009-12-01
We have developed a conversion tool for seismic tomography data into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data containing longitude, latitude, and seismic velocity anomaly, with each data file holding the data for one depth. Metadata, such as the bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format for representing tomographic models. Recently, the European seismology research project NEIRES (Network of Research Infrastructures for European Seismology) has advocated that seismic tomography data should be standardized. They propose a new format based on JSON (JavaScript Object Notation), a lightweight data-interchange format, as a standard for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects and the elements are therefore directly accessible from a script. In addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as an FDSN standard for seismic tomographic models. This format may be accepted not only by European seismologists but also as a world standard. We have therefore improved our KML generator for seismic tomography to also accept data files in JSON format, and improved the web application so that JSON-formatted data files can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena for comparing various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for browsing geoscience data.
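A small sketch of the JSON-to-KML conversion step, written against an assumed JSON layout (a "metadata" block plus a "grid" list of longitude/latitude/anomaly records); the real FDSN/NEIRES schema may differ, and the KML is emitted with the standard library only.

```python
# Convert an (assumed) JSON tomography slice into KML placemarks for Google Earth.
import json
from xml.sax.saxutils import escape

with open("tomography_2800km.json", encoding="utf-8") as f:   # hypothetical input file
    model = json.load(f)

placemarks = []
for gp in model["grid"]:                       # assumed keys: "lon", "lat", "dvs"
    name = escape(f'dVs = {gp["dvs"]:+.2f}%')  # shear velocity anomaly as the label
    placemarks.append(
        f"<Placemark><name>{name}</name>"
        f"<Point><coordinates>{gp['lon']},{gp['lat']},0</coordinates></Point></Placemark>")

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + "".join(placemarks) + "</Document></kml>")

with open("tomography_2800km.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```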
Alicino, Cristiano; Bragazzi, Nicola Luigi; Faccio, Valeria; Amicizia, Daniela; Panatto, Donatella; Gasparini, Roberto; Icardi, Giancarlo; Orsi, Andrea
2015-12-10
The 2014 Ebola epidemic in West Africa has attracted public interest worldwide, leading to millions of Ebola-related Internet searches being performed during the period of the epidemic. This study aimed to evaluate and interpret Google search queries for terms related to the Ebola outbreak both at the global level and in all countries where primary cases of Ebola occurred. The study also endeavoured to look at the correlation between the number of overall and weekly web searches and the number of overall and weekly new cases of Ebola. Google Trends (GT) was used to explore Internet activity related to Ebola. The study period was from 29 December 2013 to 14 June 2015. Pearson's correlation was performed to correlate Ebola-related relative search volumes (RSVs) with the number of weekly and overall Ebola cases. Multivariate regression was performed using Ebola-related RSV as a dependent variable, and the overall number of Ebola cases and the Human Development Index were used as predictor variables. The greatest RSV was registered in the three West African countries mainly affected by the Ebola epidemic. The queries varied in the different countries. Both quantitative and qualitative differences between the affected African countries and other Western countries with primary cases were noted, in relation to the different flux volumes and different time courses. In the affected African countries, web query search volumes were mostly concentrated in the capital areas. However, in Western countries, web queries were uniformly distributed over the national territory. In terms of the three countries mainly affected by the Ebola epidemic, the correlation between the number of new weekly cases of Ebola and the weekly GT index varied from weak to moderate. The correlation between the number of Ebola cases registered in all countries during the study period and the GT index was very high. Google Trends showed a coarse-grained nature, strongly correlating with global epidemiological data, but was weaker at country level, as it was prone to distortions induced by unbalanced media coverage and the digital divide. Global and local health agencies could usefully exploit GT data to identify disease-related information needs and plan proper communication strategies, particularly in the case of health-threatening events.
Exploratory Visual Analytics of a Dynamically Built Network of Nodes in a WebGL-Enabled Browser
2014-01-01
dimensionality reduction, feature extraction, high-dimensional data, t-distributed stochastic neighbor embedding, neighbor retrieval visualizer, visual... WebGL-enabled rendering is supported natively by browsers such as the latest Mozilla Firefox, Google Chrome, and Microsoft Internet Explorer 11. At the... appropriate names. The resultant 26-node network is displayed in a Mozilla Firefox browser in figure 2 (also see appendix B).
TouchTerrain: A simple web-tool for creating 3D-printable topographic models
NASA Astrophysics Data System (ADS)
Hasiuk, Franciszek J.; Harding, Chris; Renner, Alex Raymond; Winer, Eliot
2017-12-01
An open-source web-application, TouchTerrain, was developed to simplify the production of 3D-printable terrain models. Direct Digital Manufacturing (DDM) using 3D printers can change how geoscientists, students, and stakeholders interact with 3D data, with the potential to improve geoscience communication and environmental literacy. No other manufacturing technology can convert digital data into tangible objects quickly at relatively low cost; however, the expertise necessary to produce a 3D-printed terrain model can be a substantial burden: knowledge of geographical information systems, computer aided design (CAD) software, and 3D printers may all be required. Furthermore, printing models larger than the build volume of a 3D printer can pose further technical hurdles. The TouchTerrain web-application simplifies DDM for elevation data by generating digital 3D models customized for a specific 3D printer's capabilities. The only required user input is the selection of a region-of-interest using the provided web-application with a Google Maps-style interface. Publicly available digital elevation data is processed via the Google Earth Engine API. To allow the manufacture of 3D terrain models larger than a 3D printer's build volume, the selected area can be split into multiple tiles without third-party software. This application significantly reduces the time and effort required for a non-expert like an educator to obtain 3D terrain models for use in class. The web application is deployed at http://touchterrain.geol.iastate.edu/.
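The core of such a pipeline is turning an elevation grid into a printable mesh. A toy sketch of that step, writing an ASCII STL surface directly (real models also need walls, a base, vertical exaggeration, and tiling, all omitted here):

```python
# Turn a small elevation grid into an ASCII STL surface: two triangles per grid cell.
import numpy as np

dem = np.array([[120.0, 125.0, 131.0],      # elevations in metres (toy example)
                [118.0, 127.0, 140.0],
                [115.0, 122.0, 136.0]])
cell = 30.0                                  # grid spacing in metres

def vertex(r, c):
    return (c * cell, r * cell, dem[r, c])

triangles = []
rows, cols = dem.shape
for r in range(rows - 1):
    for c in range(cols - 1):
        a, b = vertex(r, c), vertex(r, c + 1)
        d, e = vertex(r + 1, c), vertex(r + 1, c + 1)
        triangles += [(a, b, e), (a, e, d)]  # split each cell along one diagonal

with open("terrain.stl", "w") as f:
    f.write("solid terrain\n")
    for tri in triangles:
        f.write("  facet normal 0 0 1\n    outer loop\n")   # normals left trivial
        for x, y, z in tri:
            f.write(f"      vertex {x:.3f} {y:.3f} {z:.3f}\n")
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid terrain\n")
```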
NASA Astrophysics Data System (ADS)
Cao, Y. B.; Hua, Y. X.; Zhao, J. X.; Guo, S. M.
2013-11-01
With China's rapid economic development and growing comprehensive national strength, border work has become a long-term and important task in China's diplomatic work. Rapid plotting, real-time sharing, and mapping of surrounding affairs have therefore become highly significant for government policy makers and diplomatic staff. At present, however, existing boundary information systems suffer from several problems: updating geospatial data is labor-intensive, plotting tools are seriously lacking, and geographic events are difficult to share. These issues have seriously hampered the smooth conduct of border work. The development of geographic information system technology, and of Web GIS in particular, offers the possibility of solving these problems. This paper adopts a four-layer browser/server (B/S) architecture and, building on the Google Maps service and its free API, with its openness, ease of use, sharing features, and high-resolution imagery, designs and implements a surrounding-affairs plotting and management system using the web development technologies ASP.NET, C#, and Ajax. The system can provide decision support for government policy makers, as well as real-time plotting and sharing of surrounding information for diplomatic staff. Practice has shown that the system has good usability and strong real-time performance.
News trends and web search query of HIV/AIDS in Hong Kong.
Chiu, Alice P Y; Lin, Qianying; He, Daihai
2017-01-01
The HIV epidemic in Hong Kong has worsened in recent years, with major contributions from the high-risk subgroup of men who have sex with men (MSM). Internet use is prevalent among the majority of the local population, who seek health information online. This study examines the impact of HIV/AIDS and MSM news coverage on web search queries in Hong Kong. Relevant news coverage about HIV/AIDS and MSM from January 1st, 2004 to December 31st, 2014 was obtained from the WiseNews database. News trends were created by computing the number of relevant articles by type, topic, place of origin, and sub-population. We then obtained relevant search volumes from Google and analysed causality between news trends and Google Trends using the Granger causality test and orthogonal impulse response functions. We found that editorial news has an impact on "HIV" Google searches, with the search term's popularity peaking on average two weeks after the news is published. Similarly, editorial news has an impact on the frequency of "AIDS" searches two weeks later. MSM-related news trends have a more fluctuating impact on "MSM" Google searches, with the time lag varying anywhere from one week to ten weeks. This infodemiological study shows that news trends have a positive impact on online search behavior for HIV/AIDS- or MSM-related issues for up to ten weeks afterward. Health promotion professionals could make use of this time window to tailor the timing of HIV awareness campaigns and public health interventions to maximise their reach and effectiveness.
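A hedged sketch of the Granger test reported above, using statsmodels: does weekly news volume help predict weekly search volume for "HIV"? The CSV file and column names are assumptions; statsmodels tests whether the second column Granger-causes the first.

```python
# Granger causality between news volume and search volume, lags 1..10 weeks.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("hiv_hk_weekly.csv")          # assumed columns: search_rsv, news_articles
results = grangercausalitytests(df[["search_rsv", "news_articles"]],
                                maxlag=10, verbose=False)

for lag, (tests, _) in results.items():
    f_stat, p_value = tests["ssr_ftest"][:2]   # F test on the restricted vs full model
    print(f"lag {lag:2d}: F = {f_stat:6.2f}, p = {p_value:.4f}")
```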
WebViz: A web browser based application for collaborative analysis of 3D data
NASA Astrophysics Data System (ADS)
Ruegg, C. S.
2011-12-01
In the age of high-speed Internet, where people can interact instantly, scientific tools have lacked the technology to incorporate this kind of communication on the web. To address this issue, a web application for geological studies has been created, tentatively titled WebViz. This web application utilizes tools provided by Google Web Toolkit to create an AJAX web application capable of features found in non-web-based software. Using these tools, a web application can be created that acts as a piece of software usable from anywhere in the world with a reasonably fast Internet connection. An application of this technology can be seen with data regarding the recent tsunami from the major Japan earthquakes. After the data are prepared for the rendering software HVR, WebViz can request images of the tsunami data and display them to anyone who has access to the application. This convenience alone makes WebViz a viable solution, but the ability to explore these data with others around the world makes WebViz a serious computational tool. WebViz can also be used in any JavaScript-enabled browser, such as those found on modern tablets and smartphones, over a fast wireless connection. Because the current version of WebViz is built using Google Web Toolkit, the application is highly portable. Though many developers have been involved with the project, each person has contributed to increasing the usability and speed of the application. In its most recent form, the project has gained a dramatic speed increase as well as a more efficient user interface. The speed increase has been informally noticed in recent uses of the application in China and Australia, with the hosting server located at the University of Minnesota. The user interface has been improved in both appearance and functionality. A major function of the application is rotating the 3D object using buttons; these have been replaced with a new layout whose function is easier to understand and which is also easy to use with mobile devices. With these changes, WebViz is easier to control and to use in general.
Quality of Web-based Information for the 10 Most Common Fractures.
Memon, Muzammil; Ginsberg, Lydia; Simunovic, Nicole; Ristevski, Bill; Bhandari, Mohit; Kleinlugtenbelt, Ydo Vincent
2016-06-17
In today's technologically advanced world, 75% of patients have used Google to search for health information. As a result, health care professionals fear that patients may be misinformed. Currently, there is a paucity of data on the quality and readability of Web-based health information on fractures. In this study, we assessed the quality and readability of Web-based health information related to the 10 most common fractures. Using the Google search engine, we assessed websites from the first results page for the 10 most common fractures using lay search terms. Website quality was measured using the DISCERN instrument, which scores websites as very poor (15-22.5), poor (22.5-37.5), fair (37.5-52.5), good (52.5-67.5), or excellent (67.5-75). The presence of Health on the Net code (HONcode) certification was assessed for all websites. Website readability was measured using the Flesch Reading Ease Score (0-100), where 60-69 is ideal for the general public, and the Flesch-Kincaid Grade Level (FKGL; -3.4 to ∞), where the mean FKGL of the US adult population is 8. Overall, website quality was "fair" for all fractures, with a mean (standard deviation) DISCERN score of 50.3 (5.8). The DISCERN score correlated positively with a higher website position on the search results page (r(2)=0.1, P=.002) and with HONcode certification (P=.007). The mean (standard deviation) Flesch Reading Ease Score and FKGL for all fractures were 62.2 (9.1) and 6.7 (1.6), respectively. The quality of Web-based health information on fracture care is fair, and its readability is appropriate for the general public. To obtain higher quality information, patients should select HONcode-certified websites. Furthermore, patients should select websites that are positioned higher on the results page because the Google ranking algorithms appear to rank the websites by quality.
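The readability half of these methods is easy to reproduce with the textstat package (an assumption; the study does not name its software). DISCERN is a manually scored instrument and is not reproduced here.

```python
# Compute the two readability metrics used in the study for one saved web page.
import textstat

# Hypothetical input: the plain text of a fracture-information page saved locally.
page_text = open("distal_radius_fracture_page.txt", encoding="utf-8").read()

fre  = textstat.flesch_reading_ease(page_text)     # 60-69 is ideal for the general public
fkgl = textstat.flesch_kincaid_grade(page_text)    # mean US adult level is about grade 8

print(f"Flesch Reading Ease: {fre:.1f}   Flesch-Kincaid Grade Level: {fkgl:.1f}")
```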
Shin, Soo-Yong; Seo, Dong-Woo; An, Jisun; Kwak, Haewoon; Kim, Sung-Han; Gwack, Jin; Jo, Min-Woo
2016-09-06
The Middle East respiratory syndrome coronavirus (MERS-CoV) was exported to Korea in 2015, resulting in a threat to neighboring nations. We evaluated the possibility of using a digital surveillance system based on web searches and social media data to monitor this MERS outbreak. We collected the number of daily laboratory-confirmed MERS cases and quarantined cases from May 11, 2015 to June 26, 2015 using the Korean government MERS portal. The daily trends observed via Google search and Twitter during the same time period were also ascertained using Google Trends and Topsy. Correlations among the data were then examined using Spearman correlation analysis. We found high correlations (>0.7) between Google search and Twitter results and the number of confirmed MERS cases for the previous three days using only four simple keywords: "MERS", "MERS" (in Korean), "MERS symptoms" (in Korean), and "MERS hospital" (in Korean). Additionally, we found high correlations between the Google search and Twitter results and the number of quarantined cases using the above keywords. This study demonstrates the possibility of using a digital surveillance system to monitor the outbreak of MERS.
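One reading of the three-day lag described above, sketched with a Spearman correlation between daily search interest and case counts shifted by three days; the file, column names, and exact lag construction are assumptions.

```python
# Lagged Spearman correlation: today's search interest vs cases three days earlier.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("mers_korea_2015.csv", parse_dates=["date"]).set_index("date")
# assumed columns: confirmed_cases, rsv_mers (daily Google search interest for "MERS")

lagged_cases = df["confirmed_cases"].shift(3)      # cases reported three days before
paired = pd.concat([df["rsv_mers"], lagged_cases], axis=1).dropna()

rho, p = spearmanr(paired["rsv_mers"], paired["confirmed_cases"])
print(f"Spearman rho (3-day lag) = {rho:.2f}, p = {p:.4f}")
```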
FastLane: An Agile Congestion Signaling Mechanism for Improving Datacenter Performance
2013-05-20
Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Huawei, Intel, Microsoft, NetApp, Oracle, Quanta, Samsung, Splunk, VMware and Yahoo... Web Services, Google, SAP, Blue Goji, Cisco, Clearstory Data, Cloudera, Ericsson, Facebook, General Electric, Hortonworks, Huawei, Intel, Microsoft
76 FR 34124 - Civil Supersonic Aircraft Panel Discussion
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-10
... and continuing to the second line in the second column, the Web site address should read as follows: https://spreadsheets.google.com/spreadsheet/viewform?formkey=dEFEdlRnYzBiaHZtTUozTHVtbkF4d0E6MQ . [FR...
WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets
NASA Astrophysics Data System (ADS)
Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.
2010-12-01
WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota's Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been developed over the last 3 1/2 years. The motivation behind WebViz lies primarily in the need to parse through the increasing amount of data produced by the scientific community as larger and faster multicore and massively parallel computers, including general-purpose GPU computing, come to market. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data "on the fly", wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE's custom hierarchical volume rendering software provides high-resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web- and JavaScript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control of aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface of the web application. These features are all in addition to a full range of essential visualization functions including 3-D camera and object orientation, position manipulation, time-stepping control, and custom color/alpha mapping.
NASA Technical Reports Server (NTRS)
Perez Guerrero, Geraldo A.; Armstrong, Duane; Underwood, Lauren
2015-01-01
This project is creating a cloud-enabled, HTML 5 web application to help oyster fishermen and state agencies apply Earth science to improve the management of this important natural and economic resource. The Oyster Fisheries app gathers and analyzes environmental and water quality information, and alerts fishermen and resources managers about problems in oyster fishing waters. An intuitive interface based on Google Maps displays the geospatial information and provides familiar interactive controls to the users. Alerts can be tailored to notify users when conditions in specific leases or public fishing areas require attention. The app is hosted on the Amazon Web Services cloud. It is being developed and tested using some of the latest web development tools such as web components and Polymer.
Matsu: An Elastic Cloud Connected to a SensorWeb for Disaster Response
NASA Technical Reports Server (NTRS)
Mandl, Daniel
2011-01-01
This slide presentation reviews the use of cloud computing combined with the SensorWeb to aid disaster response planning. Included are an overview of the SensorWeb architecture and overviews of phase 1 of the EO-1 system and the steps taken to transform it into an on-demand product cloud as part of the Open Cloud Consortium (OCC). The effectiveness of this system is demonstrated by the SensorWeb response to the 2010 Namibia flood: by blending information from MODIS, TRMM, river gauge data, and a Google Earth view of Namibia, the system enabled river surge predictions and could support planning for future disaster responses.
Ling, Rebecca; Lee, Joon
2016-10-12
Infodemiology can offer practical and feasible health research applications through the practice of studying information available on the Web. Google Trends provides publicly accessible information regarding search behaviors in a population, which may be studied and used for health campaign evaluation and disease monitoring. Additional studies examining the use and effectiveness of Google Trends for these purposes remain warranted. The objective of our study was to explore the use of infodemiology in the context of health campaign evaluation and chronic disease monitoring. It was hypothesized that following a launch of a campaign, there would be an increase in information seeking behavior on the Web. Second, increasing and decreasing disease patterns in a population would be associated with search activity patterns. This study examined 4 different diseases: human immunodeficiency virus (HIV) infection, stroke, colorectal cancer, and marijuana use. Using Google Trends, relative search volume data were collected throughout the period of February 2004 to January 2015. Campaign information and disease statistics were obtained from governmental publications. Search activity trends were graphed and assessed with disease trends and the campaign interval. Pearson product correlation statistics and joinpoint methodology analyses were used to determine significance. Disease patterns and online activity across all 4 diseases were significantly correlated: HIV infection (r=.36, P<.001), stroke (r=.40, P<.001), colorectal cancer (r= -.41, P<.001), and substance use (r=.64, P<.001). Visual inspection and the joinpoint analysis showed significant correlations for the campaigns on colorectal cancer and marijuana use in stimulating search activity. No significant correlations were observed for the campaigns on stroke and HIV regarding search activity. The use of infoveillance shows promise as an alternative and inexpensive solution to disease surveillance and health campaign evaluation. Further research is needed to understand Google Trends as a valid and reliable tool for health research.
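A toy sketch of the joinpoint idea: brute-force the single breakpoint that best splits a search-volume series into two linear segments. The study's joinpoint analysis is more elaborate, and the series below is invented purely for illustration.

```python
# Find the single breakpoint minimising the combined SSE of two linear segments.
import numpy as np

rsv = np.array([22, 24, 23, 26, 25, 27, 26, 29, 41, 47, 52, 58, 63, 69, 75])  # toy data
t = np.arange(len(rsv))

def sse_two_segments(k):
    left = np.polyfit(t[:k], rsv[:k], 1)          # straight line before the joinpoint
    right = np.polyfit(t[k:], rsv[k:], 1)         # straight line after the joinpoint
    return (np.sum((np.polyval(left, t[:k]) - rsv[:k]) ** 2)
            + np.sum((np.polyval(right, t[k:]) - rsv[k:]) ** 2))

candidates = range(3, len(rsv) - 2)               # keep at least 3 points per segment
best_k = min(candidates, key=sse_two_segments)
print(f"Best joinpoint at index {best_k}, SSE = {sse_two_segments(best_k):.1f}")
```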
Web-Based Teaching: The Beginning of the End for Universities?
ERIC Educational Resources Information Center
Wyatt, Ray
This paper describes a World Wide Web-based, generic, inter-disciplinary subject called computer-aided policymaking. It has been offered at Melbourne University (Australia) from the beginning of 2001. It has generated some salutary lessons in marketing and pedagogy, but overall it is concluded that Web-based teaching has a rosy future.…
77 FR 36583 - NRC Form 5, Occupational Dose Record for a Monitoring Period
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-19
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2012... following methods: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search for Docket ID... begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-09
... performance of the market. In May 2008, the internet portal Yahoo! began offering its Web site viewers real... products that must be obtained in tandem. For example, while Yahoo! and Google now both disseminate NASDAQ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-02
... performance of the market. In May 2008, the internet portal Yahoo! began offering its Web site viewers real... products that must be obtained in tandem. For example, while Yahoo! and Google now both disseminate NASDAQ...
NASA Astrophysics Data System (ADS)
Welton, B.; Chouinard, K.; Sultan, M.; Becker, D.; Milewski, A.; Becker, R.
2010-12-01
Rising populations in the arid and semi-arid parts of the world are increasing the demand for fresh water supplies worldwide. Many data sets needed for assessment of hydrologic applications across vast regions of the world are expensive, unpublished, difficult to obtain, or at varying scales, which complicates their use. Fortunately, this situation is changing with the development of global remote sensing datasets and web-based platforms such as GIS Server. GIS provides a cost-effective vehicle for comparing, analyzing, and querying a variety of spatial datasets as geographically referenced layers. We have recently constructed a web-based GIS that incorporates all relevant geological, geochemical, geophysical, and remote sensing data sets, which were readily used to identify reservoir types and potential well locations on local and regional scales in various tectonic settings including: (1) extensional environment (Red Sea rift), (2) transcurrent fault system (Najd Fault in the Arabian-Nubian Shield), and (3) compressional environments (Himalayas). The web-based GIS could also be used to detect spatial and temporal trends in precipitation, recharge, and runoff in large watersheds on local, regional, and continental scales. These applications were enabled through the construction of a web-based ArcGIS Server with a Google Maps interface and the development of customized geoprocessing tools. ArcGIS Server provides out-of-the-box setups that are generic in nature. This platform includes all of the standard web-based GIS tools (e.g. pan, zoom, identify, search, data querying, and measurement). In addition to the standard suite of tools provided by ArcGIS Server, an additional set of advanced data manipulation and display tools was developed to allow for a more complete and customizable view of the area of interest. The most notable addition to the standard GIS Server tools is the set of custom on-demand geoprocessing tools (e.g., graph, statistical functions, custom raster creation, profile, TRMM). The generation of a wide range of derivative maps (e.g., buffer zone, contour map, graphs, temporal rainfall distribution maps) from various map layers (e.g., geologic maps, geophysics, satellite images) allows for more user flexibility. The use of these tools, along with the Google Maps API, which enables website users to utilize high-quality GeoEye 2 images provided by Google in conjunction with our data, creates a more complete picture of the area being observed and allows custom derivative maps to be created in the field and viewed immediately on the web, processes that were previously restricted to offline databases.
Yu, Hong; Kaufman, David
2007-01-01
The Internet is having a profound impact on physicians' medical decision making. One recent survey of 277 physicians showed that 72% of physicians regularly used the Internet to research medical information and 51% admitted that information from web sites influenced their clinical decisions. This paper describes the first cognitive evaluation of four state-of-the-art Internet search engines: Google (i.e., Google and Scholar.Google), MedQA, Onelook, and PubMed for answering definitional questions (i.e., questions with the format of "What is X?") posed by physicians. Onelook is a portal for online definitions, and MedQA is a question answering system that automatically generates short texts to answer specific biomedical questions. Our evaluation criteria include quality of answer, ease of use, time spent, and number of actions taken. Our results show that MedQA outperforms Onelook and PubMed in most of the criteria, and that MedQA surpasses Google in time spent and number of actions, two important efficiency criteria. Our results show that Google is the best system for quality of answer and ease of use. We conclude that Google is an effective search engine for medical definitions, and that MedQA exceeds the other search engines in that it provides users direct answers to their questions; while the users of the other search engines have to visit several sites before finding all of the pertinent information.
The Top 50 Articles on Minimally Invasive Spine Surgery.
Virk, Sohrab S; Yu, Elizabeth
2017-04-01
Bibliometric study of current literature. To catalog the most important minimally invasive spine (MIS) surgery articles, using the number of citations as a marker of relevance. MIS surgery is a relatively new tool used by spinal surgeons, and there is a dynamic and evolving field of research related to MIS techniques, clinical outcomes, and basic science. To date, there is no comprehensive review of the most cited articles related to MIS surgery. A systematic search was performed over three widely used literature databases: Web of Science, Scopus, and Google Scholar. Four searches were performed using the terms "minimally invasive spine surgery," "endoscopic spine surgery," "percutaneous spinal surgery," and "lateral interbody surgery." The number of citations was averaged across the three databases to rank each article. The query of the three databases was performed in November 2015. Fifty articles were selected based upon the number of citations each averaged across the three databases. The most cited article was titled "Extreme Lateral Interbody Fusion (XLIF): a novel surgical technique for anterior lumbar interbody fusion" by Ozgur et al and was credited with 447, 239, and 279 citations in Google Scholar, Web of Science, and Scopus, respectively. Citations ranged from 27 to 239 for Web of Science, 60 to 279 for Scopus, and 104 to 462 for Google Scholar. The articles spanned 14 different topics, with the majority dealing with clinical outcomes related to MIS surgery. The majority of the most cited articles were level III and level IV studies, likely due to the relatively recent nature of technological advances in the field; level I and level II studies are required in MIS surgery in the years ahead. Level of Evidence: 5.
ERIC Educational Resources Information Center
Ifill, Nicole; Radford, Alexandria Walton
2012-01-01
This set of Web Tables presents descriptive statistics on the spring 2009 labor market experiences of subbaccalaureate students who first entered postsecondary education in 2003-04. The Web Tables use data from the nationally representative 2004/09 Beginning Post-secondary Students Longitudinal Study (BPS:04/09), which followed a cohort of…
From Google Maps to Google Models (Invited)
NASA Astrophysics Data System (ADS)
Moore, R. V.
2010-12-01
Why hasn't integrated modelling taken off? To its advocates, it is self-evidently the best and arguably the only tool available for understanding and predicting the likely response of the environment to events and policies. Legislation requires managers to ensure that their plans are sustainable. How, other than by modelling the interacting processes involved, can the option with the greatest benefits be identified? Integrated modelling (IM) is seen to have huge potential. In science, IM is used to extend and encapsulate our understanding of the whole earth system. Such models are beginning to be incorporated in operational decision support systems and used to seek sustainable solutions to society's problems, but only on a limited scale. Commercial take-up is negligible, yet the opportunities would appear limitless. The need is there; the potential is there; so what is inhibiting IM's take-up? What must be done to reap the rewards of the R & D to date? To answer the question, it is useful to look back at the developments which have seen paper maps evolve into Google Maps and the systems that now surround it: facilities available not just to experts and governments but to anyone with an iPhone and an Internet connection. The initial objective was to automate the process of drawing lines on paper, though it was quickly realised that digitising maps was the key to unlocking the information they held. However, it took thousands of PhD and MSc projects before a computer could generate a map comparable to that produced by a cartographer, and many more before it was possible to extract reliable, useful information from maps. It also required advances in IT and a change of mindset from one focused on paper map production to one focused on information delivery. To move from digital maps to Google Maps required the availability of data on a world scale, the resources to bring them together, the development of remote sensing, satellite navigation and communications technology, and the creation of a commercial climate and conditions that allowed businesses anywhere to exploit the new information. This talk will draw lessons from the experience and imagine how Google Maps could become Google Models. The first lesson is time scale: it took far longer for digital mapping to move out of the development phase than most expected. Its first real customers were the public utilities. They are large organisations, risk-averse, and take time to change their ways of working; integrated modellers should not be surprised by the slow take-up. Few of the early commercial entrants made any significant profits. It was only when the data reached critical mass and became accessible, when the systems became easy to use, affordable and accessible via the web, when convincing demonstrations became available and the necessary standards emerged that Google Maps could emerge. IM has yet to reach this point. It has far bigger technical, scientific and institutional challenges to overcome. The resources required will be large. It is possible, though, that they could be marshalled by creating an open-source community of practice. However, that community will need a facilitating core group and standards to succeed. Having seen what Google Maps made possible, and the innovative ideas it released, it is not difficult to imagine where a community of practice might take IM.
ERIC Educational Resources Information Center
Radford, Alexandria Walton; Horn, Laura
2012-01-01
These Web Tables provide an overview of classes taken and credits earned by a nationwide sample of first-time beginning postsecondary students based on data from the Postsecondary Education Transcript Study (PETS) of the 2004/09 Beginning Postsecondary Students Longitudinal Study. PETS collected transcripts from all the postsecondary institutions…
JournalMap: Geo-semantic searching for relevant knowledge
USDA-ARS?s Scientific Manuscript database
Ecologists struggling to understand rapidly changing environments and evolving ecosystem threats need quick access to relevant research and documentation of natural systems. The advent of semantic and aggregation searching (e.g., Google Scholar, Web of Science) has made it easier to find useful lite...
ERIC Educational Resources Information Center
Hill, Paul; MacArthur, Stacey; Read, Nick
2014-01-01
Effective Internet search skills are essential with the continually increasing amount of information available on the Web. Extension personnel are required to find information to answer client questions and to conduct research on programs. Unfortunately, many lack the skills necessary to effectively navigate the Internet and locate needed…
NASA Astrophysics Data System (ADS)
Allred, B. W.; Naugle, D.; Donnelly, P.; Tack, J.; Jones, M. O.
2016-12-01
In 2010, the USDA Natural Resources Conservation Service (NRCS) launched the Sage Grouse Initiative (SGI) to voluntarily reduce threats facing sage-grouse and rangelands on private lands. Over the past five years, SGI has matured into a primary catalyst for rangeland and wildlife conservation across the North American West, focusing on the shared vision of wildlife conservation through sustainable working landscapes and providing win-win solutions for producers, sage-grouse, and 350 other sagebrush-obligate species. SGI and its partners have invested a total of $750 million into rangeland and wildlife conservation. Moving forward, SGI continues to focus on rangeland conservation. Partnering with Google Earth Engine, SGI has developed outcome monitoring and conservation planning tools at continental scales. The SGI science team is currently developing assessment and monitoring algorithms for key conservation indicators. The SGI web application uses Google Earth Engine for user-defined analysis and planning, putting the appropriate information directly into the hands of managers and conservationists.
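To make the Earth Engine point above concrete, the sketch below (not SGI's actual code) shows the kind of server-side analysis the platform enables: a mean growing-season NDVI statistic over a user-defined planning area. The dataset ID, band names, date range, region, and cloud threshold are illustrative assumptions, and the call assumes a previously authenticated Earth Engine account.

```python
# Minimal Earth Engine sketch (assumptions: dataset, bands, region, dates).
import ee

ee.Initialize()  # assumes prior ee.Authenticate() / project configuration

# Hypothetical user-drawn planning area (lon/lat ring).
region = ee.Geometry.Polygon(
    [[[-109.5, 47.0], [-109.5, 47.5], [-108.8, 47.5], [-108.8, 47.0]]])

# Landsat 8 surface reflectance, summer 2016, lightly cloud-filtered.
collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
              .filterDate('2016-06-01', '2016-09-01')
              .filterBounds(region)
              .filter(ee.Filter.lt('CLOUD_COVER', 20)))

def add_ndvi(img):
    # For Landsat 8 Collection 2, SR_B5 is near-infrared and SR_B4 is red.
    return img.addBands(img.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI'))

mean_ndvi = collection.map(add_ndvi).select('NDVI').mean()

# Reduce to a single regional statistic a planner could act on.
stats = mean_ndvi.reduceRegion(reducer=ee.Reducer.mean(),
                               geometry=region, scale=30, maxPixels=1e9)
print(stats.getInfo())
```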
Using the Browser for Science: A Collaborative Toolkit for Astronomy
NASA Astrophysics Data System (ADS)
Connolly, A. J.; Smith, I.; Krughoff, K. S.; Gibson, R.
2011-07-01
Astronomical surveys have yielded hundreds of terabytes of catalogs and images that span many decades of the electromagnetic spectrum. Even when observatories provide user-friendly web interfaces, exploring these data resources remains a complex and daunting task. In contrast, gadgets and widgets have become popular in social networking (e.g. iGoogle, Facebook). They provide a simple way to make complex data easily accessible that can be customized based on the interest of the user. With ASCOT (an AStronomical COllaborative Toolkit) we expand on these concepts to provide a customizable and extensible gadget framework for use in science. Unlike iGoogle, where all of the gadgets are independent, the gadgets we develop communicate and share information, enabling users to visualize and interact with data through multiple, simultaneous views. With this approach, web-based applications for accessing and visualizing data can be generated easily and, by linking these tools together, integrated and powerful data analysis and discovery tools can be constructed.
News trends and web search query of HIV/AIDS in Hong Kong
Chiu, Alice P. Y.; Lin, Qianying
2017-01-01
Background The HIV epidemic in Hong Kong has worsened in recent years, with major contributions from the high-risk subgroup of men who have sex with men (MSM). Internet use is prevalent among the majority of the local population, who seek health information online. This study examines the impacts of HIV/AIDS and MSM news coverage on web search queries in Hong Kong. Methods Relevant news coverage about HIV/AIDS and MSM from January 1st, 2004 to December 31st, 2014 was obtained from the WiseNews database. News trends were created by computing the number of relevant articles by type, topic, place of origin and sub-populations. We then obtained relevant search volumes from Google and analysed causality between news trends and Google Trends using the Granger causality test and orthogonal impulse response functions. Results We found that editorial news has an impact on "HIV" Google searches, with the search term's popularity peaking an average of two weeks after the news is published. Similarly, editorial news has an impact on the frequency of "AIDS" searches two weeks later. MSM-related news trends have a more fluctuating impact on "MSM" Google searches, with the time lag varying anywhere from one week to ten weeks. Conclusions This infodemiological study shows that news trends have a positive impact on online search behavior for HIV/AIDS- and MSM-related issues for up to ten weeks afterwards. Health promotion professionals could make use of this brief time window to tailor the timing of HIV awareness campaigns and public health interventions to maximise their reach and effectiveness. PMID:28922376
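As a rough illustration of the causality analysis described above, the sketch below runs a Granger-causality test between weekly news counts and search volumes with statsmodels; the input file and column names are hypothetical, and it assumes the two weekly series are already aligned.

```python
# Granger-causality sketch (hypothetical file and columns; series pre-aligned by week).
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv('hiv_weekly.csv')            # hypothetical weekly data
data = df[['search_volume', 'news_count']]    # tests whether news_count Granger-causes search_volume

# Test lags of 1-10 weeks, matching the 1-10 week window reported above.
results = grangercausalitytests(data, maxlag=10, verbose=False)

for lag, (tests, _) in results.items():
    fstat, pval = tests['ssr_ftest'][0], tests['ssr_ftest'][1]
    print(f'lag {lag:2d}: F = {fstat:6.2f}, p = {pval:.4f}')
```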
FindZebra: a search engine for rare diseases.
Dragusin, Radu; Petcu, Paula; Lioma, Christina; Larsen, Birger; Jørgensen, Henrik L; Cox, Ingemar J; Hansen, Lars Kai; Ingwersen, Peter; Winther, Ole
2013-06-01
The web has become a primary information resource about illnesses and treatments for both medical and non-medical users. Standard web search is by far the most common interface to this information. It is therefore of interest to find out how well web search engines work for diagnostic queries and what factors contribute to successes and failures. Among diseases, rare (or orphan) diseases represent an especially challenging and thus interesting class to diagnose, as each is rare, diverse in symptoms, and usually has scattered resources associated with it. We design an evaluation approach for web search engines for rare disease diagnosis which includes 56 real-life diagnostic cases, performance measures, information resources and guidelines for customising Google Search to this task. In addition, we introduce FindZebra, a specialized (vertical) rare disease search engine. FindZebra is powered by open source search technology and uses curated, freely available online medical information. FindZebra outperforms Google Search both in its default set-up and when customised to the resources used by FindZebra. We extend FindZebra with specialized functionalities exploiting medical ontological information and UMLS medical concepts to demonstrate different ways of displaying the retrieved results to medical experts. Our results indicate that a specialized search engine can improve diagnostic quality without compromising the ease of use of the currently popular standard web search. The proposed evaluation approach can be valuable for future development and benchmarking. The FindZebra search engine is available at http://www.findzebra.com/. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Law, Michael R; Mintzes, Barbara; Morgan, Steven G
2011-03-01
The Internet has become a popular source of health information. However, there is little information on what drug information and which Web sites are being searched. To investigate the sources of online information about prescription drugs by assessing the most common Web sites returned in online drug searches and to assess the comparative popularity of Web pages for particular drugs. This was a cross-sectional study of search results for the most commonly dispensed drugs in the US (n=278 active ingredients) on 4 popular search engines: Bing, Google (both US and Canada), and Yahoo. We determined the number of times a Web site appeared as the first result. A linked retrospective analysis counted Wikipedia page hits for each of these drugs in 2008 and 2009. About three quarters of first results on Google USA for both brand and generic names linked to the National Library of Medicine. In contrast, Wikipedia was the first result for approximately 80% of generic name searches on the other 3 sites. On these other sites, over two thirds of brand name searches led to industry-sponsored sites. The Wikipedia pages with the highest number of hits were mainly for opiates, benzodiazepines, antibiotics, and antidepressants. Wikipedia and the National Library of Medicine rank highly in online drug searches. Further, our results suggest that patients most often seek information on drugs with the potential for dependence, for stigmatized conditions, that have received media attention, and for episodic treatments. Quality improvement efforts should focus on these drugs.
Profile-IQ: Web-based data query system for local health department infrastructure and activities.
Shah, Gulzar H; Leep, Carolyn J; Alexander, Dayna
2014-01-01
To demonstrate the use of National Association of County & City Health Officials' Profile-IQ, a Web-based data query system, and how policy makers, researchers, the general public, and public health professionals can use the system to generate descriptive statistics on local health departments. This article is a descriptive account of an important health informatics tool based on information from the project charter for Profile-IQ and the authors' experience and knowledge in design and use of this query system. Profile-IQ is a Web-based data query system that is based on open-source software: MySQL 5.5, Google Web Toolkit 2.2.0, Apache Commons Math library, Google Chart API, and Tomcat 6.0 Web server deployed on an Amazon EC2 server. It supports dynamic queries of National Profile of Local Health Departments data on local health department finances, workforce, and activities. Profile-IQ's customizable queries provide a variety of statistics not available in published reports and support the growing information needs of users who do not wish to work directly with data files for lack of staff skills or time, or to avoid a data use agreement. Profile-IQ also meets the growing demand of public health practitioners and policy makers for data to support quality improvement, community health assessment, and other processes associated with voluntary public health accreditation. It represents a step forward in the recent health informatics movement of data liberation and use of open source information technology solutions to promote public health.
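The essence of such a query system is a parameterized aggregate query built from the user's menu choices. The sketch below mimics that pattern with an in-memory SQLite table; the schema, table, and column names are hypothetical (the production system described above runs on MySQL behind a Google Web Toolkit front end), and the column argument would need to come from a fixed whitelist in any real deployment.

```python
# Hypothetical-schema sketch of the dynamic descriptive-statistics pattern.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE lhd_profile (
                  lhd_id INTEGER, state TEXT,
                  pop_served INTEGER, fte_total REAL, expenditures REAL)''')
conn.executemany('INSERT INTO lhd_profile VALUES (?, ?, ?, ?, ?)',
                 [(1, 'GA', 52000, 18.5, 2.1e6),
                  (2, 'GA', 910000, 240.0, 3.8e7),
                  (3, 'OH', 12000, 6.0, 7.5e5)])

def describe(column, state=None):
    """Count, mean, min, max for a user-chosen column, optionally filtered by state."""
    # 'column' must be validated against a whitelist before interpolation.
    sql = f'SELECT COUNT(*), AVG({column}), MIN({column}), MAX({column}) FROM lhd_profile'
    params = []
    if state:
        sql += ' WHERE state = ?'
        params.append(state)
    return conn.execute(sql, params).fetchone()

print(describe('fte_total', state='GA'))
```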
The New USGS Volcano Hazards Program Web Site
NASA Astrophysics Data System (ADS)
Venezky, D. Y.; Graham, S. E.; Parker, T. J.; Snedigar, S. F.
2008-12-01
The U.S. Geological Survey's (USGS) Volcano Hazards Program (VHP) has launched a revised web site that uses a map-based interface to display hazards information for U.S. volcanoes. The web site is focused on better communication of hazards and background volcano information to our varied user groups by reorganizing content based on user needs and improving data display. The Home Page provides a synoptic view of the activity level of all volcanoes for which updates are written, using a custom Google® Map. Updates are accessible by clicking on one of the map icons or by clicking on the volcano of interest in the adjacent color-coded list of updates. The new navigation provides rapid access to volcanic activity information, background volcano information, images and publications, volcanic hazards, information about VHP, and the USGS volcano observatories. The Volcanic Activity section was tailored for emergency managers but provides information for all our user groups. It includes a Google® Map of the volcanoes we monitor, an Elevated Activity Page, a general status page, information about our Volcano Alert Levels and Aviation Color Codes, monitoring information, and links to monitoring data from VHP's volcano observatories: Alaska Volcano Observatory (AVO), Cascades Volcano Observatory (CVO), Long Valley Observatory (LVO), Hawaiian Volcano Observatory (HVO), and Yellowstone Volcano Observatory (YVO). The YVO web site was the first to move to the new navigation system, and we are working on integrating the Long Valley Observatory web site next. We are excited to continue to implement new geospatial technologies to better display our hazards and supporting volcano information.
Interfaces to PeptideAtlas: a case study of standard data access systems
Handcock, Jeremy; Robinson, Thomas; Deutsch, Eric W.; Boyle, John
2012-01-01
Access to public data sets is important to the scientific community as a resource to develop new experiments or validate new data. Projects such as the PeptideAtlas, Ensembl and The Cancer Genome Atlas (TCGA) offer both access to public data and a repository to share their own data. Access to these data sets is often provided through a web page form and a web service API. Access technologies based on web protocols (e.g. http) have been in use for over a decade and are widely adopted across the industry for a variety of functions (e.g. search, commercial transactions, and social media). Each architecture adapts these technologies to provide users with tools to access and share data. Both commonly used web service technologies (e.g. REST and SOAP), and custom-built solutions over HTTP are utilized in providing access to research data. Providing multiple access points ensures that the community can access the data in the simplest and most effective manner for their particular needs. This article examines three common access mechanisms for web accessible data: BioMart, caBIG, and Google Data Sources. These are illustrated by implementing each over the PeptideAtlas repository and reviewed for their suitability based on specific usages common to research. BioMart, Google Data Sources, and caBIG are each suitable for certain uses. The tradeoffs made in the development of the technology are dependent on the uses each was designed for (e.g. security versus speed). This means that an understanding of specific requirements and tradeoffs is necessary before selecting the access technology. PMID:22941959
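To illustrate the web-service style of access discussed above, the sketch below issues a simple REST query with Python's requests library. The base URL, path, and parameter names are hypothetical placeholders, not the actual PeptideAtlas, BioMart, caBIG, or Google Data Sources interfaces.

```python
# REST-access sketch; endpoint and parameters are hypothetical stand-ins.
import requests

BASE = 'https://example.org/peptideatlas/api'   # hypothetical endpoint

resp = requests.get(f'{BASE}/peptides',
                    params={'organism': 'human', 'format': 'json'},
                    timeout=30)
resp.raise_for_status()

for record in resp.json()[:5]:   # show the first few returned records
    print(record)
```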
Collaborative writing: Tools and tips.
Eapen, Bell Raj
2007-01-01
The majority of technical writing is done by groups of experts, and various web-based applications have made this collaboration easy. Email exchange of word processor documents with tracked changes used to be the standard technique for collaborative writing. However, web-based tools like Google Docs and Spreadsheets have made the process fast and efficient. Various versioning tools and synchronous editors are available for those who need additional functionality. Having a group leader who decides the scheduling, communication and conflict-resolution protocols is important for successful collaboration.
Using Web and Social Media for Influenza Surveillance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corley, Courtney D.; Cook, Diane; Mikler, Armin R.
2010-01-04
Analysis of Google influenza-like-illness (ILI) search queries has shown a pattern strongly correlated with Centers for Disease Control and Prevention (CDC) seasonal ILI reporting data. Web and social media provide another resource to detect increases in ILI. This paper evaluates trends in blog posts that discuss influenza. Our key finding is that from 5 October 2008 to 31 January 2009 a high correlation exists between the weekly frequency of posts containing influenza keywords and CDC influenza-like-illness surveillance data.
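A minimal version of the correlation check described above might look like the sketch below, assuming blog-post counts and CDC ILI values have already been aggregated into one weekly table; the file and column names are illustrative.

```python
# Weekly correlation sketch (hypothetical file and column names).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv('ili_weekly.csv')   # one row per week
r, p = pearsonr(df['blog_post_count'], df['cdc_ili_pct'])
print(f'Pearson r = {r:.2f} (p = {p:.3g}) over {len(df)} weeks')
```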
Googling endometriosis: a systematic review of information available on the Internet.
Hirsch, Martin; Aggarwal, Shivani; Barker, Claire; Davis, Colin J; Duffy, James M N
2017-05-01
The demand for health information online is increasing rapidly without clear governance. We aim to evaluate the credibility, quality, readability, and accuracy of online patient information concerning endometriosis. We searched 5 popular Internet search engines: aol.com, ask.com, bing.com, google.com, and yahoo.com. We developed a search strategy, in consultation with patients with endometriosis, to identify relevant World Wide Web pages. Pages containing information related to endometriosis for women with endometriosis or the public were eligible. Two independent authors screened the search results. World Wide Web pages were evaluated using validated instruments across 3 of the following 4 domains: (1) credibility (White Paper instrument; range 0-10); (2) quality (DISCERN instrument; range 0-85); (3) readability (Flesch-Kincaid instrument; range 0-100); and (4) accuracy (assessed by prioritized criteria developed in consultation with health care professionals, researchers, and women with endometriosis based on the European Society of Human Reproduction and Embryology guidelines [range 0-30]). We summarized these data in diagrams, tables, and narratively. We identified 750 World Wide Web pages, of which 54 were included. Over a third of Web pages did not attribute authorship, and almost half the included pages did not report the sources of information or academic references. No World Wide Web page provided information assessed as being written in plain English. A minority of Web pages were assessed as high quality. A single World Wide Web page provided accurate information: evidentlycochrane.net. Available information was, in general, skewed toward the diagnosis of endometriosis. There were 16 credible World Wide Web pages; however, their content limitations were infrequently discussed. No World Wide Web page scored highly across all 4 domains. In the unlikely event that a World Wide Web page reports high-quality, accurate, and credible health information, it is typically challenging for a lay audience to comprehend. Health care professionals, and the wider community, should inform women with endometriosis of the risk of outdated, inaccurate, or even dangerous information online. The implementation of an information standard will incentivize providers of online information to establish and adhere to codes of conduct. Copyright © 2016 Elsevier Inc. All rights reserved.
USGS Coastal and Marine Geology Survey Data in Google Earth
NASA Astrophysics Data System (ADS)
Reiss, C.; Steele, C.; Ma, A.; Chin, J.
2006-12-01
The U.S. Geological Survey (USGS) Coastal and Marine Geology (CMG) program has a rich data catalog of geologic field activities and metadata called InfoBank, which has been a standard tool for researchers within and outside of the agency. Along with traditional web maps, the data are now accessible in Google Earth, which greatly expands the possible user audience. The Google Earth interface provides geographic orientation and panning/zooming capabilities to locate data relative to topography, bathymetry, and coastal areas. Viewing navigation against Google Earth's background imagery allows questions such as why certain areas were not surveyed to be answered (for example, the presence of islands, shorelines, or cliffs). Detailed box core subsample photos from selected sampling activities, published geotechnical data, and sample descriptions are now viewable in Google Earth (for example, the M-1-95-MB, P-2-95-MB, and P-1-97-MB box core samples). One example of the use of Google Earth is CMG's surveys of San Francisco's Ocean Beach since 2004. The surveys are conducted with an all-terrain vehicle (ATV) and shallow-water personal watercraft (PWC) equipped with Global Positioning System (GPS), elevation, and echo sounder data collectors. 3D topographic models with centimeter accuracy have been produced from these surveys to monitor beach and nearshore processes, including sand transport, sedimentation patterns, and seasonal trends. Using Google Earth, multiple track line data sets (examples: OB-1-05-CA and OB-2-05-CA) can be overlaid on beach imagery. The images also help explain the shape of track lines as objects are encountered.
Spatiotemporal-Thematic Data Processing for the Semantic Web
NASA Astrophysics Data System (ADS)
Hakimpour, Farshad; Aleman-Meza, Boanerges; Perry, Matthew; Sheth, Amit
This chapter presents practical approaches to data processing in the space, time and theme dimensions using existing Semantic Web technologies. It describes how we obtain geographic and event data from Internet sources and also how we integrate them into an RDF store. We briefly introduce a set of functionalities in space, time and semantics. These functionalities are implemented based on our existing technology for main-memory-based RDF data processing developed at the LSDIS Lab. A number of these functionalities are exposed as REST Web services. We present two sample client-side applications that are developed using a combination of our services with Google Maps service.
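As a small illustration of the kind of RDF processing described above, the sketch below loads a toy event into an in-memory rdflib graph and runs a SPARQL query mixing a thematic type with a temporal and a spatial property. The namespace and predicates are illustrative, not the ontology used by the LSDIS Lab system.

```python
# RDF store and SPARQL sketch (hypothetical ontology and data).
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace('http://example.org/onto#')    # hypothetical ontology
g = Graph()

event = URIRef('http://example.org/event/quake-42')
g.add((event, RDF.type, EX.Earthquake))
g.add((event, EX.occurredAt, Literal('2008-05-12')))
g.add((event, EX.nearPlace, Literal('Chengdu')))

q = """
PREFIX ex: <http://example.org/onto#>
SELECT ?e ?date WHERE {
  ?e a ex:Earthquake ;
     ex:occurredAt ?date ;
     ex:nearPlace "Chengdu" .
}
"""
for row in g.query(q):
    print(row.e, row.date)
```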
Could we do better? Behavioural tracking on recommended consumer health websites.
Burkell, Jacquelyn; Fortier, Alexandre
2015-09-01
This study examines behavioural tracking practices on consumer health websites, contrasting tracking on sites recommended by information professionals with tracking on sites returned by Google. Two lists of consumer health websites were constructed: sites recommended by information professionals and sites returned by Google searches. Sites were divided into three groups according to source (Recommended-Only, Google-Only or both) and type (Government, Not-for-Profit or Commercial). Behavioural tracking practices on each website were documented using a protocol that detected cookies, Web beacons and Flash cookies. The presence and the number of trackers that collect personal information were contrasted across source and type of site; a second set of analyses specifically examined Advertising trackers. Recommended-Only sites show lower levels of tracking - especially tracking by advertisers - than do Google-Only sites or sites found through both sources. Government and Not-for-Profit sites have fewer trackers, particularly from advertisers, than do Commercial sites. Recommended sites, especially those from Government or Not-for-Profit organisations, present a lower privacy threat than sites returned by Google searches. Nonetheless, most recommended websites include some trackers, and half include at least one Advertising tracker. To protect patron privacy, information professionals should examine the tracking practices of the websites they recommend. © 2015 Health Libraries Group.
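One rough way to document part of the tracking behaviour described above is sketched below: fetch a page, list the cookies it sets, and flag script tags served from third-party hosts. It covers only a slice of the study's protocol (no Flash cookies or web beacons), and the URL is illustrative.

```python
# Partial tracker-inventory sketch (illustrative URL; cookies and third-party scripts only).
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

url = 'https://example.org/health-topic'     # hypothetical consumer health page
resp = requests.get(url, timeout=30)
site_host = urlparse(url).netloc

print('Cookies set:', [c.name for c in resp.cookies])

soup = BeautifulSoup(resp.text, 'html.parser')
third_party = {urlparse(tag['src']).netloc
               for tag in soup.find_all('script', src=True)
               if urlparse(tag['src']).netloc not in ('', site_host)}
print('Third-party script hosts:', sorted(third_party))
```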
An Introduction to Science Education in Rural Australia
ERIC Educational Resources Information Center
Lyons, Terry
2008-01-01
Here's a challenge. Try searching "Google" for the phrase "rural science teachers" in Australian web content. Surprisingly, my attempts returned only two hits, neither of which actually referred to Australian teachers. Searches for "rural science education" fare little better. On this evidence one could be forgiven…
Rainfall erosivity in Brazil: A Review
USDA-ARS?s Scientific Manuscript database
In this paper, we review the erosivity studies conducted in Brazil to verify the quality and representativeness of the results generated and to provide a greater understanding of the rainfall erosivity (R-factor) in Brazil. We searched the ISI Web of Science, Scopus, SciELO, and Google Scholar datab...
Vona, Pamela; Wilmoth, Pete; Jaycox, Lisa H; McMillen, Janey S; Kataoka, Sheryl H; Wong, Marleen; DeRosier, Melissa E; Langley, Audra K; Kaufman, Joshua; Tang, Lingqi; Stein, Bradley D
2014-11-01
To explore the role of Web-based platforms in behavioral health, the study examined usage of a Web site for supporting training and implementation of an evidence-based intervention. Using data from an online registration survey and Google Analytics, the investigators examined user characteristics and Web site utilization. Site engagement was substantial across user groups. Visit duration differed by registrants' characteristics. Less experienced clinicians spent more time on the Web site. The training section accounted for most page views across user groups. Individuals previously trained in the Cognitive-Behavioral Intervention for Trauma in Schools intervention viewed more implementation assistance and online community pages than did other user groups. Web-based platforms have the potential to support training and implementation of evidence-based interventions for clinicians of varying levels of experience and may facilitate more rapid dissemination. Web-based platforms may be promising for trauma-related interventions, because training and implementation support should be readily available after a traumatic event.
Visualizing Mars data and imagery with Google Earth
NASA Astrophysics Data System (ADS)
Beyer, R. A.; Broxton, M.; Gorelick, N.; Hancher, M.; Lundy, M.; Kolb, E.; Moratto, Z.; Nefian, A.; Scharff, T.; Weiss-Malik, M.
2009-12-01
There is a vast store of planetary geospatial data that has been collected by NASA but is difficult to access and visualize. Virtual globes have revolutionized the way we visualize and understand the Earth, but other planetary bodies including Mars and the Moon can be visualized in similar ways. Extraterrestrial virtual globes are poised to revolutionize planetary science, bring an exciting new dimension to science education, and allow ordinary users to explore imagery being sent back to Earth by planetary science satellites. The original Google Mars Web site allowed users to view base maps of Mars via the Web, but it did not have the full features of the 3D Google Earth client. We have previously demonstrated the use of Google Earth to display Mars imagery, but now with the launch of Mars in Google Earth, there is a base set of Mars data available for anyone to work from and add to. There are a variety of global maps to choose from and display. The Terrain layer has the MOLA gridded data topography, and where available, HRSC terrain models are mosaicked into the topography. In some locations there is also meter-scale terrain derived from HiRISE stereo imagery. There is rich information in the form of the IAU nomenclature database, data for the rovers and landers on the surface, and a Spacecraft Imagery layer which contains the image outlines for all HiRISE, CTX, CRISM, HRSC, and MOC image data released to the PDS and links back to their science data. There are also features like the Traveler's Guide to Mars, Historic Maps, Guided Tours, as well as the 'Live from Mars' feature, which shows the orbital tracks of both the Mars Odyssey and Mars Reconnaissance Orbiter for a few days in the recent past. It shows where they have acquired imagery, and also some preview image data. These capabilities have obvious public outreach and education benefits, but the potential benefits of allowing planetary scientists to rapidly explore these large and varied data collections—in geological context and within a single user interface—are also becoming evident. Because anyone can produce additional KML content for use in Google Earth, scientists can customize the environment to their needs as well as publish their own processed data and results for others to use. Many scientists and organizations have begun to do this already, resulting in a useful and growing collection of planetary-science-oriented Google Earth layers.
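On the point that anyone can produce additional KML content for Google Earth, a minimal example is sketched below: it writes a single placemark file that the client can open as a layer. The site name, description, and coordinates are illustrative.

```python
# Minimal KML placemark writer (illustrative name and coordinates).
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <description>{description}</description>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>
  </Document>
</kml>
"""

with open('site_of_interest.kml', 'w') as f:
    f.write(KML_TEMPLATE.format(name='Candidate outcrop',
                                description='Layered deposit flagged for follow-up',
                                lon=77.45, lat=-4.59))
```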
Association between Stock Market Gains and Losses and Google Searches
Arditi, Eli; Yechiam, Eldad; Zahavi, Gal
2015-01-01
Experimental studies in the area of Psychology and Behavioral Economics have suggested that people change their search pattern in response to positive and negative events. Using Internet search data provided by Google, we investigated the relationship between stock-specific events and related Google searches. We studied daily data from 13 stocks from the Dow-Jones and NASDAQ100 indices, over a period of 4 trading years. Focusing on periods in which stocks were extensively searched (Intensive Search Periods), we found a correlation between the magnitude of stock returns at the beginning of the period and the volume, peak, and duration of search generated during the period. This relation between magnitudes of stock returns and subsequent searches was considerably magnified in periods following negative stock returns. Yet, we did not find that intensive search periods following losses were associated with more Google searches than periods following gains. Thus, rather than increasing search, losses improved the fit between people’s search behavior and the extent of real-world events triggering the search. The findings demonstrate the robustness of the attentional effect of losses. PMID:26513371
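The gain/loss comparison above can be sketched roughly as below, assuming a table with one row per intensive search period that already records the triggering return and the subsequent search volume; the file and column names are hypothetical, and detection of the periods themselves is not shown.

```python
# Correlation by sign of the triggering return (hypothetical file and columns).
import pandas as pd

df = pd.read_csv('intensive_search_periods.csv')   # one row per search period
df['abs_return'] = df['trigger_return'].abs()

for is_loss, grp in df.groupby(df['trigger_return'] < 0):
    label = 'losses' if is_loss else 'gains'
    r = grp['abs_return'].corr(grp['search_volume'])
    print(f'{label}: r = {r:.2f} (n = {len(grp)})')
```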
Using open-source programs to create a web-based portal for hydrologic information
NASA Astrophysics Data System (ADS)
Kim, H.
2013-12-01
Some hydrologic data sets, such as basin climatology, precipitation, and terrestrial water storage, are not easily obtainable and distributable due to their size and complexity. We present a Hydrologic Information Portal (HIP) that has been implemented at the University of California Center for Hydrologic Modeling (UCCHM) and organized around the large river basins of North America. This portal can be accessed through a modern web browser, enabling easy access to and visualization of such hydrologic data sets. The main features of our HIP include a set of data visualization tools that let users search, retrieve, analyze, integrate, organize, and map data within large river basins. Recent information technologies such as Google Maps, Tornado (a Python asynchronous web server), NumPy/SciPy (scientific libraries for Python) and d3.js (a visualization library for JavaScript) were incorporated into the HIP to ease navigation of large data sets. With such open source libraries, the HIP gives public users a way to combine and explore various data sets by generating multiple chart types (line, bar, pie, scatter plot) directly from the Google Maps viewport. Every rendered object on the viewport, such as a basin shape, is clickable, and this is the first step toward accessing visualizations of the data sets.
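Since Tornado is part of the stack named above, a minimal sketch of the kind of JSON endpoint such a portal might expose to its map and charting front end is shown below; the route, port, and placeholder data are illustrative, not the UCCHM implementation.

```python
# Minimal Tornado JSON endpoint sketch (illustrative route, data, and port).
import json
import tornado.ioloop
import tornado.web

FAKE_SERIES = {'colorado': [('2010-01', 11.2), ('2010-02', 9.8)]}  # placeholder data

class BasinSeriesHandler(tornado.web.RequestHandler):
    def get(self, basin):
        series = FAKE_SERIES.get(basin, [])
        self.set_header('Content-Type', 'application/json')
        self.write(json.dumps({'basin': basin, 'series': series}))

def make_app():
    return tornado.web.Application([(r'/api/basin/([a-z]+)', BasinSeriesHandler)])

if __name__ == '__main__':
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```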
An Interactive Web System for Field Data Sharing and Collaboration
NASA Astrophysics Data System (ADS)
Weng, Y.; Sun, F.; Grigsby, J. D.
2010-12-01
A Web 2.0 system is designed and developed to facilitate data collection for field studies in the Geological Sciences department at Ball State University. The system provides a student-centered learning platform that enables users to upload their collected data in various formats, interact and collaborate dynamically online, and ultimately create a shared digital repository of field experiences. The data types considered for the system, together with their corresponding formats and requirements, are listed in an accompanying table (Data Requirements) not reproduced in this abstract. The system has six main functionalities. (1) Only registered users can access the system, with a confidential identification and password. (2) Each user can upload, revise, or delete data in various formats such as image, audio, video, and text files. (3) Interested users are allowed to co-edit the contents and join the collaboration whiteboard for further discussion. (4) The system integrates with Google, Yahoo, or Flickr to search for similar photos with the same tags. (5) Users can search the web system by specific key words. (6) Photos with recorded GPS readings can be mashed up and mapped onto Google Maps/Earth for visualization. Application of the system to geology field trips at Ball State University will be demonstrated to assess its usability.
Quality of consumer-targeted internet guidance on home firearm and ammunition storage.
Freundlich, Katherine L; Skoczylas, Maria Shakour; Schmidt, John P; Keshavarzi, Nahid R; Mohr, Bethany Anne
2016-10-01
Four storage practices protect against unintentional and/or self-inflicted firearm injury among children and adolescents: keeping guns locked (1) and unloaded (2) and keeping ammunition locked up (3) and in a separate location from the guns (4). Our aim was to mimic common Google search strategies on firearm/ammunition storage and assess whether the resulting web pages provided recommendations consistent with those supported by the literature. We identified 87 web pages by Google search of the 10 most commonly used search terms in the USA related to firearm/ammunition storage. Two non-blinded independent reviewers analysed web page technical quality according to a 17-item checklist derived from previous studies. A single reviewer analysed readability by US grade level assigned by Flesch-Kincaid Grade Level Index. Two separate, blinded, independent reviewers analysed deidentified web page content for accuracy and completeness describing the four accepted storage practices. Reviewers resolved disagreements by consensus. The web pages described, on average, less than one of four accepted storage practices (mean 0.2 (95% CL 0.1 to 0.4)). Only two web pages (2%) identified all four practices. Two web pages (2%) made assertions inconsistent with recommendations; both implied that loaded firearms could be stored safely. Flesch-Kincaid Grade Level Index averaged 8.0 (95% CL 7.3 to 8.7). The average technical quality score was 7.1 (95% CL 6.8 to 7.4) out of an available score of 17. There was a high degree of agreement between reviewers regarding completeness (weighted κ 0.78 (95% CL 0.61 to 0.97)). The internet currently provides incomplete information about safe firearm storage. Understanding existing deficiencies may inform future strategies for improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
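The Flesch-Kincaid Grade Level used above is a simple formula over words, sentences, and syllables; the sketch below computes it with a crude syllable heuristic, so its output is only approximate, and the sample sentence is illustrative.

```python
# Approximate Flesch-Kincaid Grade Level (naive syllable counting).
import re

def count_syllables(word):
    # Count vowel groups as a rough proxy for syllables.
    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = 'Store guns locked and unloaded. Keep ammunition locked in a separate place.'
print(round(flesch_kincaid_grade(sample), 1))
```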
A web search on environmental topics: what is the role of ranking?
Covolo, Loredana; Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto
2013-12-01
Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible if the Web site is not located in the first positions of the search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore information about these risks, particularly on topics for which there is no scientific evidence, is crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word "health": "nuclear energy," "electromagnetic waves," "air pollution," "waste," and "radon." For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Variable results were obtained when searching the Internet for different environmental health topics. Multivariate logistic regression analysis showed that, when searching for the radon and air pollution topics, institutional Web sites were more likely to appear in the first 10 positions than for nuclear energy (odds ratio=3.4, 95% confidence interval 2.1-5.4 and odds ratio=2.9, 95% confidence interval 1.8-4.7, respectively), and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9-5.1). The increasing use of online information could play an important role in forming opinions. Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible.
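A regression of the kind reported above (institutional site in the top 10 as the outcome, topic and engine as predictors) could be sketched as below; the data file, column names, and category coding are hypothetical assumptions.

```python
# Logistic-regression sketch for the top-10 odds ratios (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv('search_results.csv')   # hypothetical: one row per ranked result
model = smf.logit('institutional_top10 ~ C(topic) + C(engine)', data=df).fit()

conf = model.conf_int()
odds = pd.DataFrame({'OR': np.exp(model.params),
                     'CI_low': np.exp(conf[0]),
                     'CI_high': np.exp(conf[1])})
print(odds)
```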
Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich-client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient real-time communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web app called MedView, a distributed collaborative neuroimage visualization application that is delivered to users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
InChI in the wild: an assessment of InChIKey searching in Google
2013-01-01
While chemical databases can be queried using the InChI string and InChIKey (IK), the latter was designed for open-web searching. It is becoming increasingly effective for this since more sources enhance crawling of their websites by the Googlebot, with consequent IK indexing. Searchers who use Google as an adjunct to database access may be less familiar with the advantages of using the IK, as explored in this review. As an example, the IK for atorvastatin retrieves ~200 low-redundancy links from a Google search in about 0.3 seconds. These include most major databases, with a very low false-positive rate. Results encompass less familiar but potentially useful sources and can be extended to isomer capture by using just the skeleton layer of the IK. Google Advanced Search can be used to filter large result sets. Image searching with the IK is also effective and complementary to open-web queries. Results can be particularly useful for less-common structures, as exemplified by a major metabolite of atorvastatin giving only three hits. Testing also demonstrated document-to-document and document-to-database joins via structure matching. The necessary generation of an IK from chemical names can be accomplished using open tools and resources for patents, papers, abstracts or other text sources. Active global sharing of local IK-linked information can be accomplished via surfacing in open laboratory notebooks, blogs, Twitter, figshare and other routes. While information-rich chemistry (e.g. approved drugs) can exhibit swamping and redundancy effects, the much smaller IK result sets for link-poor structures become a transformative first-pass option. IK indexing has therefore turned Google into a de facto open global chemical information hub by merging links to most significant sources, including over 50 million PubChem and ChemSpider records. The simplicity, specificity and speed of matching make it a useful option for biologists or others less familiar with chemical searching. However, compared to rigorously maintained major databases, users need to be circumspect about the consistency of Google results and the provenance of retrieved links. In addition, community engagement may be necessary to ameliorate possible future degradation of utility. PMID:23399051
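Generating an IK from a structure with open tools, as mentioned above, can be done with RDKit; the sketch below uses an illustrative SMILES string (aspirin), not one of the compounds discussed in the review.

```python
# InChI and InChIKey from a SMILES string with RDKit (illustrative molecule).
from rdkit import Chem

mol = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')   # aspirin, for illustration
print(Chem.MolToInchi(mol))
print(Chem.MolToInchiKey(mol))   # paste the 27-character key into a Google search box
```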
Interactive Mapping of the Planets: An Online Activity Using the Google Earth Platform
NASA Astrophysics Data System (ADS)
Osinski, G. R.; Gilbert, A.; Harrison, T. N.; Mader, M. M.; Shankar, B.; Tornabene, L. L.
2013-12-01
With funding from the Natural Sciences and Engineering Research Council of Canada's PromoScience program and support from the Department of Earth Sciences at The University of Western Ontario, the Centre for Planetary Science and Exploration (CPSX) has developed a new web-based initiative called Interactive Mapping of the Planets (IMAPS). Additional components include in-person school visits to deliver inquiry-based workshops, week-long summer camps, and pre-prepared impact rock lending kits, all framed around the IMAPS activity. IMAPS is now in beta testing and will be demonstrated in this session. The general objective of the online activity is for participants to plan and design a rover mission to Mars based on a given mission goal - e.g., to find evidence for past water flow. The activity begins with participants receiving image-analysis training to learn about the different landforms on Mars and which ones are potentially caused by water flow. They then need to pass a short test to show they can consistently identify Martian landforms. From there, the participants choose a landing site and plan a traverse - utilizing the free Google Earth plug-in - taking into account factors such as hazards and their sites of interest. A mission control blog will provide updates on the status of their mission, and a 'choose your rover' option provides the opportunity to unlock more advanced rovers by collaborating with other scientists and rating their missions. Indeed, evaluation of missions will be done using a crowd-sourcing method. In addition to making the activity fully accessible online, CPSX will also target primary- and secondary-school grades in which astronomy and space science are taught. Teachers in K-12 classrooms will be able to sign up for the activity ahead of time in order to receive a workshop package, which will guide them on how to use the IMAPS online activity with their class. Teachers will be able to set up groups for their classroom so that they can evaluate their students based on pre-determined criteria. The IMAPS activities are developed in partnership with the Department of Earth Sciences at Western University, Sports Western, the Thames Valley District School Board, and Dimentians Web Marketing and Design. We are continually looking for new collaborators to help design or test our inquiry- and web-based activities, provide feedback on our programs, or volunteer with us. Please contact cpsxoutreach@uwo.ca if you are interested.
Learning Geomorphology Using Aerial Photography in a Web-Facilitated Class
ERIC Educational Resources Information Center
Palmer, R. Evan
2013-01-01
General education students taking freshman-level physical geography and geomorphology classes at Arizona State University completed an online laboratory whose main tool was Google Earth. Early in the semester, oblique and planimetric views introduced students to a few volcanic, tectonic, glacial, karst, and coastal landforms. Semi-quantitative…
ERIC Educational Resources Information Center
Bohle, Shannon
2008-01-01
With all the new advances in library technology--including metadata, social networking, and Web 2.0, along with the advent of nonlibrary and for-profit digital information companies like Wikisource and Google Print--librarians have barely had time to reflect on the nontechnical implications of these innovations. They need to take a step back and…
Is It Cheating if Everybody Does It?
ERIC Educational Resources Information Center
Gustafon, Chris
2004-01-01
A teacher brings you a paper he suspects is not the student's own work, and a Google search confirms it was copied right off a web page. Intellectual honesty issues are impossible to duck in the library, but plagiarism lessons are often met with yawns and eye rolls from students.
Information Portals: The Next Generation Catalog
ERIC Educational Resources Information Center
Allison, DeeAnn
2010-01-01
Libraries today face an increasing challenge: to provide relevant information to diverse populations with differing needs while competing with Web search engines like Google. In 2009, a large group of libraries, including the University of Nebraska-Lincoln Libraries, joined with Innovative Interfaces as development partners to design a new type of…
Analysis of Orthopaedic Research Produced During the Wars in Iraq and Afghanistan.
Balazs, George C; Dickens, Jonathan F; Brelin, Alaina M; Wolfe, Jared A; Rue, John-Paul H; Potter, Benjamin K
2015-09-01
Military orthopaedic surgeons have published a substantial amount of original research based on our care of combat-wounded service members and related studies during the wars in Iraq and Afghanistan. However, to our knowledge, the influence of this body of work has not been evaluated bibliometrically, and doing so is important to determine the modern impact of combat casualty research in the wider medical community. We sought to identify the 20 most commonly cited works from military surgeons published during the Iraq and Afghanistan conflicts and analyze them to answer the following questions: (1) What were the subject areas of these 20 articles and what was the 2013 Impact Factor of each journal that published them? (2) How many citations did they receive and what were the characteristics of the journals that cited them? (3) Do the citation analysis results obtained from Google Scholar mirror the results obtained from Thomson Reuters' Web of Science? We searched the Web of Science Citation Index Expanded for relevant original research performed by US military orthopaedic surgeons related to Operation Iraqi Freedom and Operation Enduring Freedom between 2001 and 2014. Articles citing these studies were reviewed using both Web of Science and Google Scholar data. The 20 most cited articles meeting inclusion criteria were identified and analyzed by content domain, frequency of citation, and sources in which they were cited. Nine of these studies examined the epidemiology and outcome of combat injury. Six studies dealt with wound management, wound dehiscence, and formation of heterotopic ossification. Five studies examined infectious complications of combat trauma. The median number of citations garnered by these 20 articles was 41 (range, 28-264) in Web of Science. Other research citing these studies has appeared in 279 different journals, covering 26 different medical and surgical subspecialties, from authors in 31 different countries. Google Scholar contained 97% of the Web of Science citations, but also had 31 duplicate entries and 29 citations with defective links. Modern combat casualty research by military orthopaedic surgeons is widely cited by researchers in a diverse range of subspecialties and geographic locales. This suggests that the military continues to be a source of innovation that is broadly applicable to civilian medical and surgical practice and should encourage expansion of military-civilian collaboration to maximize the utility of the knowledge gained in the treatment of war trauma. Level IV, therapeutic study.
2013-08-09
CAPE CANAVERAL, Fla. – As seen on Google Maps, space shuttle Endeavour goes through transition and retirement processing in high bay 4 of the Vehicle Assembly Building at NASA's Kennedy Space Center. The spacecraft completed 25 missions, beginning with its first flight, STS-49, in May 1992, and ending with STS-134 in May 2011. It helped construct the International Space Station and travelled more than 122 million miles in orbit during its career. The reaction control system pods in the shuttle's nose and aft section were removed for processing before Endeavour was put on public display at the California Science Center in Los Angeles. Google precisely mapped the space center and some of its historical facilities for the company's map page. The work allows Internet users to see inside buildings at Kennedy as they were used during the space shuttle era. Photo credit: Google/Wendy Wang
FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN
NASA Astrophysics Data System (ADS)
Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando
2014-06-01
The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and jQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system commonly used at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries, and pushing the development effort to a higher abstraction layer based on a scripting language, allow for a significant reduction in code maintenance in multi-platform environments compared to the C++ visualization plugins currently used. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.
Health and medication information resources on the World Wide Web.
Grossman, Sara; Zerilli, Tina
2013-04-01
Health care practitioners have increasingly used the Internet to obtain health and medication information. The vast number of Internet Web sites providing such information and concerns with their reliability makes it essential for users to carefully select and evaluate Web sites prior to use. To this end, this article reviews the general principles to consider in this process. Moreover, as cost may limit access to subscription-based health and medication information resources with established reputability, freely accessible online resources that may serve as an invaluable addition to one's reference collection are highlighted. These include government- and organization-sponsored resources (eg, US Food and Drug Administration Web site and the American Society of Health-System Pharmacists' Drug Shortage Resource Center Web site, respectively) as well as commercial Web sites (eg, Medscape, Google Scholar). Familiarity with such online resources can assist health care professionals in their ability to efficiently navigate the Web and may potentially expedite the information gathering and decision-making process, thereby improving patient care.
Googling suicide: surfing for suicide information on the Internet.
Recupero, Patricia R; Harms, Samara E; Noble, Jeffrey M
2008-06-01
This study examined the types of resources a suicidal person might find through search engines on the Internet. We were especially interested in determining the accessibility of potentially harmful resources, such as pro-suicide forums, as such resources have been implicated in completed suicides and are known to exist on the Web. Using 5 popular search engines (Google, Yahoo!, Ask.com, Lycos, and Dogpile) and 4 suicide-related search terms (suicide, how to commit suicide, suicide methods, and how to kill yourself), we collected quantitative and qualitative data about the search results. The searches were conducted in August and September 2006. Several co-raters assigned codes and characterizations to the first 30 Web sites per search term combination (and "sponsored links" on those pages), which were then confirmed by consensus ratings. Search results were classified as being pro-suicide, anti-suicide, suicide-neutral, not a suicide site, or error (i.e., page would not load). Additional information was collected to further characterize the nature of the information on these Web sites. Suicide-neutral and anti-suicide pages occurred most frequently (of 373 unique Web pages, 115 were coded as suicide-neutral and 109 as anti-suicide). While pro-suicide resources were less frequent (41 Web pages), they were nonetheless easily accessible. Detailed how-to instructions for unusual and lethal suicide methods were likewise easily located through the searches. Mental health professionals should ask patients about their Internet use. Depressed, suicidal, or potentially suicidal patients who use the Internet may be especially at risk. Clinicians may wish to assist patients in locating helpful, supportive resources online so that patients' Internet use may be more therapeutic than harmful.
Injury surveillance in low-resource settings using Geospatial and Social Web technologies
2010-01-01
Background Extensive public health gains have benefited high-income countries in recent decades; however, citizens of low and middle-income countries (LMIC) have largely not enjoyed the same advancements. This is in part due to the fact that public health data - the foundation for public health advances - are rarely collected in many LMIC. Injury data are particularly scarce in many low-resource settings, despite the huge associated burden of morbidity and mortality. Advances in freely accessible and easy-to-use information and communication technology (ICT) may provide the impetus for increased public health data collection in settings with limited financial and personnel resources. Methods and Results A pilot study was conducted at a hospital in Cape Town, South Africa to assess the utility and feasibility of using free (non-licensed) and easy-to-use Social Web and GeoWeb tools for injury surveillance in low-resource settings. Data entry, geocoding, data exploration, and data visualization were successfully conducted using these technologies, including Google Spreadsheet, Mapalist, BatchGeocode, and Google Earth. Conclusion This study examined the potential for Social Web and GeoWeb technologies to contribute to public health data collection and analysis in low-resource settings through an injury surveillance pilot study conducted in Cape Town, South Africa. The success of this study illustrates the great potential for these technologies to be leveraged for public health surveillance in resource-constrained environments, given their ease of use and low cost, and the sharing and collaboration capabilities they afford. The possibilities and potential limitations of these technologies are discussed in relation to the study, and to the field of public health in general. PMID:20497570
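The pilot's geocoding step used free Google tools (Google Spreadsheet, BatchGeocode, Mapalist); as a rough present-day analogue with a different free tool, the sketch below geocodes a free-text location with geopy's Nominatim wrapper and prints a lon/lat pair that could feed a KML layer or web map. The address is illustrative, and geopy is a substitute for, not part of, the tools named in the study.

```python
# Geocoding sketch using geopy's Nominatim wrapper (substitute tool; illustrative address).
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent='injury-surveillance-demo')
loc = geolocator.geocode('Groote Schuur Hospital, Cape Town, South Africa')
if loc:
    print(f'{loc.longitude:.5f},{loc.latitude:.5f}')   # lon,lat order as used in KML
```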
Electronic Biomedical Literature Search for Budding Researcher
Thakre, Subhash B.; Thakre, Sushama S.; Thakre, Amol D.
2013-01-01
A search for specific and well-defined literature related to the subject of interest is the foremost step in research. When we are familiar with the topic or subject, we can frame an appropriate research question. The research question is the basis for the study objectives and hypothesis. The Internet provides quick access to an overabundance of medical literature, in the form of primary, secondary and tertiary literature. It is accessible through journals, databases, dictionaries, textbooks, indexes, and e-journals, thereby allowing access to more varied, individualised, and systematic educational opportunities. A web search engine is a tool designed to search for information on the World Wide Web, which may be in the form of web pages, images, information, and other types of files. Search engines for internet-based searches of the medical literature include Google, Google Scholar, Scirus, the Yahoo search engine, etc., and databases include MEDLINE, PubMed, MEDLARS, etc. Several web libraries (National Library of Medicine, Cochrane, Web of Science, Medical Matrix, Emory Libraries) have been developed as meta-sites, providing useful links to health resources globally. A researcher must keep in mind the strengths and limitations of a particular search engine or database while searching for a particular type of data. Knowledge about the types of literature, levels of evidence, and details of a search engine's features, user interface, ease of access, reputable content, and period of time covered allows their optimal use and maximal utility in the field of medicine. Literature search is a dynamic and interactive process; there is no one way to conduct a search and there are many variables involved. It is suggested that a systematic search of the literature that uses available electronic resources effectively is more likely to produce quality research. PMID:24179937
78 FR 69710 - Luminant Generation Company, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-20
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2008... . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web-based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public...
Who Is a Cancer Survivor? A Systematic Review of Published Definitions.
Marzorati, Chiara; Riva, Silvia; Pravettoni, Gabriella
2017-06-01
The term "cancer survivor" is commonly used by different persons, clinical institutions, academic bodies, and political organizations although it lacks of a unanimous and detailed definition. The objective of the study is to make a systematic review of published and proposed definitions of "cancer survivor." Utilizing a systematic search strategy with different strings of "cancer survivor," we searched the following databases: Medline (June 1975-June 2015), Scopus (all the years), Web of Science (all the years), Google Scholar (all the years), ERIC (all the years). This review suggests that there is not a unique definition of who is a "cancer survivor" and what is "cancer survivorship." However, the most widely used definition sees cancer survivorship as a process that begins at the moment of diagnosis and continues through the balance of life. This definition highlights psychological and legal patient's needs-as well as medical ones-to receive care and assistance from the beginning and, at the same time, it establishes valid criteria for making scientific and statistical sampling research. The extensive use of the term "cancer survivor" indicates that it is a significant term. This review has been written to outline the state of the art and it invites to reflect on a shared definition that could satisfy both clinical and research aspects. Implication for cancer survivors: this compendium of proposed definitions may improve communication among the many patients and patient organizations that use and work with this term.
Brave New Media World: Science Communication Voyages through the Global Seas
NASA Astrophysics Data System (ADS)
Clark, C. L.; Reisewitz, A.
2010-12-01
By leveraging online tools, such as blogs, Twitter, Facebook, Google Earth, flickr, web-based discussion boards, and a bi-monthly electronic magazine for the non-scientist, Scripps Institution of Oceanography is taking science communication out of the static webpage to create interactive journeys that spark social dialogue and raise awareness of science-based research on global marine environmental issues. Several new initiatives are being chronicled through popular blogs and expedition web sites as researchers share interesting scientific facts and unusual findings in near real-time.
Medicine 2.0: social networking, collaboration, participation, apomediation, and openness.
Eysenbach, Gunther
2008-08-25
In a very significant development for eHealth, broad adoption of Web 2.0 technologies and approaches coincides with the more recent emergence of Personal Health Application Platforms and Personally Controlled Health Records such as Google Health, Microsoft HealthVault, and Dossia. "Medicine 2.0" applications, services and tools are defined as Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies and/or semantic web and virtual reality approaches to enable and facilitate specifically 1) social networking, 2) participation, 3) apomediation, 4) openness and 5) collaboration, within and between these user groups. The Journal of Medical Internet Research (JMIR) publishes a Medicine 2.0 theme issue and sponsors a conference on "How Social Networking and Web 2.0 changes Health, Health Care, Medicine and Biomedical Research", to stimulate and encourage research in these five areas. PMID:18725354
Using food-web theory to conserve ecosystems
McDonald-Madden, E.; Sabbadin, R.; Game, E. T.; Baxter, P. W. J.; Chadès, I.; Possingham, H. P.
2016-01-01
Food-web theory can be a powerful guide to the management of complex ecosystems. However, we show that indices of species importance common in food-web and network theory can be a poor guide to ecosystem management, resulting in significantly more extinctions than necessary. We use Bayesian Networks and Constrained Combinatorial Optimization to find optimal management strategies for a wide range of real and hypothetical food webs. This Artificial Intelligence approach provides the ability to test the performance of any index for prioritizing species management in a network. While no single network theory index provides an appropriate guide to management for all food webs, a modified version of the Google PageRank algorithm reliably minimizes the chance and severity of negative outcomes. Our analysis shows that by prioritizing ecosystem management based on the network-wide impact of species protection rather than species loss, we can substantially improve conservation outcomes. PMID:26776253
78 FR 68100 - Luminant Generation Company, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID.../adams.html . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web-based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room (PDR) reference...
Medical student appraisal: searching on smartphones.
Khalifian, S; Markman, T; Sampognaro, P; Mitchell, S; Weeks, S; Dattilo, J
2013-01-01
The rapidly growing industry for mobile medical applications provides numerous smartphone resources designed for healthcare professionals. However, not all applications are equally useful in addressing the questions of early medical trainees. Three popular, free, mobile healthcare applications were evaluated along with a Google(TM) web search on both Apple(TM) and Android(TM) devices. Six medical students at a large academic hospital evaluated each application for a one-week period while on various clinical rotations. Google(TM) was the most frequently used search method and presented multimedia resources but was inefficient for obtaining clinical management information. Epocrates(TM) Pill ID feature was praised for its clinical utility. Medscape(TM) had the highest satisfaction of search and excelled through interactive educational features. Micromedex(TM) offered both FDA and off-label dosing for drugs. Google(TM) was the preferred search method for questions related to basic disease processes and multimedia resources, but was inadequate for clinical management. Caution should also be exercised when using Google(TM) in front of patients. Medscape(TM) was the most appealing application due to a broad scope of content and educational features relevant to medical trainees. Students should also be cognizant of how mobile technology may be perceived by their evaluators to avoid false impressions.
Shaath, M Kareem; Yeranosian, Michael G; Ippolito, Joseph A; Adams, Mark R; Sirkin, Michael S; Reilly, Mark C
2018-05-02
Orthopaedic trauma fellowship applicants use online-based resources when researching information on potential U.S. fellowship programs. The 2 primary sources for identifying programs are the Orthopaedic Trauma Association (OTA) database and the San Francisco Match (SF Match) database. Previous studies in other orthopaedic subspecialty areas have demonstrated considerable discrepancies among fellowship programs. The purpose of this study was to analyze content and availability of information on orthopaedic trauma surgery fellowship web sites. The online databases of the OTA and SF Match were reviewed to determine the availability of embedded program links or external links for the included programs. Thereafter, a Google search was performed for each program individually by typing the program's name, followed by the term "orthopaedic trauma fellowship." All identified fellowship web sites were analyzed for accessibility and content. Web sites were evaluated for comprehensiveness in mentioning key components of the orthopaedic trauma surgery curriculum. By consensus, we refined the final list of variables utilizing the methodology of previous studies on the topic. We identified 54 OTA-accredited fellowship programs, offering 87 positions. The majority (94%) of programs had web sites accessible through a Google search. Of the 51 web sites found, all (100%) described their program. Most commonly, hospital affiliation (88%), operative experiences (76%), and rotation overview (65%) were listed, and, least commonly, interview dates (6%), selection criteria (16%), on-call requirements (20%), and fellow evaluation criteria (20%) were listed. Programs with ≥2 fellows provided more information with regard to education content (p = 0.0001) and recruitment content (p = 0.013). Programs with Accreditation Council for Graduate Medical Education (ACGME) accreditation status also provided greater information with regard to education content (odds ratio, 4.0; p = 0.0001). Otherwise, no differences were seen by region, residency affiliation, medical school affiliation, or hospital affiliation. The SF Match and OTA databases provide few direct links to fellowship web sites. Individual program web sites do not effectively and completely convey information about the programs. The Internet is an underused resource for fellow recruitment. The lack of information on these sites allows for future opportunity to optimize this resource.
Accessibility and quality of online information for pediatric orthopaedic surgery fellowships.
Davidson, Austin R; Murphy, Robert F; Spence, David D; Kelly, Derek M; Warner, William C; Sawyer, Jeffrey R
2014-12-01
Pediatric orthopaedic fellowship applicants commonly use online-based resources for information on potential programs. Two primary sources are the San Francisco Match (SF Match) database and the Pediatric Orthopaedic Society of North America (POSNA) database. We sought to determine the accessibility and quality of information that could be obtained by using these 2 sources. The online databases of the SF Match and POSNA were reviewed to determine the availability of embedded program links or external links for the included programs. If not available in the SF Match or POSNA data, Web sites for listed programs were located with a Google search. All identified Web sites were analyzed for accessibility, content volume, and content quality. At the time of online review, 50 programs, offering 68 positions, were listed in the SF Match database. Although 46 programs had links included with their information, 36 (72%) of them simply listed http://www.sfmatch.org as their unique Web site. Ten programs (20%) had external links listed, but only 2 (4%) linked directly to the fellowship web page. The POSNA database does not list any links to the 47 programs it lists, which offer 70 positions. On the basis of a Google search of the 50 programs listed in the SF Match database, web pages were found for 35. Of programs with independent web pages, all had a description of the program and 26 (74%) described their application process. Twenty-nine (83%) listed research requirements, 22 (63%) described the rotation schedule, and 12 (34%) discussed the on-call expectations. A contact telephone number and/or email address was provided by 97% of programs. Twenty (57%) listed both the coordinator and fellowship director, 9 (26%) listed the coordinator only, 5 (14%) listed the fellowship director only, and 1 (3%) had no contact information given. The SF Match and POSNA databases provide few direct links to fellowship Web sites, and individual program Web sites either do not exist or do not effectively convey information about the programs. Improved accessibility and accurate information online would allow potential applicants to obtain information about pediatric fellowships in a more efficient manner.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Advisory... by teleconference. Please dial (877) 930-8819 and enter code 1579739. Web links: Windows Connection-2: http://wm.onlinevideoservice.com/CDC2 Flash Connection-4 (For Safari and Google Chrome Users): http...
Paying Your Way to the Top: Search Engine Advertising.
ERIC Educational Resources Information Center
Scott, David M.
2003-01-01
Explains how organizations can buy listings on major Web search engines, making it the fastest growing form of advertising. Highlights include two network models, Google and Overture; bidding on phrases to buy as links to use with ads; ad ranking; benefits for small businesses; and paid listings versus regular search results. (LRW)
USDA-ARS?s Scientific Manuscript database
A review of existing literature was conducted to determine the prevalence of purulent vaginal discharge (PVD) in dairy herds around the world and detection methodologies that influence prevalence estimates. Four databases (PubMed, Google Scholar, Web of Science, and Scopus) were queried with the sea...
76 FR 74776 - Forum-Trends in Extreme Winds, Waves, and Extratropical Storms Along the Coasts
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-01
... Winds, Waves, and Extratropical Storms Along the Coasts AGENCY: National Environmental Satellite, Data... information, please check the forum Web site at https://sites.google.com/a/noaa.gov/extreme-winds-waves.../noaa.gov/extreme-winds-waves-extratropical-storms/home . Topics To Be Addressed This forum will address...
ERIC Educational Resources Information Center
Lucking, Robert A.; Christmann, Edwin P.; Whiting, Mervyn J.
2008-01-01
"Mashup" is a new technology term used to describe a web application that combines data or technology from several different sources. You can apply this concept in your classroom by having students create their own mashup maps. Google Maps provides you with the simple tools, map databases, and online help you'll need to quickly master this…
Concordancers and Dictionaries as Problem-Solving Tools for ESL Academic Writing
ERIC Educational Resources Information Center
Yoon, Choongil
2016-01-01
The present study investigated how 6 Korean ESL graduate students in Canada used a suite of freely available reference resources, consisting of Web-based corpus tools, Google search engines, and dictionaries, for solving linguistic problems while completing an authentic academic writing assignment in English. Using a mixed methods design, the…
Web-Based Collaborative Writing in L2 Contexts: Methodological Insights from Text Mining
ERIC Educational Resources Information Center
Yim, Soobin; Warschauer, Mark
2017-01-01
The increasingly widespread use of social software (e.g., Wikis, Google Docs) in second language (L2) settings has brought a renewed attention to collaborative writing. Although the current methodological approaches to examining collaborative writing are valuable to understand L2 students' interactional patterns or perceived experiences, they can…
Voice-Recognition Augmented Performance Tools in Performance Poetry Pedagogy
ERIC Educational Resources Information Center
Devanny, David; McGowan, Jack
2016-01-01
This provocation shares findings from the use of bespoke voice-recognition performance software in a number of seminars (which took place in the 2014-2016 academic years at Glasgow School of Art, University of Warwick, and Falmouth University). The software, made available through this publication, is a web-app which uses Google Chrome's native…
Factors Influencing Consent to Having Videotaped Mental Health Sessions
ERIC Educational Resources Information Center
Ko, Kenton; Goebert, Deborah
2011-01-01
Objective: The authors critically reviewed the literature regarding factors influencing consent to having videotaped mental health sessions. Methods: The authors searched the literature in PubMed, PsycINFO, Google Scholar, and Web of Science from the mid-1950s through February 2009. Results: The authors identified 27 studies, of which 19 (73%)…
Teaching in Educational Leadership Using Web 2.0 Applications: Perspectives on What Works
ERIC Educational Resources Information Center
Shinsky, E. John; Stevens, Hans A.
2011-01-01
To prepare 21st Century school leaders, educational leadership professors need to learn and teach the utilization of increasingly sophisticated technologies in their courses. The co-authors, a professor and an educational specialist degree candidate, describe how the use of advanced technologies--such as Wikis, Google Docs, Wimba Classroom, and…
ERIC Educational Resources Information Center
Foster, Andrea L.
2006-01-01
American college students are increasingly posting videos of their lives online, due to Web sites like Vimeo and Google Video that host video material free and the ubiquity of camera phones and other devices that can take video-clips. However, the growing popularity of online socializing has many safety experts worried that students could be…
Teaching Undergraduate Software Engineering Using Open Source Development Tools
2012-01-01
ware. Some example appliances are: a LAMP stack, Redmine, MySQL database, Moodle, Tomcat on Apache, and Bugzilla. Some of the important features...Ada, C, C++, PHP, Python, etc., and also supports a wide range of SDKs such as Google's Android SDK and the Google Web Toolkit SDK. Additionally
Assessing Journal Quality in Mathematics Education
ERIC Educational Resources Information Center
Nivens, Ryan Andrew; Otten, Samuel
2017-01-01
In this Research Commentary, we describe 3 journal metrics--the Web of Science's Impact Factor, Scopus's SCImago Journal Rank, and Google Scholar Metrics' h5-index--and compile the rankings (if they exist) for 69 mathematics education journals. We then discuss 2 paths that the mathematics education community should consider with regard to these…
Web Searching: A Process-Oriented Experimental Study of Three Interactive Search Paradigms.
ERIC Educational Resources Information Center
Dennis, Simon; Bruza, Peter; McArthur, Robert
2002-01-01
Compares search effectiveness when using query-based Internet search via the Google search engine, directory-based search via Yahoo, and phrase-based query reformulation-assisted search via the Hyperindex browser by means of a controlled, user-based experimental study of undergraduates at the University of Queensland. Discusses cognitive load,…
Where Do I Find It?--An Internet Glossary.
ERIC Educational Resources Information Center
Del Monte, Erin; Manso, Angela
2001-01-01
Lists 13 different Internet search engines that might be of interest to educators, including: AOL Search, Alta Vista, Google, Lycos, Northern Light, and Yahoo. Gives a brief description of each search engine's capabilities, strengths, and weaknesses and includes Web addresses of U.S. government offices, including the U.S. Department of Education.…
School Librarians: Vital Educational Leaders
ERIC Educational Resources Information Center
Martineau, Pamela
2010-01-01
In the new millennium, school librarians are more likely to be found sitting behind a computer as they update the library web page or create a wiki on genetically modified organisms. Or they might be seen in the library computer lab as they lead students through tutorials on annotated bibliographies or Google docs. If adequately supported, school…
The Physlet Approach to Simulation Design
ERIC Educational Resources Information Center
Christian, Wolfgang; Belloni, Mario; Esquembre, Francisco; Mason, Bruce A.; Barbato, Lyle; Riggsbee, Matt
2015-01-01
Over the past two years, the AAPT/ComPADRE staff and the Open Source Physics group have published the second edition of "Physlet Physics" and "Physlet Quantum Physics," delivered as interactive web pages on AAPT/ComPADRE and as free eBooks available through iTunes and Google Play. These two websites, and their associated books,…
2012-03-23
be reminded that the aforementioned movies depicted events that happened nearly 70 years ago.48 These films neither represent modern amphibious...contemporary sources. Few are more contemporary than those in this genre. Nothing can substitute for a simple Google web search to get ideas about where
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
... on Web sites operated by Google, Interactive Data, and Dow Jones, among others. The text of the... and various forms of alternative trading systems (``ATSs''), including dark pools and electronic..., BATS Trading and Direct Edge. A proliferation of dark pools and other ATSs operate profitably with...
Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig
2014-01-01
Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
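The modeling step described above - adjusting a street-network travel-time estimate with a handful of binary covariates and scoring accuracy as the share of predictions within 5 minutes - can be sketched in a few lines. The snippet below is an illustrative reconstruction using synthetic data and assumed variable names; it is not the authors' model, coefficients, or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for the study's covariates (all names and values are assumptions).
street_est = rng.uniform(3, 25, n)        # street-network estimate, minutes
lights_sirens = rng.integers(0, 2, n)     # 1 = lights and sirens used
rush_hour = rng.integers(0, 2, n)
daylight = rng.integers(0, 2, n)
actual = (1.2 * street_est + 2.0 * rush_hour + 1.5 * daylight
          - 3.0 * lights_sirens + rng.normal(0, 2, n))

# Linear regression: actual transport time ~ intercept + street estimate + covariates.
X = np.column_stack([np.ones(n), street_est, lights_sirens, rush_hour, daylight])
coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
pred = X @ coef

# Accuracy metric used in the study: proportion of predictions within 5 minutes.
within_5min = np.mean(np.abs(pred - actual) <= 5)
print("coefficients:", np.round(coef, 2))
print(f"within 5 minutes: {within_5min:.0%}")
```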
Development of a Google-based search engine for data mining radiology reports.
Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul
2009-08-01
The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. Comprising 7.9 GB of disk space, 2.9 million text reports were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
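The glue layer described - a form that forwards a query to a local index, a script that parses the engine's XML response, and a log kept for HIPAA accountability - is straightforward to sketch. Google Desktop has since been discontinued, so the snippet below only illustrates the pattern in Python against a hypothetical XML response format; it is not the PERL script or the Google Desktop API used in the study.

```python
import csv
import datetime
import xml.etree.ElementTree as ET

# Hypothetical XML returned by a local full-text index for a query.
SAMPLE_RESPONSE = """<results>
  <result rank="1" id="RPT001">CT chest: small right pneumothorax...</result>
  <result rank="2" id="RPT002">Follow-up: pneumothorax resolved...</result>
</results>"""

def search(query, user, reason, xml_response=SAMPLE_RESPONSE, log_path="audit_log.csv"):
    """Parse ranked results and append an audit-log row (user, reason, query, hits)."""
    hits = [
        {"rank": int(r.get("rank")), "report_id": r.get("id"), "snippet": r.text.strip()}
        for r in ET.fromstring(xml_response).findall("result")
    ]
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), user, reason, query,
             ";".join(h["report_id"] for h in hits)]
        )
    return sorted(hits, key=lambda h: h["rank"])

for hit in search("pneumothorax", user="jdoe", reason="operations review"):
    print(hit["rank"], hit["report_id"], hit["snippet"][:40])
```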
Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515
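The key design decision in the abstract above is that only a small JSON state object - not the image data - is synchronized between collaborators, since every client already holds the full dataset. The Google Drive Realtime API used in the paper has since been retired, so the sketch below only illustrates that state-model idea in Python with hypothetical field names; the transport layer (Realtime API, WebSocket, etc.) is deliberately out of scope.

```python
import json

# Hypothetical renderer state: the kind of small payload worth synchronizing,
# rather than the multi-megabyte image volume each client already has locally.
local_state = {
    "volume_id": "T1_subject_012",
    "camera": {"position": [120.0, 80.0, 200.0], "up": [0, 1, 0]},
    "window_level": {"window": 400, "level": 40},
    "slice_index": 87,
}

def encode_state(state):
    """Serialize the shared state to a compact JSON string for the wire."""
    return json.dumps(state, separators=(",", ":"))

def apply_remote_update(state, payload):
    """Merge a remote collaborator's update into the local renderer state."""
    state.update(json.loads(payload))
    return state

# A remote user scrolls to a different slice and adjusts the windowing.
remote_payload = encode_state({"slice_index": 92, "window_level": {"window": 80, "level": 40}})
print(apply_remote_update(local_state, remote_payload))
```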
Survey of publications and the H-index of Academic Emergency Medicine Professors.
Babineau, Matthew; Fischer, Christopher; Volz, Kathryn; Sanchez, Leon D
2014-05-01
The number of publications and how often these have been cited play a role in academic promotion. Bibliometrics that attempt to quantify the relative impact of scholarly work have been proposed. The h-index is defined as the number (h) of publications for an individual that have been cited at least h times. We calculated the h-index and number of publications for academic emergency physicians at the rank of professor. We accessed the Society for Academic Emergency Medicine professor list in January of 2012. We calculated the number of publications through Web of Science and PubMed and the h-index using Google Scholar and Web of Science. We identified 299 professors of emergency medicine. The number of professors per institution ranged from 1 to 13. The median h-index in Web of Science was 11 (interquartile range [IQR] 6-17, range 0-51); in Google Scholar, the median h-index was 14 (IQR 9-22, range 0-63). The median number of publications reported in Web of Science was 36 (IQR 18-73, range 0-359). The total number of publications had a high correlation with the h-index (r=0.884). The h-index is only a partial measure of academic productivity. As a measure of the impact of an individual's publications it can provide a simple way to compare and measure academic progress and provide a metric that can be used when evaluating a person for academic promotion. Calculation of the h-index can provide a way to track academic progress and impact. [West J Emerg Med. 2014;15(3):290-292.].
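Because the h-index definition quoted above is purely combinatorial, it is easy to make concrete. The sketch below is a minimal, generic implementation with made-up citation counts; it is not the authors' calculation pipeline (they pulled counts from Web of Science and Google Scholar).

```python
def h_index(citations):
    """h = the largest h such that at least h publications have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six papers with these citation counts give h = 3: at least three papers are cited
# three or more times, but there are not four papers cited four or more times.
print(h_index([10, 8, 5, 3, 2, 0]))  # -> 3
```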
Lynch, Noel P; Lang, Bronagh; Angelov, Sophia; McGarrigle, Sarah A; Boyle, Terence J; Al-Azawi, Dhafir; Connolly, Elizabeth M
2017-04-01
This study evaluated the readability, accessibility, and quality of English-language information on the Internet pertaining to breast reconstruction post mastectomy. Using the Google© search engine, the keywords "Breast reconstruction post mastectomy" were searched for, and we analyzed the top 75 sites. The Flesch Reading Ease Score and Gunning Fog Index were calculated to assess readability. Web site quality was assessed objectively using the University of Michigan Consumer Health Web site Evaluation Checklist. Accessibility was determined using an automated accessibility tool. In addition, the country of origin, type of organisation producing the site, and presence of Health on the Net (HoN) certification status were recorded. The Web sites were difficult to read and comprehend. The mean Flesch Reading Ease score was 55.5, and the mean Gunning Fog Index score was 8.6. The mean Michigan score was 34.8, indicating weak website quality. Websites with HoN certification ranked higher in the search results (p = 0.007). Website quality was influenced by organisation type (p < 0.0001), with academic/healthcare, not-for-profit, and government sites having higher Michigan scores. Only 20% of sites met the minimum accessibility criteria. Internet information on breast reconstruction post mastectomy and related procedures is poorly written, and we suggest that Web pages providing this information be made more readable and accessible. We suggest that health professionals recommend Web sites that are easy to read and contain high-quality surgical information. Medical information on the Internet should be readable, accessible, reliable, and of a consistent quality. Copyright © 2017 Elsevier Ltd. All rights reserved.
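Both readability measures used in this abstract (and in the palliative-care study further below) are simple closed-form formulas over word, sentence, and syllable counts. The sketch below implements them with a deliberately naive syllable heuristic, so scores will differ somewhat from commercial tools; it is an illustration, not the instrument used in either study.

```python
import re

def count_syllables(word):
    """Very rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    # Flesch Reading Ease: higher = easier (roughly a 0-100 scale).
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
    # Gunning Fog Index: approximate years of schooling needed.
    fog = 0.4 * ((n_words / sentences) + 100 * (complex_words / n_words))
    return round(fre, 1), round(fog, 1)

sample = ("Breast reconstruction rebuilds the shape of the breast after mastectomy. "
          "Your surgical team will discuss implant and tissue-based options with you.")
print(readability(sample))
```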
An initial log analysis of usage patterns on a research networking system.
Boland, Mary Regina; Trembowelski, Sylvia; Bakken, Suzanne; Weng, Chunhua
2012-08-01
Usage data for research networking systems (RNSs) are valuable but generally unavailable for understanding scientific professionals' information needs and online collaborator seeking behaviors. This study contributes a method for evaluating RNSs and initial usage knowledge of one RNS obtained from using this method. We designed a log for an institutional RNS, defined categories of users and tasks, and analyzed correlations between usage patterns and user and query types. Our results show that scientific professionals spend more time performing deep Web searching on RNSs than generic Google users and we also show that retrieving scientist profiles is faster on an RNS than on Google (3.5 seconds vs. 34.2 seconds) whereas organization-specific browsing on a RNS takes longer than on Google (117.0 seconds vs. 34.2 seconds). Usage patterns vary by user role, e.g., faculty performed more informational queries than administrators, which implies role-specific user support is needed for RNSs. © 2012 Wiley Periodicals, Inc. PMID:22883612
Google matrix of business process management
NASA Astrophysics Data System (ADS)
Abel, M. W.; Shepelyansky, D. L.
2011-12-01
Development of efficient business process models and determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler, and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank. The results show that this two-dimensional ranking gives significant information about the influence and communication properties of business model units. We argue that the Google matrix method described here provides a new, efficient tool that can help companies decide how to evolve in an exceedingly dynamic global market.
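For readers unfamiliar with the construction, the Google matrix of a directed graph is G = alpha*S + (1-alpha)*E/N, where S is the column-stochastic matrix built from the adjacency matrix (dangling nodes replaced by uniform columns), E is the all-ones matrix, and alpha is the damping factor; PageRank is the stationary vector of G, and CheiRank is the same computation on the graph with all links reversed. The snippet below is a small generic illustration on a toy graph, not the business-process models analyzed in the paper.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """adj[i, j] = 1 if node j links to node i. Returns the Google matrix G."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    # Normalize each column; replace dangling (all-zero) columns with 1/n.
    S = np.where(col_sums > 0, adj / np.where(col_sums == 0, 1, col_sums), 1.0 / n)
    return alpha * S + (1 - alpha) / n

def stationary_vector(G, iters=200):
    """Power iteration for the leading eigenvector (eigenvalue 1) of G."""
    v = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(iters):
        v = G @ v
        v /= v.sum()
    return v

# Toy directed graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0, 3 -> 2.
adj = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]:
    adj[dst, src] = 1

pagerank = stationary_vector(google_matrix(adj))    # importance by inbound flow
cheirank = stationary_vector(google_matrix(adj.T))  # importance by outbound flow
print("PageRank:", np.round(pagerank, 3))
print("CheiRank:", np.round(cheirank, 3))
```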
A Web-based Google-Earth Coincident Imaging Tool for Satellite Calibration and Validation
NASA Astrophysics Data System (ADS)
Killough, B. D.; Chander, G.; Gowda, S.
2009-12-01
The Group on Earth Observations (GEO) is coordinating international efforts to build a Global Earth Observation System of Systems (GEOSS) to meet the needs of its nine “Societal Benefit Areas”, of which the most demanding, in terms of accuracy, is climate. To accomplish this vision, satellite on-orbit and ground-based data calibration and validation (Cal/Val) of Earth observation measurements are critical to our scientific understanding of the Earth system. Existing tools supporting space mission Cal/Val are often developed for specific campaigns or events with little intention of broad application. This paper describes a Web-based, Google Earth-based tool for calculating coincident satellite observations, intended to support a diverse international group of satellite missions and to improve data continuity, interoperability, and data fusion. The Committee on Earth Observing Satellites (CEOS), which includes 28 space agencies and 20 other national and international organizations, is currently operating, or planning to launch over the next 15 years, more than 240 Earth observation satellites. The technology described here will better enable the use of multiple sensors to promote increased coordination toward a GEOSS. The CEOS Systems Engineering Office (SEO) and the Working Group on Calibration and Validation (WGCV) support the development of the CEOS Visualization Environment (COVE) tool to enhance international coordination of data exchange, mission planning, and Cal/Val events. The objective is to develop a simple and intuitive application that leverages the capabilities of Google Earth on the web to display satellite sensor coverage areas and identify coincident scene locations, with dynamic menus for flexibility and content display. Key features and capabilities include user-defined evaluation periods (start and end dates), regions of interest (rectangular areas), and multi-user collaboration. Users can select two or more CEOS missions from a database that includes Satellite Tool Kit (STK)-generated orbit information and perform rapid calculations to identify coincident scenes where the groundtracks of the CEOS mission instrument fields-of-view intersect. Calculated results are displayed on a customized Google Earth web interface showing location and time information, with optional output to Excel table format. In addition, multiple viewports can be used for comparisons. COVE was first introduced to the CEOS WGCV community in May 2009. Since that time, the development of a prototype version has progressed. It is anticipated that the capabilities and applications of COVE can support a variety of international Cal/Val activities as well as provide general information on Earth observation coverage for education and societal benefit. This project demonstrates the utility of a systems engineering tool with broad international appeal for enhanced communication and data evaluation opportunities among international CEOS agencies. The COVE tool is publicly accessible via NASA servers.
Kelman, Alex R; Muñoz, Ricardo F
2014-01-01
Background One of the advantages of Internet-based research is the ability to efficiently recruit large, diverse samples of international participants. Currently, there is a dearth of information on the behind-the-scenes process to setting up successful online recruitment tools. Objective The objective of the study was to examine the comparative impact of Spanish- and English-language keywords for a Google AdWords campaign to recruit pregnant women to an Internet intervention and to describe the characteristics of those who enrolled in the trial. Methods Spanish- and English-language Google AdWords campaigns were created to advertise and recruit pregnant women to a Web-based randomized controlled trial for the prevention of postpartum depression, the Mothers and Babies/Mamás y Bebés Internet Project. Search engine users who clicked on the ads in response to keyword queries (eg, pregnancy, depression and pregnancy) were directed to the fully automated study website. Data on the performance of keywords associated with each Google ad reflect Web user queries from February 2009 to June 2012. Demographic information, self-reported depression symptom scores, major depressive episode status, and Internet use data were collected from enrolled participants before randomization in the intervention study. Results The Google ads received high exposure (12,983,196 impressions) and interest (176,295 clicks) from a global sample of Web users; 6745 pregnant women consented to participate and 2575 completed enrollment in the intervention study. Keywords that were descriptive of pregnancy and distress or pregnancy and health resulted in higher consent and enrollment rates (ie, high-performing ads). In both languages, broad keywords (eg, pregnancy) had the highest exposure, more consented participants, and greatest cost per consent (up to US $25.77 per consent). The online ads recruited a predominantly Spanish-speaking sample from Latin America of Mestizo racial identity. The English-speaking sample was also diverse with most participants residing in regions of Asia and Africa. Spanish-speaking participants were significantly more likely to be of Latino ethnic background, not married, completed fewer years of formal education, and were more likely to have accessed the Internet for depression information (P<.001). Conclusions The Internet is an effective method for reaching an international sample of pregnant women interested in online interventions to manage changes in their mood during the perinatal period. To increase efficiency, Internet advertisements need to be monitored and tailored to reflect the target population’s conceptualization of health issues being studied. Trial Registration ClinicalTrials.gov NCT00816725; http://clinicaltrials.gov/show/NCT00816725 (Archived by WebCite at http://www.webcitation.org/6LumonjZP). PMID:24407163
Prabhu, Arpan V; Crihalmeanu, Tudor; Hansberry, David R; Agarwal, Nitin; Glaser, Christine; Clump, David A; Heron, Dwight E; Beriwal, Sushil
The Google search engine is a resource commonly used by patients to access health-related patient education information. The American Medical Association and National Institutes of Health recommend that patient education resources be written at a level between the third and seventh grade reading levels. We assessed the readability levels of online palliative care patient education resources using 10 readability algorithms widely accepted in the medical literature. In October 2016, searches were conducted for 10 individual terms pertaining to palliative care and oncology using the Google search engine; the first 10 articles written for the public for each term were downloaded for a total of 100 articles. The terms included palliative care, hospice, advance directive, cancer pain management, treatment of metastatic disease, treatment of brain metastasis, treatment of bone metastasis, palliative radiation therapy, palliative chemotherapy, and end-of-life care. We determined the average reading level of the articles by readability scale and Web site domain. Nine readability assessments with scores equivalent to academic grade level found that the 100 palliative care education articles were collectively written at a 12.1 reading level (standard deviation, 2.1; range, 7.6-17.3). No articles were written below a seventh grade level. Forty-nine (49%) articles were written above a high school graduate reading level. The Flesch Reading Ease scale classified the articles as "difficult" to read with a score of 45.6 of 100. The articles were collected from 62 Web site domains. Seven domains were accessed 3 or more times; among these, www.mskcc.org had the highest average reading level at a 14.5 grade level (standard deviation, 1.4; range, 13.4-16.1). Most palliative care education articles readily available on Google are written above national health literacy recommendations. There is a need to revise these resources to allow patients and their families to derive the most benefit from these materials. Copyright © 2017. Published by Elsevier Inc. All rights reserved.
Assessing species habitat using Google Street View: a case study of cliff-nesting vultures.
Olea, Pedro P; Mateo-Tomás, Patricia
2013-01-01
The assessment of a species' habitat is a crucial issue in ecology and conservation. While the collection of habitat data has been boosted by the availability of remote sensing technologies, data for certain habitat types must still be collected through costly, on-ground surveys, limiting study over large areas. Cliffs are ecosystems that provide habitat for a rich biodiversity, especially raptors. Because of their principally vertical structure, however, cliffs are not easy to study by remote sensing technologies, posing a challenge for many researchers and managers working with cliff-related biodiversity. We explore the feasibility of Google Street View, a freely available on-line tool, to remotely identify and assess the nesting habitat of two cliff-nesting vultures (the griffon vulture and the globally endangered Egyptian vulture) in northwestern Spain. Two main uses of Google Street View for ecologists and conservation biologists were evaluated: i) remotely identifying a species' potential habitat and ii) extracting fine-scale habitat information. Google Street View imagery covered 49% (1,907 km) of the roads of our study area (7,000 km²). The potential visibility covered by on-ground surveys was significantly greater (mean: 97.4%) than that of Google Street View (48.1%). However, incorporating Google Street View into the vultures' habitat survey would save, on average, 36% in time and 49.5% in funds with respect to the on-ground survey alone. The ability of Google Street View to identify cliffs (overall accuracy = 100%) outperformed the classification maps derived from digital elevation models (DEMs) (62-95%). Nonetheless, high-performance DEM maps may be useful to compensate for Google Street View's coverage limitations. Through Google Street View we could examine 66% of the vultures' nesting cliffs in the study area (n = 148): 64% for griffon vultures and 65% for Egyptian vultures. It also allowed us to extract fine-scale features of the cliffs. This World Wide Web-based methodology may be a useful, complementary tool to remotely map and assess the potential habitat of cliff-dependent biodiversity over large geographic areas, saving survey-related costs. PMID:23355880
NASA Astrophysics Data System (ADS)
Schmaltz, J. E.; Ilavajhala, S.; Plesea, L.; Hall, J. R.; Boller, R. A.; Chang, G.; Sadaqathullah, S.; Kim, R.; Murphy, K. J.; Thompson, C. K.
2012-12-01
Expedited processing of imagery from NASA satellites for near-real time use by non-science applications users has a long history, especially since the beginning of the Terra and Aqua missions. Several years ago, the Land Atmosphere Near-real-time Capability for EOS (LANCE) was created to greatly expand the range of near-real time data products from a variety of Earth Observing System (EOS) instruments. NASA's Earth Observing System Data and Information System (EOSDIS) began exploring methods to distribute these data as imagery in an intuitive, geo-referenced format, which would be available within three hours of acquisition. Toward this end, EOSDIS has developed the Global Imagery Browse Services (GIBS, http://earthdata.nasa.gov/gibs) to provide highly responsive, scalable, and expandable imagery services. The baseline technology chosen for GIBS was a Tiled Web Mapping Service (TWMS) developed at the Jet Propulsion Laboratory. Using this, global images and mosaics are divided into tiles with fixed bounding boxes for a pyramid of fixed resolutions. Initially, the satellite imagery is created at the existing data systems for each sensor, ensuring the oversight of those most knowledgeable about the science. There, the satellite data is geolocated and converted to an image format such as JPEG, TIFF, or PNG. The GIBS ingest server retrieves imagery from the various data systems and converts them into image tiles, which are stored in a highly-optimized raster format named Meta Raster Format (MRF). The image tiles are then served to users via HTTP by means of an Apache module. Services are available for the entire globe (lat-long projection) and for both polar regions (polar stereographic projection). Requests to the services can be made with the non-standard, but widely known, TWMS format or via the well-known OGC Web Map Tile Service (WMTS) standard format. Standard OGC Web Map Service (WMS) access to the GIBS server is also available. In addition, users may request a KML pyramid. This variety of access methods allows stakeholders to develop visualization/browse clients for a diverse variety of specific audiences. Currently, EOSDIS is providing an OpenLayers web client, Worldview (http://earthdata.nasa.gov/worldview), as an interface to GIBS. A variety of other existing clients can also be developed using such tools as Google Earth, Google Earth browser Plugin, ESRI's Adobe Flash/Flex Client Library, NASA World Wind, Perceptive Pixel Client, Esri's iOS Client Library, and OpenLayers for Mobile. The imagery browse capabilities from GIBS can be combined with other EOSDIS services (i.e. ECHO OpenSearch) via a client that ties them both together to provide an interface that enables data download from the onscreen imagery. Future plans for GIBS include providing imagery based on science quality data from the entire data record of these EOS instruments.
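GIBS tiles can be pulled with a plain OGC WMTS GetTile request, which is what browse clients like Worldview do under the hood. The snippet below is a hedged sketch: the endpoint, layer name, tile matrix set, and date shown are assumptions based on publicly documented GIBS conventions, not values taken from this abstract, and should be checked against the current GIBS documentation.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed GIBS WMTS KVP endpoint; verify against current GIBS documentation.
ENDPOINT = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/wmts.cgi"

params = {
    "SERVICE": "WMTS",
    "REQUEST": "GetTile",
    "VERSION": "1.0.0",
    "LAYER": "MODIS_Terra_CorrectedReflectance_TrueColor",  # assumed layer identifier
    "STYLE": "default",
    "TILEMATRIXSET": "250m",   # assumed tile matrix set name
    "TILEMATRIX": "2",         # zoom level within the tile pyramid
    "TILEROW": "1",
    "TILECOL": "2",
    "FORMAT": "image/jpeg",
    "TIME": "2012-07-09",      # near-real-time imagery is addressed by date
}

url = f"{ENDPOINT}?{urlencode(params)}"
print(url)
with urlopen(url) as resp, open("gibs_tile.jpg", "wb") as out:
    out.write(resp.read())     # one JPEG tile from the global mosaic
```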
The Internet and the Google Age: Introduction
ERIC Educational Resources Information Center
James, Jonathan D.
2014-01-01
In this introductory chapter, the author begins by looking at the Internet from an historical and communication perspective in an effort to understand its significance in the contemporary world. Then he gives an overview of the most searched topics on the Internet and identifies prospects that have opened up and perils that lurk in the information…
Opinion: High-Quality Mathematics Resources as Public Goods
ERIC Educational Resources Information Center
Russo, James
2017-01-01
James Russo begins a discussion of the difficulty and time-consuming activity of Googling to find lesson plans and resources to keep his lessons more interesting and engaging, since such resources seem particularly scarce for math teachers. Russo writes that joining professional associations has given him ready access to higher quality resources…
Using Blogging to Enhance the Initiation of Students into Academic Research
ERIC Educational Resources Information Center
Chong, Eddy K. M.
2010-01-01
For the net-generation students learning in a Web 2.0 world, research is often equated with Googling and approached with a mindset accustomed to cut-and-paste practices. Recognizing educators' concern over such students' learning dispositions on the one hand, and the educational affordances of blogging on the other, this study examines the use of…
A Mathematical and Sociological Analysis of Google Search Algorithm
2013-01-16
through the collective intelligence of the web to determine a page's importance. Let v be a vector of R^N with N ≥ 8 billion. Any unit vector in R^N is...scrolled up by some artificial hits. Acknowledgment: The authors would like to thank Dr. John Lavery for his encouragement and support, which enabled them to
The Effect of Creative Drama as a Method on Skills: A Meta-Analysis Study
ERIC Educational Resources Information Center
Ulubey, Özgür
2018-01-01
The aim of the current study was to synthesize the findings of experimental studies addressing the effect of the creative drama method on the skills of students. Research data were derived from ProQuest Citations, Web of Science, Google Academic, National Thesis Center, EBSCO, ERIC, Taylor & Francis Online, and ScienceDirect databases using…
Machine Translation-Assisted Language Learning: Writing for Beginners
ERIC Educational Resources Information Center
Garcia, Ignacio; Pena, Maria Isabel
2011-01-01
The few studies that deal with machine translation (MT) as a language learning tool focus on its use by advanced learners, never by beginners. Yet, freely available MT engines (i.e. Google Translate) and MT-related web initiatives (i.e. Gabble-on.com) position themselves to cater precisely to the needs of learners with a limited command of a…
Opening Up Access to Open Access
ERIC Educational Resources Information Center
Singer, Ross
2008-01-01
As the corpus of gray literature grows and the price of serials rises, it becomes increasingly important to explore ways to integrate the free and open Web seamlessly into one's collections. Users, after all, are discovering these materials all the time via sites such as Google Scholar and Scirus or by searching arXiv.org or CiteSeer directly.…
2014-11-01
unclassified tools and techniques that can be shared with PNs, to include social engineering, spear phishing, fake web sites, physical access attempts, and...and instead rely on commercial services such as Yahoo or Google. Some nations have quite advanced cyber security practices, but may take vastly...unauthorized access to data/systems; inject external network scanning, email phishing, malicious website access, social engineering. Sample
Media Use in Higher Education from a Cross-National Perspective
ERIC Educational Resources Information Center
Grosch, Michael
2013-01-01
The web 2.0 has already penetrated the learning environment of students ubiquitously. This dissemination of online services into tertiary education has led to constant changes in students' learning and study behaviour. Students use services such as Google and Wikipedia most often not only during free time but also for learning. At the same…
Overview of the TREC 2014 Federated Web Search Track
2014-11-01
[Table excerpt: federated search resources such as e021 Dailymotion (Video), e022 YouTube (Video), e023 Google Blogs (Blogs), e123 Picsearch (Photo/Pictures), e124 Wikimedia (Photo/Pictures); sample topics such as 7045 "Natural Parks America", 7072 "price gibson howard roberts custom", 7092 "How much was a gallon of gas during depression".]
ERIC Educational Resources Information Center
Wager, J. James
2012-01-01
Thousands--if not tens of thousands--of books, monographs, and articles have been written on the subject of leadership. A Google search of the word returns nearly a half-billion Web sites. As a professional who has spent nearly 40 years in the higher education sector, the author has been blessed with opportunities to view and practice leadership…
Leveraging Learning Technologies for Collaborative Writing in an Online Pharmacotherapy Course
ERIC Educational Resources Information Center
Pittenger, Amy L.; Olson-Kellogg, Becky
2012-01-01
The purpose of this project was to evaluate the development and delivery of a hypertext case scenario document to be used as the capstone assessment tool for doctoral-level physical therapy students. The integration of Web-based collaborative tools (PBworks[TM] and Google Sites[TM]) allowed students in this all-online course to apply their…
ERIC Educational Resources Information Center
Rambe, Patient
2017-01-01
The rhetoric on the potential of Web 2.0 technologies to democratize online engagement of students often overlooks the discomforting, differential participation and asymmetrical engagement that accompanies student adoption of emerging technologies. This paper, therefore, constitutes a critical reality check for student adoption of technology to…
A Content Analysis of Online HPV Immunization Information
ERIC Educational Resources Information Center
Pappa, Sara T.
2016-01-01
The Human Papillomavirus (HPV) can cause some types of cancer and is the most common sexually transmitted infection in the US. Because most people turn to the internet for health information, this study analyzed HPV information found online. A content analysis was conducted on 69 web search results (URLs) from Google, Yahoo, Bing and Ask. The…
Exploring Writing Individually and Collaboratively Using Google Docs in EFL Contexts
ERIC Educational Resources Information Center
Alsubaie, Jawaher; Ashuraidah, Ali
2017-01-01
Online teaching and learning have become popular with the evolution of the World Wide Web. Implementing online learning tools within EFL contexts will help better address the multitude of teaching and learning styles. Difficulty in academic writing can be considered one of the common problems that students face in and outside their classrooms.…
Web-Based Interactive Steel Sculpture for the Google Generation
ERIC Educational Resources Information Center
Chou, Karen C.; Moaveni, Saeed
2009-01-01
In almost all the civil engineering programs in the United States, a student is required to take at least one design course in either steel or reinforced concrete. One of the topics covered in an introductory steel design course is the design of connections. Steel connections play important roles in the integrity of a structure, and many…
ERIC Educational Resources Information Center
Van Sickle, Angela
2016-01-01
Clinical Question: Would individuals with acquired apraxia of speech (AOS) demonstrate greater improvements for speech production with an articulatory kinematic approach or a rate/rhythm approach? Method: EBP Intervention Comparison Review. Study Sources: ASHA journal, Google Scholar, PubMed, CINAHL Plus with Full Text, Web of Science, Ovid, and…
Young, Bradley L; Cantrell, Colin K; Patt, Joshua C; Ponce, Brent A
2018-06-01
Accessible, adequate online information is important to fellowship applicants. Program web sites can affect which programs applicants apply to, subsequently altering interview costs incurred by both parties and ultimately impacting rank lists. Web site analyses have been performed for all orthopaedic subspecialties other than those involved in the combined adult reconstruction and musculoskeletal (MSK) oncology fellowship match. A complete list of active programs was obtained from the official adult reconstruction and MSK oncology society web sites. Web site accessibility was assessed using a structured Google search. Accessible web sites were evaluated based on 21 previously reported content criteria. Seventy-four adult reconstruction programs and 11 MSK oncology programs were listed on the official society web sites. Web sites were identified and accessible for 58 (78%) adult reconstruction and 9 (82%) MSK oncology fellowship programs. No web site contained all content criteria and more than half of both adult reconstruction and MSK oncology web sites failed to include 12 of the 21 criteria. Several programs participating in the combined Adult Reconstructive Hip and Knee/Musculoskeletal Oncology Fellowship Match did not have accessible web sites. Of the web sites that were accessible, none contained comprehensive information and the majority lacked information that has been previously identified as being important to prospective applicants.
Virtual reality for spherical images
NASA Astrophysics Data System (ADS)
Pilarczyk, Rafal; Skarbek, Władysław
2017-08-01
This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a virtual reality 360° video player using standard OpenGL ES rendering methods. It provides networking methods to connect to a web server acting as the application's resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods for an event-driven process that renders additional content based on the video timestamp and the virtual reality head point of view.
Effective Web and Desktop Retrieval with Enhanced Semantic Spaces
NASA Astrophysics Data System (ADS)
Daoud, Amjad M.
We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].
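A minimal sketch of the dimension-reduction step the abstract mentions, assuming a dense term matrix and a Gaussian random projection; the dimensions and data are illustrative and not taken from the NETBOOK system.

```python
# Random projection: multiply high-dimensional term vectors by a random
# Gaussian matrix to obtain low-dimensional semantic vectors.
import numpy as np

rng = np.random.default_rng(42)
n_docs, vocab, k = 500, 10000, 300                 # documents x terms -> k dims

X = rng.random((n_docs, vocab))                    # stand-in for a term matrix
R = rng.normal(0.0, 1.0 / np.sqrt(k), (vocab, k))  # random projection matrix
X_reduced = X @ R                                  # (n_docs, k) semantic vectors
print(X_reduced.shape)
```

The Johnson-Lindenstrauss result is what makes this work in practice: pairwise distances in the reduced space approximate those in the original space with high probability.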
A Web Search on Environmental Topics: What Is the Role of Ranking?
Filisetti, Barbara; Mascaretti, Silvia; Limina, Rosa Maria; Gelatti, Umberto
2013-01-01
Abstract Background: Although the Internet is easy to use, the mechanisms and logic behind a Web search are often unknown. Reliable information can be obtained, but it may not be visible as the Web site is not located in the first positions of search results. The possible risks of adverse health effects arising from environmental hazards are issues of increasing public interest, and therefore the information about these risks, particularly on topics for which there is no scientific evidence, is very crucial. The aim of this study was to investigate whether the presentation of information on some environmental health topics differed among various search engines, assuming that the most reliable information should come from institutional Web sites. Materials and Methods: Five search engines were used: Google, Yahoo!, Bing, Ask, and AOL. The following topics were searched in combination with the word “health”: “nuclear energy,” “electromagnetic waves,” “air pollution,” “waste,” and “radon.” For each topic three key words were used. The first 30 search results for each query were considered. The ranking variability among the search engines and the type of search results were analyzed for each topic and for each key word. The ranking of institutional Web sites was given particular consideration. Results: Variable results were obtained when surfing the Internet on different environmental health topics. Multivariate logistic regression analysis showed that, when searching for radon and air pollution topics, it is more likely to find institutional Web sites in the first 10 positions compared with nuclear power (odds ratio=3.4, 95% confidence interval 2.1–5.4 and odds ratio=2.9, 95% confidence interval 1.8–4.7, respectively) and also when using Google compared with Bing (odds ratio=3.1, 95% confidence interval 1.9–5.1). Conclusions: The increasing use of online information could play an important role in forming opinions. Web users should become more aware of the importance of finding reliable information, and health institutions should be able to make that information more visible. PMID:24083368
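For readers unfamiliar with the statistic reported above, here is a small sketch of computing an odds ratio with a 95% confidence interval from a 2x2 table; the counts are invented and do not reproduce the study's data.

```python
# Illustrative odds ratio with Wald confidence interval from a 2x2 table,
# e.g., odds of an institutional site appearing in the top 10 results for
# one topic versus another. All counts below are hypothetical.
import numpy as np
from scipy.stats import norm

a, b = 40, 20    # topic A: institutional site in top 10 (yes, no)
c, d = 15, 25    # topic B: institutional site in top 10 (yes, no)

odds_ratio = (a * d) / (b * c)
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower, upper = np.exp(np.log(odds_ratio) + np.array([-1, 1]) * norm.ppf(0.975) * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```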
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-21
... any of the following methods: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search.../reading-rm/adams.html. To begin the search, select "ADAMS Public Documents" and then select "Begin Web-based ADAMS Search." For problems with ADAMS, please contact the NRC's Public Document Room (PDR...
ERIC Educational Resources Information Center
Chen, Xianglei; Ho, Phoebe
2012-01-01
Science, technology, engineering, and mathematics (STEM) fields are widely regarded as critical to the national economy. To provide a nationally representative portrait of undergraduate students' experiences in STEM education, these Web Tables summarize longitudinal data from a cohort of first-time, beginning students who started postsecondary…
Accessibility and usability OCW data: The UTPL OCW.
Rodríguez, Germania; Perez, Jennifer; Cueva, Samanta; Torres, Rommel
2017-08-01
This data article provides a description of the data for the article entitled "A framework for improving web accessibility and usability of Open Course Ware sites" [3]. This Data in Brief presents the data obtained from the accessibility and usability evaluation of the UTPL OCW. The data obtained from the framework evaluation consist of the manual evaluation of the standards criteria and the automatic evaluation using the tools Google PageSpeed and Google Analytics. In addition, this article presents the synthesized tables from the standards that the framework uses to evaluate the accessibility and usability of OCW, and the questionnaires required to extract the data. As a result, the article also provides the data required to reproduce the evaluation of other OCW sites.
Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel
2017-11-20
Extensive discussion and research have been carried out in recent years using data collected from search queries submitted via the Internet. It has been shown that overall activity on the Internet is related to the number of cases of an infectious disease outbreak. The aim of the study was to define a similar correlation between data from Google Trends and data collected by the official authorities of Greece and Europe by examining the development and the spread of seasonal influenza in Greece and Italy. We used multiple regressions of the influenza-related terms submitted to the Google search engine for the period from 2011 to 2012 in Greece and Italy (sample data for 104 weeks for each country). We then used the autoregressive integrated moving average statistical model to determine the correlation between the Google search data and the real influenza cases confirmed by the aforementioned authorities. Two methods were used: (1) a flu score was created for the case of Greece, and (2) the data were compared with those from a neighboring country, Italy. The results showed that there is a significant correlation that can help predict the spread and the peak of seasonal influenza using data from Google searches. The correlation for Greece for 2011 and 2012 was .909 and .831, respectively, and the correlation for Italy for 2011 and 2012 was .979 and .933, respectively. The prediction of the peak was quite precise, providing a forecast before the peak reaches the population. We can create an Internet surveillance system based on Google searches to track influenza in Greece and Italy. ©Loukas Samaras, Elena García-Barriocanal, Miguel-Angel Sicilia. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 20.11.2017.
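A minimal sketch of the kind of analysis described (not the authors' code): correlating a weekly Google Trends series with officially confirmed influenza cases, then fitting an ARIMA model with the search series as an exogenous regressor. The file name, column names, and ARIMA order are hypothetical.

```python
# Correlate weekly search interest with confirmed cases, then fit ARIMA.
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.tsa.arima.model import ARIMA

df = pd.read_csv("flu_greece_2011_2012.csv")   # hypothetical weekly data file
trends = df["google_trends_flu"]               # relative search volume
cases = df["confirmed_cases"]                  # official surveillance counts

r, p = pearsonr(trends, cases)
print(f"Pearson correlation: r={r:.3f}, p={p:.3g}")

# ARIMA with the search series as an exogenous regressor (order is illustrative)
model = ARIMA(cases, exog=trends, order=(1, 0, 1)).fit()
print(model.summary())
```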
Phelan, Nigel; Davy, Shane; O'Keeffe, Gerard W; Barry, Denis S
2017-03-01
The role of e-learning platforms in anatomy education continues to expand as self-directed learning is promoted in higher education. Although a wide range of e-learning resources are available, determining student use of non-academic internet resources requires novel approaches. One such approach that may be useful is the Google Trends © web application. To determine the feasibility of Google Trends to gain insights into anatomy-related online searches, Google Trends data from the United States from January 2010 to December 2015 were analyzed. Data collected were based on the recurrence of keywords related to head and neck anatomy generated from the American Association of Clinical Anatomists and the Anatomical Society suggested anatomy syllabi. Relative search volume (RSV) data were analyzed for seasonal periodicity and their overall temporal trends. Following exclusions due to insufficient search volume data, 29 out of 36 search terms were analyzed. Significant seasonal patterns occurred in 23 search terms. Thirty-nine seasonal peaks were identified, mainly in October and April, coinciding with teaching periods in anatomy curricula. A positive correlation of RSV with time over the 6-year study period occurred in 25 out of 29 search terms. These data demonstrate how Google Trends may offer insights into the nature and timing of online search patterns of anatomical syllabi and may potentially inform the development and timing of targeted online supports to ensure that students of anatomy have the opportunity to engage with online content that is both accurate and fit for purpose. Anat Sci Educ 10: 152-159. © 2016 American Association of Anatomists. © 2016 American Association of Anatomists.
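A hedged sketch of the kind of seasonality check described, assuming a CSV of monthly relative search volume (RSV) with hypothetical column names: average RSV by calendar month to locate peak months, plus a simple linear trend over the study period.

```python
# Locate seasonal peak months and test for an overall rising trend in RSV.
import pandas as pd
from scipy.stats import linregress

df = pd.read_csv("anatomy_term_rsv.csv")       # hypothetical file: year, month, rsv
monthly_mean = df.groupby("month")["rsv"].mean()
print("Peak months:", monthly_mean.nlargest(2).index.tolist())  # e.g. 10 and 4

# Overall temporal trend across the study period (positive slope = rising interest)
df = df.sort_values(["year", "month"]).reset_index(drop=True)
trend = linregress(df.index, df["rsv"])
print(f"slope={trend.slope:.3f}, p={trend.pvalue:.3g}")
```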
NASA Astrophysics Data System (ADS)
Carraro, Francesco
"Mars @ ASDC" is a project born with the goal of using the new web technologies to assist researches involved in the study of Mars. This project employs Mars map and javascript APIs provided by Google to visualize data acquired by space missions on the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for the creation of this kind of tool is the difficulty in handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty in finding observations related to a particular region. This led to the development of a tool which allows to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map through a few mouse clicks The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) which intersect the user defined region. MARSIS tracks can be visualized both in Mercator and polar projections while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google. The polar projections are provided by NASA and have been developed to be used in combination with APIs provided by Google The whole project has been developed following the "open source" philosophy: the client-side code which handles the functioning of the web page is written in javascript; the server-side code which executes the searches for tracks or regions is written in PHP and the DB which undergoes the system is MySQL.
[An evaluation of the quality of health web pages using a validated questionnaire].
Conesa Fuentes, Maria del Carmen; Aguinaga Ontoso, Enrique; Hernández Morante, Juan José
2011-01-01
The objective of the present study was to evaluate the quality of general health information in Spanish-language web pages, and of the official web pages of the Health Services of the different Autonomous Regions. It is a cross-sectional study. We used a previously validated questionnaire to study the present state of health information on the Internet from a lay-user point of view. By means of PageRank (Google®), we obtained a group of web pages, including a total of 65 health web pages. We applied some exclusion criteria and finally obtained a total of 36 pages. We also analyzed the official web pages of the different Health Services in Spain (19 pages), making a total of 54 health web pages. In the light of our data, we observed that the quality of the general health information web pages was generally rather low, especially regarding information quality. Not one page reached the maximum score (19 points). The mean score of the web pages was 9.8±2.8. In conclusion, to avoid the problems arising from this lack of quality, health professionals should design advertising campaigns and other media to teach lay users how to evaluate information quality. Copyright © 2009 Elsevier España, S.L. All rights reserved.
Lambert, Bruno; Flahault, Antoine; Chartier-Kastler, Emmanuel; Hanslik, Thomas
2013-01-01
Background Despite the fact that urinary tract infection (UTI) is a very frequent disease, little is known about its seasonality in the community. Methods and Findings To estimate seasonality of UTI using multiple time series constructed with available proxies of UTI. Eight time series based on two databases were used: sales of urinary antibacterial medications reported by a panel of pharmacy stores in France between 2000 and 2012, and search trends on the Google search engine for UTI-related terms between 2004 and 2012 in France, Germany, Italy, the USA, China, Australia and Brazil. Differences between summers and winters were statistically assessed with the Mann-Whitney test. We evaluated seasonality by applying the Harmonics Product Spectrum on Fast Fourier Transform. Seven time series out of eight displayed a significant increase in medication sales or web searches in the summer compared to the winter, ranging from 8% to 20%. The eight time series displayed a periodicity of one year. Annual increases were seen in the summer for UTI drug sales in France and Google searches in France, the USA, Germany, Italy, and China. Increases occurred in the austral summer for Google searches in Brazil and Australia. Conclusions An annual seasonality of UTIs was evidenced in seven different countries, with peaks during the summer. PMID:24204587
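A simplified sketch of the two analyses mentioned: a Mann-Whitney test comparing summer and winter values and an FFT-based check for an annual period. The data here are synthetic, and a plain FFT peak stands in for the Harmonics Product Spectrum used in the paper.

```python
# Compare summer vs. winter values and look for an annual period in a weekly series.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
weeks = np.arange(52 * 8)                                   # 8 years of weekly data
series = 100 + 10 * np.sin(2 * np.pi * (weeks - 13) / 52) + rng.normal(0, 3, weeks.size)

summer = series[(weeks % 52 >= 26) & (weeks % 52 < 39)]     # roughly Jul-Sep
winter = series[weeks % 52 < 13]                            # roughly Jan-Mar
print(mannwhitneyu(summer, winter))

# Dominant period from the FFT (ignoring the zero-frequency term)
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(series.size, d=1.0)                 # cycles per week
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"dominant period ≈ {1 / peak:.1f} weeks")
```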
Google Scholar is not enough to be used alone for systematic reviews.
Giustini, Dean; Boulos, Maged N Kamel
2013-01-01
Google Scholar (GS) has been noted for its ability to search broadly for important references in the literature. Gehanno et al. recently examined GS in their study: 'Is Google scholar enough to be used alone for systematic reviews?' In this paper, we revisit this important question, and some of Gehanno et al.'s other findings in evaluating the academic search engine. The authors searched for a recent systematic review (SR) of comparable size to run search tests similar to those in Gehanno et al. We selected Chou et al. (2013) contacting the authors for a list of publications they found in their SR on social media in health. We queried GS for each of those 506 titles (in quotes "), one by one. When GS failed to retrieve a paper, or produced too many results, we used the allintitle: command to find papers with the same title. Google Scholar produced records for ~95% of the papers cited by Chou et al. (n=476/506). A few of the 30 papers that were not in GS were later retrieved via PubMed and even regular Google Search. But due to its different structure, we could not run searches in GS that were originally performed by Chou et al. in PubMed, Web of Science, Scopus and PsycINFO®. Identifying 506 papers in GS was an inefficient process, especially for papers using similar search terms. Has Google Scholar improved enough to be used alone in searching for systematic reviews? No. GS' constantly-changing content, algorithms and database structure make it a poor choice for systematic reviews. Looking for papers when you know their titles is a far different issue from discovering them initially. Further research is needed to determine when and how (and for what purposes) GS can be used alone. Google should provide details about GS' database coverage and improve its interface (e.g., with semantic search filters, stored searching, etc.). Perhaps then it will be an appropriate choice for systematic reviews.
EntrezAJAX: direct web browser access to the Entrez Programming Utilities.
Loman, Nicholas J; Pallen, Mark J
2010-06-21
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides identical functionality as Entrez eUtils as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
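EntrezAJAX proxies the NCBI Entrez eUtils so that browser JavaScript can reach them despite same-origin restrictions. For orientation, here is the kind of underlying eUtils call being wrapped, made from Python where no such restriction applies; the query term is arbitrary.

```python
# Direct Entrez eUtils "esearch" call returning PubMed IDs as JSON.
import requests

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": "Escherichia coli genome", "retmode": "json"},
    timeout=30,
)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print(ids[:5])
```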
Novel inter and intra prediction tools under consideration for the emerging AV1 video codec
NASA Astrophysics Data System (ADS)
Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil
2017-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-generation codec, AV1, within a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.
NASA Astrophysics Data System (ADS)
Yen, Y.-N.; Wu, Y.-W.; Weng, K.-H.
2013-07-01
E-learning-assisted teaching and learning is the trend of the 21st century and has many advantages - freedom from the constraints of time and space, hypertext and multimedia-rich resources - enhancing the interaction between students and the teaching materials. The purpose of this study was to explore how rich Internet resources could assist students in a Western Architectural History course. First, we explored the Internet resources which could assist teaching and learning activities. Second, according to the course objectives, we built a web-based platform which integrated Google Spreadsheets forms, the SIMILE widget, Wikipedia and Google Maps, and applied it to the Western Architectural History course. Finally, action research was applied to understand the effectiveness of this teaching/learning mode. Participants were students of the Department of Architecture at a private university of technology in northern Taiwan. Results showed that students were willing to use the web-based platform to assist their learning. They found the platform useful for understanding the relationships between buildings of different periods. Through the map view mode, the platform also helped students expand their international perspective. However, we found that the information shared by students via the Internet was not completely correct. One possible reason was that students could easily acquire information on the Internet but could not determine its correctness. To conclude, this study found some useful and rich resources that could be well integrated, from which we built a web-based platform to collect information and present it in diverse modes to stimulate students' learning motivation. We recommend that future studies consider hiring teaching assistants in order to ease the burden on teachers and assist in maintaining information quality.
Data Visualization of Lunar Orbiter KAGUYA (SELENE) using web-based GIS
NASA Astrophysics Data System (ADS)
Okumura, H.; Sobue, S.; Yamamoto, A.; Fujita, T.
2008-12-01
The Japanese Lunar Orbiter KAGUYA (SELENE) was launched on Sep.14 2007, and started nominal observation from Dec. 21 2007. KAGUYA has 15 ongoing observation missions and is obtaining various physical quantity data of the moon such as elemental abundance, mineralogical composition, geological feature, magnetic field and gravity field. We are working on the visualization of these data and the application of them to web-based GIS. Our purpose of data visualization is the promotion of science and education and public outreach (EPO). As for scientific usage and public outreach, we already constructed KAGUYA Web Map Server (WMS) at JAXA Sagamihara Campus and began to test it among internal KAGUYA project. KAGUYA science team plans the integrated science using the data of multiple instruments with the aim of obtaining the new findings of the origin and the evolution of the moon. In the study of the integrated science, scientists have to access, compare and analyze various types of data with different resolution. Web-based GIS will allow users to map, overlay and share the data and information easily. So it will be the best way to progress such a study and we are developing the KAGUYA WMS as a platform of the KAGUYA integrated science. For the purpose of EPO, we are customizing NASA World Wind (NWW) JAVA for KAGUYA supported by NWW project. Users will be able to search and view many images and movies of KAGUYA on NWW JAVA in the easy and attractive way. In addition, we are considering applying KAGUYA images to Google Moon with KML format and adding KAGUYA movies to Google/YouTube.
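As a sketch of the kind of request a client of the KAGUYA WMS would issue, here is a standard OGC WMS GetMap call; the endpoint URL and layer name are placeholders, not the actual JAXA service.

```python
# Hedged example: build and send an OGC WMS 1.1.1 GetMap request.
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "kaguya_topography",          # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "-180,-90,180,90",              # whole-Moon extent in lon/lat
    "WIDTH": 1024, "HEIGHT": 512,
    "FORMAT": "image/png",
}
resp = requests.get("https://example.org/kaguya/wms", params=params, timeout=60)
resp.raise_for_status()
with open("moon_map.png", "wb") as f:
    f.write(resp.content)
```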
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-11
... responsible for making sure that your comment does not include any sensitive health information, like medical records or other individually identifiable health information. In addition, do not include any "[t]rade... overnight service. Visit the Commission Web site at http://www.ftc.gov to read this Notice and the news...
ERIC Educational Resources Information Center
Geluso, Joe
2013-01-01
Usage-based theories of language learning suggest that native speakers of a language are acutely aware of formulaic language due in large part to frequency effects. Corpora and data-driven learning can offer useful insights into frequent patterns of naturally occurring language to second/foreign language learners who, unlike native speakers, are…
DeVilbiss, Elizabeth A; Lee, Brian K
2014-12-01
We sought to evaluate the potential for using historical web search data on autism spectrum disorders (ASD)-related topics as an indicator of ASD awareness. Analysis of Google Trend data suggested that National Autism Awareness Month and televised reports concerning autism are an effective method of promoting online search interest in autism.
The Google Online Marketing Challenge: Real Clients, Real Money, Real Ads and Authentic Learning
ERIC Educational Resources Information Center
Miko, John S.
2014-01-01
Search marketing is the process of utilizing search engines to drive traffic to a Web site through both paid and unpaid efforts. One potential paid component of a search marketing strategy is the use of a pay-per-click (PPC) advertising campaign in which advertisers pay search engine hosts only when their advertisement is clicked. This paper…
Brief Report: Consistency of Search Engine Rankings for Autism Websites
ERIC Educational Resources Information Center
Reichow, Brian; Naples, Adam; Steinhoff, Timothy; Halpern, Jason; Volkmar, Fred R.
2012-01-01
The World Wide Web is one of the most common methods used by parents to find information on autism spectrum disorders and most consumers find information through search engines such as Google or Bing. However, little is known about how the search engines operate or the consistency of the results that are returned over time. This study presents the…
ERIC Educational Resources Information Center
Simpson, Andrea; Baldwin, Elizabeth Margaret
2017-01-01
This study sought to analyze and evaluate the accessibility, availability and quality of online information regarding the National Disability Insurance Scheme (NDIS) and hearing loss. The most common search engine keyword terms a caregiver may enter when conducting a web search was determined using a keyword search tool. The top websites linked…
Education Students' Use of Collaborative Writing Tools in Collectively Reflective Essay Papers
ERIC Educational Resources Information Center
Brodahl, Cornelia; Hansen, Nils Kristian
2014-01-01
Google Docs and EtherPad are Web 2.0 tools providing opportunity for multiple users to work online on the same document consecutively or simultaneously. Over the last few years a number of research papers on the use of these collaborative tools in a teaching and learning environment have been published. This work builds on that of Brodahl,…
ERIC Educational Resources Information Center
Ratniyom, Jadsada; Boonphadung, Suttipong; Unnanantn, Thassanant
2016-01-01
This study examined the effects of the introductory organic chemistry online homework on first year pre-service science teachers' learning achievements. The online homework was created using a web-based Google form in order to enhance the pre-service science teachers' learning achievements. The steps for constructing online homework were…
ERIC Educational Resources Information Center
Dority Baker, Marcia L.
2013-01-01
This article provides a case study of how the University of Nebraska College of Law and Schmid Law Library use "buttons" to manage Law College faculty members' and librarians' online presence. Since Google is the primary search engine used to find information, it is important that librarians and libraries assist Web site visitors in…
E-Learning for Elementary Students: The Web 2.0 Tool Google Drive as Teaching and Learning Practice
ERIC Educational Resources Information Center
Apergi, Angeliki; Anagnostopoulou, Angeliki; Athanasiou, Alexandra
2015-01-01
It is a well-known fact that during recent years, the new economic and technological environment, which has emerged from the dynamic impacts of globalization, has given rise to the increased development of information and communication technologies that have immensely influenced education and training all over Europe. Within this framework, there…
Bernard, André; Langille, Morgan; Hughes, Stephanie; Rose, Caren; Leddin, Desmond; Veldhuyzen van Zanten, Sander
2007-09-01
The Internet is a widely used information resource for patients with inflammatory bowel disease, but there is variation in the quality of Web sites that have patient information regarding Crohn's disease and ulcerative colitis. The purpose of the current study is to systematically evaluate the quality of these Web sites. The top 50 Web sites appearing in Google using the terms "Crohn's disease" or "ulcerative colitis" were included in the study. Web sites were evaluated using a (a) Quality Evaluation Instrument (QEI) that awarded Web sites points (0-107) for specific information on various aspects of inflammatory bowel disease, (b) a five-point Global Quality Score (GQS), (c) two reading grade level scores, and (d) a six-point integrity score. Thirty-four Web sites met the inclusion criteria, 16 Web sites were excluded because they were portals or non-IBD oriented. The median QEI score was 57 with five Web sites scoring higher than 75 points. The median Global Quality Score was 2.0 with five Web sites achieving scores of 4 or 5. The average reading grade level score was 11.2. The median integrity score was 3.0. There is marked variation in the quality of the Web sites containing information on Crohn's disease and ulcerative colitis. Many Web sites suffered from poor quality but there were five high-scoring Web sites.
Using the internet to understand angler behavior in the information age
Martin, Dustin R.; Pracheil, Brenda M.; DeBoer, Jason A.; Wilde, Gene R.; Pope, Kevin L.
2012-01-01
Declining participation in recreational angling is of great concern to fishery managers because fishing license sales are an important revenue source for protection of aquatic resources. This decline is frequently attributed, in part, to increased societal reliance on electronics. Internet use by anglers is increasing and fishery managers may use the Internet as a unique means to increase angler participation. We examined Internet search behavior using Google Insights for Search, a free online tool that summarizes Google searches from 2004 to 2011 to determine (1) trends in Internet search volume for general fishing related terms and (2) the relative usefulness of terms related to angler recruitment programs across the United States. Though search volume declined for general fishing terms (e.g., fishing, fishing guide), search volume increased for social media and recruitment terms (e.g., fishing forum, family fishing) over the 7-year period. We encourage coordinators of recruitment programs to capitalize on anglers’ Internet usage by considering Internet search patterns when creating web-based information. Careful selection of terms used in web-based information to match those currently searched by potential anglers may help to direct traffic to state agency websites that support recruitment efforts.
Muellner, Ulrich J; Vial, Flavie; Wohlfender, Franziska; Hadorn, Daniela; Reist, Martin; Muellner, Petra
2015-01-01
The reporting of outputs from health surveillance systems should be done in a near real-time and interactive manner in order to provide decision makers with powerful means to identify, assess, and manage health hazards as early and efficiently as possible. While this is currently rarely the case in veterinary public health surveillance, reporting tools do exist for the visual exploration and interactive interrogation of health data. In this work, we used tools freely available from the Google Maps and Charts library to develop a web application reporting health-related data derived from slaughterhouse surveillance and from a newly established web-based equine surveillance system in Switzerland. Both sets of tools allowed entry-level usage with no or minimal programming skills while being flexible enough to cater for more complex scenarios for users with greater programming skills. In particular, interfaces linking statistical software packages and Google tools provide additional analytical functionality (such as algorithms for the detection of unusually high case occurrences) for inclusion in the reporting process. We show that such powerful approaches could improve timely dissemination and communication of technical information to decision makers and other stakeholders and could foster the early-warning capacity of animal health surveillance systems.
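A minimal sketch of the sort of aberration-detection logic alluded to above: flag a week whose count exceeds the historical mean plus two standard deviations. The counts are invented, and operational systems typically use more refined algorithms (e.g., EARS or Farrington-type methods).

```python
# Simple exceedance check for the most recent week of case counts.
import numpy as np

counts = np.array([3, 2, 4, 3, 5, 2, 3, 4, 3, 12])    # weekly case counts (made up)
baseline = counts[:-1]                                  # history before this week
threshold = baseline.mean() + 2 * baseline.std(ddof=1)
status = "alarm" if counts[-1] > threshold else "no alarm"
print(f"{status} (threshold={threshold:.1f}, observed={counts[-1]})")
```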
Web application and database modeling of traffic impact analysis using Google Maps
NASA Astrophysics Data System (ADS)
Yulianto, Budi; Setiono
2017-06-01
Traffic impact analysis (TIA) is a traffic study that aims at identifying the impact of traffic generated by development or a change in land use. In addition to identifying the traffic impact, a TIA also includes mitigation measures to minimize the resulting impact. TIA has become increasingly important since it was defined in law as one of the requirements in a Building Permit proposal. The law has encouraged a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in the implementation of the TIA standard document and multimodal modeling. This includes standardizing TIA technical guidelines, the database, and inspection by providing TIA checklists, monitoring and evaluation. The research was undertaken by collecting historical data on junctions, modeling the data in the form of a relational database, and building a user interface to create, read, update and delete (CRUD) the TIA data as a web application using Google Maps libraries. The result of the research is a system that provides information supporting the improvement and repair of today's TIA documents, making them more transparent, reliable and credible.
Zarifmahmoudi, Leili; Kianifar, Hamid Reza; Sadeghi, Ramin
2013-10-01
Citation tracking is an important method for analyzing the scientific impact of journal articles and can be done through Scopus (SC), Google Scholar (GS), or ISI Web of Knowledge (WOS). In the current study, we analyzed the citations to 2011-2012 articles of the Iranian Journal of Basic Medical Sciences (IJBMS) in these three resources. The relevant data were retrieved from the SC, GS, and WOS official websites. The total number of citations, their overlap, and the citations unique to each of the three resources were evaluated. WOS and SC covered 100% and GS covered 97% of the IJBMS items. In total, 37 articles were cited at least once in one of the studied resources. The total numbers of citations were 20, 30, and 59 in WOS, SC, and GS, respectively. Forty citations in GS, 6 citations in SC, and 2 citations in WOS were unique. Every scientific resource has its own inaccuracies in providing citation analysis information, so citation analysis studies are better done each year to correct any inaccuracy as soon as possible. IJBMS has gained considerable scientific attention from a wide range of high-impact journals, and through citation tracking this visibility can be traced more thoroughly.
Using GeoRSS feeds to distribute house renting and selling information based on Google map
NASA Astrophysics Data System (ADS)
Nong, Yu; Wang, Kun; Miao, Lei; Chen, Fei
2007-06-01
Geographically Encoded Objects RSS (GeoRSS) is a way to encode location in RSS feeds. RSS is a widely supported format for the syndication of news and weblogs, and is extensible to publish any sort of itemized data. As weblogs have exploded and RSS has become a new kind of portal, geo-tagged feeds are needed to show the location a story refers to. GeoRSS adopts the core of the RSS framework, expressing map annotations in the RSS XML format. The case studied shows that GeoRSS can be maximally concise in representation and conception, so it is simple to generate GeoRSS feeds and then mash them up with Google Maps through the API to show real estate information, with other attributes, in the information window. After subscribing to feeds on subjects of interest, users can easily check for new bulletins shown on the map through syndication. The primary design goal of GeoRSS is to make spatial data creation as easy as regular Web content development. Thanks to its simplicity and effectiveness, however, it does more, successfully bridging the gap between traditional GIS professionals and amateurs, Web map hackers, and the numerous services that enable location-based content.
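To show concretely how GeoRSS attaches a location to an ordinary RSS item, here is a small Python sketch that adds a georss:point element to an item; the listing data and URL are invented for illustration.

```python
# Build a tiny RSS 2.0 feed where each item carries a GeoRSS point (lat lon).
import xml.etree.ElementTree as ET

GEORSS = "http://www.georss.org/georss"
ET.register_namespace("georss", GEORSS)

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "House listings"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "2-bedroom apartment for rent"
ET.SubElement(item, "link").text = "http://example.org/listings/42"   # invented URL
ET.SubElement(item, f"{{{GEORSS}}}point").text = "31.97 118.78"       # lat lon

print(ET.tostring(rss, encoding="unicode"))
```

A Google Maps mash-up can then read such a feed and drop a marker at each georss:point, showing the listing details in the marker's information window.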
Evaluating IPv6 Adoption in the Internet
NASA Astrophysics Data System (ADS)
Colitti, Lorenzo; Gunderson, Steinar H.; Kline, Erik; Refice, Tiziana
As IPv4 address space approaches exhaustion, large networks are deploying IPv6 or preparing for deployment. However, there is little data available about the quantity and quality of IPv6 connectivity. We describe a methodology to measure IPv6 adoption from the perspective of a Web site operator and to evaluate the impact that adding IPv6 to a Web site will have on its users. We apply our methodology to the Google Web site and present results collected over the last year. Our data show that IPv6 adoption, while growing significantly, is still low, varies considerably by country, and is heavily influenced by a small number of large deployments. We find that native IPv6 latency is comparable to IPv4 and provide statistics on IPv6 transition mechanisms used.
Ramos, H.; Shannon, P.; Aebersold, R.
2008-01-01
Motivation: Mass spectrometry experiments in the field of proteomics produce lists containing tens to thousands of identified proteins. With the protein information and property explorer (PIPE), the biologist can acquire functional annotations for these proteins and explore the enrichment of the list, or fraction thereof, with respect to functional classes. These protein lists may be saved for access at a later time or different location. The PIPE is interoperable with the Firegoose and the Gaggle, permitting wide-ranging data exploration and analysis. The PIPE is a rich-client web application which uses AJAX capabilities provided by the Google Web Toolkit, and server-side data storage using Hibernate. Availability: http://pipe.systemsbiology.net Contact: pshannon@systemsbiology.org PMID:18635572
Towards the Geospatial Web: Media Platforms for Managing Geotagged Knowledge Repositories
NASA Astrophysics Data System (ADS)
Scharl, Arno
International media have recognized the visual appeal of geo-browsers such as NASA World Wind and Google Earth, for example, when Web and television coverage on Hurricane Katrina used interactive geospatial projections to illustrate its path and the scale of destruction in August 2005. Yet these early applications only hint at the true potential of geospatial technology to build and maintain virtual communities and to revolutionize the production, distribution and consumption of media products. This chapter investigates this potential by reviewing the literature and discussing the integration of geospatial and semantic reference systems, with an emphasis on extracting geospatial context from unstructured text. A content analysis of news coverage based on a suite of text mining tools (webLyzard) sheds light on the popularity and adoption of geospatial platforms.
HippDB: a database of readily targeted helical protein-protein interactions.
Bergey, Christina M; Watkins, Andrew M; Arora, Paramjit S
2013-11-01
HippDB catalogs every protein-protein interaction whose structure is available in the Protein Data Bank and which exhibits one or more helices at the interface. The Web site accepts queries on variables such as helix length and sequence, and it provides computational alanine scanning and change in solvent-accessible surface area values for every interfacial residue. HippDB is intended to serve as a starting point for structure-based small molecule and peptidomimetic drug development. HippDB is freely available on the web at http://www.nyu.edu/projects/arora/hippdb. The Web site is implemented in PHP, MySQL and Apache. Source code freely available for download at http://code.google.com/p/helidb, implemented in Perl and supported on Linux. arora@nyu.edu.
Innovative Instructional Tools from the AMS
NASA Astrophysics Data System (ADS)
Abshire, W. E.; Geer, I. W.; Mills, E. W.; Nugnes, K. A.; Stimach, A. E.
2016-12-01
Since 1996, the American Meteorological Society (AMS) has been developing online educational materials with dynamic features that engage students and encourage additional exploration of various concepts. Most recently, AMS transitioned its etextbooks to webBooks. Now accessible anywhere with internet access, webBooks can be read with any web browser. Prior versions of AMS etextbooks were difficult to use in a lab setting, however webBooks are much easier to use and no longer a hurdle to learning. Additionally, AMS eInvestigations Manuals, also in webBook format, include labs with innovative features and educational tools. One such example is the AMS Climate at a Glance (CAG) app that draws data from NOAA's Climate at a Glance website. The user selects historical data of a given parameter and the app calculates various statistics revealing whether or not the results are consistent with climate change. These results allow users to distinguish between climate variability and climate change. This can be done for hundreds of locations across the U.S. and on multiple time scales. Another innovative educational tool used in AMS eInvestigations Manuals is the AMS Conceptual Climate Energy Model (CCEM). The CCEM is a computer simulation designed to enable users to track the paths that units of energy might follow as they enter, move through and exit an imaginary system according to simple rules applied to different scenarios. The purpose is to provide insight into the impacts of physical processes that operate in the real world. Finally, AMS educational materials take advantage of Google Earth imagery to reproduce the physical aspects of globes, allowing users to investigate spatial relationships in three dimensions. Google Earth imagery is used to explore tides, ocean bottom bathymetry and El Nino and La Nina. AMS will continue to develop innovative educational materials and tools as technology advances, to attract more students to the Earth sciences.
Naive (commonsense) geography and geobrowser usability after ten years of Google Earth
NASA Astrophysics Data System (ADS)
Hamerlinck, J. D.
2016-04-01
In 1995, the concept of ‘naive geography’ was formally introduced as an area of cognitive geographic information science representing ‘the body of knowledge that people have about the surrounding geographic world’ and reflecting ‘the way people think and reason about geographic space and time, both consciously and subconsciously’. The need to incorporate such commonsense knowledge and reasoning into design of geospatial technologies was identified but faced challenges in formalizing these relationships and processes in software implementation. Ten years later, the Google Earth geobrowser was released, marking the beginning of a new era of open access to, and application of, geographic data and information in society. Fast-forward to today, and the opportunity presents itself to take stock of twenty years of naive geography and a decade of the ubiquitous virtual globe. This paper introduces an ongoing research effort to explore the integration of naive (or commonsense) geography concepts in the Google Earth geobrowser virtual globe and their possible impact on Google Earth's usability, utility, and usefulness. A multi-phase methodology is described, combining usability reviews and usability testing with use-case scenarios involving the U.S.-Canadian Yellowstone to Yukon Initiative. Initial progress on a usability review combining cognitive walkthroughs and heuristics evaluation is presented.
NASA Astrophysics Data System (ADS)
Raup, B. H.; Khalsa, S. S.; Armstrong, R.
2007-12-01
The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which includes various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise) will facilitate enhanced analysis to be undertaken on glacier systems, their distribution, and their impacts on other Earth systems.
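As an illustration of the OGC feature access described above, here is a hedged sketch of a WFS GetFeature request for glacier outlines within a bounding box; the endpoint and feature type name are placeholders rather than the real GLIMS service.

```python
# Hedged example: OGC WFS 1.0.0 GetFeature request restricted to a lon/lat box.
import requests

params = {
    "SERVICE": "WFS", "VERSION": "1.0.0", "REQUEST": "GetFeature",
    "TYPENAME": "glacier_outlines",        # hypothetical feature type name
    "BBOX": "86.0,27.5,87.5,28.5",         # box around the Everest region
    "OUTPUTFORMAT": "GML2",
}
resp = requests.get("https://example.org/glims/wfs", params=params, timeout=60)
resp.raise_for_status()
with open("glaciers.gml", "wb") as f:
    f.write(resp.content)                  # GML features for later GIS use
```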
mORCA: sailing bioinformatics world with mobile devices.
Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo
2018-03-01
Nearly 10 years have passed since the first mobile apps appeared. Given the fact that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transition from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. Such a client is able to browse and select services, automatically compose interface parameters, invoke services and monitor the execution of Web Services using the service's metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in Apple's App Store and Google's Play Store. The software will be available for at least 2 years. ortrelles@uma.es. Source code, the final web-app, training material and documentation are available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.
Williams, Jessica H; DeLaughter, Kathryn; Volkman, Julie E; Sadasivam, Rajani S; Ray, Midge N; Gilbert, Gregg H; Houston, Thomas K
2018-06-01
To describe the content of messages sent by smokers through asynchronous counseling within a Web-based smoking cessation intervention. Qualitative. National community-based setting of patients who had been engaged by the medical or dental practices at which they attended or via Google advertisements. Adults older than 19 years who were current smokers and interested in quitting. Participants throughout the United States referred to a Web-based cessation intervention by their medical or dental provider or by clicking on a Google advertisement. We conducted a qualitative review of 742 asynchronous counseling messages sent by 270 Web site users. Messages were reviewed, analyzed, and organized into qualitative themes by the investigative team. The asynchronous counseling feature of the intervention was used most frequently by smokers who were white (87%), female (67%), aged 45 to 54 (32%), and who had at least some college-level education (70%). Qualitative analysis yielded 7 basic themes-Talk about the Process of Quitting, Barriers to Quitting, Reasons to Quit, Quit History, Support and Strategies for Quitting, Quitting with Medication, and Quit Progress. The most common theme was Support and Strategies for Quitting with 255 references among all messages. We found rich communication across the spectrum of the quit process, from persons preparing to quit to those who had successfully quit. Asynchronous smoking cessation counseling provides a promising means of social support for smokers during the quit process.
Franklin, Erik C; Stat, Michael; Pochon, Xavier; Putnam, Hollie M; Gates, Ruth D
2012-03-01
The genus Symbiodinium encompasses a group of unicellular, photosynthetic dinoflagellates that are found free living or in hospite with a wide range of marine invertebrate hosts including scleractinian corals. We present GeoSymbio, a hybrid web application that provides an online, easy to use and freely accessible interface for users to discover, explore and utilize global geospatial bioinformatic and ecoinformatic data on Symbiodinium-host symbioses. The novelty of this application lies in the combination of a variety of query and visualization tools, including dynamic searchable maps, data tables with filter and grouping functions, and interactive charts that summarize the data. Importantly, this application is hosted remotely or 'in the cloud' using Google Apps, and therefore does not require any specialty GIS, web programming or data programming expertise from the user. The current version of the application utilizes Symbiodinium data based on the ITS2 genetic marker from PCR-based techniques, including denaturing gradient gel electrophoresis, sequencing and cloning of specimens collected during 1982-2010. All data elements of the application are also downloadable as spatial files, tables and nucleic acid sequence files in common formats for desktop analysis. The application provides a unique tool set to facilitate research on the basic biology of Symbiodinium and expedite new insights into their ecology, biogeography and evolution in the face of a changing global climate. GeoSymbio can be accessed at https://sites.google.com/site/geosymbio/. © 2011 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Ryan, J. G.; McIlrath, J. A.
2008-12-01
Web-accessible geospatial information system (GIS) technologies have advanced in concert with an expansion of data resources that can be accessed and used by researchers, educators and students. These resources facilitate the development of data-rich instructional resources and activities that can be used to transition seamlessly into undergraduate research projects. MARGINS Data in the Classroom (http://serc.carleton.edu/margins/index.html) seeks to engage MARGINS researchers and educators in using the images, datasets, and visualizations produced by NSF-MARGINS Program-funded research and related efforts to create Web-deliverable instructional materials for use in undergraduate-level geoscience courses (MARGINS Mini-Lessons). MARGINS science data is managed by the Marine Geosciences Data System (MGDS), and these and all other MGDS-hosted data can be accessed, manipulated and visualized using GeoMapApp (www.geomapapp.org; Carbotte et al., 2004), a freely available geographic information system focused on the marine environment. Both "packaged" MGDS datasets (e.g., global earthquake foci, volcanoes, bathymetry) and "raw" data (seismic surveys, magnetics, gravity) are accessible via GeoMapApp, with WFS linkages to other resources (geodesy from UNAVCO; seismic profiles from IRIS; geochemical and drillsite data from EarthChem, IODP, and others), permitting the comprehensive characterization of many regions of the ocean basins. Geospatially controlled datasets can be imported into GeoMapApp visualizations, and these visualizations can be exported into Google Earth as .kmz image files. Many of the MARGINS Mini-Lessons produced thus far use (or have students use) the varied capabilities of GeoMapApp (e.g., constructing topographic profiles, overlaying varied geophysical and bathymetric datasets, characterizing geochemical data). These materials are available for use and testing from the project webpage (http://serc.carleton.edu/margins/). Classroom testing and assessment of the Mini-Lessons begins this Fall.
Solar Eclipse Computer API: Planning Ahead for August 2017
NASA Astrophysics Data System (ADS)
Bartlett, Jennifer L.; Chizek Frouard, Malynda; Lesniak, Michael V.; Bell, Steve
2016-01-01
With the total solar eclipse of 2017 August 21 over the continental United States approaching, the U.S. Naval Observatory (USNO) on-line Solar Eclipse Computer can now be accessed via an application programming interface (API). This flexible interface returns local circumstances for any solar eclipse in JavaScript Object Notation (JSON) that can be incorporated into third-party Web sites or applications. For a given year, it can also return a list of solar eclipses that can be used to build a more specific request for local circumstances. Over the course of a particular eclipse as viewed from a specific site, several events may be visible: the beginning and ending of the eclipse (first and fourth contacts), the beginning and ending of totality (second and third contacts), the moment of maximum eclipse, sunrise, or sunset. For each of these events, the USNO Solar Eclipse Computer reports the time, the Sun's altitude and azimuth, and the event's position and vertex angles. The computer also reports the duration of the total phase, the duration of the eclipse, the magnitude of the eclipse, and the percent of the Sun obscured for a particular eclipse site. On-line documentation for using the API-enabled Solar Eclipse Computer, including sample calls, is available (http://aa.usno.navy.mil/data/docs/api.php). The same Web page also describes how to reach the Complete Sun and Moon Data for One Day, Phases of the Moon, Day and Night Across the Earth, and Apparent Disk of a Solar System Object services using API calls. For those who prefer using a traditional data input form, local circumstances can still be requested that way at http://aa.usno.navy.mil/data/docs/SolarEclipses.php. In addition, the 2017 August 21 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2017.php) consolidates all of the USNO resources for this event, including a Google Map view of the eclipse track designed by Her Majesty's Nautical Almanac Office (HMNAO). Looking further ahead, a 2024 April 8 Solar Eclipse Resource page (http://aa.usno.navy.mil/data/docs/Eclipse2024.php) is also available.
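Because the local circumstances are returned as JSON, they can be consumed by a short script as well as by a web page. The sketch below is illustrative only: the endpoint path and parameter names are assumptions rather than the documented interface, so the on-line documentation cited above should be consulted for the actual calls.

```python
# Illustrative sketch of calling a JSON eclipse service such as the USNO Solar
# Eclipse Computer API. The endpoint path and parameter names are assumptions,
# not the documented interface; see http://aa.usno.navy.mil/data/docs/api.php.
import requests

BASE_URL = "http://aa.usno.navy.mil/api/eclipses/solar"  # assumed endpoint

params = {
    "date": "8/21/2017",        # eclipse date (assumed format)
    "coords": "38.89,-77.03",   # observer latitude,longitude (assumed format)
    "height": 20,               # observer height in meters (assumed)
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()
data = response.json()

# Print each reported event (contacts, maximum eclipse, sunrise/sunset), if
# the response carries a list of local-circumstance events under this key.
for event in data.get("local_data", []):
    print(event.get("phenomenon"), event.get("time"),
          "alt:", event.get("altitude"), "az:", event.get("azimuth"))
```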
WEB-BASED MODELING OF A FERTILIZER SOLUTION SPILL IN THE OHIO RIVER
Environmental computer models are usually desktop models. Some web-enabled models are beginning to appear where the user can use a browser to run the models on a central web server. Several issues arise when a desktop model is transferred to a web architecture. This paper discuss...
Differences between h-index measures from different bibliographic sources and search engines.
Barreto, Mauricio Lima; Aragão, Erika; Sousa, Luis Eugenio Portela Fernandes de; Santana, Táris Maria; Barata, Rita Barradas
2013-04-01
To analyze the use of the h-index as a measure of the bibliometric impact of Brazilian researchers' scientific publications. The scientific production of Brazilian CNPq 1-A researchers in the areas of public health, immunology and medicine was compared. The mean h-index of the group of researchers in each area was estimated, and the nonparametric Kruskal-Wallis test and the Behrens-Fisher multiple-comparisons test were used to compare the differences. The h-index means were higher in the area of Immunology than in Public Health and Medicine when the Web of Science database was used. However, this difference disappears when the comparison is made using Scopus or Google Scholar. The emergence of Google Scholar brings a new level to discussions on the measure of the bibliometric impact of scientific publications. Areas with strong professional components, in which knowledge is produced and must also be published in the native language rather than only disseminated to the international community, necessarily have a pattern of scientific publications and citations different from that of exclusively or predominantly academic areas, and this pattern is best captured by Google Scholar.
General vs health specialized search engine: a blind comparative evaluation of top search results.
Pletneva, Natalia; Ruiz de Castaneda, Rafael; Baroz, Frederic; Boyer, Celia
2014-01-01
This paper presents the results of a blind comparison of the top ten search results retrieved by Google.ch (French) and Khresmoi for everyone, a health-specialized search engine. Participants--students of the Faculty of Medicine of the University of Geneva--had to complete three tasks and select their preferred results. The majority of participants preferred Google results, while Khresmoi results showed potential to compete on specific topics. The coverage of the results appears to be one reason; the second is that participants do not know how to select high-quality, transparent health web pages. More awareness, tools and education about the matter are required for medical students to be able to distinguish trustworthy online health information efficiently.
Prospective analysis of the quality of Spanish health information web sites after 3 years.
Conesa-Fuentes, Maria C; Hernandez-Morante, Juan J
2016-12-01
Although the Internet has become an essential source of health information, our study conducted 3 years ago provided evidence of the low quality of Spanish health web sites. The objective of the present study was to evaluate the quality of Spanish health information web sites now, and to compare these results with those obtained 3 years ago. For the original study, the most visited health information web sites were selected through the PageRank® (Google®) system. The present study evaluated the quality of the same web sites from February to May 2013, using the method developed by Bermúdez-Tamayo et al. and HONCode® criteria. The mean quality of the selected web sites was low and has deteriorated since the previous evaluation, especially in regional health services and institutions' web sites. The quality of private web sites remained broadly similar. Compliance with privacy and update criteria also improved in the intervening period. The results indicate that, even in the case of health web sites, design or appearance is more relevant to developers than quality of information. It is recommended that responsible institutions should increase their efforts to eliminate low-quality health information that may further contribute to health problems.
Flying across Galaxy Clusters with Google Earth: additional imagery from SDSS co-added data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jiangang; Annis, James; /Fermilab
2010-10-01
Galaxy clusters are spectacular. We provide Google Earth-compatible imagery for the deep co-added images from the Sloan Digital Sky Survey and make it a tool for examining galaxy clusters. Google Earth (in sky mode) provides a highly interactive environment for visualizing the sky. By encoding the galaxy cluster information into a kml/kmz file, one can use Google Earth as a tool for examining galaxy clusters and fly across them freely. However, the resolution of the images provided by Google Earth is not very high. This is partly because the major imagery Google Earth uses is from the Sloan Digital Sky Survey (SDSS) (SDSS collaboration 2000), and the resolution has been reduced to speed up web transfer. To have higher-resolution images, you need to add your own images in a way that Google Earth can understand. The SDSS co-added data are the co-addition of approximately 100 scans of images from SDSS stripe 82 (Annis et al. 2010). They provide the deepest images based on SDSS, reaching as deep as about redshift 1.0. Based on the co-added images, we created color images as described by Lupton et al. (2004) and converted them to Google Earth-compatible images using wcs2kml (Brewer et al. 2007). The images are stored on a public server at Fermi National Accelerator Laboratory and can be accessed by the public. To view these images in Google Earth, you need to download a kmz file, which contains the links to the color images, and then open the kmz file with Google Earth. To meet different needs for resolution, we provide three kmz files corresponding to low-, medium- and high-resolution images. We recommend the high-resolution one as long as you have a broadband Internet connection, though you can choose to download any of them, depending on your own needs and Internet speed. After you open the downloaded kmz file with Google Earth (in sky mode), it takes about 5 minutes (depending on your Internet connection and the resolution of the images you want) to get some initial images loaded. Then, additional images corresponding to the region you are browsing will be loaded automatically. At this point you have access to all the co-added images, but you still do not have the galaxy cluster position information. In order to see the galaxy clusters, you need to download another kmz file that tells Google Earth where to find the galaxy clusters in the co-added data region. We provide a kmz file for a few galaxy clusters in the stripe 82 region, which you can download and open with Google Earth. In the SDSS co-added region (stripe 82 region), the imagery from Google Earth itself is from the Digitized Sky Survey (2007), which is of very poor quality. In Figure 1 and Figure 2, we show screenshots of a cluster with and without the new co-added imagery in Google Earth. Much more detail is revealed with the deep images.
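The kmz files described above essentially package KML that tells Google Earth (in sky mode) where to fetch the remote color images. A hypothetical sketch of writing such a KML GroundOverlay from Python follows; the image URL and bounds are placeholders, not the actual Fermilab links, and the hint attribute marks the file as sky-mode KML.

```python
# Hypothetical sketch of the kind of KML a kmz like the one described above
# might contain: a GroundOverlay pointing Google Earth at a remote color image.
# The image URL and the bounds are placeholders, not the actual Fermilab links.
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" hint="target=sky">
  <Document>
    <GroundOverlay>
      <name>{name}</name>
      <Icon><href>{href}</href></Icon>
      <LatLonBox>
        <north>{north}</north><south>{south}</south>
        <east>{east}</east><west>{west}</west>
      </LatLonBox>
    </GroundOverlay>
  </Document>
</kml>
"""

kml = KML_TEMPLATE.format(
    name="Stripe 82 co-add tile (example)",
    href="http://example.org/coadd/tile_0001.png",  # placeholder image URL
    north=1.25, south=0.75, east=10.5, west=10.0,   # placeholder bounds (degrees)
)

with open("coadd_tile.kml", "w") as f:
    f.write(kml)
```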
Sciascia, Savino; Radin, Massimo
2017-11-01
To investigate trends of Internet search volumes linked to Systemic Lupus Erythematosus (SLE), on-going clinical trials and research developments associated with the disease, using Big Data monitoring and data mining. We performed a longitudinal analysis based on the large amount of data generated by Google Trends and scientific search tools (Scopus, Medline/PubMed, ClinicalTrials.gov), considering 'SLE' and 'lupus' over a 5-year web-based research period. Wikipedia page views were also analysed using WikiTrends and the results were compared with the search volumes generated by Google Trends. We observed an overall higher distribution of search volumes from Google Trends in the United States, South America, Canada, South Africa, Australia and Europe (mainly Italy, the United Kingdom, Spain, France and Germany), showing geographical heterogeneity in the health-related information-seeking behaviour of the different populations towards SLE. By comparing the Google Trends search volumes with the Wikipedia page views for both SLE and belimumab, we found closely matching peaks, reflecting knowledge translation after the approval of belimumab for the treatment of SLE. When focusing on Google Trends search volumes, we noticed that the highest peaks were related to news headlines involving celebrities affected by SLE, even when compared with the peak generated by the approval of belimumab. This new approach, able to investigate health information seeking, might give an estimate of the health-related demand and even of the health-related behaviour around SLE, shedding new light on unanswered questions. Copyright © 2017 Elsevier B.V. All rights reserved.
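Search-volume data of this kind can be pulled programmatically. The sketch below assumes the third-party pytrends library (an unofficial Google Trends client) and is not the authors' actual pipeline; the keywords and timeframe simply follow the study's description.

```python
# Sketch of pulling Google Trends volumes like those analysed above, assuming
# the third-party "pytrends" package (an unofficial Google Trends client).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=["SLE", "lupus"],
                       timeframe="2012-01-01 2016-12-31")

# Weekly relative search volumes (0-100) as a pandas DataFrame
volumes = pytrends.interest_over_time()
print(volumes.head())

# Geographic breakdown, analogous to the country-level comparison above
by_region = pytrends.interest_by_region()
print(by_region.sort_values("lupus", ascending=False).head(10))
```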
Rattanasing, Wannaporn; Kaewpitoon, Soraya J; Loyd, Ryan A; Rujirakul, Ratana; Yodkaw, Eakachai; Kaewpitoon, Natthawut
2015-01-01
Cholangiocarcinoma (CCA) is a serious public health problem in the Northeast of Thailand. CCA is considered to be an incurable and rapidly lethal disease. Knowledge of the distribution of CCA patients is necessary for management strategies. This study aimed to utilize a geographic information system (GIS) and Google Earth™ for distribution mapping of cholangiocarcinoma in Satuek District, Buriram, Thailand, during a 5-year period (2008-2012). In this retrospective study, data were collected and reviewed from OPD cards; definitive cases of CCA were patients who were treated in Satuek Hospital and diagnosed with CCA (ICD-10 code C22.1). CCA cases were analyzed with ArcGIS 9.2, and all of the data were imported into Google Earth using the online web page www.earthpoint.us. Data were displayed at village points. A total of 53 cases were diagnosed and identified as CCA. The incidence was 53.57 per 100,000 population (65.5 for males and 30.8 for females) and the majority of CCA cases were in stages IV and IIA. The average age was 67 years. The highest attack rate was observed in Thung Wang sub-district (161.4 per 100,000 population). The Google Earth map display of CCA patients at village points gave a clear visual distribution. CCA is still a major problem in Satuek district, Buriram province of Thailand. The Google Earth production process is very simple and easy to learn. It is suitable for use in the further development of CCA management strategies.
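The village-point-to-Google-Earth step described above was performed through the earthpoint.us web page; an equivalent conversion can also be scripted directly, as in the minimal sketch below, where village names, coordinates and case counts are hypothetical placeholders.

```python
# Scripted equivalent of the earthpoint.us conversion: write village points as
# KML placemarks for Google Earth. Names, coordinates and counts are hypothetical.
import xml.etree.ElementTree as ET

villages = [
    ("Village A", 103.30, 15.55, 4),  # name, longitude, latitude, CCA cases
    ("Village B", 103.35, 15.60, 2),
]

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")

for name, lon, lat, cases in villages:
    placemark = ET.SubElement(doc, "Placemark")
    ET.SubElement(placemark, "name").text = f"{name} ({cases} CCA cases)"
    point = ET.SubElement(placemark, "Point")
    # KML coordinate order is longitude,latitude[,altitude]
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"

ET.ElementTree(kml).write("cca_villages.kml",
                          encoding="UTF-8", xml_declaration=True)
```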
Arif, Nadia; Al-Jefri, Majed; Bizzi, Isabella Harb; Perano, Gianni Boitano; Goldman, Michel; Haq, Inam; Chua, Kee Leng; Mengozzi, Manuela; Neunez, Marie; Smith, Helen; Ghezzi, Pietro
2018-01-01
The 1998 Lancet paper by Wakefield et al., despite subsequent retraction and evidence indicating no causal link between vaccinations and autism, triggered significant parental concern. The aim of this study was to analyze the online information available on this topic. Using localized versions of Google, we searched "autism vaccine" in English, French, Italian, Portuguese, Mandarin, and Arabic and analyzed 200 websites for each search engine result page (SERP). A common feature was the newsworthiness of the topic, with news outlets representing 25-50% of the SERP, followed by unaffiliated websites (blogs, social media) that represented 27-41% and included most of the vaccine-negative websites. Between 12 and 24% of websites had a negative stance on vaccines, while most websites were pro-vaccine (43-70%). However, their ranking by Google varied. While in Google.com the first vaccine-negative website was the 43rd in the SERP, there was one vaccine-negative webpage in the top 10 websites in both the British and Australian localized versions and in French, and two in Italian, Portuguese, and Mandarin, suggesting that the information quality algorithm used by Google may work better in English. Many webpages mentioned celebrities in the context of the link between vaccines and autism, with Donald Trump mentioned most frequently. Few websites (1-5%) promoted complementary and alternative medicine (CAM), but 50-100% of these were also vaccine-negative, suggesting that CAM users are more exposed to vaccine-negative information. This analysis highlights the need for monitoring the web for information impacting on vaccine uptake.
Tethys: A Platform for Water Resources Modeling and Decision Support Apps
NASA Astrophysics Data System (ADS)
Swain, N. R.; Christensen, S. D.; Jones, N.; Nelson, E. J.
2014-12-01
Cloud-based applications or apps are a promising medium through which water resources models and data can be conveyed in a user-friendly environment—making them more accessible to decision-makers and stakeholders. In the context of this work, a water resources web app is a web application that exposes limited modeling functionality for a scenario exploration activity in a structured workflow (e.g., land-use change runoff analysis, snowmelt runoff prediction, and flood potential analysis). The technical expertise required to develop water resources web apps can be a barrier to many potential developers. One challenge developers face is providing spatial storage, analysis, and visualization for the spatial data that is inherent to water resources models. The software projects that provide this functionality are non-standard in web development, and there are a large number of free and open source software (FOSS) projects to choose from. In addition, it is often necessary to synthesize several software projects to provide all of the needed functionality. Another challenge for the developer is orchestrating the use of several software components. Consequently, the initial software development investment required to deploy an effective water resources cloud-based application can be substantial. The Tethys Platform has been developed to lower the technical barrier and minimize the initial development investment that prohibits many scientists and engineers from making use of the web app medium. Tethys synthesizes several software projects including PostGIS for spatial storage, 52°North WPS for spatial analysis, GeoServer for spatial publishing, Google Earth™, Google Maps™ and OpenLayers for spatial visualization, and Highcharts for plotting tabular data. The software selection came after a literature review of software projects being used to create existing earth sciences web apps. All of the software is linked via a Python-powered software development kit (SDK). Tethys developers use the SDK to build their apps and incorporate the needed functionality from the software suite. The presentation will include several apps that have been developed using Tethys to demonstrate its capabilities. Based upon work supported by the National Science Foundation under Grant No. 1135483.
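In a stack like the one Tethys assembles, map imagery published by GeoServer is typically retrieved through standard OGC Web Map Service (WMS) requests. The sketch below shows a plain GetMap call; the GeoServer endpoint, workspace and layer name are hypothetical, and this is not the Tethys SDK itself.

```python
# Plain OGC WMS GetMap request against a GeoServer instance, the kind of call a
# water resources web app makes for map imagery. Endpoint, workspace and layer
# are hypothetical.
import requests

WMS_URL = "http://example.org/geoserver/wms"  # hypothetical GeoServer endpoint

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "hydrology:watersheds",    # hypothetical workspace:layer
    "styles": "",                        # default style
    "bbox": "-112.5,40.0,-111.5,41.0",   # minx,miny,maxx,maxy (lon/lat)
    "srs": "EPSG:4326",
    "width": 512,
    "height": 512,
    "format": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=60)
response.raise_for_status()

with open("watersheds.png", "wb") as f:
    f.write(response.content)
```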
Definitions of Quality in Higher Education: A Synthesis of the Literature
ERIC Educational Resources Information Center
Schindler, Laura; Puls-Elvidge, Sarah; Welzant, Heather; Crawford, Linda
2015-01-01
The aim of this paper is to provide a synthesis of the literature on defining quality in the context of higher education. During a search for relevant literature, the authors intentionally cast a wide net, beginning with a broad search in Google Scholar and followed by a narrower search in educational databases, including Academic Search Complete,…
The Library Web: Case Studies in Web Site Creation and Implementation.
ERIC Educational Resources Information Center
Still, Julie M., Ed.
This book presents 19 case studies in library web site creation and implementation. The book begins with an introduction--"Introduction: Step into My Parlor" (Julie M. Still)--and is divided into three sections. The first section, Academic Library Web Sites, contains six case studies: "U-SEARCH: The University of Saskatchewan…
ERIC Educational Resources Information Center
Silva, Mary Lourdes; Adams Delaney, Susan; Cochran, Jolene; Jackson, Ruth; Olivares, Cory
2015-01-01
The majority of research on the implementation of ePortfolios focuses on curriculum, faculty development, or student buy-in. When ePortfolio systems have been described in technical terms, the focus has been on the functionality, affordances, and limitations of ePortfolio systems (e.g., TaskStream, LiveText), free web tools (e.g., Google Docs),…
Bootstrapping and Maintaining Trust in the Cloud
2016-12-01
simultaneous cloud nodes. 1. INTRODUCTION The proliferation and popularity of infrastructure-as-a-service (IaaS) cloud computing services such as... Amazon Web Services and Google Compute Engine means more cloud tenants are hosting sensitive, private, and business-critical data and applications in the... thousands of IaaS resources as they are elastically instantiated and terminated. Prior cloud trusted computing solutions address a subset of these features
with Search To search for a document, type a few descriptive words in the search box, and press the Enter key or click the search button. A results page appears with a list of documents and web pages that are related to your search terms, with the most relevant search results appearing at the top of the
ERIC Educational Resources Information Center
DeVilbiss, Elizabeth A.; Lee, Brian K.
2014-01-01
We sought to evaluate the potential for using historical web search data on autism spectrum disorders (ASD)-related topics as an indicator of ASD awareness. Analysis of Google Trends data suggested that National Autism Awareness Month and televised reports concerning autism are effective means of promoting online search interest in autism.
ERIC Educational Resources Information Center
Grosch, Michael
2014-01-01
The rise of web 2.0 led to dramatic changes in the media usage behavior of students in tertiary education. Services such as Google and Facebook are widely accepted among students, not only for leisure but also for learning. A representative survey was conducted at Karlsruhe Institute of Technology (KIT). About 1,400 students were asked 150 questions to…
3 Ways that Web-Based Computing Will Change Colleges--And Challenge Them
ERIC Educational Resources Information Center
Young, Jeffrey R.
2008-01-01
Cloud computing, one of the latest technology buzzwords, is so hard to explain that Google drove a bus from campus to campus to walk students through the company's vision of it. After students sat through a demo at computers set up nearby, they boarded the bus and got free T-shirts. The bus only stopped at colleges that had already agreed to hand…
Canute Rules the Waves?: Hope for E-Library Tools Facing the Challenge of the "Google Generation"
ERIC Educational Resources Information Center
Myhill, Martin
2007-01-01
Purpose: To consider the findings of a recent e-resources survey at the University of Exeter Library in the context of the dominance of web search engines in academia, balanced by the development of e-library tools such as the library OPAC, OpenURL resolvers, metasearch engines, LDAP and proxy servers, and electronic resource management modules.…
A Virtual Tour of the 1868 Hayward Earthquake in Google Earth™
NASA Astrophysics Data System (ADS)
Lackey, H. G.; Blair, J. L.; Boatwright, J.; Brocher, T.
2007-12-01
The 1868 Hayward earthquake has been overshadowed by the subsequent 1906 San Francisco earthquake that destroyed much of San Francisco. Nonetheless, a modern recurrence of the 1868 earthquake would cause widespread damage to the densely populated Bay Area, particularly in the East Bay communities that have grown up virtually on top of the Hayward fault. Our concern is heightened by paleoseismic studies suggesting that the recurrence interval for the past five earthquakes on the southern Hayward fault is 140 to 170 years. Our objective is to build an educational web site that illustrates the cause and effect of the 1868 earthquake, drawing on scientific and historic information. We will use Google Earth™ software to visually illustrate complex scientific concepts in a way that is understandable to a non-scientific audience. This web site will lead the viewer from a regional summary of the plate tectonics and faulting system of western North America to more specific information about the 1868 Hayward earthquake itself. Text and Google Earth™ layers will include modeled shaking of the earthquake, relocations of historic photographs, reconstruction of damaged buildings as 3-D models, and additional scientific data that may come from the many scientific studies conducted for the 140th anniversary of the event. Earthquake engineering concerns will be stressed, including population density, vulnerable infrastructure, and lifelines. We will also present detailed maps of the Hayward fault, measurements of fault creep, and geologic evidence of its recurrence. Understanding the science behind earthquake hazards is an important step in preparing for the next significant earthquake. We hope to communicate to the public and students of all ages, through visualizations, not only the cause and effect of the 1868 earthquake, but also the modern seismic hazards of the San Francisco Bay region.
Bragazzi, Nicola Luigi; Toletone, Alessandra; Brigo, Francesco; Durando, Paolo
2016-01-01
Objective Silicosis is an untreatable but preventable occupational disease, caused by exposure to silica. It can progressively evolve to lung impairment, respiratory failure and death, even after exposure has ceased. However, little is known about interest in occupational diseases at the level of the scientific community, media coverage and web behavior. This article aims to fill this knowledge gap, taking silicosis as a case study. Methods We investigated silicosis-related web activities using Google Trends (GT), capturing Internet behavior worldwide in the years 2004–2015. GT-generated data were then compared with the silicosis-related scientific production (i.e., PubMed and Google Scholar), the media coverage (i.e., Google News), the Wikipedia traffic (i.e., WikiTrends) and the usage of new media (i.e., YouTube and Twitter). Results A peak in silicosis-related web searches was noticed in 2010–2011: interestingly, both scientific article production and media coverage increased markedly and in a statistically significant way after these years. Public interest and engagement were evidenced by an increase in likes, comments, hashtags, and retweets. However, it was found that only a small fraction of the posted/uploaded material contained accurate scientific information. Conclusions GT could be useful to assess the reaction of the public and the level of public engagement both to novel risk factors associated with occupational diseases, and possibly related changes in disease natural history, and to the effectiveness of preventive workplace practices and legislative measures adopted to improve occupational health. Further, occupational clinicians should become aware of the topics most frequently searched by patients and proactively address these concerns during the medical examination. Institutional bodies and organizations should be more present and active in digital tools and media to disseminate and communicate scientifically accurate information. This manuscript should be intended as a preliminary, exploratory communication, paving the way for further studies. PMID:27806115
Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.
Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain
2016-01-14
Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and, in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with the search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multi-option user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F(1,19)=37.3, P<.001), with a main effect of task (F(3,57)=6.3, P<.001). Further, participants completed the task significantly faster using the Web search interface (F(1,19)=18.0, P<.001). There was also a main effect of task (F(2,38)=4.1, P=.025, Greenhouse-Geisser correction applied). Overall, participants were asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show the superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search, supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation.
Bragazzi, Nicola Luigi; Amital, Howard; Adawi, Mohammad; Brigo, Francesco; Watad, Samaa; Aljadeff, Gali; Amital, Daniela; Watad, Abdulla
2017-08-01
Fibromyalgia is a chronic disease characterized by pain, fatigue, and poor sleep quality. Patients, mainly those with chronic diseases, tend to search for health-related material online. Google Trends (GT), an online tracking system of Internet search volumes that recently merged with its sister project Google Insights for Search (Google Inc.), was used to explore Internet activity related to fibromyalgia. Worldwide digital interest in fibromyalgia and related topics over the last 13 years is reported. A slight decline in this interest was observed over the years, with stabilization in the last 5 years. Fibromyalgia web behavior exhibited a regular, cyclic pattern, even though no seasonality could be detected. Similar findings have been reported for rheumatoid arthritis and depression. However, unlike rheumatoid arthritis and depression, fibromyalgia-related queries focused more on drug side effects and on the "elusive" nature of fibromyalgia: is it a real or an imaginary condition? Does it really exist or is it all in your head? A tremendous amount of information on fibromyalgia and related topics exists online. Still, many queries have been raised and repeated constantly by fibromyalgia patients over the last 13 years. Therefore, physicians should be aware of the common concerns of people and patients regarding fibromyalgia in order to give proper answers and education.
Does Googling lead to statin intolerance?
Khan, Sarah; Holbrook, Anne; Shah, Baiju R
2018-07-01
The nocebo effect, where patients with expectations of adverse effects are more likely to experience them, may contribute to the high rate of statin intolerance found in observational studies. Information that patients read on the internet may be a precipitant of this effect. The objective of the study was to establish whether the number of websites about statin side effects found using Google is associated with the prevalence of statin intolerance. The prevalence of statin intolerance in 13 countries across 5 continents was established in a recent study via a web-based survey of primary care physicians and specialists. Using the Google search engine for each country, the number of websites about statin side effects was determined, and standardized to the number of websites about statins overall. Searches were restricted to pages in the native language, and were conducted after connecting to each country using a virtual private network (VPN). English-speaking countries (Australia, Canada, UK, USA) had the highest prevalence of statin intolerance and also had the largest standardized number of websites about statin side effects. The sample Pearson correlation coefficient between these two variables was 0.868. Countries where patients using Google are more likely to find websites about statin side effects have greater levels of statin intolerance. The nocebo effect driven by online information may be contributing to statin intolerance. Copyright © 2018 Elsevier B.V. All rights reserved.
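The reported association is a simple bivariate correlation, which can be reproduced in a few lines; the per-country values below are hypothetical placeholders rather than the study's data.

```python
from scipy.stats import pearsonr

# Statin intolerance prevalence (%) and standardized share of side-effect
# websites for a handful of hypothetical countries.
prevalence = [12.0, 11.5, 10.8, 10.2, 7.5, 6.9, 6.1, 5.4]
website_share = [0.42, 0.40, 0.37, 0.35, 0.27, 0.24, 0.22, 0.20]

r, p_value = pearsonr(website_share, prevalence)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```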
UNAVCO Software and Services for Visualization and Exploration of Geoscience Data
NASA Astrophysics Data System (ADS)
Meertens, C.; Wier, S.
2007-12-01
UNAVCO has been involved in visualization of geoscience data to support education and research for several years. An early and ongoing service is the Jules Verne Voyager, a web browser applet built on the GMT that displays any area on Earth, with many data set choices, including maps, satellite images, topography, geoid heights, sea-floor ages, strain rates, political boundaries, rivers and lakes, earthquake and volcano locations, focal mechanisms, stress axes, and observed and modeled plate motion and deformation velocity vectors from geodetic measurements around the world. As part of the GEON project, UNAVCO has developed the GEON IDV, a research-level, 4D (earth location, depth and/or altitude, and time), Java application for interactive display and analysis of geoscience data. The GEON IDV is designed to meet the challenge of investigating complex, multi-variate, time-varying, three-dimensional geoscience data anywhere on earth. The GEON IDV supports simultaneous displays of data sets from differing sources, with complete control over colors, time animation, map projection, map area, point of view, and vertical scale. The GEON IDV displays gridded and point data, images, GIS shape files, and several other types of data. The GEON IDV has symbols and displays for GPS velocity vectors, seismic tomography, earthquake focal mechanisms, earthquake locations with magnitude or depth, seismic ray paths in 3D, seismic anisotropy, convection model visualization, earth strain axes and strain field imagery, and high-resolution 3D topographic relief maps. Multiple data sources and display types may appear in one view. As an example of GEON IDV utility, it can display hypocenters under a volcano, a surface geology map of the volcano draped over 3D topographic relief, town locations and political boundaries, and real-time 3D weather radar clouds of volcanic ash in the atmosphere, with time animation. The GEON IDV can drive a GeoWall or other 3D stereo system. IDV output includes imagery, movies, and KML files for Google Earth use of IDV static images, where Google Earth can handle the display. The IDV can be scripted to create display images on user request or automatically on data arrival, offering the use of the IDV as a back end to support a data web site. We plan to extend the power of the IDV by accepting new data types and data services, such as GeoSciML. An active program of online and video training in GEON IDV use is planned. UNAVCO will support users who need assistance converting their data to the standard formats used by the GEON IDV. The UNAVCO Facility provides web-accessible support for Google Earth and Google Maps display of any of more than 9500 GPS stations and survey points, including metadata for each installation. UNAVCO provides corresponding Open Geospatial Consortium (OGC) web services with the same data. UNAVCO's goal is to facilitate data access, interoperability, and efficient searches, exploration, and use of data by promoting web services, standards for GEON IDV data formats and metadata, and software able to simultaneously read and display multiple data sources, formats, and map locations or projections. Retention and propagation of semantics and metadata with observational and experimental values is essential for interoperability and understanding diverse data sources.
Characterization of topological structure on complex networks.
Nakamura, Ikuo
2003-10-01
Characterizing the topological structure of complex networks is a significant problem, especially from the viewpoint of data mining on the World Wide Web. "PageRank," used in the commercial search engine Google, is such a measure of authority for ranking all the nodes matching a given query. We have investigated the page-rank distribution of the real Web and of a growing network model, both of which have directed links and exhibit power-law distributions of in-degree (the number of incoming links to a node) and out-degree (the number of outgoing links from a node), respectively. We find a concentration of page rank on a small number of nodes, and low page rank in high-degree regimes of the real Web, which can be explained by topological properties of the network, e.g., network motifs, and the connectivities of nearest neighbors.
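The page-rank measure analyzed here can be illustrated with a few lines of power iteration on a toy directed graph; this is a generic sketch of the algorithm, not the authors' code.

```python
# Generic power-iteration sketch of PageRank on a tiny hypothetical directed
# graph (adjacency list: node -> nodes it links to).
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)
damping = 0.85

# Column-stochastic transition matrix: M[j, i] = probability of moving i -> j
M = np.zeros((n, n))
for i, outgoing in links.items():
    for j in outgoing:
        M[j, i] = 1.0 / len(outgoing)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    previous = rank
    rank = (1 - damping) / n + damping * M.dot(rank)
    if np.abs(rank - previous).sum() < 1e-10:
        break

print("PageRank:", np.round(rank, 4))  # mass concentrates on heavily linked nodes
```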
Available, intuitive and free! Building e-learning modules using web 2.0 services.
Tam, Chun Wah Michael; Eastwood, Anne
2012-01-01
E-learning is part of the mainstream in medical education and often provides the most efficient and effective means of engaging learners in a particular topic. However, translating design and content ideas into a useable product can be technically challenging, especially in the absence of information technology (IT) support. There is little published literature on the use of web 2.0 services to build e-learning activities. To describe the web 2.0 tools and solutions employed to build the GP Synergy evidence-based medicine and critical appraisal online course. We used and integrated a number of free web 2.0 services including: Prezi, a web-based presentation platform; YouTube, a video sharing service; Google Docs, an online document platform; Tiny.cc, a URL shortening service; and WordPress, a blogging platform. The course, consisting of five multimedia-rich, tutorial-like modules, was built without IT specialist assistance or specialised software. The web 2.0 services used were free. The course can be accessed with a modern web browser. Modern web 2.0 services remove many of the technical barriers to creating and sharing content on the internet. When used synergistically, these services can be a flexible and low-cost platform for building e-learning activities. They were a pragmatic solution in our context.
Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies.
Huang, Yu S; Horton, Matthew; Vilhjálmsson, Bjarni J; Seren, Umit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O; Bergelson, Joy; Nordborg, Magnus
2011-01-01
With large-scale genomic data becoming the norm in biological studies, the storing, integrating, viewing and searching of such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts the geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes, representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google Web Toolkit), an MVC (Model-View-Controller) web framework and an object-relational mapper, we have created a web-based application (web app) for the database that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community. The whole framework serves as an example of how biological data, especially GWAS, can be presented and accessed through the web. Finally, we illustrate with two examples the potential to gain new insights through the web app, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/
Farhat, Naim; Zoeller, Christoph; Petersen, Claus; Ure, Benno
2016-08-01
Introduction The presentation of health institutions on the internet is highly variable with regard to marketing features and medical information. We aimed to investigate the structure and the kind of information provided on the Web sites of all departments of pediatric surgery in Germany. Furthermore, we aimed to identify the degree to which these Web sites comply with internet marketing recommendations for generating business. Method The Web sites of all pediatric surgery units referred to as departments on the official Web site of the German Society of Pediatric Surgery (GSPS) were assessed. The search engine Google was used by entering the terms "pediatric surgery" and the name of the city. Besides general data, eight content characteristics focusing on ranking, accessibility, use of social media, multilingual sites, navigation options, selected images, contact details, and medical information were evaluated according to published recommendations. Results A total of 85 departments of pediatric surgery were included. In Google search results, 44 (52%) ranked number one, and 34 (40%) of the departments' homepages were accessible directly through the homepage link of the GSPS. A link to the department's own digital and/or social media was offered on 11 (13%) homepages. Nine sites were multilingual. The most common navigation bar item was clinical services, on 74 (87%) homepages. Overall, 76 (89%) departments presented their doctors and 17 (20%) presented other staff members, with images of doctors on 53 (62%) Web sites and contact details accessible from the homepage on 68 (80%). On 25 (29%) Web sites, information on the medical conditions treated was presented; on 17 (20%), details of treatment concepts; and on 4 (5%), the number of patients with specific conditions treated in the department per year. Conclusion We conclude that many of the investigated online presentations do not comply with recommended criteria for offering professional information to patients and for promoting services. Fewer than one-third of the departments of pediatric surgery in Germany offer information about the medical conditions they treat. Features that may influence the decisions of patients and parents, such as ranking, accessibility, use of social media, multilingual sites, navigation options, selected images, and contact information, were lacking to varying degrees on many Web sites. Georg Thieme Verlag KG Stuttgart · New York.
78 FR 5838 - NRC Enforcement Policy
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-28
... submit comments by any of the following methods: Federal Rulemaking Web site: Go to http://www... of the following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search... the search, select ``ADAMS Public Documents'' and then select ``Begin Web-based ADAMS Search.'' For...
Appreciating "Charlotte's Web."
ERIC Educational Resources Information Center
Jordan, Anne Devereaux
1997-01-01
Presents an appreciation of "Charlotte's Web," a children's literary classic which portrays clearly and simply the themes of love, death, friendship, and salvation. Discusses E.B. White's life and background, his attention to writing style, and the beginnings of "Charlotte's Web." Provides a capsule of classic elements in the…
EntrezAJAX: direct web browser access to the Entrez Programming Utilities
2010-01-01
Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides functionality identical to Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/ PMID:20565938
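Outside the browser, where same-origin restrictions do not apply, the underlying Entrez eUtils service that EntrezAJAX proxies can be queried directly. A minimal esearch sketch (query term chosen arbitrarily) is shown below.

```python
# Direct query to the NCBI Entrez esearch E-utility, the service EntrezAJAX
# proxies for browser-side JavaScript. The search term is arbitrary.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": "BRCA1",
    "retmax": 5,
    "retmode": "json",
}

response = requests.get(EUTILS, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print("Total hits:", result["count"])
print("First PMIDs:", result["idlist"])
```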
Analyzing traffic source impact on returning visitors ratio in information provider website
NASA Astrophysics Data System (ADS)
Prasetio, A.; Sari, P. K.; Sharif, O. O.; Sofyan, E.
2016-04-01
Web site performance, especially the returning-visitor rate, is an important metric for an information provider web site. Since a high returning-visitor rate is a good indication of a web site's visitor loyalty, it is important to find ways to improve this metric. This research investigated whether there is any difference in the returning-visitor metric among three web traffic sources, namely direct, referral and search. Monthly returning visitors and total visitors from each source were retrieved from Google Analytics and used to calculate the returning-visitor ratio. The period of data observation was from July 2012 to June 2015, resulting in a total of 108 samples. These data were then analysed using one-way analysis of variance (ANOVA) to address our research question. The results showed that different traffic sources have significantly different returning-visitor ratios, especially between the referral source and the other two sources. On the other hand, this research did not find any significant difference between the returning-visitor ratios of the direct and search traffic sources. The owner of the web site can therefore focus on multiplying referral links from other relevant sites.
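The one-way ANOVA used in this study can be sketched as follows; the monthly returning-visitor ratios are hypothetical placeholders standing in for the 36 months of Google Analytics data per source.

```python
# One-way ANOVA across the three traffic sources; the monthly returning-visitor
# ratios below are hypothetical placeholders (the study had 36 months per source).
from scipy.stats import f_oneway

direct   = [0.31, 0.29, 0.33, 0.30, 0.32, 0.28]
referral = [0.42, 0.45, 0.40, 0.44, 0.43, 0.46]
search   = [0.30, 0.28, 0.31, 0.29, 0.33, 0.30]

f_stat, p_value = f_oneway(direct, referral, search)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says at least one source differs; a post-hoc test (e.g.,
# Tukey HSD) would identify which pairs drive the difference.
```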
NASA Astrophysics Data System (ADS)
Paolini, P.; Forti, G.; Catalani, G.; Lucchetti, S.; Menghini, A.; Mirandola, A.; Pistacchio, S.; Porzia, U.; Roberti, M.
2016-04-01
High-quality survey models, realized with multiple low-cost methods and technologies, as a container for sharing cultural and archival heritage: this is the aim guiding our research, described here in its primary applications. The SAPIENZA building, a XVI-century masterpiece that was the first unified headquarters of the University in Rome, has served since 1936, when the University moved to its newly built campus, as the main venue of the State Archives. With the collaboration of a group of students of the Architecture Faculty, several integrated survey methods were successfully applied to the monument. The work began with a topographic survey, creating a reference on the ground and along the monument for the subsequent applications; a GNSS RTK survey followed, georeferencing points in the internal courtyard. Dense stereo-matching photogrammetry is nowadays an accepted method for generating accurate and scalable 3D survey models; it often substitutes for 3D laser scanning because of its low cost, and so it became our choice. Several 360° shots were planned to create panoramic views of the double portico from the courtyard, plus additional single shots of some lateral spans and of the pillars facing the court, as a single operation with a dual purpose: to create linked panotours with hotspots to web-linked databases, and 3D textured and georeferenced surface models, allowing study of the harmonic proportions of the classical architectural order. Free web GIS platforms were also used to load the work into Google Earth, and low-cost 3D prototypes of some representative parts were produced.
WebProtégé: a collaborative Web-based platform for editing biomedical ontologies.
Horridge, Matthew; Tudorache, Tania; Nuylas, Csongor; Vendetti, Jennifer; Noy, Natalya F; Musen, Mark A
2014-08-15
WebProtégé is an open-source Web application for editing OWL 2 ontologies. It contains several features to aid collaboration, including support for the discussion of issues, change notification and revision-based change tracking. WebProtégé also features a simple user interface, which is geared towards editing the kinds of class descriptions and annotations that are prevalent throughout biomedical ontologies. Moreover, it is possible to configure the user interface using views that are optimized for editing Open Biomedical Ontology (OBO) class descriptions and metadata. Some of these views are shown in the Supplementary Material and can be seen in WebProtégé itself by configuring the project as an OBO project. WebProtégé is freely available for use on the Web at http://webprotege.stanford.edu. It is implemented in Java and JavaScript using the OWL API and the Google Web Toolkit. All major browsers are supported. For users who do not wish to host their ontologies on the Stanford servers, WebProtégé is available as a Web app that can be run locally using a Servlet container such as Tomcat. Binaries, source code and documentation are available under an open-source license at http://protegewiki.stanford.edu/wiki/WebProtege. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
An overview of biomedical literature search on the World Wide Web in the third millennium.
Kumar, Prince; Goel, Roshni; Jain, Chandni; Kumar, Ashish; Parashar, Abhishek; Gond, Ajay Ratan
2012-06-01
Complete access to the existing pool of biomedical literature and the ability to "hit" upon the exact information of the relevant specialty are becoming essential elements of academic and clinical expertise. With the rapid expansion of the literature database, it is almost impossible to keep up to date with every innovation. Using the Internet, however, most people can freely access this literature at any time, from almost anywhere. This paper highlights the use of the Internet in obtaining valuable biomedical research information, which is mostly available from journals, databases, textbooks and e-journals in the form of web pages, text materials, images, and so on. The authors present an overview of web-based resources for biomedical researchers, providing information about Internet search engines (e.g., Google), web-based bibliographic databases (e.g., PubMed, IndMed) and how to use them, and other online biomedical resources that can assist clinicians in reaching well-informed clinical decisions.
Kaewpitoon, Soraya J; Rujirakul, Ratana; Joosiri, Apinya; Jantakate, Sirinun; Sangkudloa, Amnat; Kaewthani, Sarochinee; Chimplee, Kanokporn; Khemplila, Kritsakorn; Kaewpitoon, Natthawut
2016-01-01
Cholangiocarcinoma (CCA) is a serious problem in Thailand, particularly in the northeastern and northern regions. A database of the population at risk is required for monitoring, surveillance, home health care, and home visits. Therefore, this study aimed to develop a geographic information system (GIS) database and Google map of the population at risk of CCA in Mueang Yang district, Nakhon Ratchasima province, northeastern Thailand, from June to October 2015. Populations at risk were screened using the Korat CCA verbal screening test (KCVST). Software included Microsoft Excel, ArcGIS, and Google Maps. The secondary data included village points, sub-district boundaries, district boundaries, and hospital points in Mueang Yang district, used to create the spatial database. The populations at risk for CCA and opisthorchiasis were used to create an attribute database. Data were transferred to WGS84 UTM Zone 48. After the conversion, all of the data were imported into Google Earth using the online web page www.earthpoint.us. Some 222 of the 4,800 people at risk for CCA constituted a high-risk group. The geo-visual display is available at www.google.com/maps/d/u/0/edit?mid=zPxtcHv_iDLo.kvPpxl5mAs90&hl=th. The geo-visual display comprises 5 layers: layer 1, village locations and the number of people at risk for CCA; layer 2, sub-district health promotion hospitals in Mueang Yang district and the number of opisthorchiasis cases; layer 3, sub-districts and the number of people at risk for CCA; layer 4, the district hospital, the number of people at risk for CCA and the number of opisthorchiasis cases; and layer 5, the district, the number of people at risk for CCA and the number of opisthorchiasis cases. This GIS database and Google map production process is suitable for further monitoring, surveillance, and home health care for CCA sufferers.
Visualizing Moon Data and Imagery with Google Earth
NASA Astrophysics Data System (ADS)
Weiss-Malik, M.; Scharff, T.; Nefian, A.; Moratto, Z.; Kolb, E.; Lundy, M.; Hancher, M.; Gorelick, N.; Broxton, M.; Beyer, R. A.
2009-12-01
There is a vast store of planetary geospatial data that has been collected by NASA but is difficult to access and visualize. Virtual globes have revolutionized the way we visualize and understand the Earth, but other planetary bodies including Mars and the Moon can be visualized in similar ways. Extraterrestrial virtual globes are poised to revolutionize planetary science, bring an exciting new dimension to science education, and allow ordinary users to explore imagery being sent back to Earth by planetary science satellites. The original Google Moon Web site was a limited series of maps and Apollo content. The new Moon in Google Earth feature provides a similar virtual planet experience for the Moon as we have for the Earth and Mars. We incorporated existing Clementine and Lunar Orbiter imagery for the basemaps and a combination of Kaguya LALT topography and some terrain created from Apollo Metric and Panoramic images. We also have information about the Apollo landings and other robotic landers on the surface, as well as historic maps and charts, and guided tours. Some of the first-released LROC imagery of the Apollo landing sites has been put in place, and we look forward to incorporating more data as it is released from LRO, Chandrayaan-1, and Kaguya. These capabilities have obvious public outreach and education benefits, but the potential benefits of allowing planetary scientists to rapidly explore these large and varied data collections — in geological context and within a single user interface — are also becoming evident. Because anyone can produce additional KML content for use in Google Earth, scientists can customize the environment to their needs as well as publish their own processed data and results for others to use. Many scientists and organizations have begun to do this already, resulting in a useful and growing collection of planetary-science-oriented Google Earth layers. Figure: Screen shot of Moon in Google Earth, a freely downloadable application for visualizing Moon imagery and data.
iAnn: an event sharing platform for the life sciences.
Jimenez, Rafael C; Albar, Juan P; Bhak, Jong; Blatter, Marie-Claude; Blicher, Thomas; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; van Driel, Marc A; Dunn, Michael J; Fernandes, Pedro L; van Gelder, Celia W G; Hermjakob, Henning; Ioannidis, Vassilios; Judge, David P; Kahlem, Pascal; Korpelainen, Eija; Kraus, Hans-Joachim; Loveland, Jane; Mayer, Christine; McDowall, Jennifer; Moran, Federico; Mulder, Nicola; Nyronen, Tommi; Rother, Kristian; Salazar, Gustavo A; Schneider, Reinhard; Via, Allegra; Villaveces, Jose M; Yu, Ping; Schneider, Maria V; Attwood, Teresa K; Corpas, Manuel
2013-08-01
We present iAnn, an open source community-driven platform for dissemination of life science events, such as courses, conferences and workshops. iAnn allows automatic visualisation and integration of customised event reports. A central repository lies at the core of the platform: curators add submitted events, and these are subsequently accessed via web services. Thus, once an iAnn widget is incorporated into a website, it permanently shows timely relevant information as if it were native to the remote site. At the same time, announcements submitted to the repository are automatically disseminated to all portals that query the system. To facilitate the visualization of announcements, iAnn provides powerful filtering options and views, integrated in Google Maps and Google Calendar. All iAnn widgets are freely available. http://iann.pro/iannviewer manuel.corpas@tgac.ac.uk.
Treatment of transverse patellar fractures: a comparison between metallic and non-metallic implants.
Heusinkveld, Maarten H G; den Hamer, Anniek; Traa, Willeke A; Oomen, Pim J A; Maffulli, Nicola
2013-01-01
Several methods of transverse patellar fixation have been described. This study compares the clinical outcome and the occurrence of complications of various fixation methods. The databases PubMed, Web of Science, Science Direct, Google Scholar and Google were searched. A direct comparison between fixation techniques using mixed or non-metallic implants and metallic K-wire and tension band fixation shows no significant difference in clinical outcome between the two groups. Additionally, studies reporting novel operation techniques show good clinical results. Studies describing the treatment of patients using non-metallic or mixed implants are fewer than those using metallic fixation. A large variety of clinical scoring systems were used for assessing the results of treatment, which makes direct comparison difficult. More data on fracture treatment using non-metallic or mixed implants are needed to achieve a more balanced comparison.
Just Google It: An Approach on Word Frequencies Based on Online Search Result.
Moret-Tatay, Carmen; Gamermann, Daniel; Murphy, Michael; Kuzmičová, Anezka
2018-01-01
Word frequency is one of the most robust factors in the literature on word processing, based on the lexical corpus of a language. However, different sources might be used in order to determine the actual frequency of each word. Recent research has determined frequencies based on movie subtitles, Twitter, blog posts, or newspapers. In this paper, we examine frequencies determined from the World Wide Web. For this purpose, a Python script was developed to obtain the frequency of a word from online search results. These frequencies were then used to predict lexical decision times, in comparison with traditional frequency norms, in a lexical decision task. It was found that the Google frequencies predict reaction times comparably to the traditional frequencies. Still, the explained variance was higher for the traditional database.
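The following Python sketch illustrates the kind of analysis described: obtaining a web-based hit count per word, log-transforming it, and regressing lexical decision times on it. The hit counts and reaction times below are made-up placeholders, and get_hit_count() is a hypothetical stand-in for whatever search backend such a script would query.

```python
# Schematic sketch of the analysis idea, not the authors' script. The counts
# and reaction times are illustrative placeholders only.
import math
from scipy import stats

PLACEHOLDER_HITS = {"house": 9_100_000_000, "table": 7_300_000_000,
                    "quixotic": 12_000_000, "serendipity": 52_000_000}

def get_hit_count(word: str) -> int:
    # Hypothetical stand-in for an online search query returning a result count.
    return PLACEHOLDER_HITS[word]

words = list(PLACEHOLDER_HITS)
reaction_times_ms = [512.0, 530.0, 689.0, 655.0]  # illustrative lexical decision RTs

log_web_freq = [math.log10(get_hit_count(w) + 1) for w in words]

# Word-frequency effects predict faster responses for more frequent words,
# i.e. a negative slope of reaction time against log frequency.
slope, intercept, r, p, stderr = stats.linregress(log_web_freq, reaction_times_ms)
print(f"slope = {slope:.1f} ms per log10 unit, r = {r:.2f}, p = {p:.3f}")
```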
Alabama Public Scoping Meeting | NOAA Gulf Spill Restoration
Mobile, AL. Start Time: 6:30 p.m. Central Time. Description: As part of the public scoping process, the meeting location will open at 6:30 p.m. and the meeting will begin at 7:30 p.m. Location: The Battle House Renaissance Mobile Hotel & Spa, 26 North Royal Street, Mobile, AL 36602 (Google map of location).
Bootstrapping and Maintaining Trust in the Cloud
2016-12-01
The proliferation and popularity of infrastructure-as-a-service (IaaS) cloud computing services such as Amazon Web Services and Google Compute Engine means... IaaS trusted computing system: • Secure Bootstrapping – the system should enable the tenant to securely install an initial root secret into each cloud ...elastically instantiated and terminated. Prior cloud trusted computing solutions address a subset of these features, but none achieve all. Excalibur [31] sup...
A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories
NASA Technical Reports Server (NTRS)
Nguyen, Hung D.; Steele, Gynelle C.
2016-01-01
This report demonstrates a full-text-based search engine that works on any Web-based mobile application. The engine has the capability to search databases across multiple categories based on a user's queries and identify the most relevant or similar documents. The search results presented here were found using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.
The Library in Your Toolbar: You Can Make It Easy to Search Library Resources from Your Own Browser
ERIC Educational Resources Information Center
Webster, Peter
2007-01-01
For years, patrons have been able to access library services from home and in the library building, but in the world of Google, Yahoo, YouTube, MySpace, and Facebook, library web sites and catalogs are too often not the first place people go to look for information. The innovative use of toolbars could change this. Toolbars have been popular for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Sayan; Celestre, Rich; Feng, Jun
2016-01-02
The method of synchrotron X-ray protein footprinting (XF-MS) is used to determine protein conformational changes, folding, protein-protein and protein-ligand interactions, providing information which is often difficult to obtain using X-ray crystallography and other common structural biology methods [1-3; e.g., G. Xu and M.R. Chance, Chemical Reviews 107, 3514–3543 (2007); V.N. Bavro, Biochem Soc Trans 43, 983–994 (2015)]. The technique uses comparative in situ labeling of solvent-accessible side chains by highly reactive hydroxyl radicals (•OH) in buffered aqueous solution under different assay conditions. In regions where a protein is folded or binds a partner, these •OH susceptible sites are inaccessible to solvent, and therefore protected from labeling. The •OH are generated by the ionization of water using high-flux-density X-rays. High flux density is a key factor for XF-MS labeling because obtaining an adequate steady-state concentration of hydroxyl radical within a short irradiation time is necessary to minimize radiation-induced secondary damage and also to overcome various scavenging reactions that reduce the yield of labeled side chains.
PageRank and rank-reversal dependence on the damping factor
NASA Astrophysics Data System (ADS)
Son, S.-W.; Christensen, C.; Grassberger, P.; Paczuski, M.
2012-12-01
PageRank (PR) is an algorithm originally developed by Google to evaluate the importance of web pages. Considering how deeply rooted Google's PR algorithm is in gathering relevant information and in the success of modern businesses, the question of rank stability and choice of the damping factor (a parameter in the algorithm) is clearly important. We investigate PR as a function of the damping factor d on a network obtained from a domain of the World Wide Web, finding that rank reversal happens frequently over a broad range of PR (and of d). We use three different correlation measures, Pearson, Spearman, and Kendall, to study rank reversal as d changes, and we show that the correlation of PR vectors drops rapidly as d changes from its frequently cited value, d0=0.85. Rank reversal is also observed by measuring the Spearman and Kendall rank correlation, which evaluate relative ranks rather than absolute PR. Rank reversal happens not only in directed networks containing rank sinks but also in a single strongly connected component, which by definition does not contain any sinks. We relate rank reversals to rank pockets and bottlenecks in the directed network structure. For the network studied, the relative rank is more stable by our measures around d=0.65 than at d=d0.
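A minimal sketch of the kind of comparison reported, using networkx and scipy on a placeholder random directed graph rather than the web-domain network studied: compute PageRank at d = 0.85 and d = 0.65 and quantify rank reversal with the three correlation measures.

```python
# Compare PageRank vectors at two damping factors and measure how strongly
# the rankings agree. The graph below is a stand-in, not the studied network.
import networkx as nx
from scipy import stats

G = nx.gnp_random_graph(500, 0.02, seed=1, directed=True)  # placeholder network

pr_085 = nx.pagerank(G, alpha=0.85)   # frequently cited value d0 = 0.85
pr_065 = nx.pagerank(G, alpha=0.65)   # value reported as more rank-stable

nodes = list(G.nodes())
v0 = [pr_085[n] for n in nodes]
v1 = [pr_065[n] for n in nodes]

print("Pearson :", stats.pearsonr(v0, v1)[0])          # absolute PR values
print("Spearman:", stats.spearmanr(v0, v1).correlation) # relative ranks
print("Kendall :", stats.kendalltau(v0, v1).correlation)
```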
Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James
2016-01-01
Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), in tools designed for such experts. As requests for new terms are made, the need for expert ontologists represents a bottleneck in the development process. Furthermore, the ability to rigorously enforce ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite for supporting ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns that get transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server and a client application has been developed for Google Sheets that allows templates to be loaded, populated and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content helps reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used as part of the development of several ontologies at the European Bioinformatics Institute.
Be-safe travel, a web-based geographic application to explore safe-route in an area
NASA Astrophysics Data System (ADS)
Utamima, Amalia; Djunaidy, Arif
2017-08-01
In large cities in developing countries, various forms of criminality are often found. For instance, the most prominent crimes in Surabaya, Indonesia are the "3C" crimes: theft with violence (curas), theft by weighting (curat), and motor vehicle theft (curanmor). 3C cases most often occur on highways and in residential areas. Therefore, newcomers to an area should be aware of these kinds of crimes. Route planning systems such as Google Maps consider only the shortest distance when calculating the optimal route. The selection of the optimal path in this study considers not only the shortest distance but also another factor, namely the security level. This research addresses the need for an application that recommends the safest roads for vehicle passengers driving in an area. We propose Be-Safe Travel, a web-based application using the Google API that can be accessed by people who drive in an area but lack knowledge of which routes are safe from crime. Be-Safe Travel is useful not only for newcomers, but also for couriers delivering valuable goods who want to take the safest streets.
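The core routing idea can be sketched as a weighted shortest-path problem in which each road segment's cost combines its length with a security (crime-risk) score. The graph, scores, and weighting factor below are assumptions for illustration, not the Be-Safe Travel implementation.

```python
# Choose a route by combining distance with a per-segment crime-risk score
# instead of distance alone. Segment data and the penalty factor are made up.
import networkx as nx

G = nx.Graph()
# (from, to, length in km, crime_risk score: 0 = safe .. 1 = high risk)
segments = [
    ("A", "B", 1.2, 0.8),
    ("B", "D", 1.0, 0.7),
    ("A", "C", 1.6, 0.1),
    ("C", "D", 1.5, 0.2),
]
RISK_PENALTY_KM = 3.0  # how many extra "virtual" km one unit of risk costs

for u, v, length, risk in segments:
    G.add_edge(u, v, length=length, cost=length + RISK_PENALTY_KM * risk)

shortest = nx.shortest_path(G, "A", "D", weight="length")
safest = nx.shortest_path(G, "A", "D", weight="cost")
print("Shortest-only route:   ", shortest)  # A-B-D (2.2 km, but high risk)
print("Distance+safety route: ", safest)    # A-C-D (3.1 km, low risk)
```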
Zarifmahmoudi, Leili; Kianifar, Hamid Reza; Sadeghi, Ramin
2013-01-01
Objective(s): Citation tracking is an important method to analyze the scientific impact of journal articles and can be done through Scopus (SC), Google Scholar (GS), or ISI Web of Knowledge (WOS). In the current study, we analyzed the citations to 2011-2012 articles of the Iranian Journal of Basic Medical Sciences (IJBMS) in these three resources. Material and Methods: The relevant data were retrieved from the SC, GS, and WOS official websites. The total number of citations, their overlap, and the unique citations of these three resources were evaluated. Results: WOS and SC covered 100% and GS covered 97% of the IJBMS items. In total, 37 articles were cited at least once in at least one of the studied resources. The total numbers of citations were 20, 30, and 59 in WOS, SC, and GS respectively. Forty citations in GS, 6 citations in SC, and 2 citations in WOS were unique. Conclusion: Every scientific resource has its own inaccuracies in providing citation analysis information. Citation analysis studies should be done each year to correct any inaccuracy as soon as possible. IJBMS has gained considerable scientific attention from a wide range of high-impact journals, and through citation tracking this visibility can be traced more thoroughly. PMID:24379959
H-indices of Academic Pediatricians of Mashhad University of Medical Sciences.
Hamidreza, Kianifar; Javad, Akhoondian; Ramin, Sadeghi; Leili, Zarifmahmoudi
2013-12-01
Web of Science, Scopus, and Google Scholar are three major sources which provide h-indices for individual researchers. In this study we aimed to compare the h-indices of the academic pediatricians of Mashhad University of Medical Sciences obtained from the above-mentioned sources. Academic pediatricians who had at least 5 ISI-indexed articles entered the study. The information required for evaluating the h-indices of the included researchers was retrieved from the official websites of Web of Science (WOS), Scopus, and Google Scholar (GS). Correlations between the h-indices obtained from these databases were analyzed using the Spearman correlation coefficient. The rank of each researcher according to each database's h-index was also evaluated. In total, 16 pediatricians entered the study. The computed h-indices for individual authors differed across databases. Correlations between the obtained h-indices were: 0.439 (ISI and Scopus), 0.488 (ISI and GS), and 0.810 (Scopus and GS). Despite differences between the evaluated h-indices in each database for individual authors, the rankings according to these h-indices were almost similar. Although h-indices supplied by WOS, Scopus, and GS can be used interchangeably, their differences should be acknowledged. Setting up "ResearcherID" in WOS and a "User profile" in GS, and giving regular feedback to Scopus, can increase the validity of the calculated h-indices.
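For reference, the two calculations underlying such comparisons are straightforward; the sketch below computes an h-index from per-article citation counts and the Spearman correlation between the h-indices reported by two sources. All numbers are illustrative placeholders.

```python
# h-index from citation counts, plus rank correlation between two databases.
from scipy import stats

def h_index(citations):
    """Largest h such that h articles have at least h citations each."""
    cits = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cits, start=1) if c >= i)

print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4

# Hypothetical h-indices of the same authors in two sources (e.g. Scopus vs GS)
h_scopus = [3, 5, 2, 7, 4]
h_gs     = [4, 6, 2, 9, 5]
print(stats.spearmanr(h_scopus, h_gs).correlation)
```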
A Knowledge Portal and Collaboration Environment for the Earth Sciences
NASA Astrophysics Data System (ADS)
D'Agnese, F. A.
2008-12-01
Earth Knowledge is developing a web-based 'Knowledge Portal and Collaboration Environment' that will serve as the information-technology-based foundation of a modular Internet-based Earth-Systems Monitoring, Analysis, and Management Tool. This 'Knowledge Portal' is essentially a 'mash-up' of web-based and client-based tools and services that support on-line collaboration, community discussion, and broad public dissemination of earth and environmental science information in a wide-area distributed network. In contrast to specialized knowledge-management or geographic-information systems developed for long-term and incremental scientific analysis, this system will exploit familiar software tools using industry standard protocols, formats, and APIs to discover, process, fuse, and visualize existing environmental datasets using Google Earth and Google Maps. An early form of these tools and services is being used by Earth Knowledge to facilitate the investigations and conversations of scientists, resource managers, and citizen-stakeholders addressing water resource sustainability issues in the Great Basin region of the desert southwestern United States. These ongoing projects will serve as use cases for the further development of this information-technology infrastructure. This 'Knowledge Portal' will accelerate the deployment of Earth-system data and information into an operational knowledge management system that may be used by decision-makers concerned with stewardship of water resources in the American Desert Southwest.
Tanabe, Kouichi; Fujiwara, Kaho; Ogura, Hana; Yasuda, Hatsuna; Goto, Nobuyuki
2018-01-01
Background Patients and their families are able to obtain information about palliative care from websites easily nowadays. However, there are concerns on the accuracy of information on the Web and how up to date it is. Objective The objective of this study was to elucidate problematic points of medical information about palliative care obtained from websites, and to compare the quality of the information between Japanese and US websites. Methods We searched Google Japan and Google USA for websites relating to palliative care. We then evaluated the top 50 websites from each search using the DISCERN and LIDA instruments. Results We found that Japanese websites were given a lower evaluation of reliability than US websites. In 3 LIDA instrument subcategories—engagability (P<.001), currency (P=.001), and content production procedure (P<.001)—US websites scored significantly higher and had large effect sizes. Conclusions Our results suggest that Japanese websites have problems with the frequency with which they are updated, their update procedures and policies, and the scrutiny process the evidence must undergo. Additionally, there was a weak association between search ranking and reliability, and simultaneously we found that reliability could not be assessed by search ranking alone. PMID:29615388
NASA SensorWeb and OGC Standards for Disaster Management
NASA Technical Reports Server (NTRS)
Mandl, Dan
2010-01-01
I. Goal: Enable user to cost-effectively find and create customized data products to help manage disasters; a) On-demand; b) Low cost and non-specialized tools such as Google Earth and browsers; c) Access via open network but with sufficient security. II. Use standards to interface various sensors and resultant data: a) Wrap sensors in Open Geospatial Consortium (OGC) standards; b) Wrap data processing algorithms and servers with OGC standards c) Use standardized workflows to orchestrate and script the creation of these data; products. III. Target Web 2.0 mass market: a) Make it simple and easy to use; b) Leverage new capabilities and tools that are emerging; c) Improve speed and responsiveness.
Integration of Bim, Web Maps and Iot for Supporting Comfort Analysis
NASA Astrophysics Data System (ADS)
Gunduz, M.; Isikdag, U.; Basaraner, M.
2017-11-01
The use of the Internet is expanding and the technological capabilities of electronic devices are evolving. Today, Internet of Things (IoT) solutions can be developed that were never even imaginable before. In this paper, a case study is presented on the joint use of Building Information Model (BIM), Geographical Information Systems (GIS) and Internet of Things (IoT) technologies. It is part of an ongoing study that intends to overcome some problems concerning the management of complex facilities. In the study, a BIM has been converted and displayed in 2D on Google Maps, and information from various sensors has been represented on the web with geographic coordinates in real time.
Arif, Nadia; Al-Jefri, Majed; Bizzi, Isabella Harb; Perano, Gianni Boitano; Goldman, Michel; Haq, Inam; Chua, Kee Leng; Mengozzi, Manuela; Neunez, Marie; Smith, Helen; Ghezzi, Pietro
2018-01-01
The 1998 Lancet paper by Wakefield et al., despite subsequent retraction and evidence indicating no causal link between vaccinations and autism, triggered significant parental concern. The aim of this study was to analyze the online information available on this topic. Using localized versions of Google, we searched “autism vaccine” in English, French, Italian, Portuguese, Mandarin, and Arabic and analyzed 200 websites for each search engine result page (SERP). A common feature was the newsworthiness of the topic, with news outlets representing 25–50% of the SERP, followed by unaffiliated websites (blogs, social media) that represented 27–41% and included most of the vaccine-negative websites. Between 12 and 24% of websites had a negative stance on vaccines, while most websites were pro-vaccine (43–70%). However, their ranking by Google varied. While in Google.com, the first vaccine-negative website was the 43rd in the SERP, there was one vaccine-negative webpage in the top 10 websites in both the British and Australian localized versions and in French and two in Italian, Portuguese, and Mandarin, suggesting that the information quality algorithm used by Google may work better in English. Many webpages mentioned celebrities in the context of the link between vaccines and autism, with Donald Trump most frequently. Few websites (1–5%) promoted complementary and alternative medicine (CAM) but 50–100% of these were also vaccine-negative suggesting that CAM users are more exposed to vaccine-negative information. This analysis highlights the need for monitoring the web for information impacting on vaccine uptake. PMID:29922286
Google Scholar versus PubMed in locating primary literature to answer drug-related questions.
Freeman, Maisha Kelly; Lauderdale, Stacy A; Kendrach, Michael G; Woolley, Thomas W
2009-03-01
Google Scholar linked more visitors to biomedical journal Web sites than did PubMed after the database's initial release; however, its usefulness in locating primary literature articles is unknown. To assess in both databases the availability of primary literature target articles; total number of citations; availability of free, full-text journal articles; and number of primary literature target articles retrieved by year within the first 100 citations of the search results. Drug information question reviews published in The Annals of Pharmacotherapy Drug Information Rounds column served as targets to determine the retrieval ability of Google Scholar and PubMed searches. Reviews printed in this column from January 2006 to June 2007 were eligible for study inclusion. Articles were chosen if at least 2 key words of the printed article were included in the PubMed Medical Subject Heading (MeSH) database, and these terms were searched in both databases. Twenty-two of 33 (67%) eligible Drug Information Rounds articles met the inclusion criteria. The median number of primary literature articles used in each of these articles was 6.5 (IQR 4.8, 8.3; mean +/- SD 8 +/- 5.4). No significant differences were found for the mean number of target primary literature articles located within the first 100 citations in Google Scholar and PubMed searches (5.1 +/- 3.9 vs 5.3 +/- 3.3; p = 0.868). Google Scholar searches located more total results than PubMed (2211.6 +/- 3999.5 vs 44.2 +/- 47.4; p = 0.019). The availability of free, full-text journal articles per Drug Information Rounds article was similar between the databases (1.8 +/- 1.7 vs 2.3 +/- 1.7; p = 0.325). More primary literature articles published prior to 2000 were located with Google Scholar searches compared with PubMed (62.8% vs 34.9%; p = 0.017); however, no statistically significant differences between the databases were observed for articles published after 2000 (66.4 vs 77.1; p = 0.074). No significant differences were identified in the number of target primary literature articles located between databases. PubMed searches yielded fewer total citations than Google Scholar results; however, PubMed appears to be more specific than Google Scholar for locating relevant primary literature articles.
WebProtégé: A Collaborative Ontology Editor and Knowledge Acquisition Tool for the Web
Tudorache, Tania; Nyulas, Csongor; Noy, Natalya F.; Musen, Mark A.
2012-01-01
In this paper, we present WebProtégé—a lightweight ontology editor and knowledge acquisition tool for the Web. With the wide adoption of Web 2.0 platforms and the gradual adoption of ontologies and Semantic Web technologies in the real world, we need ontology-development tools that are better suited for the novel ways of interacting, constructing and consuming knowledge. Users today take Web-based content creation and online collaboration for granted. WebProtégé integrates these features as part of the ontology development process itself. We tried to lower the entry barrier to ontology development by providing a tool that is accessible from any Web browser, has extensive support for collaboration, and a highly customizable and pluggable user interface that can be adapted to any level of user expertise. The declarative user interface enabled us to create custom knowledge-acquisition forms tailored for domain experts. We built WebProtégé using the existing Protégé infrastructure, which supports collaboration on the back end side, and the Google Web Toolkit for the front end. The generic and extensible infrastructure allowed us to easily deploy WebProtégé in production settings for several projects. We present the main features of WebProtégé and its architecture and describe briefly some of its uses for real-world projects. WebProtégé is free and open source. An online demo is available at http://webprotege.stanford.edu. PMID:23807872
ERIC Educational Resources Information Center
Dehinbo, Johnson
2011-01-01
The widespread use of the Internet and the World Wide Web has led to the availability of many platforms for developing dynamic Web applications, and to the problem of choosing the most appropriate platform that will be easy to use for undergraduate students of web applications development in tertiary institutions. Students beginning to learn web…
The Implications of Well-Formedness on Web-Based Educational Resources.
ERIC Educational Resources Information Center
Mohler, James L.
Within all institutions, Web developers are beginning to utilize technologies that make sites more than static information resources. Technologies such as XML (Extensible Markup Language) and XSL (Extensible Stylesheet Language) are key technologies that promise to extend the Web beyond the "information storehouse" paradigm and provide…
Mahroum, Naim; Bragazzi, Nicola Luigi; Brigo, Francesco; Waknin, Roy; Sharif, Kassem; Mahagna, Hussein; Amital, Howard; Watad, Abdulla
2018-04-01
Human immunodeficiency virus vaccination and pre-exposure prophylaxis represent two different emerging preventive tools. Google Trends was used to assess the public interest toward these tools in terms of digital activities. Worldwide web searches concerning the human immunodeficiency virus vaccine represented 0.34 percent, 0.03 percent, and 46.97 percent of human immunodeficiency virus, acquired immune deficiency syndrome, and human immunodeficiency virus/acquired immune deficiency syndrome treatment-related Google Trends queries, respectively. Concerning temporal trends, digital activities were shown to increase from 0 percent as of 1 January 2004 to 46 percent as of 8 October 2017, with two spikes observed in May and July 2012, coinciding with the US Food and Drug Administration approval. Bursts in search number and volume were recorded as human immunodeficiency virus vaccine trials emerged. This search topic has decreased in the past decade in parallel to the increase in Truvada-related topics. Concentrated searches were noticed among African countries with high human immunodeficiency virus/acquired immune deficiency syndrome prevalence. Stakeholders should take advantage of public interest, especially in preventive medicine in high disease burden countries.
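Google Trends exposes no official API, so analyses like this are typically run through its web interface or an unofficial client. The sketch below uses pytrends, a third-party Python client, with illustrative query terms and the study's timeframe; it is not the authors' actual procedure.

```python
# Pull relative search interest over time and by country for comparable terms.
# pytrends is an unofficial client; terms and timeframe are illustrative.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
terms = ["HIV vaccine", "PrEP", "Truvada"]
pytrends.build_payload(terms, timeframe="2004-01-01 2017-10-08")

interest = pytrends.interest_over_time()     # relative volumes, scaled 0-100
by_country = pytrends.interest_by_region()   # geographic concentration of searches

print(interest.tail())
print(by_country.sort_values("PrEP", ascending=False).head())
```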
Gunnell, David; Derges, Jane; Chang, Shu-Sen; Biddle, Lucy
2015-01-01
Helium gas suicides have increased in England and Wales; easy-to-access descriptions of this method on the Internet may have contributed to this rise. To investigate the availability of information on using helium as a method of suicide and trends in searching about this method on the Internet. We analyzed trends in (a) Google searching (2004-2014) and (b) hits on a Wikipedia article describing helium as a method of suicide (2013-2014). We also investigated the extent to which helium was described as a method of suicide on web pages and discussion forums identified via Google. We found no evidence of rises in Internet searching about suicide using helium. News stories about helium suicides were associated with increased search activity. The Wikipedia article may have been temporarily altered to increase awareness of suicide using helium around the time of a celebrity suicide. Approximately one third of the links retrieved using Google searches for suicide methods mentioned helium. Information about helium as a suicide method is readily available on the Internet; the Wikipedia article describing its use was highly accessed following celebrity suicides. Availability of online information about this method may contribute to rises in helium suicides.
Google Hangouts: Leveraging Social Media to Reach the Education Community
NASA Astrophysics Data System (ADS)
Eisenhamer, Bonnie; Summers, Frank; McCallister, Dan; Ryer, Holly
2015-01-01
Research shows that educator professional development is most effective when it is sustained and/or when a follow-on component is included to support the learning process. In order to create more comprehensive learning experiences for our workshop participants, the education team at the Space Telescope Science Institute is working collaboratively with scientific staff and other experts to create a follow-on component for our professional development program. The new component utilizes video conferencing platforms, such as Google's Hangouts On Air, to provide educators with content updates and extended learning opportunities in between in-person professional development experiences. The goal is to enhance our professional development program in a cost-effective way while reaching a greater cross-section of educators. Video broadcasts go live on Google+, YouTube, and our website - thus providing access to any user with a web browser. Additionally, the broadcasts are automatically recorded and archived for future viewing on our YouTube channel. This provides educators with anywhere, anytime training that best suits their needs and schedules. This poster will highlight our new Hangouts for educators as well as our cross-departmental efforts to expand the reach of our Hubble Hangouts for the public through a targeted recruitment strategy.
Aslam, Romaan; Gibbons, Daniel; Ghezzi, Pietro
2017-01-01
The idea that antioxidant supplements can prevent or cure many diseases is extremely popular. To study the public understanding of antioxidants on the Web, we searched the term "antioxidants" in http://Google.com and analyzed 200 websites in terms of typology (news, commercial, professional, health portal, nonprofit or government organization, scientific journals), disease or biological process mentioned (aging, immunity, neurological disease, diabetes, arthritis, etc.), and stance toward antioxidants, whether neutral, positive, or negative. Commercial and news websites were prevalent (over half of the total) but not in the top 10 returned by Google, where the most frequent were health portals, government, and professional websites. Among the diseases mentioned, cancer was the first, followed by vascular and eye diseases. A negative stance toward supplements was prevalent in the whole search, and this was even more evident for cancer. Information on aging or immunity had the largest proportion of pro-supplement and commercial websites. This study shows that some diseases are highly associated with antioxidants on the Internet and that information on antioxidants in aging and immunity is more likely to describe the positive effects of antioxidant supplements. PMID:28484695
Biographer: web-based editing and rendering of SBGN compliant biochemical networks.
Krause, Falko; Schulz, Marvin; Ripkens, Ben; Flöttmann, Max; Krantz, Marcus; Klipp, Edda; Handorf, Thomas
2013-06-01
The rapid accumulation of knowledge in the field of Systems Biology during the past years requires advanced, but simple-to-use, methods for the visualization of information in a structured and easily comprehensible manner. We have developed biographer, a web-based renderer and editor for reaction networks, which can be integrated as a library into tools dealing with network-related information. Our software enables visualizations based on the emerging standard Systems Biology Graphical Notation. It is able to import networks encoded in various formats such as SBML, SBGN-ML and jSBGN, a custom lightweight exchange format. The core package is implemented in HTML5, CSS and JavaScript and can be used within any kind of web-based project. It features interactive graph-editing tools and automatic graph layout algorithms. In addition, we provide a standalone graph editor and a web server, which contains enhanced features like web services for the import and export of models and visualizations in different formats. The biographer tool can be used at and downloaded from the web page http://biographer.biologie.hu-berlin.de/. The different software packages, including a server-independent version as well as a web server for Windows and Linux based systems, are available at http://code.google.com/p/biographer/ under the open-source license LGPL
McPherson, Ann; Macfarlane, Aidan
2007-08-01
The Internet is an exciting resource for providing immediately available, evidence-based, health information for young people in an age-appropriate form on a 24 hours/day, 7 days/week basis. www.teenagehealthfreak.org is a United Kingdom-based Web site designed to take advantage of this. The content of the site, which is the leading teenage health Web site on a Google search, contains both the diary of a hypochondriac 15-year-old boy and a virtual doctor's surgery. It also allows young people to e-mail health-related questions and receive relevant answers from a health expert. Analysis of the content of these e-mails indicates the unmet health needs and concerns of young people. Future developments of the site include linking to the site www.youthhealthtalk.org, a Web site that contains videotaped interviews with young people who have a variety of other health concerns.
New Quality Metrics for Web Search Results
NASA Astrophysics Data System (ADS)
Metaxas, Panagiotis Takis; Ivanova, Lilia; Mustafaraj, Eni
Web search results enjoy an increasing importance in our daily lives. But what can be said about their quality, especially when querying a controversial issue? The traditional information retrieval metrics of precision and recall do not provide much insight in the case of web information retrieval. In this paper we examine new ways of evaluating quality in search results: coverage and independence. We give examples on how these new metrics can be calculated and what their values reveal regarding the two major search engines, Google and Yahoo. We have found evidence of low coverage for commercial and medical controversial queries, and high coverage for a political query that is highly contested. Given the fact that search engines are unwilling to tune their search results manually, except in a few cases that have become the source of bad publicity, low coverage and independence reveal the efforts of dedicated groups to manipulate the search results.
Visualize Your Data with Google Fusion Tables
NASA Astrophysics Data System (ADS)
Brisbin, K. E.
2011-12-01
Google Fusion Tables is a modern data management platform that makes it easy to host, manage, collaborate on, visualize, and publish tabular data online. Fusion Tables allows users to upload their own data to the Google cloud, which they can then use to create compelling and interactive visualizations with the data. Users can view data on a Google Map, plot data in a line chart, or display data along a timeline. Users can share these visualizations with others to explore and discover interesting trends about various types of data, including scientific data such as invasive species or global trends in disease. Fusion Tables has been used by many organizations to visualize a variety of scientific data. One example is the California Redistricting Map created by the LA Times: http://goo.gl/gwZt5 The Pacific Institute and Circle of Blue have used Fusion Tables to map the quality of water around the world: http://goo.gl/T4SX8 The World Resources Institute mapped the threat level of coral reefs using Fusion Tables: http://goo.gl/cdqe8 What attendees will learn in this session: This session will cover all the steps necessary to use Fusion Tables to create a variety of interactive visualizations. Attendees will begin by learning about the various options for uploading data into Fusion Tables, including Shapefile, KML file, and CSV file import. Attendees will then learn how to use Fusion Tables to manage their data by merging it with other data and controlling the permissions of the data. Finally, the session will cover how to create a customized visualization from the data, and share that visualization with others using both Fusion Tables and the Google Maps API.
Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P
2018-02-20
The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour time or rush hour time. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03), and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
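A minimal sketch of the underlying query, using the googlemaps Python client rather than the custom web application described in the study: request traffic-aware travel times and distances from a (placeholder) patient address to each candidate IR site and pick the site with the shortest estimated travel time. The duration_in_traffic field is returned only when a departure_time is supplied and traffic data are available.

```python
# Compare candidate sites by traffic-aware travel time (ETT) and distance (ETD).
# Addresses and the API key are placeholders, not the study's data.
from datetime import datetime
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

patient_home = "123 Example St, Anytown, NY"
ir_sites = ["IR Site A, New York, NY", "IR Site B, Harrison, NY"]

resp = gmaps.distance_matrix(
    origins=[patient_home],
    destinations=ir_sites,
    mode="driving",
    departure_time=datetime.now(),  # enables traffic-aware durations
)

best = None
for site, element in zip(ir_sites, resp["rows"][0]["elements"]):
    ett_min = element["duration_in_traffic"]["value"] / 60   # estimated travel time
    etd_mi = element["distance"]["value"] / 1609.34          # estimated travel distance
    print(f"{site}: {ett_min:.0f} min, {etd_mi:.1f} mi")
    if best is None or ett_min < best[1]:
        best = (site, ett_min)

print("Schedule at:", best[0])
```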
NASA Astrophysics Data System (ADS)
Girvetz, E. H.; Zganjar, C.; Raber, G. T.; Hoekstra, J.; Lawler, J. J.; Kareiva, P.
2008-12-01
Now that there is overwhelming evidence of global climate change, scientists, managers and planners (i.e. practitioners) need to assess the potential impacts of climate change on particular ecological systems, within specific geographic areas, and at spatial scales they care about, in order to make better land management, planning, and policy decisions. Unfortunately, this application of climate science to real world decisions and planning has proceeded too slowly because we lack tools for translating cutting-edge climate science and climate-model outputs into something managers and planners can work with at local or regional scales (CCSP 2008). To help increase the accessibility of climate information, we have developed a freely-available, easy-to-use, web-based climate-change analysis toolbox, called ClimateWizard, for assessing how climate has and is projected to change at specific geographic locations throughout the world. The ClimateWizard uses geographic information systems (GIS), web services (SOAP/XML), statistical analysis platforms (e.g. R-project), and web-based mapping services (e.g. Google Earth/Maps, KML/GML) to provide a variety of different analyses (e.g. trends and departures) and outputs (e.g. maps, graphs, tables, GIS layers). Because ClimateWizard analyzes large climate datasets stored remotely on powerful computers, users of the tool do not need to have fast computers or expensive software, but simply need access to the internet. The analysis results are then provided to users in a Google Maps webpage tailored to the specific climate-change question being asked. The ClimateWizard is not a static product, but rather a framework to be built upon and modified to suit the purposes of specific scientific, management, and policy questions. For example, it can be expanded to include bioclimatic variables (e.g. evapotranspiration) and marine data (e.g. sea surface temperature), as well as improved future climate projections, and climate-change impact analyses involving hydrology, vegetation, wildfire, disease, and food security. By harnessing the power of computer and web-based technologies, the ClimateWizard puts local, regional, and global climate-change analyses in the hands of a wider array of managers, planners, and scientists.
The quality of mental health information commonly searched for on the Internet.
Grohol, John M; Slimowicz, Joseph; Granda, Rebecca
2014-04-01
Previous research has reviewed the quality of online information related to specific mental disorders. Yet, no comprehensive study has been conducted on the overall quality of mental health information searched for online. This study examined the first 20 search results of two popular search engines, Google and Bing, for 11 common mental health terms. They were analyzed using the DISCERN instrument, an adaptation of the Depression Website Content Checklist (ADWCC), the Flesch Reading Ease and Flesch-Kincaid Grade Level readability measures, HONCode badge display, and commercial status, resulting in an analysis of 440 web pages. The quality of Web site results varied based on the type of disorder examined, with higher quality Web sites found for schizophrenia, bipolar disorder, and dysthymia, and lower quality ratings for phobia, anxiety, and panic disorder Web sites. Of the total Web sites analyzed, 67.5% had good or better quality content. Nearly one-third of the search results produced Web sites from three entities: WebMD, Wikipedia, and the Mayo Clinic. The mean Flesch Reading Ease score was 41.21, and the mean Flesch-Kincaid Grade Level score was 11.68. The presence of the HONCode badge and noncommercial status was found to have a small correlation with Web site quality, and Web sites displaying the HONCode badge and commercial sites had lower readability scores. Popular search engines appear to offer generally reliable results pointing to mostly good or better quality mental health Web sites. However, additional work is needed to make these sites more readable.
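Both readability measures used here are simple formulas over word, sentence, and syllable counts. The sketch below implements them with a crude syllable counter for illustration; published analyses typically rely on a dedicated library or tool rather than this approximation.

```python
# Flesch Reading Ease and Flesch-Kincaid Grade Level from raw text, using a
# rough vowel-group heuristic for syllables (illustrative only).
import re

def counts(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(w):
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    return len(words), sentences, sum(syllables(w) for w in words)

def flesch_reading_ease(text):
    w, s, syl = counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def flesch_kincaid_grade(text):
    w, s, syl = counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

sample = "Anxiety disorders are common. Treatment usually combines therapy and medication."
print(flesch_reading_ease(sample), flesch_kincaid_grade(sample))
```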
Takada, Kenta
2012-01-01
Seasonal changes in the popularity of fireflies [usually Genji-fireflies (Luciola cruciata Motschulsky) in Japan] and Japanese rhinoceros beetles [Allomyrina dichotoma (Linne)] were investigated to examine whether contemporary Japanese are interested in visible emergence of these insects as seasonal events. The popularity of fireflies and Japanese rhinoceros beetles was assessed by the Google search volume of their Japanese names, “Hotaru” and “Kabuto-mushi” in Japanese Katakana script using Google Trends. The search volume index for fireflies and Japanese rhinoceros beetles was distributed across seasons with a clear peak in only particular times of each year from 2004 to 2011. In addition, the seasonal peak of popularity for fireflies occurred at the beginning of June, whereas that for Japanese rhinoceros beetles occurred from the middle of July to the beginning of August. Thus seasonal peak of each species coincided with the peak period of the emergence of each adult stage. These findings indicated that the Japanese are interested in these insects primarily during the time when the two species are most visibly abundant. Although untested, this could suggest that fireflies and Japanese rhinoceros beetles are perceived by the general public as indicators or symbols of summer in Japan. PMID:26466535
An insight into the deep web; why it matters for addiction psychiatry?
Orsolini, Laura; Papanti, Duccio; Corkery, John; Schifano, Fabrizio
2017-05-01
Nowadays, the web is rapidly spreading, playing a significant role in the marketing, sale, and distribution of "quasi-legal" drugs, hence facilitating continuous changes in drug scenarios. The easily renewable and anarchic online drug market is gradually transforming the drug market itself from a "street" to a "virtual" one, with customers being able to shop with relative anonymity in a 24-hr marketplace. The hidden "deep web" is facilitating this phenomenon. The paper aims at providing an overview for mental health and addiction professionals of current knowledge about pro-drug activities on the deep web. A nonparticipant netnographic qualitative study of a list of pro-drug websites (blogs, fora, and drug marketplaces) located on the surface web was carried out. A systematic Internet search was conducted on Duckduckgo® and Google® using the following keywords: "drugs" or "legal highs" or "Novel Psychoactive Substances" or "NPS" combined with the word "deep web". Four themes (e.g., "How to access the deep web"; "Darknet and the online drug trading sites"; "Grams: search engine for the deep web"; and "Cryptocurrencies") and 14 categories were generated and discussed. This paper represents a comprehensive and systematic guide to the deep web, specifically focusing on practical information on online drug marketplaces, useful for addiction professionals. Copyright © 2017 John Wiley & Sons, Ltd.
Web Archiving at the Library of Congress
ERIC Educational Resources Information Center
Grotke, Abbie
2011-01-01
Recent years have seen an explosion of the number of institutions involved in or beginning to think about web archiving. Many National Digital Stewardship Alliance (NDSA) members, as well as other universities, historical societies, and state and local governments, have recognized the need for and importance of preserving a variety of web content…
An open annotation ontology for science on web 3.0
2011-01-01
Background There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Methods Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. Results This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables “stand-off” or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO’s Google Code page: http://code.google.com/p/annotation-ontology/. Conclusions The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors. PMID:21624159
An open annotation ontology for science on web 3.0.
Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim
2011-05-17
There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/. The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors.
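The central "stand-off annotation" idea, metadata kept outside the document that points to a position in it and to an ontology term, can be sketched in RDF with rdflib. The namespace and property names below are illustrative only and are not the actual AO vocabulary; see http://purl.org/ao/ for the real model.

```python
# Stand-off annotation: a separate RDF resource that records where in a web
# document a phrase occurs and which ontology term it denotes. The namespace,
# property names, and term URI are placeholders, not the AO vocabulary itself.
from rdflib import Graph, Namespace, URIRef, Literal

EX = Namespace("http://example.org/annotation#")  # illustrative namespace

g = Graph()
ann = URIRef("http://example.org/annotations/1")
g.add((ann, EX.onDocument, URIRef("http://example.org/papers/12345")))
g.add((ann, EX.exactText, Literal("tumor necrosis factor")))
g.add((ann, EX.startOffset, Literal(1042)))
g.add((ann, EX.hasTopic, URIRef("http://purl.obolibrary.org/obo/EXAMPLE_0000001")))

print(g.serialize(format="turtle"))
```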
NASA Astrophysics Data System (ADS)
Rosen, Charles; Siegel, Edward Carl-Ludwig; Feynman, Richard; Wunderman, Irwin; Smith, Adolph; Marinov, Vesco; Goldman, Jacob; Brine, Sergey; Poge, Larry; Schmidt, Erich; Young, Frederic; Goates-Bulmer, William-Steven; Lewis-Tsurakov-Altshuler, Thomas-Valerie-Genot; Ibm/Exxon Collaboration; Google/Uw Collaboration; Microsoft/Amazon Collaboration; Oracle/Sun Collaboration; Ostp/Dod/Dia/Nsa/W.-F./Boa/Ubs/Ub Collaboration
2013-03-01
Belew[Finding Out About, Cambridge(2000)] and separately full-decade pre-Page/Brin/Google FIRST Siegel-Rosen(Machine-Intelligence/Atherton)-Feynman-Smith-Marinov(Guzik Enterprises/Exxon-Enterprises/A.-I./Santa Clara)-Wunderman(H.-P.) [IBM Conf. on Computers and Mathematics, Stanford(1986); APS Mtgs.(1980s): Palo Alto/Santa Clara/San Francisco/...(1980s) MRS Spring-Mtgs.(1980s): Palo Alto/San Jose/San Francisco/...(1980-1992) FIRST quantum-computing via Bose-Einstein quantum-statistics(BEQS) Bose-Einstein CONDENSATION (BEC) in artificial-intelligence(A-I) artificial neural-networks(A-N-N) and biological neural-networks(B-N-N) and Siegel[J. Noncrystalline-Solids 40, 453(1980); Symp. on Fractals..., MRS Fall-Mtg., Boston(1989)-5-papers; Symp. on Scaling..., (1990); Symp. on Transport in Geometric-Constraint (1990)
Use of Openly Available Satellite Images for Remote Sensing Education
NASA Astrophysics Data System (ADS)
Wang, C.-K.
2011-09-01
With the advent of Google Earth, Google Maps, and Microsoft Bing Maps, high resolution satellite imagery is becoming more easily accessible than ever. It is often the case that college students already have a wealth of experience with high resolution satellite imagery from using these software and web services prior to any formal remote sensing education. It is obvious that remote sensing education should be adjusted to the fact that the audience are already consumers of remote sensing products (through the use of the above mentioned services). This paper reports the use of openly available satellite imagery in an introductory-level remote sensing course in the Department of Geomatics of National Cheng Kung University as a term project. The experience from the fall of 2009 and 2010 shows that this term project has effectively aroused the students' enthusiasm toward remote sensing.
The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance
NASA Astrophysics Data System (ADS)
Zimmer, M.
Web search engines have emerged as a ubiquitous and vital tool for the successful navigation of the growing online informational sphere. The goal of the world's largest search engine, Google, is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. While intended to enhance intellectual mobility in the online sphere, this chapter reveals that the quest for the perfect search engine requires the widespread monitoring and aggregation of a users' online personal and intellectual activities, threatening the values the perfect search engines were designed to sustain. It argues that these search-based infrastructures of dataveillance contribute to a rapidly emerging "soft cage" of everyday digital surveillance, where they, like other dataveillance technologies before them, contribute to the curtailing of individual freedom, affect users' sense of self, and present issues of deep discrimination and social justice.
Pharmaceutical applications and phytochemical profile of Cinnamomum burmannii
Al-Dhubiab, Bandar E.
2012-01-01
Extensive studies have been carried out in the last decade to assess the pharmaceutical potential and screen the phytochemical constituents of Cinnamomum burmannii. Databases such as PubMed (MEDLINE), Science Direct (Embase, Biobase, Biosis), Scopus, SciFinder, Google Scholar, Google Patents, the Cochrane database, and Web of Science were searched using a defined search strategy. This plant is a member of the genus Cinnamomum and is traditionally used as a spice. Cinnamomum burmannii has been demonstrated to exhibit analgesic, antibacterial, anti-diabetic, anti-fungal, antioxidant, antirheumatic, anti-thrombotic, and anti-tumor activities. The chemical constituents are mostly cinnamyl alcohol, coumarin, cinnamic acid, cinnamaldehyde, anthocyanin, and essential oils, together with sugar, protein, crude fats, pectin, and others. This review presents an overview of the current status and knowledge on the traditional usage, the pharmaceutical and biological activities, and the phytochemical constituents reported for C. burmannii. PMID:23055638
Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G
2016-01-01
Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top, top 5- and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Compared with noncritical Web pages, Web pages critical of HPV vaccine overall had a lower quality score than those with a noncritical bias (p < .01) and covered fewer important HPV-related topics (p < .001). Critical Web pages required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They also were more likely to raise unsubstantiated concerns about vaccination. Web pages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Data-Driven Geospatial Visual Analytics for Real-Time Urban Flooding Decision Support
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D.; Rodriguez, A.; Marini, L.; Kooper, R.; Myers, J.; Wu, X.; Minsker, B. S.
2009-12-01
Urban flooding is responsible for the loss of life and property as well as the release of pathogens and other pollutants into the environment. Previous studies have shown that the spatial distribution of intense rainfall significantly affects the triggering and behavior of urban flooding. However, no general-purpose tools yet exist for deriving rainfall data and rendering them in real time at the resolution of the hydrologic units used for analyzing urban flooding. This paper presents a new visual analytics system that derives and renders rainfall data from the NEXRAD weather radar system at the sewershed (i.e., urban hydrologic unit) scale in real time for a Chicago stormwater management project. We introduce a lightweight Web 2.0 approach which takes advantage of scientific workflow management and publishing capabilities developed at NCSA (National Center for Supercomputing Applications), a streaming-data-aware semantic content management repository, web-based Google Earth/Maps, and time-aware KML (Keyhole Markup Language). A collection of polygon-based virtual sensors is created from the NEXRAD Level II data using spatial, temporal, and thematic transformations at the sewershed level in order to produce persistent virtual rainfall data sources for the animation. Animated, color-coded rainfall maps of the sewersheds can be played in real time as a movie using time-aware KML inside the web browser-based Google Earth for visually analyzing the spatiotemporal patterns of rainfall intensity. Such a system provides valuable information for situational awareness and improved decision support during extreme storm events in an urban area. Our future work includes incorporating additional data (such as basement flooding event data) or physics-based predictive models that can be used for more integrated data-driven decision support.
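The time-aware KML animation described above can be illustrated with a small, hedged sketch: each sewershed polygon is written as a KML Placemark whose TimeSpan and polygon color encode one rainfall observation, so the Google Earth time slider plays the sequence as a movie. The coordinates, time stamps, color thresholds, and file name below are invented placeholders, not values from the Chicago project.

```python
# Sketch: emit time-aware KML so a polygon's color-coded rainfall value
# can be animated with the Google Earth time slider.

def rain_color(mm_per_hr):
    """Map a rainfall intensity to an aabbggrr KML color (semi-transparent)."""
    if mm_per_hr < 2.5:
        return "7f00ff00"   # light rain: green
    if mm_per_hr < 10.0:
        return "7f00ffff"   # moderate rain: yellow
    return "7f0000ff"       # heavy rain: red

def placemark(name, coords, begin, end, mm_per_hr):
    ring = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return f"""
  <Placemark>
    <name>{name}: {mm_per_hr} mm/h</name>
    <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
    <Style><PolyStyle><color>{rain_color(mm_per_hr)}</color></PolyStyle></Style>
    <Polygon><outerBoundaryIs><LinearRing>
      <coordinates>{ring}</coordinates>
    </LinearRing></outerBoundaryIs></Polygon>
  </Placemark>"""

# Hypothetical sewershed outline and two 15-minute rainfall observations.
sewershed = [(-87.65, 41.85), (-87.64, 41.85), (-87.64, 41.86),
             (-87.65, 41.86), (-87.65, 41.85)]
obs = [("2009-09-01T00:00:00Z", "2009-09-01T00:15:00Z", 1.2),
       ("2009-09-01T00:15:00Z", "2009-09-01T00:30:00Z", 12.8)]

body = "".join(placemark("Sewershed A", sewershed, b, e, r) for b, e, r in obs)
kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + body + '\n</Document></kml>')

with open("rainfall_animation.kml", "w") as f:
    f.write(kml)
```

Opening the resulting file in Google Earth would show the polygon switching from green to red as the time slider advances, which is the essence of the animation the abstract describes.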
Castillo-Ortiz, Jose Dionisio; Valdivia-Nuno, Jose de Jesus; Ramirez-Gomez, Andrea; Garagarza-Mariscal, Heber; Gallegos-Rios, Carlos; Flores-Hernandez, Gabriel; Hernandez-Sanchez, Luis; Brambila-Barba, Victor; Castaneda-Sanchez, Jose Juan; Barajas-Ochoa, Zalathiel; Suarez-Rico, Angel; Sanchez-Gonzalez, Jorge Manuel; Ramos-Remus, Cesar
Education is a major health determinant and one of the main independent outcome predictors in rheumatoid arthritis (RA). The use of the Internet by patients has grown exponentially in the last decade. The aim was to assess the characteristics, readability, and quality of the information available in Spanish on the Internet regarding rheumatoid arthritis. The search was performed in Google using the phrase rheumatoid arthritis. Information from the first 30 pages was evaluated according to a pre-established format (relevance, scope, authorship, type of publication, and financial objective). The quality and readability of the pages were assessed using two validated tools, DISCERN and INFLESZ, respectively. Data extraction was performed by senior medical students and evaluation was achieved by consensus. The Google search returned 323 hits, but only 63% were considered relevant; 80% of these were information sites (71% discussed exclusively RA, 44% conventional treatment, and 12% alternative therapies) and 12.5% had a primary financial interest. Sixty percent of the sites were created by nonprofit organizations and 15% by medical associations. Web sites posted by medical institutions from the United States of America were better positioned in Spanish (Arthritis Foundation 4th position and American College of Rheumatology 10th position) than web sites posted by Spanish-speaking countries. There is a risk of disinformation for patients with RA who use the Internet. We identified a window of opportunity for rheumatology medical institutions from Spanish-speaking countries to have a more prominent societal involvement in the education of their patients with RA. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
Abbott, Kevin C; Oliver, David K; Boal, Thomas R; Gadiyak, Grigorii; Boocks, Carl; Yuan, Christina M; Welch, Paul G; Poropatich, Ronald K
2002-04-01
Studies of the use of the World Wide Web to obtain medical knowledge have largely focused on patients. In particular, neither the international use of academic nephrology World Wide Web sites (websites) as primary information sources nor the use of search engines (and search strategies) to obtain medical information have been described. Visits ("hits") to the Walter Reed Army Medical Center (WRAMC) Nephrology Service website from April 30, 2000, to March 14, 2001, were analyzed for the location of originating source using Webtrends, and search engines (Google, Lycos, etc.) were analyzed manually for search strategies used. From April 30, 2000 to March 14, 2001, the WRAMC Nephrology Service website received 1,007,103 hits and 12,175 visits. These visits were from 33 different countries, and the most frequent regions were Western Europe, Asia, Australia, the Middle East, Pacific Islands, and South America. The most frequent organization using the site was the military Internet system, followed by America Online and automated search programs of online search engines, most commonly Google. The online lecture series was the most frequently visited section of the website. Search strategies used in search engines were extremely technical. The use of "robots" by standard Internet search engines to locate websites, which may be blocked by mandatory registration, has allowed users worldwide to access the WRAMC Nephrology Service website to answer very technical questions. This suggests that it is being used as an alternative to other primary sources of medical information and that the use of mandatory registration may hinder users from finding valuable sites. With current Internet technology, even a single service can become a worldwide information resource without sacrificing its primary customers.
Study of medicine 2.0 due to Web 2.0?! - Risks and opportunities for the curriculum in Leipzig
Hempel, Gunther; Neef, Martin; Rotzoll, Daisy; Heinke, Wolfgang
2013-01-01
Web 2.0 is changing the study of medicine by opening up totally new ways of learning and teaching in an ongoing process. Global social networking services like Facebook, YouTube, Flickr, Google Drive and Xing already play an important part in communication both among students and between students and teaching staff. Moreover, local portals (such as the platform [http://www.leipzig-medizin.de] established in 2003) have also caught on and in some cases eclipsed the use of the well-known location-independent social media. The many possibilities and rapid changes brought about by social networks need to be publicized within medical faculties. Therefore, an E-learning and New Media Working Group was set up at the Faculty of Medicine of Universität Leipzig in order to harness the opportunities of Web 2.0, analyse the resulting processes of change in the study of medicine, and curb the risks of the Internet. With Web 2.0 and the social web already influencing the study of medicine, the opportunities of the Internet now need to be utilized to improve the teaching of medicine. PMID:23467440
Hughes, Benjamin; Joshi, Indra; Lemonde, Hugh; Wareham, Jonathan
2009-10-01
Web 2.0 internet tools and methods have attracted considerable attention as a means to improve health care delivery. Despite evidence demonstrating their use by medical professionals, there is no detailed research describing how Web 2.0 influences physicians' daily clinical practice. Hence this study examines Web 2.0 use by 35 junior physicians in clinical settings to further understand its impact on medical practice. Diaries and interviews encompassing 177 days of internet use, or 444 search incidents, were analyzed via thematic analysis. Results indicate that 53% of internet visits employed user-generated or Web 2.0 content, with Google and Wikipedia used by 80% and 70% of physicians, respectively. Despite awareness of the credibility risks of Web 2.0 content, it has a role in information seeking for both clinical decisions and medical education. This is enabled by the ability to cross-check information and by the diverse needs for background and non-verified information. Web 2.0 use represents a profound departure from previous learning and decision processes, which were normally controlled by senior medical staff or medical schools. There is widespread concern about the risk of poor-quality information with Web 2.0 use, and the manner in which physicians are using it suggests that effective use derives from mitigating actions taken by the individual physician. Three alternative policy options are identified to manage this risk and improve the efficiency of Web 2.0 use.
Global reaction to the recent outbreaks of Zika virus: Insights from a Big Data analysis.
Bragazzi, Nicola Luigi; Alicino, Cristiano; Trucchi, Cecilia; Paganino, Chiara; Barberis, Ilaria; Martini, Mariano; Sticchi, Laura; Trinka, Eugen; Brigo, Francesco; Ansaldi, Filippo; Icardi, Giancarlo; Orsi, Andrea
2017-01-01
The recent spreading of Zika virus represents an emerging global health threat. As such, it is attracting public interest worldwide, generating a great amount of related Internet searches and social media interactions. The aim of this research was to understand Zika-related digital behavior throughout the epidemic spreading and to assess its consistency with real-world epidemiological data, using a behavioral informatics and analytics approach. In this study, the global web interest in and reaction to the recently occurred outbreaks of the Zika virus were analyzed in terms of tweets and Google Trends (GT), Google News, YouTube, and Wikipedia search queries. These data streams were mined from 1st January 2004 to 31st October 2016, with a focus on the period November 2015-October 2016. This analysis was complemented with the use of epidemiological data. Spearman's correlation was performed to correlate all Zika-related data. Moreover, a multivariate regression was performed using Zika-related search queries as the dependent variable, and epidemiological data, number of inhabitants in 2015, and Human Development Index as predictor variables. Overall, 3,864,395 tweets and 284,903 accesses to Wikipedia pages dedicated to the Zika virus were analyzed during the study period. All web-data sources showed that the main spike of searches and interactions occurred in February 2016, with a second peak in August 2016. All novel data stream activities increased markedly during the epidemic period with respect to the pre-epidemic period, when no web activity was detected. Correlations between data from all these web platforms were very high and statistically significant. The countries in which web searches were particularly concentrated were mainly in Central and South America. The majority of queries concerned the symptoms of the Zika virus, its vector of transmission, and its possible effects on babies, including microcephaly. No statistically significant correlation was found between the novel data streams and global real-world epidemiological data. At the country level, a correlation between digital interest in the Zika virus and Zika incidence rates or microcephaly cases was detected. An increasing public interest and reaction to the current Zika virus outbreak was documented by all web-data sources, and a similar pattern of web reactions was detected. Public opinion seems to be particularly worried by the alert of teratogenicity of the Zika virus. Stakeholders and health authorities could usefully exploit these internet tools to collect the concerns of the public, reply to them, and disseminate key information.
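As a hedged illustration of the rank-correlation step described above, the sketch below compares a Google Trends-style weekly interest series with weekly case counts using Spearman's correlation. The numbers are invented placeholders, not data from this study.

```python
# Sketch: Spearman rank correlation between a relative query-volume series
# and reported case counts (illustrative numbers only).
from scipy.stats import spearmanr

search_interest = [4, 7, 15, 42, 100, 81, 55, 36, 20, 12]   # relative query volume per week
reported_cases  = [2, 5, 11, 30,  75, 90, 60, 41, 22, 10]   # reported cases per week

rho, p_value = spearmanr(search_interest, reported_cases)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```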
Evaluation of online disaster and emergency preparedness resources.
Friedman, Daniela B; Tanwar, Manju; Richter, Jane V E
2008-01-01
Increasingly, individuals are relying on the Internet as a major source of health information. When faced with sudden or pending disasters, people resort to the Internet in search of clear, current, and accurate instructions on how to prepare for and respond to such emergencies. Research about online health resources has found that information was written at the secondary education and college levels and was extremely difficult for individuals with limited literacy to comprehend. This content analysis is the first to assess the reading difficulty level and format suitability of a large number of disaster and emergency preparedness Web pages intended for the general public. The aims of this study were to: (1) assess the readability and suitability of disaster and emergency preparedness information on the Web; and (2) determine whether the reading difficulty level and suitability of online resources differ by the type of disaster or emergency and/or Website domain. Fifty Websites containing information on disaster and/or emergency preparedness were retrieved using the Google search engine. Readability testing was conducted on the first Web page, suggested by Google, addressing preparedness for the general public. Reading level was assessed using the Flesch-Kincaid (F-K) and Flesch Reading Ease (FRE) measures. The Suitability Assessment of Materials (SAM) instrument was used to evaluate additional factors such as graphics, layout, and cultural appropriateness. The mean F-K readability score of the 50 Websites was Grade 10.74 (95% CI = 9.93, 11.55). The mean FRE score was 45.74 (95% CI = 41.38, 50.10), a score considered "difficult." A Web page with content about both risk and preparedness supplies was the most difficult to read according to F-K (grade level = 12.1). Web pages with general disaster and emergency information and preparedness supplies were considered most difficult according to the FRE (38.58, 95% CI = 30.09, 47.08). The average SAM score was 48% or 0.48 (95% CI = 0.45, 0.51), implying below-average suitability of these Websites. Websites on pandemics and bioterrorism were the most difficult to read (F-K: p = 0.012; FRE: p = 0.014) and least suitable (SAM: p = 0.035) compared with other disasters and emergencies. The results suggest the need for readily accessible preparedness resources on the Web that are easy to read and visually appropriate. Interdisciplinary collaborations between public health educators, risk communication specialists, and Web page creators and writers are recommended to ensure the development and dissemination of disaster and emergency resources that take into account the literacy abilities of the general public.
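For readers unfamiliar with the two readability measures mentioned above, the sketch below computes Flesch Reading Ease and Flesch-Kincaid grade level from their published formulas, using a deliberately naive syllable counter. Production readability tools (and presumably the study itself) use more careful syllable and sentence detection, and the sample sentence is invented.

```python
# Sketch: Flesch Reading Ease (higher = easier) and Flesch-Kincaid grade level.
import re

def count_syllables(word):
    # Rough heuristic: count vowel groups, with a crude silent-e correction.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / len(words)          # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fk = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid grade level
    return fre, fk

sample = "Store three days of water. Keep a battery radio and a first aid kit ready."
fre, fk = readability(sample)
print(f"FRE = {fre:.1f} (higher is easier), F-K grade = {fk:.1f}")
```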
Blumenberg, Cauane; Barros, Aluísio J D
2018-07-01
To systematically review the literature and compare response rates (RRs) of web surveys with alternative data collection methods in the context of epidemiologic and public health studies, we reviewed the literature using the PubMed, LILACS, SciELO, WebSM, and Google Scholar databases. We selected epidemiologic and public health studies that considered the general population and used two parallel data collection methods, one of which was web-based. RR differences were analyzed using a two-sample test of proportions and pooled using random effects. We investigated agreement using Bland-Altman analysis and correlation using Pearson's coefficient. We selected 19 studies (nine randomized trials). The RR of web-based data collection was 12.9 percentage points (p.p.) lower (95% CI = -19.0, -6.8) than that of the alternative methods, and 15.7 p.p. lower (95% CI = -24.2, -7.3) considering only randomized trials. Monetary incentives did not reduce the RR differences. A strong positive correlation (r = 0.83) between the RRs was observed. Web-based data collection presents lower RRs compared with alternative methods. However, it is not recommended to interpret this as meta-analytical evidence because of the high heterogeneity of the studies.
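A minimal sketch of the two-sample test of proportions used above, comparing the response rate of a web survey with that of an alternative mode; the counts are invented for illustration and the pooling via random effects is not shown.

```python
# Sketch: two-sample z-test of proportions for survey response rates.
from statsmodels.stats.proportion import proportions_ztest

web_responders, web_invited = 320, 1000     # hypothetical web arm: 32% RR
mail_responders, mail_invited = 450, 1000   # hypothetical mail arm: 45% RR

stat, p_value = proportions_ztest([web_responders, mail_responders],
                                  [web_invited, mail_invited])
diff = web_responders / web_invited - mail_responders / mail_invited
print(f"RR difference = {100 * diff:.1f} p.p., z = {stat:.2f}, p = {p_value:.4f}")
```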
Preferences of women for web-based nutritional information in pregnancy.
Kennedy, R A K; Mullaney, L; Reynolds, C M E; Cawley, S; McCartney, D M A; Turner, M J
2017-02-01
During pregnancy, women are increasingly turning to web-based resources for information. This study examined the use of web-based nutritional information by women during pregnancy and explored their preferences. Cross-sectional observational study. Women were enrolled at their convenience from a large maternity hospital. Clinical and sociodemographic details were collected and women's use of web-based resources was assessed using a detailed questionnaire. Of the 101 women, 41.6% were nulliparous and the mean age was 33.1 years (19-47 years). All women had internet access and only 3% did not own a smartphone. Women derived pregnancy-related nutritional information from a range of online resources, most commonly: What to Expect When You're Expecting (15.1%), Babycenter (12.9%), and Eumom (9.7%). However, 24.7% reported using Google searches. There was minimal use of publicly funded or academically supported resources. The features women wanted in a web-based application were recipes (88%), exercise advice (71%), personalized dietary feedback (37%), social features (35%), videos (24%) and cooking demonstrations (23%). This survey highlights the risk that pregnant women may get nutritional information from online resources which are not evidence-based. It also identifies features that women want from a web-based nutritional resource. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
[Improving vaccination social marketing by monitoring the web].
Ferro, A; Bonanni, P; Castiglia, P; Montante, A; Colucci, M; Miotto, S; Siddu, A; Murrone, L; Baldo, V
2014-01-01
Immunisation is one of the most important and cost-effective interventions in public health because of its significant positive impact on population health. However, since Jenner's discovery there has always been a lively debate between supporters and opponents of vaccination; today the anti-vaccination movement spreads its message mostly on the web, disseminating inaccurate data through blogs and forums and increasing vaccine rejection. In this context, the Società Italiana di Igiene (SItI) created a web project to fight misinformation about vaccinations on the web through a series of information tools, including scientific articles, educational information, and video and multimedia presentations. The web portal (http://www.vaccinarsi.org) was published in May 2013 and already offers over one hundred web pages related to vaccinations. Recently a forum, a periodic newsletter and a Twitter page have been created. There has been an average of 10,000 hits per month. Currently our users are mostly healthcare professionals. The visibility of the site is very good: it currently ranks first in Google's search engine when typing the word "vaccinarsi". The results of the first four months of activity are extremely encouraging and show the importance of this project; furthermore, an application for quality certification by independent international organizations has been submitted.
Information about epilepsy on the internet: An exploratory study of Arabic websites.
Alkhateeb, Jamal M; Alhadidi, Muna S
2018-01-01
The aim of this study was to explore information about epilepsy found on Arabic websites. The researchers collected information from the internet between November 2016 and January 2017. Information was obtained using the Google and Yahoo search engines. The keywords used were the Arabic equivalents of the following two terms: epilepsy (Al-saraa) and convulsion (Tashanoj). A total of 144 web pages addressing epilepsy in Arabic were reviewed. The majority of web pages were websites of medical institutions and general health websites, followed by informational and educational websites, others, blogs and websites of individuals, and news and media sites. The topics most commonly addressed were medical treatments for epilepsy (50% of all pages), followed by epilepsy definition (41%) and epilepsy etiology (34.7%). The results also revealed that the vast majority of web pages did not mention the source of information. Many web pages also did not provide author information. Only a small proportion of the web pages provided adequate information. Relatively few web pages provided inaccurate information or made sweeping generalizations. The findings of the present study suggest that the development of more credible Arabic websites on epilepsy is needed. These websites need to go beyond basic information, offering more evidence-based and up-to-date information about epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Cooperative Learning and Web 2.0: A Social Perspective on Critical Thinking
ERIC Educational Resources Information Center
Schipke, Rae Carrington
2018-01-01
This article discusses how cooperative learning, as a socioinstructional approach, relates both to socially based emerging technologies (i.e., Web 2.0) and to critical thinking with respect to co-cognition. It begins with a discussion of the importance of connecting cooperative learning, Web 2.0, and critical thinking. This is followed by the need…
The World Wide Web and Higher Education: The Promise of Virtual Universities and Online Libraries.
ERIC Educational Resources Information Center
Barnard, John
1997-01-01
While many universities and colleges are emphasizing distance education as a way to reach working adults and control costs associated with maintaining campus infrastructures, the World Wide Web is beginning to provide a medium for offering courses to students anywhere in the world. Discusses virtual universities which combine the Web with other…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-29
... NRC-2008-0252. You may submit comments by any of the following methods: Federal Rulemaking Web site... publicly available, by any of the following methods: Federal Rulemaking Web site: Go to http://www... "Begin Web-based ADAMS Search." For problems with ADAMS, please contact the NRC's Public Document Room...
Development and Construction of the Multimedia Web-Based Courses Based on ASP
ERIC Educational Resources Information Center
Wang, Yu; Liu, Jianbo
2011-01-01
With the rapid development of the internet and computer technology, more and more learners are coming to depend on the network to acquire information, and as a route for transmitting knowledge, the advantages of web-based courses are becoming increasingly obvious. The support of modern education technology for web-based courses would gradually replace…
SILVA tree viewer: interactive web browsing of the SILVA phylogenetic guide trees.
Beccati, Alan; Gerken, Jan; Quast, Christian; Yilmaz, Pelin; Glöckner, Frank Oliver
2017-09-30
Phylogenetic trees are an important tool for studying the evolutionary relationships among organisms. The huge number of available taxa makes their interactive visualization difficult, which in turn hampers interaction with users and the collection of feedback for further improvement of the taxonomic framework. The SILVA Tree Viewer is a web application designed for visualizing large phylogenetic trees without requiring the download of any software tool or data files. The SILVA Tree Viewer is based on Web Geographic Information Systems (Web-GIS) technology with a PostgreSQL backend. It enables zoom and pan functionalities similar to Google Maps. The SILVA Tree Viewer provides access to two phylogenetic (guide) trees provided by the SILVA database: the SSU Ref NR99, inferred from high-quality, full-length small subunit sequences clustered at 99% sequence identity, and the LSU Ref, inferred from high-quality, full-length large subunit sequences. The Tree Viewer provides tree navigation, search, and browse tools as well as an interactive feedback system to collect all kinds of requests, ranging from taxonomy to data curation and improving the tool itself.
Syndromic surveillance models using Web data: the case of scarlet fever in the UK.
Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel
2012-03-01
Recent research has shown the potential of Web queries as a source for syndromic surveillance, and existing studies show that these queries can be used as a basis for estimating and predicting the development of a syndromic disease, such as influenza, using log-linear (logit) statistical models. Two alternative models of the relationship between cases and Web queries are applied in this paper. We examine the applicability of statistical methods relating search engine queries to scarlet fever cases in the UK, taking advantage of tools to acquire the appropriate data from Google, and using an alternative statistical method based on gamma distributions. The results show that, using logit models, the Pearson correlation coefficient between Web queries and the data obtained from the official agencies must be over 0.90; otherwise the prediction of the peak and the spread of the distributions shows significant deviations. In this paper, we describe the gamma distribution model and show that we can obtain better results in all cases using gamma transformations, and especially in those with a smaller correlation coefficient.
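The general idea can be sketched, with heavy hedging, as fitting a gamma-shaped curve to a weekly web-query series and checking its Pearson correlation with case counts. This is only an illustration of the concept under invented numbers; the paper's actual gamma transformation and estimation procedure are more involved.

```python
# Sketch: Pearson correlation plus a gamma-shaped fit to a weekly query series.
import numpy as np
from scipy.stats import gamma, pearsonr

weeks   = np.arange(1, 11)
queries = np.array([3, 8, 20, 55, 90, 70, 45, 25, 12, 6], dtype=float)  # query volume
cases   = np.array([1, 4, 12, 40, 85, 95, 60, 30, 14, 5], dtype=float)  # reported cases

# Correlation between the two series (the paper reports logit models need r > 0.90).
r, p = pearsonr(queries, cases)

# Fit a gamma shape to the query curve by treating it as a weighted sample of weeks.
sample = np.repeat(weeks, queries.astype(int))
a, loc, scale = gamma.fit(sample, floc=0)
fitted = gamma.pdf(weeks, a, loc=loc, scale=scale)
fitted *= queries.sum() / fitted.sum()   # rescale to total query volume

print(f"Pearson r = {r:.2f} (p = {p:.3f}), gamma shape = {a:.2f}, scale = {scale:.2f}")
print("Fitted curve:", np.round(fitted, 1))
```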
Jiménez-Muñoz, Juan C.; Mattar, Cristian; Sobrino, José A.; Malhi, Yadvinder
2015-01-01
Advances in information technologies and the accessibility of climate and satellite data in recent years have favored the development of web-based tools with user-friendly interfaces in order to facilitate the dissemination of geo/biophysical products. These products are useful for analyzing the impact of global warming on different biomes. In particular, the study of Amazon forest responses to drought has recently received attention from the scientific community due to the occurrence of two extreme droughts and sustained warming over the last decade. Thermal Amazoni@ is a web-based platform for the visualization and download of surface thermal anomaly products over the Amazon forest and adjacent intertropical oceans using Google Earth as a baseline graphical interface (http://ipl.uv.es/thamazon/web). This platform is currently operational at the servers of the University of Valencia (Spain), and it includes both satellite (MODIS) and climatic (ERA-Interim) datasets. Thermal Amazoni@ is composed of the viewer system and the web and ftp sites with ancillary information and access to product download. PMID:26029379
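As a hedged sketch of the kind of product such a platform serves, a standardized surface thermal anomaly can be computed by subtracting the climatological mean for a calendar month and dividing by its standard deviation. The temperatures, baseline period, and grid cell below are invented, and the platform's actual processing chain may well differ.

```python
# Sketch: standardized monthly land-surface-temperature anomaly for one grid cell.
import numpy as np

# Hypothetical September LST values (degrees C) for one grid cell, 2000-2010.
september_lst = np.array([24.1, 24.5, 23.8, 24.9, 24.2, 26.3,
                          24.0, 24.4, 23.9, 24.6, 26.8])

climatology = september_lst[:-1].mean()       # baseline mean over 2000-2009
sigma = september_lst[:-1].std(ddof=1)        # baseline standard deviation
anomaly_2010 = (september_lst[-1] - climatology) / sigma
print(f"September 2010 anomaly: {anomaly_2010:+.2f} standard deviations")
```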
VAAPA: a web platform for visualization and analysis of alternative polyadenylation.
Guan, Jinting; Fu, Jingyi; Wu, Mingcheng; Chen, Longteng; Ji, Guoli; Quinn Li, Qingshun; Wu, Xiaohui
2015-02-01
Polyadenylation [poly(A)] is an essential process during the maturation of most mRNAs in eukaryotes. Alternative polyadenylation (APA) has been increasingly recognized in various species as an important layer of gene expression regulation. Here, a web platform for the visualization and analysis of alternative polyadenylation (VAAPA) was developed. This platform can visualize the distribution of poly(A) sites and poly(A) clusters of a gene or a section of a chromosome. It can also highlight genes with switched APA sites among different conditions. VAAPA is an easy-to-use web-based tool that provides functions for poly(A) site query, data upload and download, and APA site visualization. It was designed in a multi-tier architecture and developed based on Smart GWT (Google Web Toolkit) using Java as the development language. VAAPA will be a valuable addition to the community for the comprehensive study of APA, not only by making high-quality poly(A) site data more accessible, but also by providing users with numerous valuable functions for poly(A) site analysis and visualization. Copyright © 2014 Elsevier Ltd. All rights reserved.
The White-hat Bot: A Novel Botnet Defense Strategy
2012-06-14
etc. I will briefly discuss one common exploit here. One fraudulent activity perpetrated by botnets involves ad services such as Google's AdSense ...which pays website owners revenue for posting the AdSense banner on their web site (Google, 2012). The AdSense banner displays messages from...botmaster creates a bot that is programmed to visit the botmaster's own websites to click on the advertisements displayed in the AdSense banners. Since
Adjacency and Proximity Searching in the Science Citation Index and Google
2005-01-01
major database search engines, including commercial S&T database search engines (e.g., Science Citation Index (SCI), Engineering Compendex (EC...PubMed, OVID), Federal agency award database search engines (e.g., NSF, NIH, DOE, EPA, as accessed in Federal R&D Project Summaries), Web search engines (e.g...searching. Some database search engines allow strict constrained co-occurrence searching as a user option (e.g., OVID, EC), while others do not (e.g., SCI
Cyber Moat: Adaptive Virtualized Network Framework for Deception and Disinformation
2016-12-12
As one type of bot, web crawlers have been leveraged by search engines (e.g., Googlebot by Google) to popularize websites through website indexing...However, the number of malicious bots is increasing too. To regulate the behavior of crawlers, most websites include a file called "robots.txt" that...However, "robots.txt" only provides a guideline, and almost all malicious robots ignore it. Moreover, since this file is publicly available, malicious
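The "robots.txt is only a guideline" point above can be made concrete with a small sketch: a well-behaved crawler consults the file before fetching a URL, but nothing enforces the answer. The site and path below are placeholders.

```python
# Sketch: checking robots.txt with the Python standard library before crawling.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()                                      # fetch and parse the rules

url = "https://example.com/private/report.html"
if rp.can_fetch("MyCrawler", url):
    print("Allowed by robots.txt; a polite crawler may fetch", url)
else:
    print("Disallowed by robots.txt; a malicious bot could simply ignore this")
```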