Sample records for TCRP web-only document

  1. Semantic Similarity between Web Documents Using Ontology

    NASA Astrophysics Data System (ADS)

    Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh

    2018-06-01

    The World Wide Web is a source of information structured as interlinked web pages. However, extracting significant information with the assistance of a search engine is difficult, because web content is written mainly in natural language and is addressed to human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and concept relationships, but the results still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts present in the documents but also the relationships between those concepts. In our approach, documents are processed by constructing an ontology for each document using a base ontology and a dictionary of concept records, where each record comprises the probable words that represent a given concept. Finally, the document ontologies are compared to determine their semantic similarity, taking into account the relationships among concepts. Relevant concepts and relations between the concepts are identified by capturing author and user intention. The proposed semantic analysis technique provides improved results compared with existing techniques.
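
    The abstract does not spell out the comparison step, but the core idea of scoring document similarity over both shared concepts and shared concept relationships can be sketched in a few lines. In the illustration below, the names, weights, and Jaccard-style scoring are our assumptions, not the authors' algorithm:

    ```python
    def ontology_similarity(doc_a, doc_b, w_concepts=0.5, w_relations=0.5):
        """Jaccard-style similarity over a document's concepts and the
        (subject, relation, object) triples connecting them."""
        concept_overlap = len(doc_a["concepts"] & doc_b["concepts"]) / \
                          max(1, len(doc_a["concepts"] | doc_b["concepts"]))
        relation_overlap = len(doc_a["relations"] & doc_b["relations"]) / \
                           max(1, len(doc_a["relations"] | doc_b["relations"]))
        return w_concepts * concept_overlap + w_relations * relation_overlap

    doc1 = {"concepts": {"web", "ontology", "similarity"},
            "relations": {("ontology", "measures", "similarity")}}
    doc2 = {"concepts": {"web", "ontology", "search"},
            "relations": {("ontology", "measures", "similarity")}}
    print(ontology_similarity(doc1, doc2))  # 0.75: half the concepts, all relations shared
    ```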

  2. WEBCAP: Web Scheduler for Distance Learning Multimedia Documents with Web Workload Considerations

    ERIC Educational Resources Information Center

    Habib, Sami; Safar, Maytham

    2008-01-01

    In many web applications, such as distance learning, the frequency of refreshing multimedia web documents places a heavy burden on WWW resources. Moreover, updated web documents may encounter inordinate delays, which make it difficult to retrieve web documents in time. Here, we present an Internet tool called WEBCAP that can schedule…

  3. Web-based X-ray quality control documentation.

    PubMed

    David, George; Burnett, Lou Ann; Schenkel, Robert

    2003-01-01

    The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal

  4. Documenting clinical pharmacist intervention before and after the introduction of a web-based tool.

    PubMed

    Nurgat, Zubeir A; Al-Jazairi, Abdulrazaq S; Abu-Shraie, Nada; Al-Jedai, Ahmed

    2011-04-01

    To develop a database for documenting pharmacist intervention through a web-based application. The secondary endpoint was to determine if the new, web-based application provides any benefits with regard to documentation compliance by clinical pharmacists and ease of calculating cost savings compared with our previous method of documenting pharmacist interventions. A tertiary care hospital in Saudi Arabia. The documentation of interventions using a web-based documentation application was retrospectively compared with previous methods of documentation of clinical pharmacists' interventions (multi-user PC software). The number and types of interventions recorded by pharmacists, data mining of archived data, efficiency, cost savings, and the accuracy of the data generated. The number of documented clinical interventions increased from 4,926, using the multi-user PC software, to 6,840 for the web-based application. On average, we observed 653 interventions per clinical pharmacist using the web-based application, an increase over the average of 493 interventions using the old multi-user PC software. However, a paired Student's t-test showed no statistically significant difference between the two means (P = 0.201). Using a χ² test, which captured management level and the type of system used, we found a strong effect of management level (P < 2.2 × 10⁻¹⁶) on the number of documented interventions. We also found a moderately significant relationship between educational level and the number of interventions documented (P = 0.045). The mean ± SD time required to document an intervention using the web-based application was 66.55 ± 8.98 s. Using the web-based application, 29.06% of documented interventions resulted in cost savings, while using the multi-user PC software only 4.75% of interventions did so. The majority of cost savings across both platforms resulted from the discontinuation of unnecessary drugs and a change in dosage regimen
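
    The paired Student's t-test reported above compares per-pharmacist intervention counts under the two systems. A minimal sketch of that comparison using SciPy, with made-up counts rather than the study's data:

    ```python
    from scipy import stats

    # Interventions documented per clinical pharmacist under each system
    # (hypothetical numbers illustrating the paired comparison).
    pc_software = [420, 510, 480, 530, 450, 500]
    web_based = [600, 640, 590, 700, 610, 780]

    t_stat, p_value = stats.ttest_rel(pc_software, web_based)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    ```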

  5. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images, and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons to receive and view documents on their own computers. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  6. What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.

    PubMed

    Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W

    2015-06-01

    Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
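
    The abstract describes a two-stage pipeline: a text engine proposes candidate pages, then scores derived from the pages' images rerank them. A minimal sketch of the reranking step, with an assumed linear blending weight; the paper's actual visual features and learned model are not reproduced here:

    ```python
    def rerank(candidates, visual_score, alpha=0.3):
        """Re-order text-retrieved candidates by blending in an image-based score.

        candidates   -- list of (doc_id, text_score) from the text engine
        visual_score -- function doc_id -> score derived from the page's images
        alpha        -- weight of the visual evidence (an assumption, not from the paper)
        """
        blended = [(doc, (1 - alpha) * ts + alpha * visual_score(doc))
                   for doc, ts in candidates]
        return sorted(blended, key=lambda pair: pair[1], reverse=True)

    # Hypothetical usage: page B has images that match the query well.
    candidates = [("A", 0.90), ("B", 0.85), ("C", 0.40)]
    print(rerank(candidates, {"A": 0.2, "B": 0.9, "C": 0.5}.get))  # B now ranks first
    ```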

  7. Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.

    PubMed

    Kahn, Charles E

    2008-09-01

    Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.

  8. The Number of Scholarly Documents on the Public Web

    PubMed Central

    Khabsa, Madian; Giles, C. Lee

    2014-01-01

    The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
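
    The capture/recapture idea behind these estimates treats each search engine's index as a "capture" and infers the total population from the overlap (the Lincoln-Petersen estimator). A worked sketch with hypothetical counts, not the paper's data:

    ```python
    def lincoln_petersen(n1, n2, overlap):
        """Estimate total population size from two overlapping samples."""
        return n1 * n2 / overlap

    # Hypothetical counts: engine A indexes 80M scholarly documents, engine B
    # indexes 60M, and 42M appear in both indexes.
    estimate = lincoln_petersen(80e6, 60e6, 42e6)
    print(f"Estimated total: {estimate / 1e6:.0f} million documents")  # ~114 million
    ```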

  9. KernPaeP - a web-based pediatric palliative documentation system for home care.

    PubMed

    Hartz, Tobias; Verst, Hendrik; Ueckert, Frank

    2009-01-01

    KernPaeP is a new web-based online and offline documentation system developed for pediatric palliative care teams, supporting patient documentation and communication among health care professionals. It provides a reliable system that makes fast and secure home care documentation possible. KernPaeP is accessible online to registered users through any web browser. Home care teams use an offline version of KernPaeP running on a netbook for patient documentation on site. Identifying and medical patient data are strictly separated and stored on two database servers. The system offers a stable, enhanced two-way algorithm for synchronization between the offline component and the central database servers. KernPaeP is implemented to meet the highest security standards while still maintaining high usability. The web-based documentation system allows ubiquitous and immediate access to patient data. Cumbersome paperwork is replaced by secure and comprehensive electronic documentation. KernPaeP helps save time and improves the quality of documentation. Because it was developed in close cooperation with pediatric palliative professionals, KernPaeP fulfils the broad needs of home care documentation. The technique of web-based online and offline documentation is generally applicable to arbitrary home care scenarios.
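
    The abstract mentions an enhanced two-way synchronization algorithm between the offline netbooks and the central servers without giving details. As a generic illustration of the problem only, not KernPaeP's actual algorithm, a per-record last-write-wins merge might look like this:

    ```python
    def sync(local, remote):
        """Two-way merge of {record_id: (timestamp, payload)} dictionaries.

        Last-write-wins per record; a real clinical system would also need
        conflict logging and audit trails (KernPaeP's details are not public
        in this abstract).
        """
        merged = dict(remote)
        for rec_id, (ts, payload) in local.items():
            if rec_id not in merged or ts > merged[rec_id][0]:
                merged[rec_id] = (ts, payload)
        return merged

    local = {"visit-1": (1700000500, "updated note"), "visit-2": (1700000100, "new visit")}
    remote = {"visit-1": (1700000400, "old note")}
    print(sync(local, remote))  # keeps both visits, with the newer note for visit-1
    ```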

  10. New Interfaces to Web Documents and Services

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This paper reports on investigations into how to extend capabilities of the Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1996 Summer Faculty Fellowship program, and involved research into and prototype development of software components that provide documents and services for the World Wide Web (WWW). The WWW has become a de facto standard for sharing resources over the internet, primarily because web browsers are freely available for the most common hardware platforms and their operating systems. As a consequence of the popularity of the internet, tools and techniques associated with web browsers are changing rapidly. New capabilities are offered by companies that support web browsers in order to become or remain a dominant participant in internet services. Because a goal of the VRC is to build an environment for NASA centers, universities, and industrial partners to share information associated with Advanced Concepts Office activities, the VRC tracks new techniques and services associated with the web in order to determine their usefulness for distributed and collaborative engineering research activities. Most recently, Java has emerged as a new tool for providing internet services. Because the major web browser providers have decided to include Java in their software, investigations into Java were conducted this summer.

  11. Creating Polyphony with Exploratory Web Documentation in Singapore

    ERIC Educational Resources Information Center

    Lim, Sirene; Hoo, Lum Chee

    2012-01-01

    We introduce and reflect on "Images of Teaching", an ongoing web documentation research project on preschool teaching in Singapore. This paper discusses the project's purpose, methodological process, and our learning points as researchers who aim to contribute towards inquiry-based professional learning. The website offers a window into…

  12. Features: Real-Time Adaptive Feature and Document Learning for Web Search.

    ERIC Educational Resources Information Center

    Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai

    2001-01-01

    Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…

  13. Using the web to validate document recognition results: experiments with business cards

    NASA Astrophysics Data System (ADS)

    Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea

    2004-12-01

    The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.

  14. Analysis of Documentation Speed Using Web-Based Medical Speech Recognition Technology: Randomized Controlled Trial.

    PubMed

    Vogel, Markus; Kaisers, Wolfgang; Wassmuth, Ralf; Mayatepek, Ertan

    2015-11-03

    Clinical documentation has undergone a change due to the usage of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time on the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as satisfaction of health professionals while accomplishing clinical documentation, but few studies in this area have been published to date. This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system of medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser's text area and the time to complete the documentation including all necessary corrections, correction effort, number of characters, and mood of participant were stored in a database. The underlying time comprised text entering, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. The number of clinical reports eligible for further analysis stood at 1455. Out of 1455 reports, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not assisted by ASR. Average documentation speed without ASR was 173 (SD 101) characters per minute, while it was 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04). Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted
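
    The speed comparison above was analyzed with permutation tests. A minimal sketch of a two-sample permutation test on characters-per-minute values, using synthetic numbers rather than the study's data:

    ```python
    import random

    def permutation_test(a, b, n_perm=10_000, seed=0):
        """Two-sided p-value for a difference in means under random relabeling."""
        rng = random.Random(seed)
        observed = abs(sum(a) / len(a) - sum(b) / len(b))
        pooled, n_a = a + b, len(a)
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / len(b))
            hits += diff >= observed
        return hits / n_perm

    with_asr = [230, 210, 250, 190, 225]      # chars/minute, synthetic
    without_asr = [180, 160, 175, 170, 165]
    print(permutation_test(with_asr, without_asr))
    ```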

  15. Review of Web-Based Technical Documentation Processes. FY07 NAEP-QA Special Study Report. TR-08-17

    ERIC Educational Resources Information Center

    Gribben, Monica; Wise, Lauress; Becker, D. E.

    2008-01-01

    Beginning with the 2000 and 2001 National Assessment of Educational Progress (NAEP) assessments, the National Center for Education Statistics (NCES) has made technical documentation available on the worldwide web at http://nces.ed.gov/nationsreportcard/tdw/. The web-based documentation is designed to be less dense and more accessible than prior…

  16. Web Prep: How to Prepare NAS Reports For Publication on the Web

    NASA Technical Reports Server (NTRS)

    Walatka, Pamela; Balakrishnan, Prithika; Clucas, Jean; McCabe, R. Kevin; Felchle, Gail; Brickell, Cristy

    1996-01-01

    This document contains specific advice and requirements for NASA Ames Code IN authors of NAS reports. Much of the information may be of interest to other authors writing for the Web. WebPrep has a graphic Table of Contents in the form of a WebToon, which simulates a discussion between a scientist and a Web publishing consultant. In the WebToon, Frequently Asked Questions about preparing reports for the Web are linked to relevant text in the body of this document. We also provide a text-only Table of Contents. The text for this document is divided into chapters: each chapter corresponds to one frame of the WebToons. The chapter topics are: converting text to HTML, converting 2D graphic images to gif, creating imagemaps and tables, converting movie and audio files to Web formats, supplying 3D interactive data, and (briefly) JAVA capabilities. The last chapter is specifically for NAS staff authors. The Glossary-Index lists web related words and links to topics covered in the main text.

  17. The Effects of a Web-Based Nursing Process Documentation Program on Stress and Anxiety of Nursing Students in South Korea.

    PubMed

    Lee, Eunjoo; Noh, Hyun Kyung

    2016-01-01

    To examine the effects of a web-based nursing process documentation system on the stress and anxiety of nursing students during their clinical practice. A quasi-experimental design was employed. The experimental group (n = 110) used a web-based nursing process documentation program for their case reports as part of assignments for a clinical practicum, whereas the control group (n = 106) used traditional paper-based case reports. Stress and anxiety levels were measured with a numeric rating scale before, 2 weeks after, and 4 weeks after using the web-based nursing process documentation program during a clinical practicum. The data were analyzed using descriptive statistics, t tests, chi-square tests, and repeated-measures analyses of variance. Nursing students who used the web-based nursing process documentation program showed significantly lower levels of stress and anxiety than the control group. A web-based nursing process documentation program could be used to reduce the stress and anxiety of nursing students during clinical practicum, which ultimately would benefit nursing students by increasing satisfaction with and effectiveness of clinical practicum. © 2015 NANDA International, Inc.

  1. Document-Centred Discourse on the Web: A Publishing Tool for Students, Tutors and Researchers.

    ERIC Educational Resources Information Center

    Shum, Simon Buckingham; Sumner, Tamara

    This paper describes how the authors are exploiting the potential of interactive World Wide Web media to support a central part of academic life--the publishing, critiquing, and discussion of documents. The paper begins with an overview of documents in academic life and a discussion of paper-based or "papyrocentric" print and scholarly…

  2. The use of fingerprints available on the web in false identity documents: Analysis from a forensic intelligence perspective.

    PubMed

    Girelli, Carlos Magno Alves

    2016-05-01

    Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared among themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Some image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal and tonal reversal. The article discusses the detection of lateral reversals and suggests ways to reduce errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in three levels of detail by the ACE-V methodology, some visual features of the fingerprint images can help identify sources of forgeries and modus operandi, such as limits and image contours, flaws in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Therefore, fingerprints have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents.
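
    One way to flag laterally reversed reuse of a fingerprint image, as described above, is to compare each image against mirrored versions of the others. A crude pixel-level sketch using Pillow; the file names and threshold are assumptions, and operational AFIS matching compares ridge minutiae, not raw pixels:

    ```python
    from PIL import Image, ImageOps

    def looks_mirrored(path_a, path_b, threshold=8):
        """Crude check: does image B resemble a left-right flip of image A?

        Compares downsampled grayscale pixels; the threshold is an arbitrary
        assumption chosen for illustration.
        """
        load = lambda p: Image.open(p).convert("L").resize((64, 64))
        a, b = load(path_a), ImageOps.mirror(load(path_b))
        mean_diff = sum(abs(pa - pb) for pa, pb in zip(a.getdata(), b.getdata())) / (64 * 64)
        return mean_diff < threshold

    print(looks_mirrored("fingerprint_doc1.png", "fingerprint_doc2.png"))  # placeholder files
    ```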

  3. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    PubMed

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most existing research has focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way to approach video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  4. An automatically updateable web publishing solution: taking document sharing and conversion to enterprise level

    NASA Astrophysics Data System (ADS)

    Rahman, Fuad; Tarnikova, Yuliya; Hartono, Rachmat; Alam, Hassan

    2006-01-01

    This paper presents a novel automatic web publishing solution, PageView (R). PageView (R) is a complete working solution for document processing and management. The principal aim of this tool is to allow workgroups to share, access and publish documents on-line on a regular basis. For example, consider a person working on some documents: the user will, in some fashion, organize the work either in a local directory or on a shared network drive. Now extend that concept to a workgroup. Within a workgroup, some users are working together on some documents, and they are saving them in a directory structure somewhere on a document repository. The next stage of this reasoning is that a workgroup is working on some documents, and they want to publish them routinely on-line. Now it may happen that they are using different editing tools, different software, and different graphics tools. The resultant documents may be in PDF, Microsoft Office (R), HTML, or Word Perfect format, just to name a few. In general, this process requires converting the documents to HTML, after which a web designer must work on that collection to make it available on-line. PageView (R) takes care of this whole process automatically, making the document workflow clean and easy to follow. PageView (R) Server publishes documents, complete with the directory structure, for online use. The documents are automatically converted to HTML and PDF so that users can view the content without downloading the original files, or having to download browser plug-ins. Once published, other users can access the documents as if they were accessing them from their local folders. The paper will describe the complete working system and will discuss possible applications within document management research.
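
    The conversion step that PageView automates, rendering heterogeneous office documents to PDF for browser viewing, can be approximated with a headless LibreOffice call. The sketch below is a generic illustration with placeholder directory names, not PageView's proprietary converter:

    ```python
    import subprocess
    from pathlib import Path

    def publish(src_dir, out_dir):
        """Convert every office document under src_dir to PDF for web viewing."""
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        for doc in Path(src_dir).rglob("*"):
            if doc.suffix.lower() in {".doc", ".docx", ".odt", ".ppt", ".pptx", ".xls"}:
                subprocess.run(
                    ["soffice", "--headless", "--convert-to", "pdf",
                     "--outdir", out_dir, str(doc)],
                    check=True,
                )

    publish("shared_drive/reports", "webroot/published")  # placeholder paths
    ```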

  5. Poster — Thur Eve — 52: A Web-based Platform for Collaborative Document Management in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kildea, J.; Joseph, A.

    We describe DepDocs, a web-based platform that we have developed to manage the committee meetings, policies, procedures and other documents within our otherwise paperless radiotherapy clinic. DepDocs is essentially a document management system based on the popular Drupal content management software. For security and confidentiality, it is hosted on a Linux server internal to our hospital network such that documents are never sent to the cloud or outside of the hospital firewall. We used Drupal's in-built role-based user rights management system to assign a role, and associated document editing rights, to each user. Documents are accessed for viewing using either a simple Google-like search or by generating a list of related documents from a taxonomy of categorization terms. Our system provides document revision tracking and a document review and approval mechanism for all official policies and procedures. Committee meeting schedules, agendas and minutes are maintained by committee chairs and are restricted to committee members. DepDocs has been operational within our department for over six months and already has 45 unique users and an archive of over 1000 documents, mostly policies and procedures. Documents are easily retrievable from the system using any web browser within our hospital's network.

  6. Code AI Personal Web Pages

    NASA Technical Reports Server (NTRS)

    Garcia, Joseph A.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The document consists of a publicly available web site (george.arc.nasa.gov) for Joseph A. Garcia's personal web pages in the AI division. Only general information will be posted and no technical material. All the information is unclassified.

  7. Desktop document delivery using portable document format (PDF) files and the Web.

    PubMed Central

    Shipman, J P; Gembala, W L; Reeder, J M; Zick, B A; Rainwater, M J

    1998-01-01

    Desktop access to electronic full-text literature was rated one of the most desirable services in a client survey conducted by the University of Washington Libraries. The University of Washington Health Sciences Libraries (UW HSL) conducted a ten-month pilot test from August 1996 to May 1997 to determine the feasibility of delivering electronic journal articles via the Internet to remote faculty. Articles were scanned into Adobe Acrobat Portable Document Format (PDF) files and delivered to individuals using Multipurpose Internet Mail Extensions (MIME) standard e-mail attachments and the Web. Participants retrieved scanned articles and used the Adobe Acrobat Reader software to view and print files. The pilot test required a special programming effort to automate the client notification and file deletion processes. Test participants were satisfied with the pilot test despite some technical difficulties. Desktop delivery is now offered as a routine delivery method from the UW HSL. PMID:9681165
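
    The delivery mechanism described, PDF files sent as standard MIME e-mail attachments, is straightforward with today's Python standard library. A minimal sketch in which the addresses and SMTP host are placeholders:

    ```python
    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    def deliver_article(pdf_path, recipient):
        """E-mail a scanned article as a PDF (MIME) attachment."""
        msg = EmailMessage()
        msg["Subject"] = "Requested article"
        msg["From"] = "docdelivery@library.example.edu"  # placeholder address
        msg["To"] = recipient
        msg.set_content("Your requested article is attached as a PDF file.")
        msg.add_attachment(Path(pdf_path).read_bytes(),
                           maintype="application", subtype="pdf",
                           filename=Path(pdf_path).name)
        with smtplib.SMTP("smtp.example.edu") as server:  # placeholder SMTP host
            server.send_message(msg)
    ```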

  8. Documentation of Heritage Structures Through Geo-Crowdsourcing and Web-Mapping

    NASA Astrophysics Data System (ADS)

    Dhonju, H. K.; Xiao, W.; Shakya, B.; Mills, J. P.; Sarhosis, V.

    2017-09-01

    Heritage documentation has become increasingly urgent due to both natural impacts and human influences. The documentation of countless heritage sites around the globe is a massive project that requires significant amounts of financial and labour resources. With the concepts of volunteered geographic information (VGI) and citizen science, heritage data such as digital photographs can be collected through online crowd participation. Whilst photographs are not strictly geographic data, they can be geo-tagged by the participants. They can also be automatically geo-referenced into a global coordinate system if collected via mobile phones which are now ubiquitous. With the assistance of web-mapping, an online geo-crowdsourcing platform has been developed to collect and display heritage structure photographs. Details of platform development are presented in this paper. The prototype is demonstrated with several heritage examples. Potential applications and advancements are discussed.
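
    Geo-referencing crowdsourced photographs typically relies on the GPS EXIF tags that mobile phones embed. A minimal sketch of reading those tags with Pillow; this is illustrative only, as the platform's actual implementation is not described in the abstract:

    ```python
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    def gps_info(photo_path):
        """Return the raw GPS EXIF fields of a photo, if any are present."""
        exif = Image.open(photo_path).getexif()
        gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD pointer
        return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    print(gps_info("heritage_site_photo.jpg"))  # e.g. GPSLatitude, GPSLongitude, ...
    ```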

  9. Web document ranking via active learning and kernel principal component analysis

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Chen, Honghui; Shu, Zhen

    2015-09-01

    Web document ranking arises in many information retrieval (IR) applications, such as search engines, recommendation systems and online advertising. A challenging issue is how to select representative query-document pairs and informative features for better learning, and how to explore new ranking models that produce an acceptable ranking list of candidate documents for each query. In this study, we propose an active sampling (AS) plus kernel principal component analysis (KPCA) based ranking model, viz. AS-KPCA Regression, to study document ranking for a retrieval system, i.e. how to choose representative query-document pairs and features for learning. More precisely, we gradually add documents to the training set by AS, at each step selecting the document that would incur the highest expected DCG loss if left unselected. Then, KPCA is performed by projecting the selected query-document pairs onto p principal components in the feature space to complete the regression. Hence, we can cut down the computational overhead and reduce the impact of noise simultaneously. To the best of our knowledge, we are the first to perform document ranking via dimension reduction along two dimensions simultaneously, namely, the number of documents and the number of features. Our experiments demonstrate that the performance of our approach is better than that of the baseline methods on the public LETOR 4.0 datasets. Our approach yields an improvement of nearly 20% over RankBoost and other baselines in terms of the MAP metric, with smaller improvements in P@K and NDCG@K. Moreover, our approach is particularly suitable for document ranking on noisy datasets in practice.
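
    The KPCA-plus-regression part of the pipeline, projecting query-document feature vectors onto p principal components in a kernel feature space and regressing relevance there, can be sketched with scikit-learn. The active-sampling step (choosing documents by expected DCG loss) is omitted, and all data and parameters below are synthetic assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))       # query-document feature vectors (synthetic)
    y = X[:, 0] + 0.5 * X[:, 1] ** 2     # synthetic relevance labels

    kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
    Z = kpca.fit_transform(X)            # project onto p principal components

    model = LinearRegression().fit(Z, y)      # regression in the reduced space
    ranking = np.argsort(-model.predict(Z))   # documents ordered by predicted relevance
    print(ranking[:5])
    ```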

  10. Development and evaluation of a web-based application for digital findings and documentation in physiotherapy education.

    PubMed

    Spieler, Bernadette; Burgsteiner, Harald; Messer-Misak, Karin; Gödl-Purrer, Barbara; Salchinger, Beate

    2015-01-01

    Findings in physiotherapy follow standardized approaches to treatment, but there is considerable variation in how these standards are implemented. Clinical decisions require experience and continuous learning processes to consolidate personal values and opinions, and studies suggest that lecturers can influence students positively. Recently, the Physiotherapy degree program at the University of Applied Sciences in Graz introduced a paper-based findings document. This document supported decisions through adaptation of the clinical reasoning process, and it was the starting point for our learning application called "EasyAssess", a Java-based web application for digital findings documentation. A central point of our work was to ensure the efficiency, effectiveness and usability of the web application through usability tests with both students and lecturers. Results show that our application fulfills the previously defined requirements and can be used efficiently in daily routine, largely because of its simple user interface and modest design. Due to the close cooperation with the Physiotherapy study course, the application has incorporated the various needs of the target audiences, which confirmed its usefulness.

  11. Annotating Atomic Components of Papers in Digital Libraries: The Semantic and Social Web Heading towards a Living Document Supporting eSciences

    NASA Astrophysics Data System (ADS)

    García Castro, Alexander; García-Castro, Leyla Jael; Labarga, Alberto; Giraldo, Olga; Montaña, César; O'Neil, Kieran; Bateman, John A.

    Rather than a document that is being constantly re-written as in the wiki approach, the Living Document (LD) is one that acts as a document router, operating by means of structured and organized social tagging and existing ontologies. It offers an environment where users can manage papers and related information, share their knowledge with their peers and discover hidden associations among the shared knowledge. The LD builds upon both the Semantic Web, which values the integration of well-structured data, and the Social Web, which aims to facilitate interaction amongst people by means of user-generated content. In this vein, the LD is similar to a social networking system, with users as central nodes in the network, with the difference that interaction is focused on papers rather than people. Papers, with their ability to represent research interests, expertise, affiliations, and links to web based tools and databanks, represent a central axis for interaction amongst users. To begin to show the potential of this vision, we have implemented a novel web prototype that enables researchers to accomplish three activities central to the Semantic Web vision: organizing, sharing and discovering. Availability: http://www.scientifik.info/

  12. Emergency Response Capability Baseline Needs Assessment - Requirements Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharry, John A.

    This document was prepared by John A. Sharry, LLNL Fire Marshal and LLNL Division Leader for Fire Protection and reviewed by LLNL Emergency Management Department Head James Colson. The document follows and expands upon the format and contents of the DOE Model Fire Protection Baseline Capabilities Assessment document contained on the DOE Fire Protection Web Site, but only addresses emergency response.

  13. Calculations of actual corneal astigmatism using total corneal refractive power before and after myopic keratorefractive surgery.

    PubMed

    Seo, Kyoung Yul; Yang, Hun; Kim, Wook Kyum; Nam, Sang Min

    2017-01-01

    To calculate actual corneal astigmatism using the total corneal refractive astigmatism for the 4-mm apex zone of the Pentacam (TCRP4astig) and keratometric astigmatism (Kastig) before and after photorefractive keratectomy or laser in situ keratomileusis. Fifty-six uncomplicated eyes, at more than 6 months after surgery, were recruited by chart review. Various corneal astigmatisms were measured using the Pentacam and autokeratometer before and after surgery. Three eyes were excluded and 53 eyes of 38 subjects with with-the-rule astigmatism (WTR) were finally included. The astigmatisms were investigated using polar value analysis. When TCRP4astig was set as the actual astigmatism, the efficacy of arithmetic or coefficient adjustment of Kastig was evaluated using bivariate analysis. The difference between the simulated keratometer astigmatism of the Pentacam (SimKastig) and Kastig was strongly correlated with the difference between TCRP4astig and Kastig. TCRP4astig differed from Kastig in magnitude rather than meridian before and after surgery; the preoperative difference was due to the posterior cornea only, whereas the postoperative difference was observed in both anterior and posterior parts. For arithmetic adjustment, 0.28 D and 0.27 D were subtracted from the preoperative and postoperative magnitudes of Kastig, respectively. For coefficient adjustment, the preoperative and postoperative magnitudes of Kastig were multiplied by 0.80 and 0.66, respectively. By arithmetic or coefficient adjustment, the difference between TCRP4astig and adjusted Kastig would be less than 0.75 D in magnitude for 95% of cases. Kastig was successfully adjusted to TCRP4astig before and after myopic keratorefractive surgery in cases of WTR. For direct use of TCRP4astig, SimKastig and Kastig should be matched.

  14. Improving health care proxy documentation using a web-based interview through a patient portal

    PubMed Central

    Bajracharya, Adarsha S; Crotty, Bradley H; Kowaloff, Hollis B; Safran, Charles; Slack, Warner V

    2016-01-01

    Objective Health care proxy (HCP) documentation is suboptimal. To improve rates of proxy selection and documentation, we sought to develop and evaluate a web-based interview to guide patients in their selection, and to capture their choices in their electronic health record (EHR). Methods We developed and implemented a HCP interview within the patient portal of a large academic health system. We analyzed the experience, together with demographic and clinical factors, of the first 200 patients who used the portal to complete the interview. We invited users to comment about their experience and analyzed their comments using established qualitative methods. Results From January 20, 2015 to March 13, 2015, 139 of the 200 patients who completed the interview submitted their HCP information for their clinician to review in the EHR. These patients had a median age of 57 years (Inter Quartile Range (IQR) 45–67) and most were healthy. The 99 patients who did not previously have HCP information in their EHR were more likely to complete and then submit their information than the 101 patients who previously had a proxy in their health record (odds ratio 2.4, P = .005). Qualitative analysis identified several ways in which the portal-based interview reminded, encouraged, and facilitated patients to complete their HCP. Conclusions Patients found our online interview convenient and helpful in facilitating selection and documentation of an HCP. Our study demonstrates that a web-based interview to collect and share a patient’s HCP information is both feasible and useful. PMID:26568608

  15. Going, going, still there: using the WebCite service to permanently archive cited web pages.

    PubMed

    Eysenbach, Gunther; Trudel, Mathieu

    2005-12-30

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research
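
    Prospectively archiving every cited URL before submission, as recommended above, amounts to a simple loop over the manuscript's web references. The sketch below is only an illustration: the endpoint and parameters are assumptions modeled on WebCite's submission interface, not a documented API:

    ```python
    import requests

    # Hypothetical endpoint and parameters, modeled on WebCite's submission
    # interface (www.webcitation.org); the exact API may differ.
    ARCHIVE_ENDPOINT = "https://www.webcitation.org/archive"

    def archive_citations(cited_urls, author_email):
        """Request an archived snapshot of each cited web page before submission."""
        permalinks = {}
        for url in cited_urls:
            resp = requests.get(ARCHIVE_ENDPOINT,
                                params={"url": url, "email": author_email},
                                timeout=30)
            resp.raise_for_status()
            permalinks[url] = resp.url  # in practice, parse the permalink from the response
        return permalinks

    print(archive_citations(["http://example.org/cited-page"], "author@example.edu"))
    ```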

  20. Web Mining for Web Image Retrieval.

    ERIC Educational Resources Information Center

    Chen, Zheng; Wenyin, Liu; Zhang, Feng; Li, Mingjing; Zhang, Hongjiang

    2001-01-01

    Presents a prototype system for image retrieval from the Internet using Web mining. Discusses the architecture of the Web image retrieval prototype; document space modeling; user log mining; and image retrieval experiments to evaluate the proposed system. (AEF)

  1. Searching the world wide Web

    PubMed

    Lawrence; Giles

    1998-04-03

The coverage and recency of the major World Wide Web search engines were analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: no single engine indexes more than about one-third of the "indexable Web," the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages.

  2. An investigation of document aesthetics for web-to-print repurposing of small-medium business marketing collateral

    NASA Astrophysics Data System (ADS)

    Allebach, J. P.; Ortiz Segovia, Maria; Atkins, C. Brian; O'Brien-Strain, Eamonn; Damera-Venkata, Niranjan; Bhatti, Nina; Liu, Jerry; Lin, Qian

    2010-02-01

Businesses have traditionally relied on different types of media to communicate with existing and potential customers. With the emergence of the Web, the relation between the use of print and electronic media has continually evolved. In this paper, we investigate one possible scenario that combines the use of the Web and print. Specifically, we consider the scenario where a small- or medium-sized business (SMB) has an existing web site from which they wish to pull content to create a print piece. Our assumption is that the web site was developed by a professional designer, working in conjunction with the business owner or marketing team, and that it contains a rich assembly of content that is presented in an aesthetically pleasing manner. Our goal is to understand the process that a designer would follow to create an effective and aesthetically pleasing print piece. We are particularly interested in understanding the choices made by the designer with respect to placement and size of the text and graphic elements on the page. Toward this end, we conducted an experiment in which professional designers worked with SMBs to create print pieces from their respective web pages. In this paper, we report our findings from this experiment, and examine the underlying conclusions regarding the resulting document aesthetics in the context of the existing design, engineering, and computer science literatures that address this topic.

  3. Automating Information Discovery Within the Invisible Web

    NASA Astrophysics Data System (ADS)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to being a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find documents such as PDF, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still provides access to only a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.

  4. An experimental comparison of web-push vs. paper-only survey procedures for conducting an in-depth health survey of military spouses.

    PubMed

    McMaster, Hope Seib; LeardMann, Cynthia A; Speigle, Steven; Dillman, Don A

    2017-04-26

Previous research has found that a "web-push" approach to data collection, which involves contacting people by mail to request an Internet survey response while withholding a paper response option until later in the contact process, consistently achieves lower response rates than a "paper-only" approach, whereby all respondents are contacted and requested to respond by mail. An experiment was designed, as part of the Millennium Cohort Family Study, to compare response rates, sample representativeness, and cost between a web-push and a paper-only approach; each approach comprised 3 stages of mail contacts. The invited sample (n = 4,935) consisted of spouses of U.S. Service members who had been serving in the military between 2 and 5 years as of October 2011. The web-push methodology produced a significantly higher response rate, 32.8% compared to 27.8%. Each of the 3 stages of postal contact significantly contributed to response for both treatments, with 87.1% of the web-push responses received over the Internet. The per-respondent cost of the paper-only treatment was almost 40% higher than that of the web-push treatment group. Analyses revealed no meaningfully significant differences between treatment groups in representation. These results provide evidence that a web-push methodology is more effective and less expensive than a paper-only approach among young military spouses, perhaps due to their heavy reliance on the Internet, and we suggest that this approach may become more effective with the general population as it grows more uniformly Internet savvy.

  5. Webmail: an Automated Web Publishing System

    NASA Astrophysics Data System (ADS)

    Bell, David

    A system for publishing frequently updated information to the World Wide Web will be described. Many documents now hosted by the NOAO Web server require timely posting and frequent updates, but need only minor changes in markup or are in a standard format requiring only conversion to HTML. These include information from outside the organization, such as electronic bulletins, and a number of internal reports, both human and machine generated. Webmail uses procmail and Perl scripts to process incoming email messages in a variety of ways. This processing may include wrapping or conversion to HTML, posting to the Web or internal newsgroups, updating search indices or links on related pages, and sending email notification of the new pages to interested parties. The Webmail system has been in use at NOAO since early 1997 and has steadily grown to include fourteen recipes that together handle about fifty messages per week.
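
    The abstract describes an email-driven publishing pipeline built from procmail recipes and Perl scripts. As a rough Python sketch of the core step, the snippet below wraps an incoming plain-text message in HTML and drops it under a web root; the output directory, file naming, and single-part-message assumption are illustrative, not taken from the paper.

    ```python
    # Hypothetical email-to-web publishing step (the original system used
    # procmail and Perl; paths and naming here are invented).
    import email
    import html
    import pathlib
    import sys

    WEB_ROOT = pathlib.Path("/var/www/bulletins")  # illustrative output directory

    def publish(raw_message: bytes) -> pathlib.Path:
        """Wrap a single-part plain-text email in HTML and post it on the web."""
        msg = email.message_from_bytes(raw_message)
        subject = msg.get("Subject", "untitled")
        body = msg.get_payload(decode=True).decode("utf-8", errors="replace")
        page = ("<html><head><title>{t}</title></head>"
                "<body><h1>{t}</h1><pre>{b}</pre></body></html>").format(
                    t=html.escape(subject), b=html.escape(body))
        out = WEB_ROOT / (subject.lower().replace(" ", "-") + ".html")
        out.write_text(page, encoding="utf-8")
        return out

    if __name__ == "__main__":
        # e.g. fed by a procmail-style rule that pipes matching mail to this script
        publish(sys.stdin.buffer.read())
    ```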

  6. Web-based document and content management with off-the-shelf software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, J

    1999-03-18

This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system, since we can manage almost any file input to the system, such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was targeted only at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to build this capability into the Document Refinery. We therefore made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe PhotoShop and Illustrator files in their native formats.

  7. The Document Management Alliance.

    ERIC Educational Resources Information Center

    Fay, Chuck

    1998-01-01

    Describes the Document Management Alliance, a standards effort for document management systems that manages and tracks changes to electronic documents created and used by collaborative teams, provides secure access, and facilitates online information retrieval via the Internet and World Wide Web. Future directions are also discussed. (LRW)

  8. Multipurpose fare media : developments and issues

    DOT National Transportation Integrated Search

    1997-06-01

This TCRP digest presents the interim findings of TCRP Project A-14, "Potential of Multipurpose Fare Media," conducted by Multisystems, Inc., in collaboration with Dove Associates, Inc., and Mundle & Associates, Inc. Included in the digest are (1) a...

  9. Dark Web 101

    DTIC Science & Technology

    2016-07-21

Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't

  10. Monotone Increasing Binary Similarity and Its Application to Automatic Document-Acquisition of a Category

    NASA Astrophysics Data System (ADS)

    Suzuki, Izumi; Mikami, Yoshiki; Ohsato, Ario

A technique that acquires documents in the same category as a given short text is introduced. Regarding the given text as a training document, the system marks the most similar document, or sufficiently similar documents, from among the document domain (or the entire Web). The system then adds the marked documents to the training set and learns the set, and this process is repeated until no more documents are marked. Imposing a monotone increasing property on the similarity as the system learns enables it to (1) detect the correct point at which no more documents remain to be marked and (2) decide the threshold value that the classifier uses. In addition, under the condition that normalization is limited to dividing term weights by a p-norm of the weights, the linear classifier in which training documents are indexed in a binary manner is the only instance that satisfies the monotone increasing property. The feasibility of the proposed technique was confirmed through an examination of binary similarity, using English and German documents randomly selected from the Web.
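
    A hedged sketch of the acquisition loop described above, under the abstract's setting of binary term indexing with p-norm normalization; the fixed threshold below is a simplification of mine, whereas the paper derives it from the monotonicity property.

    ```python
    # Sketch of the iterative acquisition loop: binary document vectors,
    # weights normalised by a p-norm, repeated marking of similar documents.
    from typing import Iterable, List, Set

    def score(doc_terms: Set[str], training_terms: Set[str], p: float = 2.0) -> float:
        """Binary inner product with the document vector divided by its p-norm
        (for a 0/1 vector the p-norm is len(doc_terms) ** (1/p))."""
        if not doc_terms:
            return 0.0
        return len(doc_terms & training_terms) / (len(doc_terms) ** (1.0 / p))

    def acquire(seed: Set[str], corpus: Iterable[Set[str]], threshold: float) -> List[Set[str]]:
        """Mark documents scoring above the threshold, fold them into the
        training set, and repeat until no new document is marked."""
        training, marked = set(seed), []
        remaining, changed = list(corpus), True
        while changed:
            changed = False
            for doc in list(remaining):
                if score(doc, training) >= threshold:
                    marked.append(doc)
                    remaining.remove(doc)
                    training |= doc
                    changed = True
        return marked
    ```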

  11. Documenting pharmacist interventions on an intranet.

    PubMed

    Simonian, Armen I

    2003-01-15

    The process of developing and implementing an intranet Web site for clinical intervention documentation is described. An inpatient pharmacy department initiated an organizationwide effort to improve documentation of interventions by pharmacists at its seven hospitals to achieve real-time capture of meaningful benchmarking data. Standardization of intervention types would allow the health system to contrast and compare medication use, process improvement, and patient care initiatives among its hospitals. After completing a needs assessment and reviewing current methodologies, a computerized tracking tool was developed in-house and integrated with the organization's intranet. Representatives from all hospitals agreed on content and functionality requirements for the Web site. The site was completed and activated in February 2002. Before this Web site was established, the most documented intervention types were Renal Adjustment and Clarify Dose, with a daily average of four and three, respectively. After site activation, daily averages for Renal Adjustment remained unchanged, but Clarify Dose is now documented nine times per day. Drug Information and i.v.-to-p.o. intervention types, which previously averaged less than one intervention per day, are now documented an average of four times daily. Approximately 91% of staff pharmacists are using this site. Future plans for this site include enhanced accessibility to the site with wireless personal digital assistants. The design and implementation of an intranet Web site to document pharmacists' interventions doubled the rate of intervention documentation and standardized the intervention types among hospitals in the health system.

  12. WikiHyperGlossary (WHG): an information literacy technology for chemistry documents.

    PubMed

    Bauer, Michael A; Berleant, Daniel; Cornell, Andrew P; Belford, Robert E

    2015-01-01

The WikiHyperGlossary is an information literacy technology that was created to enhance reading comprehension of documents by connecting them to socially generated multimedia definitions as well as semantically relevant data. The WikiHyperGlossary enhances reading comprehension by using the lexicon of a discipline to generate dynamic links in a document to external resources that can provide implicit information the document did not explicitly provide. Currently, the most common method of acquiring additional information when reading a document is to access a search engine and browse the web. This may lead to skimming of multiple documents, with the novice never actually returning to the original document of interest. The WikiHyperGlossary automatically brings information to the user within the document they are currently reading, enhancing the potential for deeper document understanding. The WikiHyperGlossary allows users to submit a web URL or text to be processed against a chosen lexicon, returning the document with tagged terms. The selection of a tagged term results in the appearance of the WikiHyperGlossary Portlet containing a definition and, depending on the type of word, tabs to additional information and resources. Current types of content include multimedia-enhanced definitions, ChemSpider query results, 3D molecular structures, and 2D editable structures connected to ChemSpider queries. Existing glossaries can be bulk uploaded, locked for editing, and associated with multiple socially generated definitions. The WikiHyperGlossary leverages both social and semantic web technologies to bring relevant information to a document. This can not only aid reading comprehension but also increase users' ability to obtain additional information within the document. We have demonstrated a molecular-editor-enabled knowledge framework that can result in a semantic web inductive reasoning process, and integration of the WikiHyperGlossary into other software technologies, like
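
    To make the tagging step concrete, here is a toy Python version of lexicon-driven term markup in the spirit of the WikiHyperGlossary; the glossary entries and the HTML attributes emitted are invented placeholders, not the system's actual markup.

    ```python
    # Toy lexicon-driven term tagging. The real system attaches a portlet
    # with multimedia and ChemSpider content rather than a title attribute.
    import re

    GLOSSARY = {
        "benzene": "An aromatic hydrocarbon with formula C6H6.",
        "mole": "The SI unit for amount of substance.",
    }

    def tag_terms(text: str) -> str:
        """Wrap every glossary term found in the text in a tagged anchor."""
        pattern = re.compile(r"\b(" + "|".join(map(re.escape, GLOSSARY)) + r")\b",
                             re.IGNORECASE)
        return pattern.sub(
            lambda m: '<a class="whg-term" title="{d}">{t}</a>'.format(
                d=GLOSSARY[m.group(1).lower()], t=m.group(1)),
            text)

    print(tag_terms("Dissolve one mole of benzene..."))
    ```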

  13. A novel architecture for information retrieval system based on semantic web

    NASA Astrophysics Data System (ADS)

    Zhang, Hui

    2011-12-01

Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand the meaning of the documents. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when there is not enough knowledge in the retrieval system, the system returns a large number of meaningless results to users because of the huge amount of information available. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
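
    A minimal sketch of the routing idea in the last sentence: a stand-in "inference" step checks whether the ontology covers the query and dispatches accordingly. The concept list is invented; a real system would consult a reasoner over the ontology.

    ```python
    # Stand-in query router: semantic engine when the ontology covers the
    # query, keyword engine otherwise. KNOWN_CONCEPTS is illustrative only.
    KNOWN_CONCEPTS = {"protein", "gene", "pathway"}

    def route(query: str) -> str:
        """Return the engine that should handle this query."""
        terms = set(query.lower().split())
        return "semantic" if terms & KNOWN_CONCEPTS else "keyword"

    print(route("gene regulation networks"))  # -> semantic
    print(route("cheap flights to Rome"))     # -> keyword
    ```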

  14. The CLIMB Geoportal - A web-based dissemination and documentation platform for hydrological modelling data

    NASA Astrophysics Data System (ADS)

    Blaschek, Michael; Gerken, Daniel; Ludwig, Ralf; Duttmann, Rainer

    2015-04-01

Geoportals are important elements of spatial data infrastructures (SDIs) that are strongly based on GIS-related web services. These services are basically meant for distributing, documenting, and visualizing (spatial) data in a standardized manner; an important but challenging task, especially in large scientific projects with a high number of data suppliers and producers from various countries. This presentation focuses on introducing the free and open-source geoportal solution developed within the research project CLIMB (Climate Induced Changes on the Hydrology of Mediterranean Basins, www.climb-fp7.eu), which serves as the central platform for interchanging project-related spatial data and information. In this collaboration, financed by the EU-FP7 framework and coordinated at the LMU Munich, 21 partner institutions from nine European and non-European countries were involved. The CLIMB Geoportal (lgi-climbsrv.geographie.uni-kiel.de) stores and provides spatially distributed data about the current state and future changes of the hydrological conditions within the seven CLIMB test sites around the Mediterranean. Hydrological modelling outcomes, validated by the CLIMB partners, are offered to the public in the form of Web Map Services (WMS), whereas downloading the underlying data itself through Web Coverage Services (WCS) is possible for registered users only. A selection of common indicators, such as discharge and drought index, as well as uncertainty measures, including their changes over time, is provided at different spatial resolutions. Besides map information, the portal enables the graphical display of time series of selected variables calculated by the individual models applied within the CLIMB project. The implementation of the CLIMB Geoportal is based on version 2.0c5 of the open-source geospatial content management system GeoNode. It includes a GeoServer instance for providing the OGC-compliant web services and comes with a metadata catalog (pycsw) as well.
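
    For readers unfamiliar with OGC services, this is how a registered user might fetch a map from such a portal with the OWSLib Python library; the endpoint URL, layer name, and bounding box below are placeholders, not taken from the CLIMB Geoportal.

    ```python
    # Fetching a map from an OGC-compliant WMS endpoint with OWSLib.
    from owslib.wms import WebMapService

    wms = WebMapService("http://example.org/geoserver/wms", version="1.1.1")
    img = wms.getmap(layers=["climb:drought_index"],   # hypothetical layer name
                     styles=[""],                      # default style
                     srs="EPSG:4326",
                     bbox=(8.0, 38.0, 10.0, 40.0),     # lon/lat window
                     size=(600, 600),
                     format="image/png")
    with open("drought_index.png", "wb") as f:
        f.write(img.read())
    ```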

  15. A Query Integrator and Manager for the Query Web

    PubMed Central

    Brinkley, James F.; Detwiler, Landon T.

    2012-01-01

    We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831

  16. Embedding the shapes of regions of interest into a Clinical Document Architecture document.

    PubMed

    Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet

    2015-03-01

Sharing a medical image visually annotated by a region of interest with a remotely located specialist for consultation is good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view the images, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and the eXtensible Stylesheet Language for Transformation (XSLT) standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with the coordinate values of regions of interest to recipients. Recipients can easily view the documents and display the embedded regions of interest by rendering them in their web browser of choice.

  17. Biotool2Web: creating simple Web interfaces for bioinformatics applications.

    PubMed

    Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg

    2006-01-01

Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters needed to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. The tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/. Contact: Georg Fuellen (fuellen@alum.mit.edu).
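
    Biotool2Web itself is a Perl script, but the generation idea is easy to sketch in Python: read an XML description of a tool's parameters and emit an HTML form for a CGI wrapper. The XML schema used here is invented for illustration and does not reproduce Biotool2Web's actual format.

    ```python
    # Toy form generator: XML parameter spec in, HTML form out.
    import xml.etree.ElementTree as ET

    SPEC = """<tool name="revcomp" script="revcomp.cgi">
                <param name="sequence" label="DNA sequence" type="textarea"/>
                <param name="strand" label="Strand" type="text"/>
              </tool>"""

    def form_from_spec(spec_xml: str) -> str:
        """Generate an HTML form from the parameter description."""
        tool = ET.fromstring(spec_xml)
        rows = []
        for p in tool.findall("param"):
            widget = ("<textarea name='{n}'></textarea>"
                      if p.get("type") == "textarea"
                      else "<input name='{n}'/>").format(n=p.get("name"))
            rows.append("<label>{l}</label> {w}<br/>".format(l=p.get("label"),
                                                             w=widget))
        return ("<form action='{a}' method='post'>{body}"
                "<input type='submit'/></form>").format(a=tool.get("script"),
                                                        body="".join(rows))

    print(form_from_spec(SPEC))
    ```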

  18. Document Concurrence System

    NASA Technical Reports Server (NTRS)

    Muhsin, Mansour; Walters, Ian

    2004-01-01

The Document Concurrence System is a combination of software modules for routing users' expressions of concurrence with documents. This system enables determination of the current status of concurrences and eliminates the need for the prior practice of manually delivering paper documents to all persons whose approvals were required. This system runs on a server, and participants gain access via personal computers equipped with Web-browser and electronic-mail software. A user can begin a concurrence routing process by logging onto an administration module, naming the approvers and stating the sequence for routing among them, and attaching documents. The server then sends a message to the first person on the list. Upon concurrence by the first person, the system sends a message to the second person, and so forth. A person on the list indicates approval, places the documents on hold, or indicates disapproval, via a Web-based module. When the last person on the list has concurred, a message is sent to the initiator, who can then finalize the process through the administration module. A background process running on the server identifies concurrence processes that are overdue and sends reminders to the appropriate persons.
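
    A minimal model of the sequential routing logic described above, assuming three possible decisions per approver; notification is stubbed out with print(), where the real system sends email.

    ```python
    # Schematic sequential concurrence routing; all names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class ConcurrenceProcess:
        approvers: list
        position: int = 0
        decisions: dict = field(default_factory=dict)

        def notify_current(self) -> None:
            if self.position < len(self.approvers):
                print("notify:", self.approvers[self.position])
            else:
                print("notify initiator: all concurrences collected")

        def decide(self, decision: str) -> None:
            """Record 'concur', 'hold', or 'nonconcur' for the current approver."""
            self.decisions[self.approvers[self.position]] = decision
            if decision == "concur":
                self.position += 1
                self.notify_current()

    flow = ConcurrenceProcess(["alice", "bob"])
    flow.notify_current()   # notify: alice
    flow.decide("concur")   # notify: bob
    flow.decide("concur")   # notify initiator: all concurrences collected
    ```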

  19. Going, going, still there: using the WebCite service to permanently archive cited Web pages.

    PubMed

    Eysenbach, Gunther

    2006-01-01

    Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page.

  20. Extraction of a group-pair relation: problem-solving relation from web-board documents.

    PubMed

    Pechsiri, Chaveevan; Piriyakul, Rapepun

    2016-01-01

This paper aims to extract a group-pair relation, as a Problem-Solving relation (for example, a DiseaseSymptom-Treatment relation or a CarProblem-Repair relation), between two event-explanation groups: a problem-concept group, such as a symptom-concept or CarProblem-concept group, and a solving-concept group, such as a treatment-concept or repair-concept group, from hospital-web-board and car-repair-guru-web-board documents. The Problem-Solving relation (particularly the Symptom-Treatment relation), including its graphical representation, benefits non-professional persons by supporting knowledge for primarily solving problems. The research involves three problems: how to identify an EDU (an Elementary Discourse Unit, i.e., a simple sentence) with the event concept of either a problem or a solution; how to determine a problem-concept EDU boundary and a solving-concept EDU boundary as two event-explanation groups; and how to determine the Problem-Solving relation between these two event-explanation groups. Therefore, we apply word co-occurrence to identify a problem-concept EDU and a solving-concept EDU, and machine-learning techniques to determine the problem-concept and solving-concept EDU boundaries. We propose using k-means and Naïve Bayes, with clustering features, to determine the Problem-Solving relation between the two event-explanation groups. In contrast to previous works, the proposed approach enables group-pair relation extraction with high accuracy.
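
    A toy rendering of the first step (identifying an EDU as problem-like or solving-like by word co-occurrence); the concept lexicons are invented, and the paper's actual boundary determination uses machine learning rather than this direct overlap count.

    ```python
    # Label an EDU (simple sentence) by co-occurrence with concept lexicons.
    PROBLEM_WORDS = {"pain", "fever", "rash", "stall", "noise"}
    SOLVING_WORDS = {"take", "apply", "replace", "tighten", "rest"}

    def label_edu(edu: str) -> str:
        words = set(edu.lower().split())
        p = len(words & PROBLEM_WORDS)
        s = len(words & SOLVING_WORDS)
        if p == s == 0:
            return "other"
        return "problem" if p >= s else "solving"

    print(label_edu("the child has fever and a rash"))   # -> problem
    print(label_edu("take paracetamol and rest"))        # -> solving
    ```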

  1. LCS Content Document Application

    NASA Technical Reports Server (NTRS)

    Hochstadt, Jake

    2011-01-01

My project at KSC during my spring 2011 internship was to develop a Ruby on Rails application to manage Content Documents. A Content Document is a collection of documents and information that describes what software is installed on a Launch Control System computer. It's important for us to make sure the tools we use every day are secure, up-to-date, and properly licensed. Previously, keeping track of this information was done by passing Excel and Word files between different personnel. The goal of the new application is to be able to manage and access the Content Documents through a single database-backed web application. Our LCS team will benefit greatly from this app. Admins will be able to log in securely to keep track of and update the software installed on each computer in a timely manner. We also included exportability, such as attaching additional documents that can be downloaded from the web application. The finished application will ease the process of managing Content Documents while streamlining the procedure. Ruby on Rails is a very powerful framework, and I am grateful to have had the opportunity to build this application.

  2. NoSQL: collection document and cloud by using a dynamic web query form

    NASA Astrophysics Data System (ADS)

    Abdalla, Hemn B.; Lin, Jinzhao; Li, Guoquan

    2015-07-01

    Mongo-DB (from "humongous") is an open-source document database and the leading NoSQL database. A NoSQL (Not Only SQL, next generation databases, being non-relational, deal, open-source and horizontally scalable) presenting a mechanism for storage and retrieval of documents. Previously, we stored and retrieved the data using the SQL queries. Here, we use the MonogoDB that means we are not utilizing the MySQL and SQL queries. Directly importing the documents into our Drives, retrieving the documents on that drive by not applying the SQL queries, using the IO BufferReader and Writer, BufferReader for importing our type of document files to my folder (Drive). For retrieving the document files, the usage is BufferWriter from the particular folder (or) Drive. In this sense, providing the security for those storing files for what purpose means if we store the documents in our local folder means all or views that file and modified that file. So preventing that file, we are furnishing the security. The original document files will be changed to another format like in this paper; Binary format is used. Our documents will be converting to the binary format after that direct storing in one of our folder, that time the storage space will provide the private key for accessing that file. Wherever any user tries to discover the Document files means that file data are in the binary format, the document's file owner simply views that original format using that personal key from receive the secret key from the cloud.

  3. Personalization of Rule-based Web Services.

    PubMed

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

Nowadays, Web users have clearly expressed their wish to receive personalized services. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide any features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery, and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers', and users' preferences using a Rule-based Search Method, and subsequently ranks search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

  4. How many scientific papers are mentioned in policy-related documents? An empirical investigation using Web of Science and Altmetric data.

    PubMed

    Haunschild, Robin; Bornmann, Lutz

    2017-01-01

In this short communication, we provide an overview of a relatively new source of altmetrics data which could possibly be used for societal impact measurements in scientometrics. Recently, Altmetric, a start-up providing publication-level metrics, started to make data available for publications that have been mentioned in policy-related documents. Using data from Altmetric, we study how many papers indexed in the Web of Science (WoS) are mentioned in policy-related documents. We find that less than 0.5% of the papers published in different subject categories are mentioned at least once in policy-related documents. Based on our results, we recommend that the analysis of (WoS) publications with at least one policy-related mention be repeated regularly (annually) in order to check the usefulness of the data. Mentions in policy-related documents should not be used for impact measurement until new policy-related sites are tracked.

  5. What Are the Usage Conditions of Web 2.0 Tools Faculty of Education Students?

    ERIC Educational Resources Information Center

    Agir, Ahmet

    2014-01-01

As a result of advances in technology and the subsequent use of the Internet in every walk of life, the Web, which provides access to documents such as pictures, audio, animations, and text over the Internet, came into use. At first, the Web consisted of only visual and text pages that did not enable user interaction. However, it is seen that not…

  6. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling.

    PubMed

    Devi, R Suganya; Manjula, D; Siddharth, R K

    2015-01-01

Web Crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, which makes the retrieved results less relevant to the analysers. Recent Web Crawling work, however, focuses solely on obtaining the links of the corresponding documents. Today there exist various algorithms and software which are used to crawl links from the web, and these links then have to be processed further for future use, thereby increasing the overload on the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code, and metadata such as title, keywords, and description are extracted. This content is essential for any analysis to be carried out on the Big Data obtained as a result of Web Crawling.
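
    A compressed sketch of the crawl-and-extract idea using the real requests and BeautifulSoup libraries: depth-first descent from a seed URL, collecting each page's title, keywords, and description. The depth cap and error handling are illustrative simplifications, not the paper's parameters.

    ```python
    # Depth-first crawl that records per-page metadata.
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(url: str, depth: int = 2, seen=None) -> list:
        seen = seen if seen is not None else set()
        if depth < 0 or url in seen:
            return []
        seen.add(url)
        try:
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        except requests.RequestException:
            return []
        meta = {m.get("name", "").lower(): m.get("content", "")
                for m in soup.find_all("meta")}
        records = [{"url": url,
                    "title": (soup.title.string or "") if soup.title else "",
                    "keywords": meta.get("keywords", ""),
                    "description": meta.get("description", "")}]
        for a in soup.find_all("a", href=True):      # depth-first descent
            records += crawl(urljoin(url, a["href"]), depth - 1, seen)
        return records
    ```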

  7. An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling

    PubMed Central

    Devi, R. Suganya; Manjula, D.; Siddharth, R. K.

    2015-01-01

Web Crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web Search Engines face new challenges due to the availability of vast amounts of web documents, which makes the retrieved results less relevant to the analysers. Recent Web Crawling work, however, focuses solely on obtaining the links of the corresponding documents. Today there exist various algorithms and software which are used to crawl links from the web, and these links then have to be processed further for future use, thereby increasing the overload on the analyser. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows for complete hierarchical scanning of corresponding web links. The links are then accessed via the source code, and metadata such as title, keywords, and description are extracted. This content is essential for any analysis to be carried out on the Big Data obtained as a result of Web Crawling. PMID:26137592

  8. EPA Web Training Classes

    EPA Pesticide Factsheets

    Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.

  9. Dealing with Multiple Documents on the WWW: The Role of Metacognition in the Formation of Documents Models

    ERIC Educational Resources Information Center

    Stadtler, Marc; Bromme, Rainer

    2007-01-01

    Drawing on the theory of documents representation (Perfetti et al., Toward a theory of documents representation. In: H. v. Oostendorp & S. R. Goldman (Eds.), "The construction of mental representations during reading." Mahwah, NJ: Erlbaum, 1999), we argue that successfully dealing with multiple documents on the World Wide Web requires readers to…

  10. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "data that cannot be seen does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), and even the 2D visualization capabilities of HTML (e.g., images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, and not only for geo data. XML3D is supported by any WebGL-capable browser, but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of JavaScript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web", chaired by the DFKI, and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.

  11. Automatic generation of Web mining environments

    NASA Astrophysics Data System (ADS)

    Cibelli, Maurizio; Costagliola, Gennaro

    1999-02-01

The main problem related to the retrieval of information from the World Wide Web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting, and structuring information related to a particular domain from web documents, using general-purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine; a data source pre-processor (DSP), which processes HTML layout cues in order to collect and qualify page segments; and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
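
    The three stages named above (WEF, DSP, HIES) can be caricatured in a few lines of Python; the filtering score, segmentation cue, and extraction pattern below are simplifications of mine, not the paper's actual heuristics.

    ```python
    # Schematic three-stage pipeline: filter, segment, extract.
    import re

    def wef(results, domain_terms):
        """Web engine filter: keep and rank results mentioning the domain."""
        scored = [(sum(t in r["snippet"] for t in domain_terms), r)
                  for r in results]
        return [r for s, r in sorted(scored, key=lambda x: x[0], reverse=True)
                if s]

    def dsp(html_text):
        """Data source pre-processor: split a page on a layout cue."""
        return [seg.strip() for seg in html_text.split("<hr>") if seg.strip()]

    def hies(segments, pattern):
        """Heuristic information extraction: keep segments matching a pattern."""
        return [s for s in segments if re.search(pattern, s)]
    ```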

  12. An Auto-management Thesis Program WebMIS Based on Workflow

    NASA Astrophysics Data System (ADS)

    Chang, Li; Jie, Shi; Weibo, Zhong

An auto-management WebMIS based on workflow for bachelor thesis programs is given in this paper. A module for workflow dispatching is designed and realized using MySQL and J2EE according to the working principle of a workflow engine. The module can automatically dispatch the workflow according to the system date, login information, and the work status of the user. The WebMIS moves management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.

  13. Web-based documentation system with exchange of DICOM RT for multicenter clinical studies in particle therapy

    NASA Astrophysics Data System (ADS)

    Kessel, Kerstin A.; Bougatf, Nina; Bohn, Christian; Engelmann, Uwe; Oetzel, Dieter; Bendl, Rolf; Debus, Jürgen; Combs, Stephanie E.

    2012-02-01

Conducting clinical studies is rather difficult because of the large variety of voluminous datasets, different documentation styles, and various information systems, especially in radiation oncology. In this paper, we describe our development of a web-based documentation system, with first steps toward automatic statistical analyses, for transnational and multicenter clinical studies in particle therapy. It is possible to have immediate access to all patient information and to exchange, store, process, and visualize text data, all types of DICOM images, especially DICOM RT, and any other multimedia data. Accessing the documentation system and submitting clinical data is possible for internal and external users (e.g. referring physicians from abroad who are seeking the new technique of particle therapy for their patients). Security and privacy protection are ensured with the encrypted https protocol, client certificates, and an application gateway. Furthermore, all data can be pseudonymized. Integrated into the existing hospital environment, patient data is imported via various interfaces over HL7 messages and DICOM. Several further features replace manual input wherever possible and ensure data quality and completeness. With a form generator, studies can be individually designed to fit specific needs. By including all treated patients (also non-study patients), we gain the possibility of overall large-scale, retrospective analyses. Having recently begun documentation of our first six clinical studies, it has become apparent that the benefits lie in the simplification of research work, better quality of study analyses, and, ultimately, the improvement of treatment concepts by evaluating the effectiveness of particle therapy.

  14. Semantic Metadata for Heterogeneous Spatial Planning Documents

    NASA Astrophysics Data System (ADS)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  15. Indexing and Retrieval for the Web.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    2003-01-01

    Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…

  16. Content Documents Management

    NASA Technical Reports Server (NTRS)

    Muniz, R.; Hochstadt, J.; Boelke J.; Dalton, A.

    2011-01-01

The Content Documents are created and managed under the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e., AIX, Linux, Solaris, and Windows). Before the OSI can be created, the team must create a Content Document, which provides the information for a workstation or server, with the list of all the software that is to be installed on it and the set to which the hardware belongs (for example, the LDS, the ADS, or the FR-1). The objective of this project is to create a user-interface Web application that can manage the information in the Content Documents, with all the correct validations and filters for administrator purposes. For this project we used one of the most excellent tools in agile application development, Ruby on Rails. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is amazing to see how a student can learn about OOP features with the Ruby language, manage the user interface with HTML and CSS, create associations and queries with gems, manage databases and run a server with MySQL, run shell commands with the command prompt, and create Web frameworks with Rails. All of this in a real-world project and in just fifteen weeks!

  17. An advanced web query interface for biological databases

    PubMed Central

    Latendresse, Mario; Karp, Peter D.

    2010-01-01

    Although most web-based biological databases (DBs) offer some type of web-based form to allow users to author DB queries, these query forms are quite restricted in the complexity of DB queries that they can formulate. They can typically query only one DB, and can query only a single type of object at a time (e.g. genes) with no possible interaction between the objects—that is, in SQL parlance, no joins are allowed between DB objects. Writing precise queries against biological DBs is usually left to a programmer skillful enough in complex DB query languages like SQL. We present a web interface for building precise queries for biological DBs that can construct much more precise queries than most web-based query forms, yet that is user friendly enough to be used by biologists. It supports queries containing multiple conditions, and connecting multiple object types without using the join concept, which is unintuitive to biologists. This interactive web interface is called the Structured Advanced Query Page (SAQP). Users interactively build up a wide range of query constructs. Interactive documentation within the SAQP describes the schema of the queried DBs. The SAQP is based on BioVelo, a query language based on list comprehension. The SAQP is part of the Pathway Tools software and is available as part of several bioinformatics web sites powered by Pathway Tools, including the BioCyc.org site that contains more than 500 Pathway/Genome DBs. PMID:20624715
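
    BioVelo is based on list comprehension, and Python's comprehensions make the underlying idea easy to see: conditions spanning two object types are expressed without an explicit join. The toy gene/pathway schema below is invented for illustration.

    ```python
    # A comprehension-style query over linked objects, no explicit join.
    genes = [{"name": "trpA", "pathways": ["trp biosynthesis"]},
             {"name": "aroG", "pathways": ["chorismate biosynthesis"]}]

    # "Names of genes that participate in a tryptophan-related pathway"
    hits = [g["name"]
            for g in genes
            for p in g["pathways"]
            if "trp" in p]
    print(hits)  # ['trpA']
    ```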

  18. Results from a Web Impact Factor Crawler.

    ERIC Educational Resources Information Center

    Thelwall, Mike

    2001-01-01

    Discusses Web impact factors (WIFs), Web versions of the impact factors for journals, and how they can be calculated by using search engines. Highlights include HTML and document indexing; Web page links; a Web crawler designed for calculating WIFs; and WIFs for United Kingdom universities that measured research profiles or capability. (Author/LRW)

  19. More than a meal: integrating non-feeding interactions into food webs

    USGS Publications Warehouse

    Kéfi, Sonia; Berlow, Eric L.; Wieters, Evie A.; Navarrete, Sergio A.; Petchey, Owen L.; Wood, Spencer A.; Boit, Alice; Joppa, Lucas N.; Lafferty, Kevin D.; Williams, Richard J.; Martinez, Neo D.; Menge, Bruce A.; Blanchette, Carol A.; Iles, Alison C.; Brose, Ulrich

    2012-01-01

    Organisms eating each other are only one of many types of well documented and important interactions among species. Other such types include habitat modification, predator interference and facilitation. However, ecological network research has been typically limited to either pure food webs or to networks of only a few (<3) interaction types. The great diversity of non-trophic interactions observed in nature has been poorly addressed by ecologists and largely excluded from network theory. Herein, we propose a conceptual framework that organises this diversity into three main functional classes defined by how they modify specific parameters in a dynamic food web model. This approach provides a path forward for incorporating non-trophic interactions in traditional food web models and offers a new perspective on tackling ecological complexity that should stimulate both theoretical and empirical approaches to understanding the patterns and dynamics of diverse species interactions in nature.
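
    One way to read the proposed framework in model terms: a non-trophic interaction enters a dynamic food web model by modifying a parameter of a trophic link. The sketch below lets a habitat-modifying species scale the attack rate in one Euler step of a minimal Lotka-Volterra consumer-resource model; all coefficients are invented and carry no empirical meaning.

    ```python
    # Facilitation as parameter modification in a consumer-resource step.
    def lv_step(R: float, C: float, H: float, dt: float = 0.01):
        r, a0, e, m = 1.0, 0.5, 0.3, 0.2   # growth, base attack, efficiency, mortality
        a = a0 * (1.0 + 0.8 * H)           # species H raises the attack rate
        dR = r * R - a * R * C
        dC = e * a * R * C - m * C
        return R + dR * dt, C + dC * dt

    print(lv_step(R=1.0, C=0.5, H=0.0))    # pure trophic link
    print(lv_step(R=1.0, C=0.5, H=1.0))    # with habitat modification
    ```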

  20. Web Search Engines: Key To Locating Information for All Users or Only the Cognoscenti?

    ERIC Educational Resources Information Center

    Tomaiuolo, Nicholas G.; Packer, Joan G.

    This paper describes a study that attempted to ascertain the degree of success that undergraduates and graduate students, with varying levels of experience using the World Wide Web and Web search engines, and without librarian instruction or intervention, had in locating relevant material on specific topics furnished by the investigators. Because…

  1. Measuring the Success of the Academic Library Website Using Banner Advertisements and Web Conversion Rates: A Case Study

    ERIC Educational Resources Information Center

    Whang, Michael

    2007-01-01

    Measuring website success is critical not only to the web development process but also to demonstrate the value of library services to the institution. This article documents one library's approach to the measurement of website success. LibQUAL+[TM] results and strategic-planning documents indicated a need for a new type of measurement. The…

  2. Research, Collaboration, and Open Science Using Web 2.0

    PubMed Central

    Shee, Kevin; Strong, Michael; Guido, Nicholas J.; Lue, Robert A.; Church, George M.; Viel, Alain

    2010-01-01

    There is little doubt that the Internet has transformed the world in which we live. Information that was once archived in bricks and mortar libraries is now only a click away, and people across the globe have become connected in a manner inconceivable only 20 years ago. Although many scientists and educators have embraced the Internet as an invaluable tool for research, education and data sharing, some have been somewhat slower to take full advantage of emerging Web 2.0 technologies. Here we discuss the benefits and challenges of integrating Web 2.0 applications into undergraduate research and education programs, based on our experience utilizing these technologies in a summer undergraduate research program in synthetic biology at Harvard University. We discuss the use of applications including wiki-based documentation, digital brainstorming, and open data sharing via the Web, to facilitate the educational aspects and collaborative progress of undergraduate research projects. We hope to inspire others to integrate these technologies into their own coursework or research projects. PMID:23653712

  3. Home Page, Sweet Home Page: Creating a Web Presence.

    ERIC Educational Resources Information Center

    Falcigno, Kathleen; Green, Tim

    1995-01-01

    Focuses primarily on design issues and practical concerns involved in creating World Wide Web documents for use within an organization. Concerns for those developing Web home pages are: learning HyperText Markup Language (HTML); defining customer group; allocating staff resources for maintenance of documents; providing feedback mechanism for…

  4. Comparing surgically induced astigmatism calculated by means of simulated keratometry versus total corneal refractive power.

    PubMed

    Garzón, Nuria; Rodríguez-Vallejo, Manuel; Carmona, David; Calvo-Sanz, Jorge A; Poyales, Francisco; Palomino, Carlos; Zato-Gómez de Liaño, Miguel Á; Fernández, Joaquín

    2018-03-01

To evaluate surgically induced astigmatism as computed by means of either simulated keratometry (KSIM) or total corneal refractive power (TCRP) after temporal incisions. Prospective observational study including 36 right eyes undergoing cataract surgery. Astigmatism was measured preoperatively and during the 3-month follow-up period using Pentacam. Surgically induced astigmatism was computed considering anterior corneal surface astigmatism at 3 mm with KSIM and considering both corneal surfaces with TCRP from 1 to 8 mm (TCRP3 for 3 mm). The eyes under study were divided into two balanced groups: LOW, with KSIM astigmatism <0.90 D, and HIGH, with KSIM astigmatism ≥0.90 D. Resulting surgically induced astigmatism values were compared across groups and measuring techniques by means of flattening, steepening, and torque analysis. Mean surgically induced astigmatism was higher in the HIGH group (0.31 D @ 102°) than in the LOW group (0.04 D @ 16°). The temporal incision resulted in a steepening in the HIGH group of 0.15 D @ 90°, as estimated with KSIM, versus 0.28 D @ 90° with TCRP3, but no significant differences were found for the steepening in the LOW group or for the torque in either group. Differences between KSIM- and TCRP3-based surgically induced astigmatism values were negligible in the LOW group. Surgically induced astigmatism was considerably higher in the high-astigmatism group, and its value was underestimated with the KSIM approach. Eyes having low astigmatism should not be included when computing surgically induced astigmatism because steepening would be underestimated.

  5. An Educational Tool for Browsing the Semantic Web

    ERIC Educational Resources Information Center

    Yoo, Sujin; Kim, Younghwan; Park, Seongbin

    2013-01-01

    The Semantic Web is an extension of the current Web where information is represented in a machine processable way. It is not separate from the current Web and one of the confusions that novice users might have is where the Semantic Web is. In fact, users can easily encounter RDF documents that are components of the Semantic Web while they navigate…

  6. Relevance of Web Documents:Ghosts Consensus Method.

    ERIC Educational Resources Information Center

    Gorbunov, Andrey L.

    2002-01-01

    Discusses how to improve the quality of Internet search systems and introduces the Ghosts Consensus Method which is free from the drawbacks of digital democracy algorithms and is based on linear programming tasks. Highlights include vector space models; determining relevant documents; and enriching query terms. (LRW)

  7. Semantic Document Model to Enhance Data and Knowledge Interoperability

    NASA Astrophysics Data System (ADS)

    Nešić, Saša

    To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents as uniquely identified and semantically annotated composite resources, that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.

  8. Globe Teachers Guide and Photographic Data on the Web

    NASA Technical Reports Server (NTRS)

    Kowal, Dan

    2004-01-01

    The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application transformed from a flat-file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also included updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. During this same time period, 333 study site photo images covering 64 schools were added to the GLOBE database and posted on the web. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.

  9. ICTNET at Web Track 2009 Diversity task

    DTIC Science & Technology

    2009-11-01

    performance. On the World Wide Web, there exist many documents which represent several implicit subtopics. We used commerce search engines to gather those...documents. In this task, our work can be divided into five steps. First, we collect documents returned by commerce search engines, and considered

  10. Parents on the web: risks for quality management of cough in children.

    PubMed

    Pandolfini, C; Impicciatore, P; Bonati, M

    2000-01-01

    Health information on the Internet, with respect to common, self-limited childhood illnesses, has been found to be unreliable. Therefore, parents navigating on the Internet risk finding advice that is incomplete or, more importantly, not evidence-based. The importance of a resource such as the Internet as a source of quality health information for consumers should, however, be taken into consideration. For this reason, studies need to be performed regarding the quality of material provided. Various strategies have been proposed that would allow parents to distinguish trustworthy web documents from unreliable ones. One of these strategies is the use of a checklist for the appraisal of web pages based on their technical aspects. The purpose of this study was to assess the quality of information present on the Internet regarding the home management of cough in children and to examine the applicability of a checklist strategy that would allow consumers to select more trustworthy web pages. The Internet was searched for web pages regarding the home treatment of cough in children with the use of different search engines. Medline and the Cochrane database were searched for available evidence concerning the management of cough in children. Three checklists were created to assess different aspects of the web documents. The first checklist was designed to allow for a technical appraisal of the web pages and was based on components such as the name of the author and references used. The second was constructed to examine the completeness of the health information contained in the documents, such as causes and mechanism of cough, and pharmacological and nonpharmacological treatment. The third checklist assessed the quality of the information by measuring it against a gold standard document. This document was created by combining the policy statement issued by the American Academy of Pediatrics regarding the pharmacological treatment of cough in children with the guide of the

  11. Using Web 2.0 for health promotion and social marketing efforts: lessons learned from Web 2.0 experts.

    PubMed

    Dooley, Jennifer Allyson; Jones, Sandra C; Iverson, Don

    2014-01-01

    Web 2.0 experts working in social marketing participated in qualitative in-depth interviews. The research aimed to document the current state of Web 2.0 practice. Perceived strengths (such as the viral nature of Web 2.0) and weaknesses (such as the time consuming effort it took to learn new Web 2.0 platforms) existed when using Web 2.0 platforms for campaigns. Lessons learned were identified--namely, suggestions for engaging in specific types of content creation strategies (such as plain language and transparent communication practices). Findings present originality and value to practitioners working in social marketing who want to effectively use Web 2.0.

  12. Crossroads 2000 proceedings [table of contents hyperlinked to documents

    DOT National Transportation Integrated Search

    1998-08-19

    This document's table of contents hyperlinks to the 76 papers presented at the Crossroads 2000 Conference. The documents are housed at the web site for Iowa State University Center for Transportation Research and Education. A selection of 14 individu...

  13. Social Networking on the Semantic Web

    ERIC Educational Resources Information Center

    Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam

    2005-01-01

    Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…

  14. Topic Models for Link Prediction in Document Networks

    ERIC Educational Resources Information Center

    Kataria, Saurabh

    2012-01-01

    Recent explosive growth of interconnected document collections such as citation networks, network of web pages, content generated by crowd-sourcing in collaborative environments, etc., has posed several challenging problems for data mining and machine learning community. One central problem in the domain of document networks is that of "link…

  15. ADASS Web Database XML Project

    NASA Astrophysics Data System (ADS)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
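
    The relational-to-XML mapping the paper explores can be sketched in a few lines. The example below uses only Python's standard library, with an in-memory SQLite table standing in for the project's MySQL database; the table layout and element names are invented for illustration.

      import sqlite3
      import xml.etree.ElementTree as ET

      # Stand-in for the conference database: an in-memory table of papers.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE papers (year INTEGER, author TEXT, title TEXT)")
      conn.executemany("INSERT INTO papers VALUES (?, ?, ?)",
                       [(1999, "Smith", "Pipeline tools"),
                        (2000, "Jones", "Archive access")])

      # Map each relational row to a well-formed XML element.
      root = ET.Element("conference")
      for year, author, title in conn.execute("SELECT * FROM papers ORDER BY year"):
          paper = ET.SubElement(root, "paper", year=str(year))
          ET.SubElement(paper, "author").text = author
          ET.SubElement(paper, "title").text = title

      print(ET.tostring(root, encoding="unicode"))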

  16. Content Recognition and Context Modeling for Document Analysis and Retrieval

    ERIC Educational Resources Information Center

    Zhu, Guangyu

    2009-01-01

    The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…

  17. 32 CFR 701.119 - Privacy and the web.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Privacy and the web. 701.119 Section 701.119... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON activities shall consult SECNAVINST 5720.47B for guidance on what may be posted on a Navy Web site. ...

  18. The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Sembiring, C. A.; Budiman, M. A.

    2018-03-01

    The rapidly increasing number of web pages and documents leads to topic-specific filtering in order to find web pages or documents efficiently. This is preliminary research that uses cosine similarity to implement text relevance in order to find topic-specific documents. The research is divided into three parts. The first part is text preprocessing, in which punctuation is removed from a document, the document is converted to lower case, stop words are removed, and root words are extracted using the Porter stemming algorithm. The second part is keyword weighting, which is used by the third part, the text relevance calculation. The text relevance calculation yields a value between 0 and 1; the closer the value is to 1, the more related the two documents are, and vice versa.
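
    A minimal sketch of that three-part pipeline follows, assuming a toy stop-word list and omitting the Porter stemming step for brevity. It builds term-frequency vectors and computes their cosine, which lies in [0, 1] as the abstract states.

      import math
      import re
      from collections import Counter

      STOP_WORDS = {"the", "a", "an", "is", "of", "and", "to", "in", "on"}  # toy list

      def preprocess(text):
          """Lowercase, strip punctuation, drop stop words.
          (The paper also applies Porter stemming, omitted here.)"""
          words = re.findall(r"[a-z]+", text.lower())
          return [w for w in words if w not in STOP_WORDS]

      def cosine_similarity(doc_a, doc_b):
          """Term-frequency weighting followed by the cosine of the two
          vectors; 1 means maximally related, 0 means unrelated."""
          a, b = Counter(preprocess(doc_a)), Counter(preprocess(doc_b))
          dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
          norm = math.sqrt(sum(v * v for v in a.values())) * \
                 math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      print(cosine_similarity("The web grows quickly.",
                              "Web pages on the web grow quickly."))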

  19. Model-based document categorization employing semantic pattern analysis and local structure clustering

    NASA Astrophysics Data System (ADS)

    Fume, Kosei; Ishitani, Yasuto

    2008-01-01

    We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with the similarity of the model. The main feature of the proposed method consists of two aspects of semantics extraction from an input document. The semantics of terms are extracted by the semantic pattern analysis and implicit meanings of document substructure are specified by a bottom-up text clustering technique focusing on the similarity of text line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.

  20. XML Content Finally Arrives on the Web!

    ERIC Educational Resources Information Center

    Funke, Susan

    1998-01-01

    Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…

  1. Is nursing ready for WebQuests?

    PubMed

    Lahaie, Ulysses David

    2008-12-01

    Based on an inquiry-oriented framework, WebQuests facilitate the construction of effective learning activities. Developed by Bernie Dodge and Tom March in 1995 at San Diego State University, WebQuests have gained worldwide popularity among educators in the kindergarten through grade 12 sector. However, their application at the college and university levels is not well documented. WebQuests enhance and promote higher-order thinking skills, are consistent with Bloom's Taxonomy, and reflect a learner-centered instructional methodology (constructivism). They are based on solid theoretical foundations and promote critical thinking, inquiry, and problem solving. There is a role for WebQuests in nursing education. A WebQuest example is described in this article.

  2. Human rights abuses, transparency, impunity and the Web.

    PubMed

    Miles, Steven H

    2007-01-01

    This paper reviews how human rights advocates during the "war-on-terror" have found new ways to use the World Wide Web (Web) to combat human rights abuses. These include posting of human rights reports; creating large, open-access and updated archives of government documents and other data, tracking CIA rendition flights and maintaining blogs, e-zines, list-serves and news services that rapidly distribute information between journalists, scholars and human rights advocates. The Web is a powerful communication tool for human rights advocates. It is international, instantaneous, and accessible for uploading, archiving, locating and downloading information. For its human rights potential to be fully realized, international law must be strengthened to promote the declassification of government documents, as is done by various freedom of information acts. It is too early to assess the final impact of the Web on human rights abuses in the "war-on-terror". Wide dissemination of government documents and human rights advocates' reports has put the United States government on the defensive and some of its policies have changed in response to public pressure. Even so, the essential elements of secret prisons, detention without charges or trials, and illegal rendition remain intact.

  3. Storing and Viewing Electronic Documents.

    ERIC Educational Resources Information Center

    Falk, Howard

    1999-01-01

    Discusses the conversion of fragile library materials to computer storage and retrieval to extend the life of the items and to improve accessibility through the World Wide Web. Highlights include entering the images, including scanning; optical character recognition; full text and manual indexing; and available document- and image-management…

  4. Automatic document classification of biological literature

    PubMed Central

    Chen, David; Müller, Hans-Michael; Sternberg, Paul W

    2006-01-01

    Background Document classification is a widespread problem with many applications, from organizing search engine snippets to spam filtering. We previously described Textpresso, a text-mining system for biological literature, which marks up full text according to a shallow ontology that includes terms of biological interest. This project investigates document classification in the context of biological literature, making use of the Textpresso markup of a corpus of Caenorhabditis elegans literature. Results We present a two-step text categorization algorithm to classify a corpus of C. elegans papers. Our classification method first uses a support vector machine-trained classifier, followed by a novel, phrase-based clustering algorithm. This clustering step autonomously creates cluster labels that are descriptive and understandable by humans. This clustering engine performed better on a standard test set (Reuters-21578) than previously published results (F-value of 0.55 vs. 0.49), while producing cluster descriptions that appear more useful. A web interface allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. Conclusion We have demonstrated a simple method to classify biological documents that embodies an improvement over current methods. While the classification results are currently optimized for Caenorhabditis elegans papers by human-created rules, the classification engine can be adapted to different types of documents. We have demonstrated this by presenting a web interface that allows researchers to quickly navigate through the hierarchy and look for documents that belong to a specific concept. PMID:16893465
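
    The two-step shape of the algorithm (classify, then cluster the accepted documents) can be sketched with scikit-learn as below. LinearSVC and KMeans stand in for the paper's SVM classifier and its phrase-based clustering, which is not reproduced here; the snippets and labels are invented.

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      train_docs = ["unc-22 mutant phenotype", "muscle twitching assay",
                    "stock market report", "quarterly earnings call"]
      train_labels = [1, 1, 0, 0]        # 1 = C. elegans paper, 0 = other

      # Step 1: train an SVM on TF-IDF features and filter new documents.
      vec = TfidfVectorizer()
      clf = LinearSVC().fit(vec.fit_transform(train_docs), train_labels)

      new_docs = ["twitching unc-22 allele", "mutant muscle assay",
                  "market earnings report"]
      accepted = [d for d, y in zip(new_docs, clf.predict(vec.transform(new_docs)))
                  if y == 1]

      # Step 2: cluster the accepted documents into finer-grained groups.
      if len(accepted) >= 2:
          labels = KMeans(n_clusters=2, n_init=10).fit_predict(vec.transform(accepted))
          print(list(zip(accepted, labels)))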

  5. Font adaptive word indexing of modern printed documents.

    PubMed

    Marinai, Simone; Marino, Emanuele; Soda, Giovanni

    2006-08-01

    We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of one suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.

  6. Tags Extraction from Spatial Documents in Search Engines

    NASA Astrophysics Data System (ADS)

    Borhaninejad, S.; Hakimpour, F.; Hamzei, E.

    2015-12-01

    Nowadays, selective access to information on the Web is provided by search engines, but in cases where the data includes spatial information, the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information which lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database, and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an application server as a simulation of the Web. Our system, as a spatial search engine, provides search capability across GML documents, and thus takes an important step toward improving the efficiency of search engines.
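
    The extraction step in the crawler component might look like the following sketch, which pulls feature names and coordinates out of a GML fragment with Python's ElementTree. The toy document and the exact tag layout are illustrative, since the abstract does not specify the project's schema handling.

      import xml.etree.ElementTree as ET

      # A toy GML fragment; real crawled documents would be fetched from the Web.
      gml_doc = """<gml:FeatureCollection xmlns:gml="http://www.opengis.net/gml">
        <gml:featureMember>
          <gml:name>Central Library</gml:name>
          <gml:Point><gml:pos>35.70 51.40</gml:pos></gml:Point>
        </gml:featureMember>
      </gml:FeatureCollection>"""

      GML = "{http://www.opengis.net/gml}"
      root = ET.fromstring(gml_doc)

      # Extract name/coordinate pairs for indexing in the database component.
      index = []
      for member in root.iter(GML + "featureMember"):
          name = member.findtext(GML + "name")
          pos = member.findtext(f"{GML}Point/{GML}pos")
          lat, lon = map(float, pos.split())
          index.append({"name": name, "lat": lat, "lon": lon})

      print(index)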

  7. Semantic annotation of Web data applied to risk in food.

    PubMed

    Hignette, Gaëlle; Buche, Patrice; Couvert, Olivier; Dibie-Barthélemy, Juliette; Doussot, David; Haemmerlé, Ollivier; Mettler, Eric; Soler, Lydie

    2008-11-30

    A preliminary step to risk in food assessment is the gathering of experimental data. In the framework of the Sym'Previus project (http://www.symprevius.org), a complete data integration system has been designed, grouping data provided by industrial partners and data extracted from papers published in the main scientific journals of the domain. Those data have been classified by means of a predefined vocabulary, called ontology. Our aim is to complement the database with data extracted from the Web. In the framework of the WebContent project (www.webcontent.fr), we have designed a semi-automatic acquisition tool, called @WEB, which retrieves scientific documents from the Web. During the @WEB process, data tables are extracted from the documents and then annotated with the ontology. We focus on the data tables as they contain, in general, a synthesis of data published in the documents. In this paper, we explain how the columns of the data tables are automatically annotated with data types of the ontology and how the relations represented by the table are recognised. We also give the results of our experimentation to assess the quality of such an annotation.

  8. WebGLORE: a web service for Grid LOgistic REgression.

    PubMed

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-12-15

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation.
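
    Grid logistic regression of this kind fits a single global model by having each site contribute only the aggregate statistics of a Newton-Raphson step (a gradient and a Hessian), never the raw records. The NumPy sketch below illustrates that idea on synthetic data; it is a conceptual outline, not WebGLORE's actual implementation.

      import numpy as np

      def local_statistics(X, y, beta):
          """Per-site contribution to one Newton-Raphson step: the gradient
          and Hessian of the logistic log-likelihood. Only these aggregates
          leave the site; the raw records (X, y) never do."""
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          grad = X.T @ (y - p)
          hess = -(X.T * (p * (1 - p))) @ X
          return grad, hess

      rng = np.random.default_rng(0)
      sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50))
               for _ in range(3)]                    # three synthetic sites

      beta = np.zeros(3)
      for _ in range(25):                            # server-side iterations
          grads, hessians = zip(*(local_statistics(X, y, beta) for X, y in sites))
          beta -= np.linalg.solve(sum(hessians), sum(grads))
      print("global coefficients:", beta)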

  9. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis

    PubMed Central

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475

  10. World-Wide Web: The Information Universe.

    ERIC Educational Resources Information Center

    Berners-Lee, Tim; And Others

    1992-01-01

    Describes the World-Wide Web (W3) project, which is designed to create a global information universe using techniques of hypertext, information retrieval, and wide area networking. Discussion covers the W3 data model, W3 architecture, the document naming scheme, protocols, document formats, comparison with other systems, experience with the W3…

  11. Location-based Web Search

    NASA Astrophysics Data System (ADS)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
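
    At its simplest, the location-extractor component can be approximated by a gazetteer lookup that attaches coordinates to a page, as in the sketch below. The two gazetteer entries are invented, and the chapter's real heuristics (address patterns, disambiguation, regional focus) are considerably richer.

      # A minimal sketch of the location-extractor idea: match page text
      # against a small gazetteer and attach the coordinates as the page's
      # geo-context for later spatial indexing.
      GAZETTEER = {
          "oldenburg": (53.14, 8.21),
          "bremen": (53.08, 8.80),
      }

      def geo_context(page_text):
          """Return (place, lat, lon) hits found in a crawled page."""
          text = page_text.lower()
          return [(place, *coords) for place, coords in GAZETTEER.items()
                  if place in text]

      page = "Our office in Oldenburg is open weekdays; see directions below."
      print(geo_context(page))   # [('oldenburg', 53.14, 8.21)]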

  12. WebGLORE: a Web service for Grid LOgistic REgression

    PubMed Central

    Jiang, Wenchao; Li, Pinghao; Wang, Shuang; Wu, Yuan; Xue, Meng; Ohno-Machado, Lucila; Jiang, Xiaoqian

    2013-01-01

    WebGLORE is a free web service that enables privacy-preserving construction of a global logistic regression model from distributed datasets that are sensitive. It only transfers aggregated local statistics (from participants) through Hypertext Transfer Protocol Secure to a trusted server, where the global model is synthesized. WebGLORE seamlessly integrates AJAX, JAVA Applet/Servlet and PHP technologies to provide an easy-to-use web service for biomedical researchers to break down policy barriers during information exchange. Availability and implementation: http://dbmi-engine.ucsd.edu/webglore3/. WebGLORE can be used under the terms of GNU general public license as published by the Free Software Foundation. Contact: x1jiang@ucsd.edu PMID:24072732

  13. Characteristics of Food Industry Web Sites and "Advergames" Targeting Children

    ERIC Educational Resources Information Center

    Culp, Jennifer; Bell, Robert A.; Cassady, Diana

    2010-01-01

    Objective: To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty; and to document health-promoting messages on these Web sites. Design: A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A…

  14. Sign language Web pages.

    PubMed

    Fels, Deborah I; Richards, Jan; Hardman, Jim; Lee, Daniel G

    2006-01-01

    The WORLD WIDE WEB has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The present article describes a system that allows sign language-only Web pages to be created and linked through a video-based technique called sign-linking. In two studies, 14 Deaf participants examined two iterations of sign-linked Web pages to gauge the usability and learnability of a signing Web page interface. The first study indicated that signing Web pages were usable by sign language users but that some interface features required improvement. The second study showed increased usability for those features; users consequently could navigate sign language information with ease and pleasure.

  15. [Analysis of the web pages of the intensive care units of Spain].

    PubMed

    Navarro-Arnedo, J M

    2009-01-01

    In order to determine which intensive care units (ICUs) of Spanish hospitals had a web site, to analyze the information they offered, and to establish what information they should offer according to a sample of ICU nurses, a cross-sectional, observational, descriptive study was carried out between January and September 2008. For each ICU website, an analysis was made of the information available on the unit and its care, teaching, research, and nursing activity. Simultaneously, based on a sample of intensive care nurses, the information that should be contained on an ICU website was determined. The results, expressed in absolute numbers and percentages, showed that 66 of the 292 hospitals with an ICU (22.6%) had a web site; 50.7% of the sites showed the number of beds, 19.7% the activity report, 11.3% the published articles/studies and ongoing research lines, and 9.9% the organized training courses. Fourteen sites (19.7%) displayed images of nurses. However, only 1 (1.4%) offered guides on the procedures followed. No web site offered a navigation section for nursing, the e-mail address of the head nurse, the nursing documentation used, or whether any nursing model of their own was applied. It is concluded that only about one-fourth of Spanish hospitals with an ICU have a web site; the number of beds was the data offered by most sites, whereas information on care, educational, and research activities was very limited, and information on nursing was practically absent from the web pages of intensive care units.

  16. Overview of Historical Earthquake Document Database in Japan and Future Development

    NASA Astrophysics Data System (ADS)

    Nishiyama, A.; Satake, K.

    2014-12-01

    In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not effectively utilized by researchers, due to contamination by low-reliability historical records and the difficulty of keyword searching by characters and dates. To overcome these problems and to promote historical earthquake studies in Japan, construction of a text database started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its authors investigated the source books or original texts of the historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of its written age. Another project compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast in Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century), and constructed a text database and a seismic intensity database. These are now published on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. Therefore, we plan to digitize all of the remaining historical documents under a research program that started in 2014. The specification of the database will be similar to that of the previous ones. We also plan to combine this database with a liquefaction-traces database, to be constructed by another research program, by adding the location information described in the historical documents. The constructed database would be utilized to estimate the distributions of seismic intensities and tsunami

  17. 32 CFR 701.119 - Privacy and the web.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...

  18. 32 CFR 701.119 - Privacy and the web.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...

  19. 32 CFR 701.119 - Privacy and the web.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...

  20. 32 CFR 701.119 - Privacy and the web.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...

  1. JPL, NASA and the Historical Record: Key Events/Documents in Lunar and Mars Exploration

    NASA Technical Reports Server (NTRS)

    Hooks, Michael Q.

    1999-01-01

    This document represents a presentation about the Jet Propulsion Laboratory (JPL) historical archives in the area of lunar and Martian exploration. The JPL archives document the history of JPL's flight projects, research and development activities, and administrative operations. The archives are in a variety of formats. The presentation reviews the information available through the JPL archives web site, information available through the Regional Planetary Image Facility web site, and the information on past missions available through those web sites. The presentation also reviews the NASA historical resources at the NASA History Office and the National Archives and Records Administration.

  2. Comparing Web, Group and Telehealth Formats of a Military Parenting Program

    DTIC Science & Technology

    2017-06-01

    AWARD NUMBER: W81XWH-14-1-0143. TITLE: Comparing Web, Group and Telehealth Formats of a Military Parenting Program. PRINCIPAL INVESTIGATOR... The views expressed should not be construed as an official Department of the Army position, policy or decision unless so designated by other documentation.

  3. Documenting historical data and accessing it on the World Wide Web

    Treesearch

    Malchus B. Baker; Daniel P. Huebner; Peter F. Ffolliott

    2000-01-01

    New computer technologies facilitate the storage, retrieval, and summarization of watershed-based data sets on the World Wide Web. These data sets are used by researchers when testing and validating predictive models, managers when planning and implementing watershed management practices, educators when learning about hydrologic processes, and decisionmakers when...

  4. Documentation systems for educators seeking academic promotion in U.S. medical schools.

    PubMed

    Simpson, Deborah; Hafler, Janet; Brown, Diane; Wilkerson, LuAnn

    2004-08-01

    To explore the state and use of teaching portfolios in promotion and tenure in U.S. medical schools. A two-phase qualitative study using a Web-based search procedure and telephone interviews was conducted. The first phase assessed the penetration of teaching portfolio-like systems in U.S. medical schools using a keyword search of medical school Web sites. The second phase examined the current use of teaching portfolios in 16 U.S. medical schools that reported their use in a survey in 1992. The individual designated as having primary responsibility for faculty appointments/promotions was contacted to participate in a 30-60 minute interview. The Phase 1 search of U.S. medical schools' Web sites revealed that 76 medical schools have Web-based access to information on documenting educational activities for promotion. A total of 16 of 17 medical schools responded to Phase 2. All 16 continued to use a portfolio-like system in 2003. Two documentation categories, honors/awards and philosophy/personal statement regarding education, were included by six more of these schools than used these categories in 1992. Dissemination of work to colleagues is now a key inclusion at 15 of the Phase 2 schools. The most common type of evidence used to document education was learner and/or peer ratings with infrequent use of outcome measures and internal/external review. The number of medical schools whose promotion packets include portfolio-like documentation associated with a faculty member's excellence in education has increased by more than 400% in just over ten years. Among early-responder schools the types of documentation categories have increased, but students' ratings of teaching remain the primary evidence used to document the quality or outcomes of the educational efforts reported.

  5. WebProtégé: a collaborative Web-based platform for editing biomedical ontologies.

    PubMed

    Horridge, Matthew; Tudorache, Tania; Nuylas, Csongor; Vendetti, Jennifer; Noy, Natalya F; Musen, Mark A

    2014-08-15

    WebProtégé is an open-source Web application for editing OWL 2 ontologies. It contains several features to aid collaboration, including support for the discussion of issues, change notification and revision-based change tracking. WebProtégé also features a simple user interface, which is geared towards editing the kinds of class descriptions and annotations that are prevalent throughout biomedical ontologies. Moreover, it is possible to configure the user interface using views that are optimized for editing Open Biomedical Ontology (OBO) class descriptions and metadata. Some of these views are shown in the Supplementary Material and can be seen in WebProtégé itself by configuring the project as an OBO project. WebProtégé is freely available for use on the Web at http://webprotege.stanford.edu. It is implemented in Java and JavaScript using the OWL API and the Google Web Toolkit. All major browsers are supported. For users who do not wish to host their ontologies on the Stanford servers, WebProtégé is available as a Web app that can be run locally using a Servlet container such as Tomcat. Binaries, source code and documentation are available under an open-source license at http://protegewiki.stanford.edu/wiki/WebProtege. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. A document centric metadata registration tool constructing earth environmental data infrastructure

    NASA Astrophysics Data System (ADS)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity, and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the covering geographic scope or the creation date. The metadata standards covering these common attributes are published by the geographic information technical committee (TC211) of the ISO (International Organization for Standardization) as ISO 19115:2003 and ISO 19139:2007. Accordingly, DIAS metadata is developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners, and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize the extra tasks and time spent creating metadata. The other is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and that there is no extra cost for data providers to generate metadata. The tool is developed as a Web application, so it does not demand any software from data providers as long as they have a web browser. The interface of the tool

  7. An open annotation ontology for science on web 3.0.

    PubMed

    Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim

    2011-05-17

    There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools was then developed along with a metadata model in OWL, and the ontology was deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents the Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/. The Annotation Ontology meets critical requirements for
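
    A stand-off annotation in the spirit of AO can be sketched with rdflib as follows: the annotation is a separate resource that points at the document and anchors to a text span by offsets, so the document itself is never modified. The ao: namespace and property names below are simplified stand-ins, not the vocabulary actually published at http://purl.org/ao/.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import RDF

      # Illustrative namespace only; the real AO vocabulary differs.
      AO = Namespace("http://example.org/ao#")

      g = Graph()
      ann = URIRef("http://example.org/ann/1")
      doc = URIRef("http://example.org/papers/12345")
      term = URIRef("http://purl.obolibrary.org/obo/GO_0006915")  # apoptosis

      g.add((ann, RDF.type, AO.Annotation))
      g.add((ann, AO.annotatesResource, doc))      # stand-off: doc untouched
      g.add((ann, AO.hasTopic, term))              # link to a formal ontology
      g.add((ann, AO.onText, Literal("programmed cell death")))
      g.add((ann, AO.startOffset, Literal(1042)))  # anchor by text position
      g.add((ann, AO.endOffset, Literal(1063)))

      print(g.serialize(format="turtle"))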

  8. The Use of Supporting Documentation for Information Architecture by Australian Libraries

    ERIC Educational Resources Information Center

    Hider, Philip; Burford, Sally; Ferguson, Stuart

    2009-01-01

    This article reports the results of an online survey that examined the development of information architecture of Australian library Web sites with reference to documented methods and guidelines. A broad sample of library Web managers responded from across the academic, public, and special sectors. A majority of libraries used either in-house or…

  9. Local File Disclosure Vulnerability: A Case Study of Public-Sector Web Applications

    NASA Astrophysics Data System (ADS)

    Ahmed, M. Imran; Maruf Hassan, Md; Bhuyian, Touhid

    2018-01-01

    Almost all public-sector organisations in Bangladesh now offer online services through web applications, along with the existing channels, in their endeavour to realise the dream of a 'Digital Bangladesh'. Nations across the world have joined the online environment thanks to training and awareness initiatives by their governments. File sharing and downloading activities using web applications have now become very common, not only ensuring the easy distribution of different types of files and documents but also enormously reducing the time and effort of users. Although the online services that are frequently used have made users' lives easier, they have increased the risk of exploitation of local file disclosure (LFD) vulnerability in the web applications of different public-sector organisations due to insecure design and careless coding. This paper analyses the root cause of LFD vulnerability, its exploitation techniques, and its impact on 129 public-sector websites in Bangladesh, using a manual black-box testing approach.
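
    The root cause the paper analyses, a user-supplied file name joined into a filesystem path without validation, and the usual mitigation can be sketched in Python as follows. The document root, paths, and function names are illustrative, not taken from the audited applications.

      from pathlib import Path

      DOC_ROOT = Path("/var/www/app/public_docs")   # hypothetical docroot

      def serve_file_unsafe(filename):
          # Vulnerable pattern: a user-supplied name is joined directly, so
          # "?file=../../etc/passwd" walks out of the document root (LFD).
          return (DOC_ROOT / filename).read_bytes()

      def serve_file_safe(filename):
          # Mitigation: resolve the final path and refuse anything that
          # falls outside the document root (requires Python 3.9+).
          target = (DOC_ROOT / filename).resolve()
          if not target.is_relative_to(DOC_ROOT.resolve()):
              raise PermissionError("path traversal attempt blocked")
          return target.read_bytes()

      try:
          serve_file_safe("../../etc/passwd")
      except (PermissionError, FileNotFoundError) as err:
          print(err)   # path traversal attempt blocked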

  10. WebGIVI: a web-based gene enrichment analysis and visualization tool.

    PubMed

    Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J

    2017-05-04

    A major challenge of high-throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes and informative terms (iTerms) that are obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool (http://raven.anr.udel.edu/webgivi/) to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data-Driven Documents (D3) JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and corresponding iTerm list. This list can be submitted to visualize the gene:iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI also supports uploading and visualization of any two-column tab-separated data. WebGIVI provides an interactive and integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate a high-resolution image that is important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI. The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php.

  11. ICCE/ICCAI 2000 Full & Short Papers (Web-Based Learning).

    ERIC Educational Resources Information Center

    2000

    This document contains full and short papers on World Wide Web-based learning from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction). Topics covered include: design and development of CAL (Computer Assisted Learning) systems; design and development of WBI (Web-Based…

  12. Web Application Software for Ground Operations Planning Database (GOPDb) Management

    NASA Technical Reports Server (NTRS)

    Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey

    2013-01-01

    A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.

  13. Free Web-based personal health records: an analysis of functionality.

    PubMed

    Fernández-Alemán, José Luis; Seva-Llor, Carlos Luis; Toval, Ambrosio; Ouhbi, Sofia; Fernández-Luque, Luis

    2013-12-01

    This paper analyzes and assesses the functionality of free Web-based PHRs as regards health information, user actions, and connection with other tools. A systematic literature review in Medline, ACM Digital Library, IEEE Digital Library and ScienceDirect was used to select 19 free Web-based PHRs from the 47 PHRs identified. The results show that none of the PHRs selected met 100% of the 28 functions presented in this paper. Two free Web-based PHRs target a particular public. Around 90% of the PHRs identified allow users throughout the world to create their own profiles without any geographical restrictions. Only half of the PHRs selected provide physicians with user actions. Few PHRs can connect with other tools. There was considerable variability in the types of data included in free Web-based PHRs. Functionality may have implications for PHR use and adoption, particularly as regards patients with chronic illnesses or disabilities. Support for standard medical document formats and protocols is required to enable data to be exchanged with other stakeholders in the health care domain. The results of our study may assist users in selecting the PHR that best fits their needs, since no significant connection exists between the number of functions of the PHRs identified and their popularity.

  14. 10 CFR 2.1303 - Availability of documents.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING PROCEEDINGS AND ISSUANCE OF ORDERS Procedures for Hearings on License Transfer Applications § 2.1303 Availability of documents. Unless exempt... for a license transfer requiring Commission approval will be placed at the NRC Web site, http://www...

  15. 10 CFR 2.1303 - Availability of documents.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING PROCEEDINGS AND ISSUANCE OF ORDERS Procedures for Hearings on License Transfer Applications § 2.1303 Availability of documents. Unless exempt... for a license transfer requiring Commission approval will be placed at the NRC Web site, http://www...

  16. Intranet-based quality improvement documentation at the Veterans Affairs Maryland Health Care System.

    PubMed

    Borkowski, A; Lee, D H; Sydnor, D L; Johnson, R J; Rabinovitch, A; Moore, G W

    2001-01-01

    The Pathology and Laboratory Medicine Service of the Veterans Affairs Maryland Health Care System is inspected biannually by the College of American Pathologists (CAP). As of the year 2000, all documentation in the Anatomic Pathology Section is available to all staff through the VA Intranet. Signed, supporting paper documents are on file in the office of the department chair. For the year 2000 CAP inspection, inspectors conducted their document review by use of these Web-based documents, in which each CAP question had a hyperlink to the corresponding section of the procedure manual. Thus inspectors were able to locate the documents relevant to each question quickly and efficiently. The procedure manuals consist of 87 procedures for surgical pathology, 52 procedures for cytopathology, and 25 procedures for autopsy pathology. Each CAP question requiring documentation had from one to three hyperlinks to the corresponding section of the procedure manual. Intranet documentation allows for easier sharing among decentralized institutions and for centralized updates of the laboratory documentation. These documents can be upgraded to allow for multimedia presentations, including text search for key words, hyperlinks to other documents, and images, audio, and video. Use of Web-based documents can improve the efficiency of the inspection process.

  17. Sensor web

    NASA Technical Reports Server (NTRS)

    Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)

    2011-01-01

    A Sensor Web formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock so that all of the sensor pods in the Web have a synchronized clock. The synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization to finely sync all the pods on the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and responding, and then listen only during time slots corresponding to those pods that respond.
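
    A toy simulation of the coarse-then-fine idea follows: a cheap broadcast gets every pod within about a second of the master, and a subsequent offset measurement trims the residual error to jitter scale. The values and error bounds are illustrative, not taken from the patent.

      import random

      MASTER = 1000.000                              # master clock, seconds
      pods = [MASTER + random.uniform(-5.0, 5.0) for _ in range(6)]  # drifted

      # Coarse phase (low power): adopt a whole-second broadcast of the
      # master time, bringing every pod within about one second.
      pods = [float(int(MASTER)) for _ in pods]

      # Fine phase: measure the remaining offset against the master (e.g.
      # via timestamped message exchange) and apply it; only jitter-sized
      # error remains.
      pods = [p + (MASTER - p) + random.uniform(-0.001, 0.001) for p in pods]

      print(max(abs(p - MASTER) for p in pods))      # residual error ~1 ms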

  18. 12 CFR 611.1216 - Public availability of documents related to the termination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... termination. 611.1216 Section 611.1216 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM ORGANIZATION Termination of System Institution Status § 611.1216 Public availability of documents related to the termination. (a) We may post on our Web site, or require you to post on your Web site: (1) Results...

  19. Centrality based Document Ranking

    DTIC Science & Technology

    2014-11-01

    clinical domain and very uncommon elsewhere. A regular IR system may fail to rank documents from such a domain, dealing with symptoms, diagnosis and...description). We prepared a hand-crafted list of synonyms for each of the query types, viz. diagnosis, test and treatment. This list was used to expand the...

  20. Utilization of a Web-Based vs Integrated Phone/Web Cessation Program Among 140,000 Tobacco Users: An Evaluation Across 10 Free State Quitlines

    PubMed Central

    Vickerman, Katrina A; Kellogg, Elizabeth S; Zbikowski, Susan M

    2015-01-01

    Background Phone-based tobacco cessation program effectiveness has been established and randomized controlled trials have provided some support for Web-based services. Relatively little is known about who selects different treatment modalities and how they engage with treatments in a real-world setting. Objective This paper describes the characteristics, Web utilization patterns, and return rates of tobacco users who self-selected into a Web-based (Web-Only) versus integrated phone/Web (Phone/Web) cessation program. Methods We examined the demographics, baseline tobacco use, Web utilization patterns, and return rates of 141,429 adult tobacco users who self-selected into a Web-Only or integrated Phone/Web cessation program through 1 of 10 state quitlines from August 2012 through July 2013. For each state, registrants were only included from the timeframe in which both programs were offered to all enrollees. Utilization data were limited to site interactions occurring within 6 months after registration. Results Most participants selected the Phone/Web program (113,019/141,429, 79.91%). After enrollment in Web services, Web-Only were more likely to log in compared to Phone/Web (21,832/28,410, 76.85% vs 23,920/56,892, 42.04%; P<.001), but less likely to return after their initial log-in (8766/21,832, 40.15% vs 13,966/23,920, 58.39%; P<.001). In bivariate and multivariable analyses, those who chose Web-Only were younger, healthier, more highly educated, more likely to be uninsured or commercially insured, more likely to be white non-Hispanic and less likely to be black non-Hispanic, less likely to be highly nicotine-addicted, and more likely to have started their program enrollment online (all P<.001). Among both program populations, participants were more likely to return to Web services if they were women, older, more highly educated, or were sent nicotine replacement therapy (NRT) through their quitline (all P<.001). Phone/Web were also more likely to return if they

  1. Utilization of a Web-based vs integrated phone/Web cessation program among 140,000 tobacco users: an evaluation across 10 free state quitlines.

    PubMed

    Nash, Chelsea M; Vickerman, Katrina A; Kellogg, Elizabeth S; Zbikowski, Susan M

    2015-02-04

    Phone-based tobacco cessation program effectiveness has been established and randomized controlled trials have provided some support for Web-based services. Relatively little is known about who selects different treatment modalities and how they engage with treatments in a real-world setting. This paper describes the characteristics, Web utilization patterns, and return rates of tobacco users who self-selected into a Web-based (Web-Only) versus integrated phone/Web (Phone/Web) cessation program. We examined the demographics, baseline tobacco use, Web utilization patterns, and return rates of 141,429 adult tobacco users who self-selected into a Web-Only or integrated Phone/Web cessation program through 1 of 10 state quitlines from August 2012 through July 2013. For each state, registrants were only included from the timeframe in which both programs were offered to all enrollees. Utilization data were limited to site interactions occurring within 6 months after registration. Most participants selected the Phone/Web program (113,019/141,429, 79.91%). After enrollment in Web services, Web-Only were more likely to log in compared to Phone/Web (21,832/28,410, 76.85% vs 23,920/56,892, 42.04%; P<.001), but less likely to return after their initial log-in (8766/21,832, 40.15% vs 13,966/23,920, 58.39%; P<.001). In bivariate and multivariable analyses, those who chose Web-Only were younger, healthier, more highly educated, more likely to be uninsured or commercially insured, more likely to be white non-Hispanic and less likely to be black non-Hispanic, less likely to be highly nicotine-addicted, and more likely to have started their program enrollment online (all P<.001). Among both program populations, participants were more likely to return to Web services if they were women, older, more highly educated, or were sent nicotine replacement therapy (NRT) through their quitline (all P<.001). Phone/Web were also more likely to return if they had completed a coaching call

  2. Web thickness determines the therapeutic effect of endoscopic keel placement on anterior glottic web.

    PubMed

    Chen, Jian; Shi, Fang; Chen, Min; Yang, Yue; Cheng, Lei; Wu, Haitao

    2017-10-01

    This work is a retrospective analysis to investigate the critical risk factor for the therapeutic effect of endoscopic keel placement on anterior glottic web. Altogether, 36 patients with anterior glottic web undergoing endoscopic lysis and silicone keel placement were enrolled. Their voice quality was evaluated using the voice handicap index-10 (VHI-10) questionnaire and improved significantly 3 months after surgery (21.53 ± 3.89 vs 9.81 ± 6.68, P < 0.0001). However, 10 (27.8%) cases had web recurrence during follow-up of at least 1 year. Patients were therefore classified according to the Cohen classification or web thickness, and the recurrence rates were compared. The recurrence rates for Cohen types 1-4 were 28.6%, 16.7%, 33.3%, and 40%, respectively; the difference was not statistically significant (P = 0.461). When classified by web thickness, only 2 of 27 (7.41%) thin-type cases relapsed, whereas 8 of 9 (88.9%) cases in the thick group re-formed webs (P < 0.001). These results suggest that the therapeutic outcome of endoscopic keel placement depends mostly on web thickness rather than Cohen grade. Endoscopic lysis and keel placement is effective only for cases with thin glottic webs; patients with thick webs should be treated by other means.

  3. An open annotation ontology for science on web 3.0

    PubMed Central

    2011-01-01

    Background There is currently a gap between the rich and expressive collection of published biomedical ontologies and the natural language expression of the biomedical papers consumed daily by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Methods Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools was then developed along with a metadata model in OWL, and the ontology was deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. Results This paper presents the Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables “stand-off” or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO’s Google Code page: http://code.google.com/p/annotation-ontology/. Conclusions The

  4. Bibliometrics of the World Wide Web: An Exploratory Analysis of the Intellectual Structure of Cyberspace.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1996-01-01

    Examines the bibliometrics of the World Wide Web based on analysis of Web pages collected by the Inktomi "Web Crawler" and on the use of the DEC AltaVista search engine for cocitation analysis of a set of Earth Science related Web sites. Looks at the statistical characteristics of Web documents and their hypertext links, and the…

  5. Scale-free characteristics of random networks: the topology of the world-wide web

    NASA Astrophysics Data System (ADS)

    Barabási, Albert-László; Albert, Réka; Jeong, Hawoong

    2000-06-01

    The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
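
    The mechanism behind such scale-free growth is well known from this line of work: new documents attach preferentially to already well-linked documents. A minimal Monte Carlo sketch of preferential attachment follows; the node count and links-per-document parameter are illustrative assumptions, not values from the paper.

    ```python
    import random

    def barabasi_albert(n_nodes=1000, links_per_node=2, seed=1):
        """Each new document links to existing ones with probability
        proportional to current degree (the 'rich get richer' rule)."""
        random.seed(seed)
        degree_bag = [0, 1]              # one initial link: nodes 0 and 1
        for new in range(2, n_nodes):
            chosen = set()
            while len(chosen) < links_per_node:
                chosen.add(random.choice(degree_bag))   # degree-proportional pick
            for target in chosen:
                degree_bag.extend([new, target])        # one bag entry per link end
        return degree_bag

    bag = barabasi_albert()
    degrees = {}
    for node in bag:
        degrees[node] = degrees.get(node, 0) + 1
    # Early, well-connected documents become hubs; the degree distribution
    # develops the heavy tail characteristic of scale-free networks.
    print(max(degrees.values()), min(degrees.values()))
    ```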

  6. The anatomy of a World Wide Web library service: the BONES demonstration project. Biomedically Oriented Navigator of Electronic Services.

    PubMed Central

    Schnell, E H

    1995-01-01

    In 1994, the John A. Prior Health Sciences Library at Ohio State University began to develop a World Wide Web demonstration project, the Biomedically Oriented Navigator of Electronic Services (BONES). The initial intent of BONES was to facilitate the health professional's access to Internet resources by organizing them in a systematic manner. The project not only met this goal but also helped identify the resources needed to launch a full-scale Web library service. This paper discusses the tasks performed and resources used in the development of BONES and describes the creation and organization of documents on the BONES Web server. The paper also discusses the outcomes of the project and the impact on the library's staff and services. PMID:8547903

  7. Data Mining of Web-Based Documents on Social Networking Sites That Included Suicide-Related Words Among Korean Adolescents.

    PubMed

    Song, Juyoung; Song, Tae Min; Seo, Dong-Chul; Jin, Jae Hyun

    2016-12-01

    To investigate online search activity for suicide-related words among South Korean adolescents through data mining of social media Web sites, as the suicide rate in South Korea is one of the highest in the world. Out of more than 2.35 billion posts made over 2 years, from January 1, 2011 to December 31, 2012, on 163 social media Web sites in South Korea, 99,693 suicide-related documents were retrieved by crawler and analyzed using text mining and opinion mining. These data were further combined with the monthly employment rate, monthly rental prices index, monthly youth suicide rate, and monthly number of reported bullying victims to fit multilevel models as well as structural equation models. The path from grade pressure to suicide risk showed the largest standardized coefficient (beta = .357, p < .001) in the structural models and a significant random effect (p < .01) in the multilevel models. Depression was a partial mediator between suicide risk and grade pressure, low body image, victimization by bullying, and concerns about disease. The largest total effect was observed along the path from grade pressure through depression to suicide risk. The multilevel models indicate that about 27% of the variance in daily suicide-related word search activity is explained by month-to-month variation. A lower employment rate, a higher rental prices index, and more bullying were associated with increased suicide-related word search activity. Academic pressure appears to be the biggest contributor to Korean adolescents' suicide risk. A real-time monitoring and response system for suicide-related word search activity needs to be developed. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  8. 4 CFR 201.3 - Publicly available documents and electronic reading room.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    § 201.3 (Title 4, Accounts; RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD, PUBLIC INFORMATION AND REQUESTS): Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web sit...

  9. 4 CFR 201.3 - Publicly available documents and electronic reading room.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    § 201.3 (Title 4, Accounts; RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD, PUBLIC INFORMATION AND REQUESTS): Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web sit...

  10. 4 CFR 201.3 - Publicly available documents and electronic reading room.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    § 201.3 (Title 4, Accounts; RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD, PUBLIC INFORMATION AND REQUESTS): Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web sit...

  11. 4 CFR 201.3 - Publicly available documents and electronic reading room.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    § 201.3 (Title 4, Accounts; RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD, PUBLIC INFORMATION AND REQUESTS): Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web site...

  12. Ad-Hoc Queries over Document Collections - A Case Study

    NASA Astrophysics Data System (ADS)

    Löser, Alexander; Lutter, Steffen; Düssel, Patrick; Markl, Volker

    We discuss the novel problem of supporting analytical business intelligence queries over web-based textual content, e.g., BI-style reports based on hundreds of thousands of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. "Google Squared" and our system GOOLAP.info are examples of these kinds of systems. They execute information extraction methods over one or several document collections at query time and integrate extracted records into a common view or tabular structure. Frequent extraction and object-resolution failures cause incomplete records that cannot be joined into a record answering the query. Our focus is the identification of join-reordering heuristics that maximize the number of complete records answering a structured query. With respect to given costs for document extraction we propose two novel join operations: the multi-way CJ operator joins records from multiple relationships extracted from a single document, and the two-way join operator DJ ensures data density by removing incomplete records from results. In a preliminary case study we observe that our join-reordering heuristics positively impact result size and record density and lower execution costs.
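
    As a rough illustration of the two operators named in the abstract, the following Python sketch shows a plausible CJ (merge relationship records extracted from one document) and DJ (drop records missing required attributes). The record layout and helper names are assumptions for illustration; the paper itself defines the operators over extraction results inside a query plan.

    ```python
    # Minimal sketch of the CJ and DJ operators described in the abstract.
    # Records are dicts of attribute -> value; None marks a failed extraction.

    def cj(extractions):
        """Multi-way CJ: join all relationship records extracted from one document."""
        merged = {}
        for record in extractions:          # each record covers one relationship
            for attr, value in record.items():
                if value is not None:
                    merged.setdefault(attr, value)
        return merged

    def dj(records, required):
        """Two-way DJ: keep only dense records, i.e. those with every required attribute."""
        return [r for r in records if all(r.get(a) is not None for a in required)]

    # Example: two relationships extracted from a single document.
    doc_extractions = [
        {"company": "Acme", "ceo": "J. Smith"},
        {"company": "Acme", "headquarters": None},   # extraction failure
    ]
    merged = cj(doc_extractions)
    dense = dj([merged], required=["company", "ceo", "headquarters"])
    print(merged)   # {'company': 'Acme', 'ceo': 'J. Smith'}
    print(dense)    # [] -- the incomplete record is removed by DJ
    ```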

  13. Strong regularities in world wide web surfing

    PubMed

    Huberman; Pirolli; Pitkow; Lukose

    1998-04-03

    One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.
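
    The threshold rule in this abstract is easy to simulate: page value evolves as a random walk and the user stops once it falls below a threshold. The sketch below is a Monte Carlo rendering under assumed parameters (drift, noise, starting value); the paper derives the resulting distribution analytically.

    ```python
    import random

    def pages_visited(threshold=0.0, mu=-0.1, sigma=1.0, start=1.0, max_pages=10_000):
        """Simulate one user: keep clicking while the current page's value
        stays above the threshold; value evolves as a Gaussian random walk."""
        value, pages = start, 1
        while value > threshold and pages < max_pages:
            value += random.gauss(mu, sigma)
            pages += 1
        return pages

    random.seed(42)
    depths = [pages_visited() for _ in range(100_000)]
    # The resulting depth distribution is heavy-tailed (inverse-Gaussian-like),
    # consistent with the Zipf-like page-hit counts reported in the paper.
    print(sum(depths) / len(depths))   # mean surfing depth under these parameters
    ```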

  14. Improving the Product Documentation Process of a Small Software Company

    NASA Astrophysics Data System (ADS)

    Valtanen, Anu; Ahonen, Jarmo J.; Savolainen, Paula

    Documentation is an important part of the software process, even though it is often neglected in software companies. The eternal question is how much documentation is enough. In this article, we present a practical implementation of a lightweight product documentation process resulting from SPI efforts in a small company. Small companies' financial and human resources are often limited. The documentation process described here offers a template for creating adequate documentation while consuming a minimal amount of resources. The key element of the documentation process is an open source web-based bug-tracking system that was customized to be used as a documentation tool. The use of the tool enables iterative and well-structured documentation. The solution best serves the needs of a small company with off-the-shelf software products that is striving for SPI.

  15. Moving toward a universally accessible web: Web accessibility and education.

    PubMed

    Kurt, Serhat

    2017-12-08

    The World Wide Web is an extremely powerful source of information, inspiration, ideas, and opportunities. As such, it has become an integral part of daily life for a great majority of people. Yet, for a significant number of others, the internet offers only limited value due to the existence of barriers which make accessing the Web difficult, if not impossible. This article illustrates some of the reasons that achieving equality of access to the online world of education is so critical, explores the current status of Web accessibility, discusses evaluative tools and methods that can help identify accessibility issues in educational websites, and provides practical recommendations and guidelines for resolving some of the obstacles that currently hinder the achievability of the goal of universal Web access.

  16. Automated MeSH indexing of the World-Wide Web.

    PubMed Central

    Fowler, J.; Kouramajian, V.; Maram, S.; Devadhar, V.

    1995-01-01

    To facilitate networked discovery and information retrieval in the biomedical domain, we have designed a system for automatic assignment of Medical Subject Headings to documents retrieved from the World-Wide Web. Our prototype implementations show significant promise. We describe our methods and discuss the further development of a completely automated indexing tool called the "Web-MeSH Medibot." PMID:8563421

  17. Experimenting with semantic web services to understand the role of NLP technologies in healthcare.

    PubMed

    Jagannathan, V

    2006-01-01

    NLP technologies can play a significant role in healthcare, where a predominant segment of clinical documentation is in text form. In a graduate course on semantic web services at West Virginia University, a class project was designed to explore the potential use of NLP-based abstraction of clinical documentation. The role of NLP technology was simulated using human abstractors, and various workflows were investigated using public-domain workflow and semantic web service technologies. This poster explores the potential use of NLP and the role of workflow and semantic web technologies in developing healthcare IT environments.

  18. Web information retrieval for health professionals.

    PubMed

    Ting, S L; See-To, Eric W K; Tse, Y K

    2013-06-01

    This paper presents a Web Information Retrieval System (WebIRS), which is designed to assist healthcare professionals in obtaining up-to-date medical knowledge and information via the World Wide Web (WWW). The system leverages document classification and text summarization techniques to deliver highly correlated medical information to physicians. The system architecture of the proposed WebIRS is first discussed, and then a case study on an application of the proposed system in a Hong Kong medical organization is presented to illustrate the adoption process; a questionnaire was administered to collect feedback on the operation and performance of WebIRS in comparison with conventional information retrieval on the WWW. A prototype system has been constructed and implemented on a trial basis in a medical organization. It has proven to be of benefit to healthcare professionals through its automatic classification and summarization of the medical information that physicians need and are interested in. The results of the case study show that with the proposed WebIRS, a significant reduction in searching time and effort can be attained, with retrieval of highly relevant materials.
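
    The abstract attributes two techniques to WebIRS: document classification and text summarization. The fragment below sketches only the retrieval/ranking half with a generic TF-IDF approach; the corpus, query, and choice of scikit-learn are illustrative assumptions, not details from the paper.

    ```python
    # Generic TF-IDF ranking sketch, standing in for the kind of relevance
    # matching a system like WebIRS performs. Documents and query are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "New guidance on hypertension management in primary care.",
        "A trial of statin therapy for cardiovascular risk reduction.",
        "Best practices for paediatric asthma inhaler technique.",
    ]
    query = ["hypertension treatment guidance"]

    vec = TfidfVectorizer(stop_words="english")
    doc_matrix = vec.fit_transform(docs)
    scores = cosine_similarity(vec.transform(query), doc_matrix).ravel()

    # Deliver the most relevant document first.
    for score, doc in sorted(zip(scores, docs), reverse=True):
        print(f"{score:.2f}  {doc}")
    ```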

  19. Web Standard: PDF - When to Use, Document Metadata, PDF Sections

    EPA Pesticide Factsheets

    PDF files provide some benefits when used appropriately. PDF files should not be used for short documents (fewer than 5 pages) unless retaining the format for printing is important. PDFs should have internal file metadata and meet Section 508 standards.

  20. Power is only skin deep: an institutional ethnography of nurse-driven outpatient psoriasis treatment in the era of clinic web sites.

    PubMed

    Winkelman, Warren J; Halifax, Nancy V Davis

    2007-04-01

    We present an institutional ethnography of hospital-based psoriasis day treatment in the context of evaluating readiness to supplement services and support with a new web site. Through observation, interviews and a critical consideration of documents, forms and other textually-mediated discourses in the day-to-day work of nurses and physicians, we come to understand how the historical gender-determined power structure of nurses and physicians impacts nurses' work. On the one hand, nurses' work can have certain social benefits that would usually be considered untenable in traditional healthcare: nurses as primary decision-makers, nurses as experts in the treatment of disease, physicians as secondary consultants, and patients as co-facilitators in care delivery processes. However, benefits seem to have come at the nurses' expense, as they are required to maintain a cloak of invisibility for themselves and for their workplace, so that the Centre appears like all other outpatient clinics, and the nurses do not enjoy appropriate economic recognition. Implications for this negotiated invisibility on the implementation of new information systems in healthcare are discussed.

  1. Privacy and health in the information age: a content analysis of health web site privacy policy statements.

    PubMed

    Rains, Stephen A; Bosch, Leslie A

    2009-07-01

    This article reports a content analysis of the privacy policy statements (PPSs) from 97 general reference health Web sites that was conducted to examine the ways in which visitors' privacy is constructed by health organizations. PPSs are formal documents created by the Web site owner to describe how information regarding site visitors and their behavior is collected and used. The results show that over 80% of the PPSs in the sample indicated automatically collecting or requesting that visitors voluntarily provide information about themselves, and only 3% met all five of the Federal Trade Commission's Fair Information Practices guidelines. Additionally, the results suggest that the manner in which PPSs are framed and the use of justifications for collecting information are tropes used by health organizations to foster a secondary exchange of visitors' personal information for access to Web site content.

  2. Standards opportunities around data-bearing Web pages.

    PubMed

    Karger, David

    2013-03-28

    The evolving Web has seen ever-growing use of structured data, thanks to the way it enhances information authoring, querying, visualization and sharing. To date, however, most structured data authoring and management tools have been oriented towards programmers and Web developers. End users have been left behind, unable to leverage structured data for information management and communication as well as professionals. In this paper, I will argue that many of the benefits of structured data management can be provided to end users as well. I will describe an approach and tools that allow end users to define their own schemas (without knowing what a schema is), manage data and author (not program) interactive Web visualizations of that data using the Web tools with which they are already familiar, such as plain Web pages, blogs, wikis and WYSIWYG document editors. I will describe our experience deploying these tools and some lessons relevant to their future evolution.

  3. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    PubMed

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly featured as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools and their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors, including management and invocation protocols for the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed by the client have to be installed, and that the module functionality can be extended without re-writing the software client. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
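
    MAPI itself is a programmatic library for bioinformatics Web Services; the following Python fragment is only a loose analogue of the idea the abstract describes, a uniform service descriptor plus pluggable, per-protocol invocation modules so that only the modules a client needs are installed. All names here are hypothetical.

    ```python
    # Hypothetical sketch of the MAPI idea: one metadata descriptor format for
    # heterogeneous services, with protocol-specific invocation modules
    # registered behind a single client-facing interface.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class ServiceDescriptor:
        name: str
        protocol: str            # e.g. "SOAP", "REST", "BioMOBY"
        endpoint: str
        inputs: Dict[str, str]   # parameter name -> type

    class Invoker:
        """Registry of protocol-specific invocation modules."""
        _modules: Dict[str, Callable] = {}

        @classmethod
        def register(cls, protocol: str, fn: Callable) -> None:
            cls._modules[protocol] = fn

        @classmethod
        def call(cls, svc: ServiceDescriptor, **params):
            return cls._modules[svc.protocol](svc, params)

    # Only the modules a client actually needs have to be registered.
    Invoker.register("REST", lambda svc, p: f"GET {svc.endpoint} with {p}")
    blast = ServiceDescriptor("blast", "REST", "http://example.org/blast", {"seq": "str"})
    print(Invoker.call(blast, seq="ACGT"))
    ```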

  4. Unraveling the intricate dynamics of planktonic Arctic marine food webs. A sensitivity analysis of a well-documented food web model

    NASA Astrophysics Data System (ADS)

    Saint-Béat, Blanche; Maps, Frédéric; Babin, Marcel

    2018-01-01

    The extreme and variable environment shapes the functioning of Arctic ecosystems and the life cycles of its species. This delicate balance is now threatened by the unprecedented pace and magnitude of global climate change and anthropogenic pressure. Understanding the long-term consequences of these changes remains an elusive, yet pressing, goal. Our work was specifically aimed at identifying which biological processes impact Arctic planktonic ecosystem functioning, and how. Ecological Network Analysis (ENA) indices reveal emergent ecosystem properties that are not accessible through simple in situ observation. These indices are based on the architecture of carbon flows within food webs. But, despite the recent increase in in situ measurements from Arctic seas, many flow values remain unknown. Linear inverse modeling (LIM) allows missing flow values to be estimated from existing flow observations and, subsequent reconstruction of ecosystem food webs. Through a sensitivity analysis on a LIM model of the Amundsen Gulf in the Canadian Arctic, we were able to determine which processes affected the emergent properties of the planktonic ecosystem. The analysis highlighted the importance of an accurate knowledge of the various processes controlling bacterial production (e.g. bacterial growth efficiency and viral lysis). More importantly, a change in the fate of the microzooplankton within the food web can be monitored through the trophic level of mesozooplankton. It can be used as a "canary in the coal mine" signal, a forewarner of larger ecosystem change.
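
    Linear inverse modeling of the kind described here can be illustrated with a toy problem: estimate unknown flows subject to mass-balance equalities and observational bounds, picking the parsimonious (minimum-norm) solution. The three-flow "food web" below is a made-up illustration, not the Amundsen Gulf model.

    ```python
    # Toy linear inverse model (LIM): find carbon flows x satisfying
    # mass-balance equalities A @ x = b and observed bounds, minimizing ||x||.
    import numpy as np
    from scipy.optimize import minimize

    # Flows: x = [gpp, grazing, respiration] (all values illustrative)
    A = np.array([[1.0, -1.0, -1.0]])   # mass balance: gpp = grazing + respiration
    b = np.array([0.0])
    lo = np.array([10.0, 2.0, 1.0])     # observed lower bounds on each flow
    hi = np.array([50.0, 30.0, 20.0])   # observed upper bounds

    res = minimize(
        lambda x: x @ x,                # minimum-norm (parsimonious) objective
        x0=(lo + hi) / 2,
        constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
        bounds=list(zip(lo, hi)),
        method="SLSQP",
    )
    print(res.x)   # a feasible flow vector consistent with all constraints
    ```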

  5. Capitalizing on Web 2.0 in the Social Studies Context

    ERIC Educational Resources Information Center

    Holcomb, Lori B.; Beal, Candy M.

    2010-01-01

    This paper focuses primarily on the integration of Web 2.0 technologies into social studies education. It documents how various Web 2.0 tools can be utilized in the social studies context to support and enhance teaching and learning. For the purposes of focusing on one specific topic, global connections at the middle school level will be the…

  6. BioCatalogue: a universal catalogue of web services for the life sciences.

    PubMed

    Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A

    2010-07-01

    The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable 'Web 2.0'-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community.

  7. Borderless Geospatial Web (bolegweb)

    NASA Astrophysics Data System (ADS)

    Cetl, V.; Kliment, T.; Kliment, M.

    2016-06-01

    Effective access to and use of geospatial information (GI) resources is of critical importance in a modern knowledge-based society. Standard web services defined by the Open Geospatial Consortium (OGC) are frequently used within implementations of spatial data infrastructures (SDIs) to facilitate discovery and use of geospatial data. This data is stored in databases located in a layer called the invisible web and is thus ignored by search engines. An SDI uses a catalogue (discovery) service for the web as a gateway to the GI world through metadata defined by ISO standards, which are structurally different from OGC metadata. Therefore, a crosswalk needs to be implemented to bridge the OGC resources discovered on the mainstream web with those documented by metadata in an SDI, to enrich its information extent. A public, global, and user-friendly portal of OGC resources available on the web ensures and enhances the use of GI within a multidisciplinary context, bridges the geospatial web from the end-user perspective, and thus opens its borders to everybody. The project "Crosswalking the layers of geospatial information resources to enable a borderless geospatial web", with the acronym BOLEGWEB, is ongoing as a postdoctoral research project at the Faculty of Geodesy, University of Zagreb, Croatia (http://bolegweb.geof.unizg.hr/). The research leading to the results of the project has received funding from the European Union Seventh Framework Programme (FP7 2007-2013) under Marie Curie FP7-PEOPLE-2011-COFUND. The project started in November 2014 and is planned to finish by the end of 2016. This paper provides an overview of the project, its research questions and methodology, the results achieved so far, and future steps.

  8. Publishing Accessible Materials on the Web and CD-ROM.

    ERIC Educational Resources Information Center

    Federal Resource Center for Special Education, Washington, DC.

    While it is generally simple to make electronic content accessible, it is also easy inadvertently to make it inaccessible. This guide covers the many formats of electronic documents and points out what to keep in mind and what procedures to follow to make documents accessible to all when disseminating information via the World Wide Web and on…

  9. Documenting the use of computers in Swedish Health Care up to 1980.

    PubMed

    Peterson, H E; Lundin, P

    2011-01-01

    This paper describes a documentation project to create, collect, and preserve previously unavailable sources on informatics in Sweden (including health care as one of 16 subgroups) and to make them available on the Web. Time was critical, as the personal documentation and artifacts of early pioneers could be irretrievably lost. The criteria for participation were that a person had developed a system in a clinical environment which was used by others prior to 1980. Participants were interviewed and asked for early documentation such as notes, minutes from meetings, drawings, test results and early models, together with related artifacts. The approach included traditional oral history interviews, collection of autobiographies, and new self-structuring and time-saving methods, such as witness seminars and an Internet-based repository of recollections (the Writers' Web). The combination of methods yielded new information on system errors and on the challenges in reaching the goals, due partly to inadequacies of the early technology and partly to insufficient understanding of the complexity of the many problems which needed to be solved before a useful electronic patient record could be realized. A very important result was the development of a method to collect information in an easier, faster, and much less expensive way than the traditional scientific method, while still reaching results that are qualitative and quantitative for the purpose of documenting the early period of computer-based health care technology. The witness seminars and the Writers' Web yielded especially large amounts of hitherto-unknown information. With all material in one database available to everyone on the Web, it is accessed very frequently, especially by students, researchers, journalists and teachers. Study of the materials explains and clarifies the reasons behind the delays and difficulties that have been encountered in developing electronic patient records, as described in an

  10. Automating testbed documentation and database access using World Wide Web (WWW) tools

    NASA Technical Reports Server (NTRS)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  11. Ontology-based reusable clinical document template production system.

    PubMed

    Nam, Sejin; Lee, Sungin; Kim, James G Boram; Kim, Hong-Gee

    2012-01-01

    Clinical documents embody professional clinical knowledge. This paper shows an effective clinical document template (CDT) production system that uses a clinical description entity (CDE) model, a CDE ontology, and a knowledge management system called STEP that manages ontology-based clinical description entities. The ontology represents CDEs and their inter-relations, and the STEP system stores and manages CDE ontology-based information regarding CDTs. The system also provides Web Services interfaces for search and reasoning over clinical entities. The system was populated with entities and relations extracted from 35 CDTs that were used in admission, discharge, and progress reports, as well as those used in nursing and operation functions. A clinical document template editor is shown that uses STEP.

  12. Starlink Document Styles

    NASA Astrophysics Data System (ADS)

    Lawden, M. D.

    This document describes the various styles which are recommended for Starlink documents. It also explains how to use the templates which are provided by Starlink to help authors create documents in a standard style. This paper is concerned mainly with conveying the "look and feel" of the various styles of Starlink document rather than describing the technical details of how to produce them. Other Starlink papers give recommendations for the detailed aspects of document production, design, layout, and typography. The only style that is likely to be used by most Starlink authors is the Standard style.

  13. F-OWL: An Inference Engine for Semantic Web

    NASA Technical Reports Server (NTRS)

    Zou, Youyong; Finin, Tim; Chen, Harry

    2004-01-01

    Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description-logic-based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.

  14. Collapse of a pollination web in small conservation areas.

    PubMed

    Pauw, Anton

    2007-07-01

    A suspected global decline in pollinators has heightened interest in their ecological significance. In a worst-case scenario, the decline of generalist pollinators is predicted to trigger cascades of linked declines among the multiple specialist plant species to which they are linked, but this has not been documented. I studied a portion of a pollination web involving a generalist pollinator, the oil-collecting bee Rediviva peringueyi, and a community of oil-secreting plants. Across 27 established conservation areas located in the Cape Floral Region, I found substantial variation in the bees' occurrence in relation to soil type and the successional stage of the vegetation. Anthropogenic declines were detectable against this background of naturally occurring variation: R. peringueyi was absent from small conservation areas (< 385 ha) in an urban matrix. In the absence of the bee, seed set failed in six specialist plant species that are pollinated only by R. peringueyi but remained high in a pollination generalist, which had replacement pollinators. The findings are consistent with theoretical predictions of the importance of generalist pollinators in maintaining the structure of pollination webs.

  15. Improving Web Accessibility in a University Setting

    ERIC Educational Resources Information Center

    Olive, Geoffrey C.

    2010-01-01

    Improving Web accessibility for disabled users visiting a university's Web site is explored following the World Wide Web Consortium (W3C) guidelines and Section 508 of the Rehabilitation Act rules for Web page designers to ensure accessibility. The literature supports the view that accessibility is sorely lacking, not only in the USA, but also…

  16. A web-based referral system for neurosurgery--a solution to our problems?

    PubMed

    Choo, Melissa C; Thennakon, Shyamica; Shapey, Jonathan; Tolias, Christos M

    2011-06-01

    Accurate handover is very important in the running of all modern neurosurgical units. Referrals are notoriously difficult to track and review due to poor quality of written paper-based recorded information for handover (illegibility, incomplete paper trail, repetition of information and loss of patients). We have recently introduced a web-based referral system to three of our referring hospitals. To review the experience of a tertiary neurosurgical unit in using the UK's first real time online referral system and to discuss its strengths and weaknesses in comparison to the currently used written paper-based referral system. A retrospective analysis of all paper-based referrals made to our unit in March 2009, compared to 14 months' referrals through the web system. Patterns of information recorded in both systems were investigated and advantages and disadvantages of each identified. One hundred ninety-six patients were referred using the online system, 483 using the traditional method. Significant problems of illegibility and missing information were identified with the paper-based referrals. In comparison, 100% documentation was achieved with the online referral system. Only 63% penetrance in the best performing trust was found using the online system, with significant delays in responding to referrals. Traditional written paper-based referrals do not provide an acceptable level of documentation. We present our experience and difficulties implementing a web-based system to address this. Although our data are unable to show improved patient care, we believe the potential benefits of a fully integrated system may offer a solution.

  17. Analyzing Document Retrievability in Patent Retrieval Settings

    NASA Astrophysics Data System (ADS)

    Bashir, Shariq; Rauber, Andreas

    Most information retrieval settings, such as web search, are typically precision-oriented, i.e. they focus on retrieving a small number of highly relevant documents. However, in specific domains, such as patent retrieval or law, recall becomes more relevant than precision: in these cases the goal is to find all relevant documents, requiring algorithms to be tuned more towards recall at the cost of precision. This raises important questions with respect to retrievability and search engine bias: depending on how the similarity between a query and documents is measured, certain documents may be more or less retrievable in certain systems, up to some documents not being retrievable at all within common threshold settings. Biases may be oriented towards popularity of documents (increasing weight of references), towards length of documents, favour the use of rare or common words; rely on structural information such as metadata or headings, etc. Existing accessibility measurement techniques are limited as they measure retrievability with respect to all possible queries. In this paper, we improve accessibility measurement by considering sets of relevant and irrelevant queries for each document. This simulates how recall oriented users create their queries when searching for relevant information. We evaluate retrievability scores using a corpus of patents from US Patent and Trademark Office.
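
    A retrievability score restricted to a document's own relevant queries, as the paper proposes, can be sketched as follows. The rank-cutoff form of the score, the normalization, and the placeholder data are assumptions for illustration; the paper evaluates such scores over a US patent corpus.

    ```python
    # Sketch of a rank-cutoff retrievability score computed over a document's
    # set of relevant queries. `rankings` stands in for any retrieval system.
    from typing import Dict, List

    def retrievability(doc: str,
                       relevant_queries: List[str],
                       rankings: Dict[str, List[str]],
                       cutoff: int = 10) -> float:
        """Fraction of the document's relevant queries that retrieve it
        within the rank cutoff (normalized to [0, 1])."""
        score = 0
        for q in relevant_queries:
            ranked = rankings.get(q, [])
            if doc in ranked[:cutoff]:
                score += 1
        return score / max(len(relevant_queries), 1)

    rankings = {"fuel cell membrane": ["US123", "US456"],
                "polymer electrolyte": ["US456", "US123"]}
    print(retrievability("US123",
                         ["fuel cell membrane", "polymer electrolyte"],
                         rankings, cutoff=1))
    # 0.5 -- retrieved at rank 1 by only one of its two relevant queries
    ```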

  18. Web Application Design Using Server-Side JavaScript

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, J.; Simons, R.

    1999-02-01

    This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.

  19. Web Annotation and Threaded Forum: How Did Learners Use the Two Environments in an Online Discussion?

    ERIC Educational Resources Information Center

    Sun, Yanyan; Gao, Fei

    2014-01-01

    Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…

  20. Web-based routing assistance tool to reduce pavement damage by overweight and oversize vehicles.

    DOT National Transportation Integrated Search

    2016-10-30

    This report documents the results of a completed project titled Web-Based Routing Assistance Tool to Reduce Pavement Damage by Overweight and Oversize Vehicles. The tasks involved developing a Web-based GIS routing assistance tool and evaluating ...

  1. WorldWide Web: Hypertext from CERN.

    ERIC Educational Resources Information Center

    Nickerson, Gord

    1992-01-01

    Discussion of software tools for accessing information on the Internet focuses on the WorldWideWeb (WWW) system, which was developed at the European Particle Physics Laboratory (CERN) in Switzerland to build a worldwide network of hypertext links using available networking technology. Its potential for use with multimedia documents is also…

  2. WebGIS based on semantic grid model and web services

    NASA Astrophysics Data System (ADS)

    Zhang, WangFei; Yue, CaiRong; Gao, JianGuo

    2009-10-01

    As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, traditional WebGIS has some prominent problems: it cannot achieve interoperability between heterogeneous spatial databases, and it cannot provide cross-platform data access. With the appearance of Web Service and Grid technology, great changes have come to the WebGIS field. Web Services provide an interface that gives different sites the ability to share data and intercommunicate. The goal of Grid technology is to turn the Internet into one large supercomputer with which we can efficiently implement the overall sharing of computing resources, storage resources, data resources, information resources, knowledge resources, and expert resources. For WebGIS, however, this only implements the physical connection of data and information, which is far from enough. Because of different understandings of the world, and following different professional regulations, policies, and habits, experts in different fields reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can differ greatly between fields. If we use WebGIS without considering this semantic heterogeneity, we will answer users' questions wrongly, or not be able to answer them at all. To solve this problem, this paper puts forward and tests an effective method of combining semantic grid and Web Services technology to develop WebGIS. In this paper, we studied how to construct an ontology and how to combine Grid technology and Web Services, and with a detailed analysis of computing characteristics and application models in data distribution, we designed the WebGIS query system driven by

  3. A usability evaluation exploring the design of American Nurses Association state web sites.

    PubMed

    Alexander, Gregory L; Wakefield, Bonnie J; Anbari, Allison B; Lyons, Vanessa; Prentice, Donna; Shepherd, Marilyn; Strecker, E Bradley; Weston, Marla J

    2014-08-01

    National leaders are calling for opportunities to facilitate the Future of Nursing. Opportunities can be encouraged through state nurses association Web sites, which are part of the American Nurses Association, that are well designed, with appropriate content, and in a language professional nurses understand. The American Nurses Association and constituent state nurses associations provide information about nursing practice, ethics, credentialing, and health on Web sites. We conducted usability evaluations to determine compliance with heuristic and ethical principles for Web site design. We purposefully sampled 27 nursing association Web sites and used 68 heuristic and ethical criteria to perform systematic usability assessments of nurse association Web sites. Web site analysis included seven double experts who were all RNs trained in usability analysis. The extent to which heuristic and ethical criteria were met ranged widely from one state that met 0% of the criteria for "help and documentation" to states that met greater than 92% of criteria for "visibility of system status" and "aesthetic and minimalist design." Suggested improvements are simple yet make an impact on a first-time visitor's impression of the Web site. For example, adding internal navigation and tracking features and providing more details about the application process through help and frequently asked question documentation would facilitate better use. Improved usability will improve effectiveness, efficiency, and consumer satisfaction with these Web sites.

  4. WebDMS: A Web-Based Data Management System for Environmental Data

    NASA Astrophysics Data System (ADS)

    Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.

    2015-12-01

    DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.

  5. Techniques for Improving Communication of Emotional Content in Text-Only Web-Based Therapeutic Communications: Systematic Review

    PubMed Central

    Cox, Martine Elizabeth; Small, Hannah Julie; Boyes, Allison W; O'Brien, Lorna; Rose, Shiho Karina; Baker, Amanda L; Henskens, Frans A; Kirkwood, Hannah Naomi; Roach, Della M

    2017-01-01

    Background Web-based typed exchanges are increasingly used by professionals to provide emotional support to patients. Although some empirical evidence exists to suggest that various strategies may be used to convey emotion during Web-based text communication, there has been no critical review of these data in patients with chronic conditions. Objectives The objective of this review was to identify the techniques used to convey emotion in written or typed Web-based communication and assess the empirical evidence regarding impact on communication and psychological outcomes. Methods An electronic search of databases, including MEDLINE, CINAHL, PsycINFO, EMBASE, and the Cochrane Library, was conducted to identify literature published from 1990 to 2016. Searches were also conducted using Google Scholar, manual searching of reference lists of identified papers, and manual searching of tables of contents of selected relevant journals. Data extraction and coding were completed by 2 reviewers (10.00% [573/5731] of screened papers at the abstract/title screening stage; 10.0% [69/694] at the full-text screening stage). Publications were assessed against the eligibility criteria and excluded if they were duplicates, were not published in English, were published before 1990, referenced animal or nonhuman subjects, did not describe original research, were not journal papers, or did not empirically test the effect of one or more nonverbal communication techniques (e.g., smileys, emoticons, emotional bracketing, voice accentuation, trailers [ellipsis], and pseudowords) as part of Web-based or typed communication on communication-related variables, including message interpretation, social presence, the nature of the interaction (e.g., therapeutic alliance), patient perceptions of the interaction (e.g., participant satisfaction), or psychological outcomes, including depression, anxiety, and distress. Results A total of 6902 unique publications were identified. Of these

  6. Cyber-T web server: differential analysis of high-throughput data.

    PubMed

    Kayala, Matthew A; Baldi, Pierre

    2012-07-01

    The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001: 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options including logarithmic and (Variance Stabilizing Normalization) VSN transforms are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple tests correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
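
    The core regularization step, as described in Baldi and Long (2001), combines each probe's empirical variance with a background variance pooled from neighboring probes. The fragment below is a direct transcription of that point estimate; the parameter values are illustrative assumptions.

    ```python
    # Cyber-T's regularized variance (Baldi & Long, 2001): pool the probe's
    # empirical variance s^2 (from n replicates) with a background variance
    # sigma0^2 estimated from probes in the same expression neighborhood;
    # nu0 controls how strongly the background prior is weighted.
    import math

    def regularized_variance(s2: float, n: int, sigma0_2: float, nu0: float) -> float:
        """Posterior point estimate of the variance used in the regularized t-test."""
        return (nu0 * sigma0_2 + (n - 1) * s2) / (nu0 + n - 2)

    # Example: 3 replicates with a deceptively small empirical variance get
    # pulled toward the neighborhood background, avoiding an inflated t-statistic.
    s2, n = 0.01, 3
    sigma0_2, nu0 = 0.20, 10          # illustrative background and prior weight
    var = regularized_variance(s2, n, sigma0_2, nu0)
    print(var, math.sqrt(var))        # regularized variance and standard deviation
    ```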

  7. Secure Web-Site Access with Tickets and Message-Dependent Digests

    USGS Publications Warehouse

    Donato, David I.

    2008-01-01

    Although there are various methods for restricting access to documents stored on a World Wide Web (WWW) site (a Web site), none of the widely used methods is completely suitable for restricting access to Web applications hosted on an otherwise publicly accessible Web site. A new technique, however, provides a mix of features well suited for restricting Web-site or Web-application access to authorized users, including the following: secure user authentication, tamper-resistant sessions, simple access to user state variables by server-side applications, and clean session terminations. This technique, called message-dependent digests with tickets, or MDDT, maintains secure user sessions by passing single-use nonces (tickets) and message-dependent digests of user credentials back and forth between client and server. Appendix 2 provides a working implementation of MDDT with PHP server-side code and JavaScript client-side code.
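
    Appendix 2 of the report contains the working PHP/JavaScript implementation; the Python fragment below merely illustrates the two ingredients the abstract names, a single-use ticket (nonce) and a message-dependent digest of a shared credential, under assumed message formats.

    ```python
    # Illustrative sketch (not the report's implementation) of MDDT's two
    # ingredients: a single-use server-issued ticket, and a digest that
    # depends on the message so that replayed digests fail on new content.
    import hashlib, hmac, secrets

    issued_tickets = set()            # server-side single-use nonce store

    def issue_ticket() -> str:
        t = secrets.token_hex(16)
        issued_tickets.add(t)
        return t

    def message_digest(shared_credential: bytes, ticket: str, message: str) -> str:
        """Digest binds the credential to this ticket and this message."""
        return hmac.new(shared_credential, f"{ticket}|{message}".encode(),
                        hashlib.sha256).hexdigest()

    def server_verify(shared_credential: bytes, ticket: str, message: str,
                      client_digest: str) -> bool:
        if ticket not in issued_tickets:   # unknown or already-spent ticket
            return False
        issued_tickets.discard(ticket)     # enforce single use
        expected = message_digest(shared_credential, ticket, message)
        return hmac.compare_digest(expected, client_digest)

    cred = hashlib.sha256(b"user password").digest()   # credential known to both ends
    t = issue_ticket()
    d = message_digest(cred, t, "GET /app/data")
    print(server_verify(cred, t, "GET /app/data", d))   # True
    print(server_verify(cred, t, "GET /app/data", d))   # False -- ticket spent
    ```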

  8. Sign Language Web Pages

    ERIC Educational Resources Information Center

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  9. Web-Education Systems in Europe. ZIFF Papiere.

    ERIC Educational Resources Information Center

    Paulsen, Morten; Keegan, Desmond; Dias, Ana; Dias, Paulo; Pimenta, Pedro; Fritsch, Helmut; Follmer, Holger; Micincova, Maria; Olsen, Gro-Anett

    This document contains the following papers on Web-based education systems in Europe: (1) "European Experiences with Learning Management Systems" (Morten Flate Paulsen and Desmond Keegan); (2) "Online Education Systems: Definition of Terms" (Morten Flate Paulsen); (3) "Learning Management Systems (LMS) Used in Southern…

  10. Using Web 2.0 to Collaborate

    ERIC Educational Resources Information Center

    Buechler, Scott

    2010-01-01

    Web 2.0 is not only for kids anymore, businesses are using it, too. Businesses are adopting Web 2.0 technology for a variety of purposes. In this article, the author discusses how he incorporates Web 2.0 into his business communications course. He describes a project that has both individual and collaborative elements and requires extensive…

  11. Value of Information Web Application

    DTIC Science & Technology

    2015-04-01

    their understanding of VoI attributes (source reliability, information content, and latency). The VoI web application emulates many features of a... only when using the Firefox web browser on those computers (Internet Explorer was not viable due to unchangeable user settings). During testing, the

  12. Web party effect: a cocktail party effect in the web environment.

    PubMed

    Rigutti, Sara; Fantoni, Carlo; Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others.
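
    The complexity measure described here, links embedded within navigation elements weighted by the number and type of embedding elements, might be rendered as in the sketch below. The element taxonomy and weights are hypothetical placeholders, not the authors' calibrated values.

    ```python
    # Hypothetical rendering of a navigation-weighted link complexity measure:
    # only links inside navigation elements count, each weighted by the type
    # of its embedding element. Weights and element names are illustrative.
    NAV_WEIGHTS = {"menu": 1.0, "navbar": 1.0, "sidebar": 0.8, "footer": 0.5}

    def nav_complexity(elements):
        """elements: list of (element_type, number_of_embedded_links)."""
        return sum(NAV_WEIGHTS.get(etype, 0.0) * n_links
                   for etype, n_links in elements)

    home_page = [("navbar", 12), ("sidebar", 30), ("footer", 25), ("banner", 40)]
    print(nav_complexity(home_page))   # banner links ignored: 12 + 24 + 12.5 = 48.5
    ```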

  13. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    NASA Astrophysics Data System (ADS)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES for the biomedical research domain by improving the ontology guided information extraction

  14. Automatic public access to documents and maps stored on an internal secure system.

    NASA Astrophysics Data System (ADS)

    Trench, James; Carter, Mary

    2013-04-01

    The Geological Survey of Ireland operates a Document Management System for providing documents and maps, stored internally in high resolution in a highly secure environment, to an external service where the documents are automatically presented in a lower resolution to members of the public. Security is devised through roles and individual users, where role-level and folder-level access can be set. The application is an electronic document/data management (EDM) system with an integrated Geographical Information System (GIS) component that allows users to query an interactive map of Ireland for data relating to a particular area of interest. The data stored in the database consists of Bedrock Field Sheets, Bedrock Notebooks, Bedrock Maps, Geophysical Surveys, Geotechnical Maps & Reports, Groundwater, GSI Publications, Marine, Mine Records, Mineral Localities, Open File, Quaternary and Unpublished Reports. The Konfig application tool is both an internal and a public-facing application. Internally, it acts as a tool for high-resolution data entry, with the data stored in a high-resolution vault. The public-facing application mirrors the internal one and differs only in that it converts the high-resolution data into a low-resolution format stored in a low-resolution vault, thus making the data web-friendly for the end user to download.

  15. BioCatalogue: a universal catalogue of web services for the life sciences

    PubMed Central

    Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A.

    2010-01-01

    The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable ‘Web 2.0’-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community. PMID:20484378

  16. UFOs, NGOs, or IGOs: Using International Documents for General Reference.

    ERIC Educational Resources Information Center

    Shreve, Catherine

    1997-01-01

    Discusses accessing and using documents from international (intergovernmental) organizations. Profiles the United Nations, the European Union and other Intergovernmental Organizations (IGOs). Discusses the librarian as "Web detective," notes questions to focus on, and presents examples to demonstrate navigation of IGO sites. Lists basic…

  17. BioServices: a common Python package to access biological Web Services programmatically.

    PubMed

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the use of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
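
    As a brief illustration of the programmatic access described above, a minimal sketch using the BioServices KEGG wrapper; the gene identifier is an arbitrary example, and method names and return types may vary between package releases.

        # Query KEGG through the BioServices wrapper rather than hand-rolling
        # REST calls; hsa:7535 (human ZAP70) is chosen arbitrarily.
        from bioservices import KEGG

        kegg = KEGG()
        entry = kegg.get("hsa:7535")  # flat-file record for the gene
        print(entry[:200])            # first part of the record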

  18. Characteristics of food industry web sites and "advergames" targeting children.

    PubMed

    Culp, Jennifer; Bell, Robert A; Cassady, Diana

    2010-01-01

    To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty; and to document health-promoting messages on these Web sites. A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A total of 290 Web pages and 247 unique games on 19 Internet sites were examined. Games, found on 81% of Web sites, were the most predominant promotion strategy used. All games had at least 1 brand identifier, with logos being most frequently used. On average Web sites contained 1 "healthful" message for every 45 exposures to brand identifiers. Food companies use Web sites to extend their television advertising to promote brand loyalty among children. These sites almost exclusively promoted food items high in sugar and fat. Health professionals need to monitor food industry marketing practices used in "new media." Published by Elsevier Inc.

  19. Croatian Medical Journal citation score in Web of Science, Scopus, and Google Scholar.

    PubMed

    Sember, Marijan; Utrobicić, Ana; Petrak, Jelka

    2010-04-01

    To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006 based on data from the Web of Science, Scopus, and Google Scholar. Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlap and unique citations were compared and analyzed. Proportions were compared using the chi(2)-test. Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%) and Web of Science (38%). Almost half (49%) of the articles had no citations and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found by Google Scholar (n=86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Google Scholar shares a total of 42% of the citations returned by the two other, more influential bibliographic resources. The list of unique citations in Google Scholar is predominantly journal based, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals, but Google Scholar may serve as an alternative bibliometric tool for an orientational citation insight.

  20. Croatian Medical Journal Citation Score in Web of Science, Scopus, and Google Scholar

    PubMed Central

    Šember, Marijan; Utrobičić, Ana; Petrak, Jelka

    2010-01-01

    Aim To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006 based on data from the Web of Science, Scopus, and Google Scholar. Methods Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlap and unique citations were compared and analyzed. Proportions were compared using the χ2-test. Results Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%) and Web of Science (38%). Almost half (49%) of the articles had no citations and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found by Google Scholar (n = 86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Conclusion Google Scholar shares a total of 42% of the citations returned by the two other, more influential bibliographic resources. The list of unique citations in Google Scholar is predominantly journal based, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals, but Google Scholar may serve as an alternative bibliometric tool for an orientational citation insight. PMID:20401951

  1. Analysis of co-occurrence toponyms in web pages based on complex networks

    NASA Astrophysics Data System (ADS)

    Zhong, Xiang; Liu, Jiajun; Gao, Yong; Wu, Lun

    2017-01-01

    A large number of geographical toponyms exist in web pages and other documents, providing abundant geographical resources for GIS. It is very common for toponyms to co-occur in the same documents. To investigate these relations associated with geographic entities, a novel complex network model for co-occurrence toponyms is proposed. Then, 12 toponym co-occurrence networks are constructed from the toponym sets extracted from the People's Daily Paper documents of 2010. It is found that two toponyms have a high co-occurrence probability if they are at the same administrative level or if they possess a part-whole relationship. By applying complex network analysis methods to toponym co-occurrence networks, we find the following characteristics. (1) The navigation vertices of the co-occurrence networks can be found by degree centrality analysis. (2) The networks exhibit strong clustering characteristics, and it takes only several steps to reach one vertex from another one, implying that the networks are small-world graphs. (3) The degree distribution satisfies the power law with an exponent of 1.7, so the networks are scale-free. (4) The networks are disassortative and have similar assortative modes, with assortative exponents of approximately 0.18 and assortative indexes less than 0. (5) The frequency of toponym co-occurrence is weakly negatively correlated with geographic distance, but more strongly negatively correlated with administrative hierarchical distance. Considering the toponym frequencies and co-occurrence relationships, a novel method based on link analysis is presented to extract the core toponyms from web pages. This method is suitable and effective for geographical information retrieval.
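
    A minimal sketch of how such a co-occurrence network can be assembled and probed, using NetworkX; the document lists below are invented placeholders, not data from the paper.

        # Build a toponym co-occurrence network: one node per toponym, one
        # weighted edge per pair of toponyms appearing in the same document.
        from itertools import combinations
        import networkx as nx

        docs = [
            ["Beijing", "Hebei", "China"],   # hypothetical toponym sets,
            ["China", "Beijing"],            # one list per document
            ["Hebei", "Beijing"],
        ]

        G = nx.Graph()
        for toponyms in docs:
            for a, b in combinations(sorted(set(toponyms)), 2):
                if G.has_edge(a, b):
                    G[a][b]["weight"] += 1
                else:
                    G.add_edge(a, b, weight=1)

        # Indicators of the small-world / scale-free structure reported above
        print(nx.average_clustering(G))
        print(nx.degree_histogram(G))  # fit a power law to this in practice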

  2. The Quebec National Library on the Web.

    ERIC Educational Resources Information Center

    Kieran, Shirley; Sauve, Diane

    1997-01-01

    Provides an overview of the Quebec National Library (Bibliotheque Nationale du Quebec, or BNQ) Web site. Highlights include issues related to content, design, and technology; IRIS, the BNQ online public access catalog; development of the multimedia catalog; software; digitization of documents; links to bibliographic records; and future…

  3. Bioinformatics data distribution and integration via Web Services and XML.

    PubMed

    Li, Xiao; Zhang, Yizheng

    2003-11-01

    It is widely recognized that the exchange, distribution, and integration of biological data are key to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web Services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
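
    A toy example of the basic exchange step the paper builds on, serializing a record to XML and parsing it back with Python's standard library; the element names are invented for illustration, not a published schema.

        import xml.etree.ElementTree as ET

        # Serialize a minimal gene record to XML text...
        gene = ET.Element("gene", attrib={"id": "BRCA1"})
        ET.SubElement(gene, "organism").text = "Homo sapiens"
        ET.SubElement(gene, "chromosome").text = "17"
        xml_text = ET.tostring(gene, encoding="unicode")

        # ...and parse it back on the receiving side.
        parsed = ET.fromstring(xml_text)
        print(parsed.get("id"), parsed.findtext("organism"))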

  4. Assessing usage patterns of electronic clinical documentation templates.

    PubMed

    Vawdrey, David K

    2008-11-06

    Many vendors of electronic medical records support structured and free-text entry of clinical documents using configurable templates. At a healthcare institution comprising two large academic medical centers, a documentation management data mart and a custom, Web-accessible business intelligence application were developed to track the availability and usage of electronic documentation templates. For each medical center, template availability and usage trends were measured from November 2007 through February 2008. By February 2008, approximately 65,000 electronic notes were authored per week on the two campuses. One site had 934 available templates, with 313 being used to author at least one note. The other site had 765 templates, of which 480 were used. The most commonly used template at both campuses was a free text note called "Miscellaneous Nursing Note," which accounted for 33.3% of total documents generated at one campus and 15.2% at the other.

  5. Seamless Management of Paper and Electronic Documents for Task Knowledge Sharing

    NASA Astrophysics Data System (ADS)

    Kojima, Hiroyuki; Iwata, Ken

    With the progress of Internet technology and the growth of distributed information on networks, knowledge management has come to depend more and more on the performance of experienced users. Moreover, despite the increase in electronic documents, the use of paper documents has not declined, because of their convenience. This paper describes a method of tracking paper document locations and contents using radio frequency identification (RFID) technology. This research also focuses on the expression of a task process and the seamless structuring of related electronic and paper documents as a result of task knowledge formalization using information organizing. A system is proposed here that implements information organization for both Web documents and paper documents with the task model description and RFID technology. Examples of a prototype system are also presented.

  6. Web party effect: a cocktail party effect in the web environment

    PubMed Central

    Gerbino, Walter

    2015-01-01

    In goal-directed web navigation, labels compete for selection: this process often involves knowledge integration and requires selective attention to manage the dizziness of web layouts. Here we ask whether the competition for selection depends on all web navigation options or only on those options that are more likely to be useful for information seeking, and provide evidence in favor of the latter alternative. Participants in our experiment navigated a representative set of real websites of variable complexity, in order to reach an information goal located two clicks away from the starting home page. The time needed to reach the goal was accounted for by a novel measure of home page complexity based on a part of (not all) web options: the number of links embedded within web navigation elements weighted by the number and type of embedding elements. Our measure fully mediated the effect of several standard complexity metrics (the overall number of links, words, images, graphical regions, the JPEG file size of home page screenshots) on information seeking time and usability ratings. Furthermore, it predicted the cognitive demand of web navigation, as revealed by the duration judgment ratio (i.e., the ratio of subjective to objective duration of information search). Results demonstrate that focusing on relevant links while ignoring other web objects optimizes the deployment of attentional resources necessary to navigation. This is in line with a web party effect (i.e., a cocktail party effect in the web environment): users tune into web elements that are relevant for the achievement of their navigation goals and tune out all others. PMID:25802803

  7. UW Inventory of Freight Emissions (WIFE3) heavy duty diesel vehicle web calculator methodology.

    DOT National Transportation Integrated Search

    2013-09-01

    This document serves as an overview and technical documentation for the University of Wisconsin Inventory of : Freight Emissions (WIFE3) calculator. The WIFE3 web calculator rapidly estimates future heavy duty diesel : vehicle (HDDV) roadway emission...

  8. Phylowood: interactive web-based animations of biogeographic and phylogeographic histories.

    PubMed

    Landis, Michael J; Bedford, Trevor

    2014-01-01

    Phylowood is a web service that uses JavaScript to generate in-browser animations of biogeographic and phylogeographic histories from annotated phylogenetic input. The animations are interactive, allowing the user to adjust spatial and temporal resolution, and highlight phylogenetic lineages of interest. All documentation and source code for Phylowood is freely available at https://github.com/mlandis/phylowood, and a live web application is available at https://mlandis.github.io/phylowood.

  9. Alzforum and SWAN: the present and future of scientific web communities.

    PubMed

    Clark, Tim; Kinoshita, June

    2007-05-01

    Scientists drove the early development of the World Wide Web, primarily as a means for rapid communication, document sharing and data access. They have been far slower to adopt the web as a medium for building research communities. Yet, web-based communities hold great potential for accelerating the pace of scientific research. In this article, we will describe the 10-year experience of the Alzheimer Research Forum ('Alzforum'), a unique example of a thriving scientific web community, and explain the features that contributed to its success. We will then outline the SWAN (Semantic Web Applications in Neuromedicine) project, in which Alzforum curators are collaborating with informatics researchers to develop novel approaches that will enable communities to share richly contextualized information about scientific data, claims and hypotheses.

  10. Implementation of a School-wide Clinical Intervention Documentation System

    PubMed Central

    Stevenson, T. Lynn; Fox, Brent I.; Andrus, Miranda; Carroll, Dana

    2011-01-01

    Objective. To evaluate the effectiveness and impact of a customized Web-based software program implemented in 2006 for school-wide documentation of clinical interventions by pharmacy practice faculty members, pharmacy residents, and student pharmacists. Methods. The implementation process, directed by a committee of faculty members and school administrators, included preparation and refinement of the software, user training, development of forms and reports, and integration of the documentation process within the curriculum. Results. Use of the documentation tool consistently increased from May 2007 to December 2010. Over 187,000 interventions were documented with over $6.2 million in associated cost avoidance. Conclusions. Successful implementation of a school-wide documentation tool required considerable time from the oversight committee and a comprehensive training program for all users, with ongoing monitoring of data collection practices. Data collected proved to be useful to show the impact of faculty members, residents, and student pharmacists at affiliated training sites. PMID:21829264

  11. Graphite Web: web tool for gene set analysis exploiting pathway topology

    PubMed Central

    Sales, Gabriele; Calura, Enrica; Martini, Paolo; Romualdi, Chiara

    2013-01-01

    Graphite Web is a novel web tool for pathway analyses and network visualization for gene expression data of both microarray and RNA-seq experiments. Several pathway analyses have been proposed either in the univariate or in the global and multivariate context to tackle the complexity and the interpretation of expression results. These methods can be further divided into ‘topological’ and ‘non-topological’ methods according to their ability to gain power from pathway topology. Biological pathways are, in fact, not only gene lists but can be represented through a network where genes and connections are, respectively, nodes and edges. To date, the most used approaches are non-topological and univariate, although they miss the relationships among genes. On the contrary, topological and multivariate approaches are more powerful, but difficult to use for researchers without bioinformatic skills. Here we present Graphite Web, the first public web server for pathway analysis on gene expression data that combines topological and multivariate pathway analyses with an efficient system of interactive network visualizations for easy results interpretation. Specifically, Graphite Web implements five different gene set analyses on three model organisms and two pathway databases. Graphite Web is freely available at http://graphiteweb.bio.unipd.it/. PMID:23666626

  12. Presence of pro-tobacco messages on the Web.

    PubMed

    Hong, Traci; Cody, Michael J

    2002-01-01

    Ignored in the finalized Master Settlement Agreement (National Association of Attorneys General, 1998), the unmonitored, unregulated World Wide Web (Web) can operate as a major vehicle for delivering pro-tobacco messages, images, and products to millions of young consumers. A content analysis of 318 randomly sampled pro-tobacco Web sites revealed that tobacco has a pervasive presence on the Web, especially on e-commerce sites and sites featuring hobbies, recreation, and "fetishes." Products can be ordered online on nearly 50% of the sites, but only 23% of the sites included underage verification. Further, only 11% of these sites contain health warnings. Instead, pro-tobacco sites frequently associate smoking with "glamorous" and "alternative" lifestyles, and with images of young males and young (thin, attractive) females. Finally, many of the Web sites offered interactive site features that are potentially appealing to young Web users. Recommendations for future research and counterstrategies are discussed.

  13. Warming and Resource Availability Shift Food Web Structure and Metabolism

    PubMed Central

    O'Connor, Mary I.; Piehler, Michael F.; Leech, Dina M.; Anton, Andrea; Bruno, John F.

    2009-01-01

    Climate change disrupts ecological systems in many ways. Many documented responses depend on species' life histories, contributing to the view that climate change effects are important but difficult to characterize generally. However, systematic variation in metabolic effects of temperature across trophic levels suggests that warming may lead to predictable shifts in food web structure and productivity. We experimentally tested the effects of warming on food web structure and productivity under two resource supply scenarios. Consistent with predictions based on universal metabolic responses to temperature, we found that warming strengthened consumer control of primary production when resources were augmented. Warming shifted food web structure and reduced total biomass despite increases in primary productivity in a marine food web. In contrast, at lower resource levels, food web production was constrained at all temperatures. These results demonstrate that small temperature changes could dramatically shift food web dynamics and provide a general, species-independent mechanism for ecological response to environmental temperature change. PMID:19707271

  14. Object-Oriented Approach for 3d Archaeological Documentation

    NASA Astrophysics Data System (ADS)

    Valente, R.; Brumana, R.; Oreni, D.; Banfi, F.; Barazzetti, L.; Previtali, M.

    2017-08-01

    Documentation of archaeological fieldwork needs to be accurate and time-effective. Many features unveiled during excavations can be recorded just once, since the archaeological workflow physically removes most of the stratigraphic elements. Some of them have peculiar characteristics which make them hard to recognize as objects and prevent full 3D documentation. The paper presents a suitable feature-based method to carry out archaeological documentation with a three-dimensional approach, tested on the archaeological site of S. Calocero in Albenga (Italy). The method is based on the use of structure-from-motion techniques for on-site recording and on 3D modelling to represent the three-dimensional complexity of stratigraphy. The entire documentation workflow is carried out through digital tools, assuring better accuracy and interoperability. Outputs can be used in GIS to perform spatial analysis; moreover, a more effective dissemination of fieldwork results can be assured by spreading datasets and other information through web services.

  15. OWL (On-Line Webstories for Learning): A Unique Web-based Literacy Resource for Primary/Elementary Children.

    ERIC Educational Resources Information Center

    Juliebo, Moira; Durnford, Carol

    2000-01-01

    Describes Online Webstories for Learning (OWL), a Web-based resource for elementary school literacy education that was initially developed for use in the United Kingdom. Discusses the importance of including narrative, how OWL is being adapted for use in other countries, and off-line class activities suggested as part of OWL. (Contains 8…

  16. Using component technologies for web based wavelet enhanced mammographic image visualization.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2000-01-01

    The poor contrast detectability of mammography can be dealt with by domain-specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing on web or PACS image servers. This effort is also motivated by the fact that, at present, web browsers do not support domain-specific medical image visualization. To address desktop client access, the tool was redesigned using component technologies, enabling the integration of stand-alone, domain-specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real-time wavelet-based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving the visualization of diagnostic mammographic features. Web adaptation and real-time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
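
    A compact sketch of wavelet-based denoising in the spirit of the processing described, using the PyWavelets package; the wavelet, decomposition level, and threshold are illustrative choices, not the tool's actual parameters.

        import numpy as np
        import pywt

        def wavelet_denoise(image, wavelet="db2", level=2, thresh=10.0):
            # Decompose, soft-threshold the detail bands, reconstruct.
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(d, thresh, mode="soft") for d in band)
                for band in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)

        noisy = np.random.rand(128, 128) * 255  # stand-in for a mammogram
        clean = wavelet_denoise(noisy)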

  17. Acquiring geographical data with web harvesting

    NASA Astrophysics Data System (ADS)

    Dramowicz, K.

    2016-04-01

    Many websites contain attractive and up-to-date geographical information. This information can be extracted, stored, analyzed and mapped using web harvesting techniques. Web harvesting transforms poorly organized data from websites into a more structured format, which can be stored in a database and analyzed. Almost 25% of web traffic is related to web harvesting, mostly through the use of search engines. This paper presents how to harvest geographic information from web documents using the free tool Beautiful Soup, one of the most commonly used Python libraries for pulling data from HTML and XML files. It is a relatively easy task to process one static HTML table. The more challenging task is to extract and save information from tables located in multiple and poorly organized websites. Legal and ethical aspects of web harvesting are discussed as well. The paper demonstrates two case studies. The first one shows how to extract various types of information about the Good Country Index from multiple web pages, load it into one attribute table and map the results. The second case study shows how script tools and GIS can be used to extract information from one hundred thirty-six websites about Nova Scotia wines. In a little more than three minutes, a database containing one hundred and six liquor stores selling these wines is created. Then the availability and spatial distribution of various types of wines (by grape type, by winery, and by liquor store) are mapped and analyzed.
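
    The core of such a harvest, as a short sketch with Requests and Beautiful Soup; the URL is a placeholder, and real scripts should also respect robots.txt and the site's terms of use, per the legal and ethical discussion above.

        import requests
        from bs4 import BeautifulSoup

        url = "https://example.com/good-country-index"  # placeholder page
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

        rows = []
        for tr in soup.find("table").find_all("tr")[1:]:  # skip header row
            cells = [td.get_text(strip=True) for td in tr.find_all("td")]
            if cells:
                rows.append(cells)  # ready for loading into a database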

  18. Mental health first aid guidelines: an evaluation of impact following download from the World Wide Web.

    PubMed

    Hart, Laura M; Jorm, Anthony F; Paxton, Susan J; Cvetkovski, Stefan

    2012-11-01

    Mental health first aid guidelines provide the public with consensus-based information about how to assist someone who is developing a mental illness or experiencing a mental health crisis. The aim of the current study was to evaluate the usefulness and impact of the guidelines on web users who download them. Web users who downloaded the documents were invited to respond to an initial demographic questionnaire, then a follow-up questionnaire about how the documents had been used, their perceived usefulness, whether first-aid situations had been encountered and if these were influenced by the documents. Over 9.8 months, 706 web users responded to the initial questionnaire and 154 responded to the second. A majority reported downloading the document because their job involved contact with people with mental illness. Sixty-three web users reported providing first aid, 44 of whom reported that the person they were assisting had sought professional care as a result of their suggestion. Twenty-three web users reported seeking care themselves. A majority of those who provided first aid reported feeling that they had been successful in helping the person, that they had been able to assist in a way that was more knowledgeable, skilful and supportive, and that the guidelines had contributed to these outcomes. Information made freely available on the Internet, about how to provide mental health first aid to someone who is developing a mental health problem or experiencing a mental health crisis, is associated with more positive, empathic and successful helping behaviours. © 2012 Wiley Publishing Asia Pty Ltd.

  19. Applying Sensor Web Technology to Marine Sensor Data

    NASA Astrophysics Data System (ADS)

    Jirka, Simon; del Rio, Joaquin; Mihai Toma, Daniel; Nüst, Daniel; Stasch, Christoph; Delory, Eric

    2015-04-01

    SWE specifications that provide stricter guidance on how these standards shall be applied to marine data (e.g. SensorML 2.0 profiles stating which metadata elements are mandatory, building upon the ESONET Sensor Registry developments, etc.). Within the NeXOS project the presented architecture is implemented as a set of open source components. These implementations can be re-used by all interested scientists and data providers needing tools for publishing or consuming oceanographic sensor data. In further projects such as the European project FixO3 (Fixed-point Open Ocean Observatories), these software development activities are complemented with additional efforts to provide guidance on how Sensor Web technology can be applied in an efficient manner. This way, not only software components are made available but also documentation and information resources that help to understand which types of Sensor Web deployments are best suited to fulfil different types of user requirements.

  20. Digging Deeper: The Deep Web.

    ERIC Educational Resources Information Center

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  1. SU-F-P-10: A Web-Based Radiation Safety Relational Database Module for Regulatory Compliance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosen, C; Ramsay, B; Konerth, S

    Purpose: Maintaining compliance with Radioactive Materials Licenses is inherently a time-consuming task requiring focus and attention to detail. Staff tasked with these responsibilities, such as the Radiation Safety Officer and associated personnel, must retain disparate records for eventual placement into one or more annual reports. Entering results and records in a relational database using a web browser as the interface, and storing that data in a cloud-based storage site, removes procedural barriers. The data becomes more adaptable for mining and sharing. Methods: Web-based code was written utilizing the web framework Django, written in Python. Additionally, the application utilizes JavaScript for front-end interaction, along with SQL, HTML and CSS. Quality assurance code testing is performed in a sequential style, and new code is only added after the successful testing of the previous goals. Separate sections of the module include data entry and analysis for audits, surveys, quality management, and continuous quality improvement. Data elements can be adapted for quarterly and annual reporting, and for immediate notification based on user-determined alarm settings. Results: Current advances are focusing on user interface issues, and determining the simplest manner by which to teach the user to build query forms. One solution has been to prepare library documents that a user can select or edit in place of creating a new document. Forms are being developed based upon Nuclear Regulatory Commission federal code, and will be expanded to include State Regulations. Conclusion: Establishing a secure website to act as the portal for data entry, storage and manipulation can lead to added efficiencies for a Radiation Safety Program. Access to multiple databases can lead to mining for big data programs, and for determining safety issues before they occur. Overcoming web programming challenges, a category that includes mathematical handling, is providing challenges that are being
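
    A rough sketch of what one table in such a Django module might look like, written for a models.py inside a Django app; the field names and alarm logic are invented for illustration, not the module's actual schema.

        from django.db import models

        class RadiationSurvey(models.Model):
            location = models.CharField(max_length=100)
            measured_mrem_per_hr = models.FloatField()
            alarm_level = models.FloatField(default=2.0)  # user-set threshold
            performed_on = models.DateField()

            def exceeds_alarm(self) -> bool:
                # Immediate notifications would key off this check.
                return self.measured_mrem_per_hr > self.alarm_level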

  2. The case for fire safe cigarettes made through industry documents

    PubMed Central

    Gunja, M; Wayne, G; Landman, A; Connolly, G; McGuire, A

    2002-01-01

    Objectives: To examine the extensive research undertaken by the tobacco industry over the past 25 years toward development of a fire safe cigarette. Methods: Research was conducted through a web-based search of internal tobacco industry documents made publicly available through the 1998 Master Settlement Agreement. Results: The documents reveal that the tobacco industry produced a fire safe cigarette years ago, but failed to put it on the market. These findings contradict public industry claims that denied the technical feasibility and commercial acceptability of fire safe cigarettes. Internal documents also reveal a decades long, coordinated political strategy used to block proposed legislation and obfuscate the fire safe issue. Conclusions: Federal legislation mandating fire safe cigarettes is needed. PMID:12432160

  3. World Wide Web Server Standards and Guidelines.

    ERIC Educational Resources Information Center

    Stubbs, Keith M.

    This document defines the specific standards and general guidelines which the U.S. Department of Education (ED) will use to make information available on the World Wide Web (WWW). The purpose of providing such guidance is to ensure high quality and consistent content, organization, and presentation of information on ED WWW servers, in order to…

  4. Being There is Only the Beginning: Toward More Effective Web 2.0 Use in Academic Libraries

    DTIC Science & Technology

    2010-01-02

    Bachrach, Hanna C.; Pratt Institute. Only fragments of this record's abstract survive; they mention academic library instruction guides such as "Google is Our Friend" and "Plagiarism 101" and academic libraries including Hollins University and Urbana…

  5. A binaural Web-based tour of the acoustics of Troy Music Hall

    NASA Astrophysics Data System (ADS)

    Torres, Rendell R.; Cooney, James; Shimizu, Yasushi

    2004-05-01

    For classical music to become more widely enjoyed, it must sound exciting. We hypothesize that if people could hear examples of truly exciting acoustics, classical music would be perceived less as a rarefied delicacy and more as a viscerally engaging listening experience. The Troy Savings Bank Music Hall in Troy, New York, is a legendary 1200-seat concert hall famous for its acoustics. Such landmarks are commonly documented architecturally but with few attempts to document their acoustics in a way that is listenable. Thus, the goal is to capture and sonically disseminate the hall's acoustics through a Web-based acoustical tour, where one can click on various seats to hear binaural auralizations of different instruments and see corresponding views of the stage. The hope is that these auralizations will not only sonically document the acoustics of the hall but also tantalize even geographically distant listeners with binaural samples of how exciting music can be in excellent acoustics. The fun and challenges of devising (let alone standardizing) such an auralization-based system of documentation will be discussed, and a demonstration given. This process can be applied to other historically and acoustically significant spaces. [Work supported by the National Endowment for the Arts.]

  6. Electronic Document Supply Systems.

    ERIC Educational Resources Information Center

    Cawkell, A. E.

    1991-01-01

    Describes electronic document delivery systems used by libraries and document image processing systems used for business purposes. Topics discussed include technical specifications; analogue read-only laser videodiscs; compact discs and CD-ROM; WORM; facsimile; ADONIS (Article Delivery over Network Information System); DOCDEL; and systems at the…

  7. Web-Based Software for Managing Research

    NASA Technical Reports Server (NTRS)

    Hoadley, Sherwood T.; Ingraldi, Anthony M.; Gough, Kerry M.; Fox, Charles; Cronin, Catherine K.; Hagemann, Andrew G.; Kemmerly, Guy T.; Goodman, Wesley L.

    2007-01-01

    aeroCOMPASS is a software system, originally designed to aid in the management of wind tunnels at Langley Research Center, that could be adapted to provide similar aid to other enterprises in which research is performed in common laboratory facilities by users who may be geographically dispersed. Included in aeroCOMPASS is Web-interface software that provides a single, convenient portal to a set of project- and test-related software tools and other application programs. The heart of aeroCOMPASS is a user-oriented document-management software subsystem that enables geographically dispersed users to easily share and manage a variety of documents. A principle of "write once, read many" is implemented throughout aeroCOMPASS to eliminate the need for multiple entry of the same information. The Web framework of aeroCOMPASS provides links to client-side application programs that are fully integrated with databases and server-side application programs. Other subsystems of aeroCOMPASS include ones for reserving hardware, tracking of requests and feedback from users, generating interactive notes, administration of a customer-satisfaction questionnaire, managing execution of tests, managing archives of metadata about tests, planning tests, and providing online help and instruction for users.

  8. The Advancement of Public Awareness, Concerning TRU Waste Characterization, Using a Virtual Document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, T. B.; Burns, T. P.; Estill, W. G.

    2002-02-28

    Building public trust and confidence through openness is a goal of the DOE Carlsbad Field Office for the Waste Isolation Pilot Plant (WIPP). The objective of the virtual document described in this paper is to give the public an overview of the waste characterization steps, an understanding of how waste characterization instrumentation works, and the type and amount of data generated from a batch of drums. The document is intended to be published on a web page and/or distributed at public meetings on CDs. Users may gain as much information as they desire regarding the transuranic (TRU) waste characterization program, starting at the highest level requirements (drivers) and progressing to more and more detail regarding how the requirements are met. Included are links to: drivers (which include laws, permits and DOE Orders); various characterization steps required for transportation and disposal under WIPP's Hazardous Waste Facility Permit; physical/chemical basis for each characterization method; types of data produced; and quality assurance process that accompanies each measurement. Examples of each type of characterization method in use across the DOE complex are included. The original skeleton of the document was constructed in a PowerPoint presentation and included descriptions of each section of the waste characterization program. This original document had a brief overview of Acceptable Knowledge, Non-Destructive Examination, Non-Destructive Assay, Small Quantity sites, and the National Certification Team. A student intern was assigned the project of converting the document to a virtual format and to discuss each subject in depth. The resulting product is a fully functional virtual document that works in a web browser and functions like a web page. All documents that were referenced, linked to, or associated, are included on the virtual document's CD. WIPP has been engaged in a variety of Hazardous Waste Facility Permit modification activities. During

  9. Chemical markup, XML and the World-Wide Web. 3. Toward a signed semantic chemical web of trust.

    PubMed

    Gkoutos, G V; Murray-Rust, P; Rzepa, H S; Wright, M

    2001-01-01

    We describe how a collection of documents expressed in XML-conforming languages such as CML and XHTML can be authenticated and validated against digital signatures which make use of established X.509 certificate technology. These can be associated either with specific nodes in the XML document or with the entire document. We illustrate this with two examples. An entire journal article expressed in XML has its individual components digitally signed by separate authors, and the collection is placed in an envelope and again signed. The second example involves using a software robot agent to acquire a collection of documents from a specified URL, to perform various operations and transformations on the content, including expressing molecules in CML, and to automatically sign the various components and deposit the result in a repository. We argue that these operations can be used as components for building what we term an authenticated and semantic chemical web of trust.
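
    A minimal sketch of signing an XML document's bytes with an RSA key, using Python's cryptography package; this is not the authors' XML-DSig/X.509 implementation, which embeds signatures within the document itself, but it shows the underlying sign-and-verify step.

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        xml_bytes = b"<cml><molecule id='m1'/></cml>"  # stand-in CML fragment

        signature = key.sign(xml_bytes, padding.PKCS1v15(), hashes.SHA256())
        # verify() raises InvalidSignature if the document was altered
        key.public_key().verify(
            signature, xml_bytes, padding.PKCS1v15(), hashes.SHA256()
        )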

  10. Web-based continuing medical education. (II): Evaluation study of computer-mediated continuing medical education.

    PubMed

    Curran, V R; Hoekman, T; Gulliver, W; Landells, I; Hatcher, L

    2000-01-01

    Over the years, various distance learning technologies and methods have been applied to the continuing medical education needs of rural and remote physicians. They have included audio teleconferencing, slow scan imaging, correspondence study, and compressed videoconferencing. The recent emergence and growth of Internet, World Wide Web (Web), and compact disk read-only-memory (CD-ROM) technologies have introduced new opportunities for providing continuing education to the rural medical practitioner. This evaluation study assessed the instructional effectiveness of a hybrid computer-mediated courseware delivery system on dermatologic office procedures. A hybrid delivery system merges Web documents, multimedia, computer-mediated communications, and CD-ROMs to enable self-paced instruction and collaborative learning. Using a modified pretest to post-test control group study design, several evaluative criteria (participant reaction, learning achievement, self-reported performance change, and instructional transactions) were assessed by various qualitative and quantitative data collection methods. This evaluation revealed that a hybrid computer-mediated courseware system was an effective means for increasing knowledge (p < .05) and improving self-reported competency (p < .05) in dermatologic office procedures, and that participants were very satisfied with the self-paced instruction and use of asynchronous computer conferencing for collaborative information sharing among colleagues.

  11. Web portal for dynamic creation and publication of teaching materials in multiple formats from a single source representation

    NASA Astrophysics Data System (ADS)

    Roganov, E. A.; Roganova, N. A.; Aleksandrov, A. I.; Ukolova, A. V.

    2017-01-01

    We implement a web portal which dynamically creates documents in more than 30 different formats, including html, pdf and docx, from a single original material source. It is built using a number of free software tools, such as Markdown (a markup language), Pandoc (a document converter), MathJax (a library to display mathematical notation in web browsers), and the framework Ruby on Rails. The portal enables the creation of documents with high-quality visualization of mathematical formulas, is compatible with mobile devices and allows one to search documents by text or formula fragments. Moreover, it gives professors the ability to develop up-to-date educational materials without the assistance of qualified technicians, thus improving the quality of the whole educational process.
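
    The conversion step at the heart of such a portal, sketched as a Python call-out to the Pandoc converter named above; the file names are illustrative, and PDF output additionally requires a LaTeX engine on the host.

        import subprocess

        source = "lecture.md"  # single Markdown source document
        for target in ["lecture.html", "lecture.pdf", "lecture.docx"]:
            subprocess.run(
                ["pandoc", source, "--mathjax", "-o", target],
                check=True,  # raise if pandoc reports an error
            )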

  12. World Wide Web Page Design: A Structured Approach.

    ERIC Educational Resources Information Center

    Gregory, Gwen; Brown, M. Marlo

    1997-01-01

    Describes how to develop a World Wide Web site based on structured programming concepts. Highlights include flowcharting, first page design, evaluation, page titles, documenting source code, text, graphics, and browsers. Includes a template for HTML writers, tips for using graphics, a sample homepage, guidelines for authoring structured HTML, and…

  13. E-Marketing: Are Community Colleges Embracing the Web?

    ERIC Educational Resources Information Center

    Clagett, Craig

    2001-01-01

    Conducted a pilot survey of community colleges to assess their online marketing efforts. Found that while all had Web sites, only a minority of sites were truly interactive. Involvement of marketing offices with Web sites varied considerably, and a minority had used e-mail or Web ads for marketing. (EV)

  14. Reliability and type of consumer health documents on the World Wide Web: an annotation study.

    PubMed

    Martin, Melanie J

    2011-01-01

    In this paper we present a detailed scheme for annotating medical web pages designed for health care consumers. The annotation is along two axes: first, by reliability (the extent to which the medical information on the page can be trusted), second, by the type of page (patient leaflet, commercial, link, medical article, testimonial, or support). We analyze inter-rater agreement among three judges for each axis. Inter-rater agreement was moderate (0.77 accuracy, 0.62 F-measure, 0.49 Kappa) on the page reliability axis and good (0.81 accuracy, 0.72 F-measure, 0.73 Kappa) along the page type axis. This study shows promising results: appropriate classes of pages can be developed and used by human annotators to annotate web pages with reasonable to good agreement.

  15. PC-based web authoring: How to learn as little unix as possible while getting on the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gennari, L.T.; Breaux, M.; Minton, S.

    1996-09-01

    This document is a general guide for creating Web pages, using commonly available word processing and file transfer applications. It is not a full guide to HTML, nor does it provide an introduction to the many WYSIWYG HTML editors available. The viability of the authoring method it describes will not be affected by changes in the HTML specification or the rapid release-and-obsolescence cycles of commercial WYSIWYG HTML editors. This document provides a gentle introduction to HTML for the beginner, and as the user gains confidence and experience, encourages greater familiarity with HTML through continued exposure to and hands-on usage of HTML code.

  16. Mac-based Web authoring: How to learn as little Unix as possible while getting on the Web.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gennari, L.T.

    1996-06-01

    This document is a general guide for creating Web pages, using commonly available word processing and file transfer applications. It is not a full guide to HTML, nor does it provide an introduction to the many WYSIWYG HTML editors available. The viability of the authoring method it describes will not be affected by changes in the HTML specification or the rapid release-and-obsolescence cycles of commercial WYSIWYG HTML editors. This document provides a gentle introduction to HTML for the beginner, and as the user gains confidence and experience, encourages greater familiarity with HTML through continued exposure to and hands-on usage of HTML code.

  17. Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data

    PubMed Central

    2013-01-01

    Background The World Wide Web has become a dissemination platform for scientific and non-scientific publications. However, most of the information remains locked up in discrete documents that are not always interconnected or machine-readable. The connectivity tissue provided by RDF technology has not yet been widely used to support the generation of self-describing, machine-readable documents. Results In this paper, we present our approach to the generation of self-describing machine-readable scholarly documents. We understand the scientific document as an entry point and interface to the Web of Data. We have semantically processed the full-text, open-access subset of PubMed Central. Our RDF model and resulting dataset make extensive use of existing ontologies and semantic enrichment services. We expose our model, services, prototype, and datasets at http://biotea.idiginfo.org/ Conclusions The semantic processing of biomedical literature presented in this paper embeds documents within the Web of Data and facilitates the execution of concept-based queries against the entire digital library. Our approach delivers a flexible and adaptable set of tools for metadata enrichment and semantic processing of biomedical documents. Our model delivers a semantically rich and highly interconnected dataset with self-describing content so that software can make effective use of it. PMID:23734622
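
    An illustrative fragment, not the Biotea pipeline itself, showing how an article can be described as RDF with Python's rdflib, reusing the Dublin Core terms such models build on; the article URI is a placeholder.

        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DCTERMS, RDF

        g = Graph()
        article = URIRef("http://example.org/articles/PMC0000000")  # placeholder

        g.add((article, RDF.type, DCTERMS.BibliographicResource))
        g.add((article, DCTERMS.title, Literal("Example open-access article")))
        g.add((article, DCTERMS.subject, Literal("semantic web")))

        print(g.serialize(format="turtle"))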

  18. Food and beverage advertising on children's web sites.

    PubMed

    Ustjanauskas, A E; Harris, J L; Schwartz, M B

    2014-10-01

    Food marketing contributes to childhood obesity. Food companies commonly place display advertising on children's web sites, but few studies have investigated this form of advertising. Document the number of food and beverage display advertisements viewed on popular children's web sites, nutritional quality of advertised brands and proportion of advertising approved by food companies as healthier dietary choices for child-directed advertising. Syndicated Internet exposure data identified popular children's web sites and food advertisements viewed on these web sites from July 2009 through June 2010. Advertisements were classified according to food category and companies' participation in food industry self-regulation. The percent of advertisements meeting government-proposed nutrition standards was calculated. 3.4 billion food advertisements appeared on popular children's web sites; 83% on just four web sites. Breakfast cereals and fast food were advertised most often (64% of ads). Most ads (74%) promoted brands approved by companies for child-directed advertising, but 84% advertised products that were high in fat, sugar and/or sodium. Ads for foods designated by companies as healthier dietary choices appropriate for child-directed advertising were least likely to meet independent nutrition standards. Most foods advertised on popular children's web sites do not meet independent nutrition standards. Further improvements to industry self-regulation are required. © 2013 The Authors. Pediatric Obesity © 2013 International Association for the Study of Obesity.

  19. Semantic Annotations and Querying of Web Data Sources

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  20. Web-based healthcare hand drawing management system.

    PubMed

    Hsieh, Sheau-Ling; Weng, Yung-Ching; Chen, Chi-Huang; Hsu, Kai-Ping; Lin, Jeng-Wei; Lai, Feipei

    2010-01-01

    The paper addresses the architecture and implementation of a Medical Hand Drawing Management System. The system comprises four modules: hand drawing management; patient medical record query; hand drawing editing and upload; and hand drawing query. It adapts Windows-based applications and serves web pages through the ASP.NET hosting mechanism on a web services platform. Hand drawings are implemented as files stored on an FTP server; the file names, together with associated data such as patient identification, drawing physician, and access rights, are kept in a database. The modules can be conveniently embedded into and integrated with any system. The system therefore provides hand drawing features that support daily medical operations and effectively improve healthcare quality. Moreover, the system includes printing capability to achieve a complete, computerized medical document process. In summary, the system allows web-based applications to facilitate graphic processes for healthcare operations.
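
    A minimal sketch of the storage pattern described above: the drawing file goes to an FTP server while its file name and associated data are recorded in a database. Host, credentials, table layout, and file names are illustrative assumptions, not the system's actual configuration.

      # Sketch: drawing file -> FTP server; file name + metadata -> database.
      import sqlite3
      from ftplib import FTP

      def store_drawing(path, patient_id, physician, access_rights):
          # 1. Upload the drawing file to the FTP server (hypothetical host).
          with FTP("ftp.example-hospital.org") as ftp, open(path, "rb") as f:
              ftp.login("drawings", "secret")
              ftp.storbinary(f"STOR {path}", f)

          # 2. Record the file name with its associated data.
          con = sqlite3.connect("drawings.db")
          con.execute("""CREATE TABLE IF NOT EXISTS hand_drawing
                         (file_name TEXT, patient_id TEXT,
                          physician TEXT, access_rights TEXT)""")
          con.execute("INSERT INTO hand_drawing VALUES (?, ?, ?, ?)",
                      (path, patient_id, physician, access_rights))
          con.commit()
          con.close()

      store_drawing("wound_sketch_001.png", "P12345", "Dr. Lin", "attending-only")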

  1. A model for enhancing Internet medical document retrieval with "medical core metadata".

    PubMed

    Malet, G; Munoz, F; Appleyard, R; Hersh, W

    1999-01-01

    Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
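
    A short sketch of the general encoding idea: Dublin Core-style meta tags extended with MeSH subject headings. The DC.* names follow the common Dublin Core HTML convention; the authors' exact medical core element set is not reproduced here, and the record values are examples.

      # Sketch: emit Dublin Core-style <meta> tags with MeSH subject headings.
      record = {
          "DC.title":   "Management of Type 2 Diabetes",
          "DC.creator": "Example Clinic Patient Education Group",
          "DC.date":    "1999-01-01",
          "DC.subject": ["Diabetes Mellitus, Type 2", "Patient Education as Topic"],
      }

      def to_meta_tags(record):
          tags = []
          for name, value in record.items():
              values = value if isinstance(value, list) else [value]
              # Mark subject headings as coming from the MeSH vocabulary.
              scheme = ' scheme="MeSH"' if name == "DC.subject" else ""
              for v in values:
                  tags.append(f'<meta name="{name}"{scheme} content="{v}">')
          return "\n".join(tags)

      print(to_meta_tags(record))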

  2. TMFoldWeb: a web server for predicting transmembrane protein fold class.

    PubMed

    Kozma, Dániel; Tusnády, Gábor E

    2015-09-17

    Here we present TMFoldWeb, the web server implementation of TMFoldRec, a transmembrane protein fold recognition algorithm. TMFoldRec uses statistical potentials and utilizes topology filtering and a gapless threading algorithm. It ranks template structures, selects the most likely candidates, and estimates the reliability of the obtained lowest-energy model. The statistical potential was developed in a maximum likelihood framework on a representative set of the PDBTM database. According to the benchmark test, the performance of TMFoldRec is about 77% in correctly predicting the fold class for a given transmembrane protein sequence. An intuitive web interface has been developed for the recently published TMFoldRec algorithm. The query sequence goes through a pipeline of topology prediction and systematic sequence-to-structure alignment (threading). Resulting templates are ordered by energy and reliability values and are colored according to their significance level. Besides the graphical interface, programmatic access is available as well, via a direct interface for developers or for submitting genome-wide data sets. The TMFoldWeb web server is unique: it is currently the only web server able to predict the fold class of transmembrane proteins while assigning reliability scores to the prediction. The method is prepared for genome-wide analysis, with an easy-to-use interface, an informative result page, and programmatic access. Reflecting the evolution of information and communication devices in recent years, the web server, as well as the molecule viewer, is responsive and fully compatible with prevalent tablets and mobile devices.
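
    A toy sketch of the gapless threading step: slide the query along a template without insertions or deletions, score each placement with a statistical potential, and keep the lowest-energy placement. The potential below is random stand-in data, not the PDBTM-derived potential used by TMFoldRec.

      # Toy gapless threading: every offset of the query on the template is
      # scored; the lowest-energy placement wins. The "potential" is fabricated.
      import random

      AA = "ACDEFGHIKLMNPQRSTVWY"
      random.seed(0)
      # Stand-in potential: energy of a residue type in a template environment
      # (0 = membrane, 1 = water). Real potentials come from known structures.
      potential = {(a, env): random.uniform(-1, 1) for a in AA for env in (0, 1)}

      def thread(query, template_env):
          """Return (best_energy, best_offset) over all gapless placements."""
          best = (float("inf"), None)
          for offset in range(len(template_env) - len(query) + 1):
              e = sum(potential[(res, template_env[offset + i])]
                      for i, res in enumerate(query))
              best = min(best, (e, offset))
          return best

      template_env = [0] * 20 + [1] * 15        # toy template environment labels
      energy, offset = thread("ACDEFGHIKLMN", template_env)
      print(f"lowest energy {energy:.2f} at offset {offset}")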

  3. Informatics in radiology: A prototype Web-based reporting system for onsite-offsite clinician communication.

    PubMed

    Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang

    2007-01-01

    The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct work flow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed to support a "communication loop" based on the concept of a classic "wet-read" system. The proposed system represents an attempt to address many of the problems seen in current communication work flows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in the native Digital Imaging and Communications in Medicine (DICOM) format with a Java applet, which allows accurate presentation along with use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install necessary client software on demand. Application logic consists of a set of PHP (Hypertext Preprocessor) modules that are accessible with an application programming interface. The system may be adapted to any clinician-specialist communication loop and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007

  4. In Brief: Web site for human spaceflight review committee

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2009-06-01

    As part of an independent review of human spaceflight plans and programs, NASA has established a Web site for the Review of U.S. Human Space Flight Plans Committee (http://hsf.nasa.gov). The Web site provides the committee's charter, relevant documents, information about meetings and members, and ways to contact the committee. “The human spaceflight program belongs to everyone. Our committee would hope to benefit from the views of all who would care to contact us,” noted committee chairman Norman Augustine, retired chair and CEO of Lockheed Martin Corporation.

  5. Cat swarm optimization based evolutionary framework for multi document summarization

    NASA Astrophysics Data System (ADS)

    Rautray, Rasmita; Balabantaray, Rakesh Chandra

    2017-07-01

    Today, the World Wide Web has brought us an enormous quantity of on-line information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from vast amounts of documents. Based on the number of documents considered for summarization, it is categorized as single-document or multi-document summarization. Multi-document summarization is more challenging than single-document summarization, as researchers must find an accurate summary across multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO-based model is also compared with two other nature-inspired summarizers: a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. With respect to the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity, and a readability metric, to validate the non-redundancy, cohesiveness, and readability of the summaries. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.
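
    A sketch of the fitness evaluation that population-based extractive summarizers of this kind (CSO, PSO, or HS alike) optimize: reward coverage of the collection and penalize inter-sentence redundancy. The random-search loop below merely stands in for the authors' cat swarm operators, and the sentences are toy data.

      # Sketch: fitness = coverage - redundancy for a candidate sentence subset.
      import itertools, random
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      sentences = [
          "The flood damaged hundreds of homes in the region.",
          "Hundreds of houses were damaged by the regional flood.",
          "Relief agencies began distributing food and water.",
          "Schools reopened a week after the disaster.",
      ]
      X = TfidfVectorizer().fit_transform(sentences)
      sim = cosine_similarity(X)                          # inter-sentence similarity
      centroid = np.asarray(X.mean(axis=0))               # collection centroid
      relevance = cosine_similarity(X, centroid).ravel()  # coverage proxy

      def fitness(subset):
          # Reward relevance of chosen sentences, penalize their mutual overlap.
          cover = sum(relevance[i] for i in subset)
          redundancy = sum(sim[i, j] for i, j in itertools.combinations(subset, 2))
          return cover - redundancy

      random.seed(1)
      candidates = [random.sample(range(len(sentences)), 2) for _ in range(200)]
      best = max(candidates, key=fitness)    # stand-in for the CSO search
      print("summary:", [sentences[i] for i in sorted(best)])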

  6. Comparing Web, Group and Telehealth Formats of a Military Parenting Program

    DTIC Science & Technology

    2017-06-01

    directed approaches. Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and web-facilitated approaches relative to self-directed approaches. ...documents for review and approval. 1a. Finalize human subjects protocol and consent documents for pilot group (N=5 families), and randomized controlled

  7. A Web-based approach to blood donor preparation.

    PubMed

    France, Christopher R; France, Janis L; Kowalsky, Jennifer M; Copley, Diane M; Lewis, Kristin N; Ellis, Gary D; McGlone, Sarah T; Sinclair, Kadian S

    2013-02-01

    Written and video approaches to donor education have been shown to enhance donation attitudes and intentions to give blood, particularly when the information provides specific coping suggestions for donation-related concerns. This study extends this work by comparing Web-based approaches to donor preparation among donors and nondonors. Young adults (62% female; mean [±SD] age, 19.3 [±1.5] years; mean [range] number of prior blood donations, 1.1 [0-26]; 60% nondonors) were randomly assigned to view 1) a study Web site designed to address common blood donor concerns and suggest specific coping strategies (n = 238), 2) a standard blood center Web site (n = 233), or 3) a control Web site where participants viewed videos of their choice (n = 202). Measures of donation attitude, anxiety, confidence, intention, anticipated regret, and moral norm were completed before and after the intervention. Among nondonors, the study Web site produced greater changes in donation attitude, confidence, intention, and anticipated regret relative to both the standard and the control Web sites, but only differed significantly from the control Web site for moral norm and anxiety. Among donors, the study Web site produced greater changes in donation confidence and anticipated regret relative to both the standard and the control Web sites, but only differed significantly from the control Web site for donation attitude, anxiety, intention, and moral norm. Web-based donor preparation materials may provide a cost-effective way to enhance donation intentions and encourage donation behavior. © 2012 American Association of Blood Banks.

  8. GeneXplorer: an interactive web application for microarray data visualization and analysis.

    PubMed

    Rees, Christian A; Demeter, Janos; Matese, John C; Botstein, David; Sherlock, Gavin

    2004-10-01

    When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to figures within the paper, can be browsed. We set out to create a CGI application containing many of the features of some of the existing standalone software for the visualization of clustered microarray data. We present GeneXplorer, a web application for interactive microarray data visualization and analysis in a web environment. GeneXplorer allows users to browse a microarray dataset in an intuitive fashion. It provides simple access to microarray data over the Internet and uses only HTML and JavaScript to display graphic and annotation information. It provides radar and zoom views of the data, allows display of the nearest neighbors to a gene expression vector based on their Pearson correlations and provides the ability to search gene annotation fields. The software is released under the permissive MIT Open Source license, and the complete documentation and the entire source code are freely available for download from CPAN http://search.cpan.org/dist/Microarray-GeneXplorer/.
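
    A minimal sketch of the nearest-neighbor feature described above: rank genes by the Pearson correlation of their expression vectors against a query gene. The expression matrix and gene identifiers are random stand-in data.

      # Sketch: nearest neighbors of a gene expression vector by Pearson r.
      import numpy as np

      rng = np.random.default_rng(0)
      expression = rng.normal(size=(100, 12))   # 100 genes x 12 conditions (toy)
      gene_ids = [f"GENE{i:03d}" for i in range(100)]

      def nearest_neighbors(query_idx, k=5):
          r = np.array([np.corrcoef(expression[query_idx], row)[0, 1]
                        for row in expression])
          order = np.argsort(-r)                       # highest correlation first
          return [(gene_ids[i], r[i]) for i in order[1:k + 1]]  # skip query itself

      for gid, r in nearest_neighbors(0):
          print(f"{gid}  r = {r:+.3f}")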

  9. Web Based Tool for Mission Operations Scenarios

    NASA Technical Reports Server (NTRS)

    Boyles, Carole A.; Bindschadler, Duane L.

    2008-01-01

    A conventional practice for spaceflight projects is to document scenarios in a monolithic Operations Concept document. Such documents can be hundreds of pages long and may require laborious updates. Software development practice utilizes scenarios in the form of smaller, individual use cases, which are often structured and managed using UML. We have developed a process and a web-based scenario tool that utilizes a similar philosophy of smaller, more compact scenarios (but avoids the formality of UML). The need for a scenario process and tool became apparent during the authors' work on a large astrophysics mission. It was noted that every phase of the Mission (e.g., formulation, design, verification and validation, and operations) looked back to scenarios to assess completeness of requirements and design. It was also noted that terminology needed to be clarified and structured to assure communication across all levels of the project. Attempts to manage, communicate, and evolve scenarios at all levels of a project using conventional tools (e.g., Excel) and methods (Scenario Working Group meetings) were not effective given limitations on budget and staffing. The objective of this paper is to document the scenario process and tool created to offer projects a low-cost capability to create, communicate, manage, and evolve scenarios throughout project development. The process and tool have the further benefit of allowing the association of requirements with particular scenarios, establishing and viewing relationships between higher- and lower-level scenarios, and the ability to place all scenarios in a shared context. The resulting structured set of scenarios is widely visible (using a web browser), easily updated, and can be searched according to various criteria including the level (e.g., Project, System, and Team) and Mission Phase. Scenarios are maintained in a web-accessible environment that provides a structured set of scenario fields and allows for maximum

  10. Flexible querying of Web data to simulate bacterial growth in food.

    PubMed

    Buche, Patrice; Couvert, Olivier; Dibie-Barthélemy, Juliette; Hignette, Gaëlle; Mettler, Eric; Soler, Lydie

    2011-06-01

    A preliminary step in microbial risk assessment in foods is the gathering of experimental data. In the framework of the Sym'Previus project, we have designed a complete data integration system, open on the Web, which allows a local database to be complemented by data extracted from the Web and annotated using a domain ontology. We focus on Web data tables, as they generally contain a synthesis of data published in the documents. We propose in this paper a flexible querying system that uses the domain ontology to scan local and Web data simultaneously, in order to feed the predictive modeling tools available on the Sym'Previus platform. Special attention is paid to the way fuzzy annotations associated with Web data are taken into account in the querying process, which is an important and original contribution of the proposed system. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.
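
    A minimal sketch of the scheduling idea: pages normally wait for their statically scheduled revisit time, but a detected event promotes an immediate revisit. The one-line classifier below is a toy stand-in for the MCRDR knowledge base, and all URLs are illustrative.

      # Sketch: blend a static revisit schedule with event-triggered revisits.
      import heapq, time

      schedule = []  # min-heap of (next_visit_time, url)

      def enqueue(url, interval):
          heapq.heappush(schedule, (time.time() + interval, url))

      def classifier_detects_event(document):
          return "earthquake" in document      # toy stand-in for MCRDR inference

      def on_document(url, document, interval=3600):
          if classifier_detects_event(document):
              heapq.heappush(schedule, (time.time(), url))   # revisit now
          else:
              enqueue(url, interval)                         # keep static schedule

      enqueue("http://example.org/news", 3600)
      on_document("http://example.org/wires", "earthquake reported offshore")
      while schedule and schedule[0][0] <= time.time():
          _, url = heapq.heappop(schedule)
          print("revisit", url)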

  12. 78 FR 64970 - New Deadlines for Public Comment on Draft Environmental Documents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... interested parties to contact Service personnel and Web sites for information about these draft documents. As... CO; Comprehensive Conservation Plan and Environmental Impact Statement, fdsys/pkg/FR-2013-08-07/pdf/2013-19052.pdf; Two Ponds National Wildlife Refuge, Arvada, CO; Comprehensive Conservation Plan and...

  13. Integrating the Web and continuous media through distributed objects

    NASA Astrophysics Data System (ADS)

    Labajo, Saul P.; Garcia, Narciso N.

    1998-09-01

    The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows on the Internet, and in its associated applications such as multimedia on demand, is also growing. Integrating both kinds of systems should allow building real hypermedia systems in which any media object can be linked from any other, taking temporal and spatial synchronization into account. One way to achieve this integration is to use the CORBA architecture, a standard for open distributed systems; there are also recent efforts to integrate Web and CORBA systems. We use this architecture to build a service for the distribution of data flows endowed with timing restrictions. To integrate it with the Web, we use, on one side, Java applets that are embedded in HTML pages and can use the CORBA architecture; on the other side, we also benefit from the efforts to integrate CORBA and the Web.

  14. Accounting Data to Web Interface Using PERL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargeaves, C

    2001-08-13

    This document explains the process of creating a web interface for the accounting information generated by the High Performance Storage System (HPSS) accounting report feature. The accounting report contains useful data, but the data are not easily accessed in a meaningful way; the report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful, and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented as text, with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify whether a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses the run date in its name. The summarized database is used to create the group.html web page, which includes a list of all storage users. Two additional web pages are generated by scripts that query the database folder to build a list of available databases. A master script, run monthly as part of a cron job after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible, data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server, and records run information in a log file. The rest of the scripts
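
    The scripts described are written in PERL; purely for illustration, here is a Python analogue of the first step, a filter that regex-verifies storage data lines and writes them to a detail database file named by run date. The line format and regex are hypothetical, since the HPSS accounting layout is not given here.

      # Sketch (Python analogue of the described PERL filter): keep only storage
      # data lines and write them to a date-named detail database file.
      import re, sys
      from datetime import date

      # Hypothetical layout: account, storage class, bytes stored.
      STORAGE_LINE = re.compile(r"^(\w+)\s+(\w+)\s+(\d+)$")

      def filter_report(lines):
          out_name = f"detail-{date.today():%Y%m%d}.db"
          with open(out_name, "w") as out:
              for line in lines:
                  m = STORAGE_LINE.match(line.strip())
                  if m:                        # keep only storage data lines
                      account, sclass, nbytes = m.groups()
                      out.write(f"{account}|{sclass}|{nbytes}\n")
          return out_name

      if __name__ == "__main__":
          # Written as a filter, in the spirit of the original scripts.
          print("wrote", filter_report(sys.stdin))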

  15. Web information retrieval based on ontology

    NASA Astrophysics Data System (ADS)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest for the user, so that much irrelevant information is returned, burdening the user with picking useful answers out of the irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms through the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  16. Locating and Searching Electronic Documents: A User Study of Supply Publications in the United States Marine Corps

    DTIC Science & Technology

    2007-12-01

    Boyle, “Important issues in hypertext documentation usability,” in Proceedings of the 9th Annual International Conference on Systems Documentation (Chicago, Illinois, 1991), SIGDOC '91, ACM, New York, NY... “Tufte’s principles of information design to creating effective Web sites,” in Proceedings of the 15th Annual International Conference on Computer...

  17. Food webs for parasitologists: a review.

    PubMed

    Sukhdeo, Michael V K

    2010-04-01

    This review examines the historical origins of food web theory and explores the reasons why parasites have traditionally been left out of food web studies. Current paradigms may still be an impediment because, despite several attempts, it remains virtually impossible to retrofit parasites into food web theory in any satisfactory manner. It seems clear that parasitologists must return to first principles to solve how best to incorporate parasites into ecological food webs, and a first step in changing paradigms will be to include parasites in the classic ecological patterns that inform food web theory. The limitations of current food web models are discussed with respect to their logistic exclusion of parasites, and the traditional matrix approach in food web studies is critically examined. The well-known energetic perspective on ecosystem organization is presented as a viable alternative to the matrix approach because it provides an intellectually powerful theoretical paradigm for generating testable hypotheses on true food web structure. This review proposes that to make significant contributions to the food web debate, parasitologists must work from the standpoint of natural history to elucidate patterns of biomass, species abundance, and interaction strengths in real food webs, and these will provide the basis for more realistic models that incorporate parasite dynamics into the overall functional dynamics of the whole web. A general conclusion is that only by quantifying the effects of parasites in terms of energy flows (or biomass) will we be able to correctly place parasites into food webs.

  18. Improving Web Searches: Case Study of Quit-Smoking Web Sites for Teenagers

    PubMed Central

    Skinner, Harvey

    2003-01-01

    Background The Web has become an important and influential source of health information. With the vast number of Web sites on the Internet, users often resort to popular search sites when searching for information. However, little is known about the characteristics of Web sites returned by simple Web searches for information about smoking cessation for teenagers. Objective To determine the characteristics of Web sites retrieved by search engines about smoking cessation for teenagers and how information quality correlates with the search ranking. Methods The top 30 sites returned by 4 popular search sites in response to the search terms "teen quit smoking" were examined. The information relevance and quality characteristics of these sites were evaluated by 2 raters. Objective site characteristics were obtained using a page-analysis Web site. Results Only 14 of the 30 Web sites are of direct relevance to smoking cessation for teenagers. The readability of about two-thirds of the 14 sites is below an eighth-grade school level and they ranked significantly higher (Kendall rank correlation, tau = -0.39, P = .05) in search-site results than sites with readability above or equal to that grade level. Sites that ranked higher were significantly associated with the presence of e-mail address for contact (tau = -0.46, P = .01), annotated hyperlinks to external sites (tau = -0.39, P = .04), and the presence of meta description tag (tau = -0.48, P = .002). The median link density (number of external sites that have a link to that site) of the Web pages was 6 and the maximum was 735. A higher link density was significantly associated with a higher rank (tau = -0.58, P = .02). Conclusions Using simple search terms on popular search sites to look for information on smoking cessation for teenagers resulted in less than half of the sites being of direct relevance. To improve search efficiency, users could supplement results obtained from simple Web searches with human-maintained Web
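
    For readers unfamiliar with the statistic reported above, a brief sketch of computing a Kendall rank correlation between search rank and a site property with scipy; the data are fabricated, and only the method matches the study.

      # Sketch: Kendall's tau between search-result rank and a site property.
      from scipy.stats import kendalltau

      rank        = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]    # position in results
      grade_level = [6, 7, 6, 8, 7, 9, 8, 10, 9, 11]   # readability per site

      tau, p_value = kendalltau(rank, grade_level)
      print(f"tau = {tau:.2f}, P = {p_value:.3f}")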

  19. 78 FR 66746 - Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-06

    ...] Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal Year 2014... and Drug Administration (FDA or the Agency) is announcing the Web site location where the Agency will... documents, FDA has committed to updating its Web site in a timely manner to reflect the Agency's review of...

  20. Readability of ASPS and ASAPS educational web sites: an analysis of consumer impact.

    PubMed

    Aliu, Oluseyi; Chung, Kevin C

    2010-04-01

    Patients use the Internet to educate themselves about health-related topics, and learning about plastic surgery is a common activity for enthusiastic consumers in the United States. How to educate consumers regarding plastic surgical procedures is a continued concern for plastic surgeons when faced with the growing portion of the American population having relatively low health care literacy. The usefulness of health-related education materials on the Internet depends largely on their comprehensibility and understandability for all who visit the Web sites. The authors studied the readability of patient education materials related to common plastic surgery procedures from the American Society of Plastic Surgeons (ASPS) and the American Society for Aesthetic Plastic Surgery (ASAPS) Web sites and compared them with materials on similar topics from 10 popular health information-providing sites. The authors found that all analyzed documents on the ASPS and ASAPS Web sites targeted to the consumers were rated to be more difficult than the recommended reading grade level for most American adults, and these documents were consistently among the most difficult to read when compared with the other health information Web sites. The Internet is an increasingly popular avenue for patients to educate themselves about plastic surgery procedures. Patient education material provided on ASPS and ASAPS Web sites should be written at recommended reading grade levels to ensure that it is readable and comprehensible to the targeted audience.
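
    A sketch of the kind of readability scoring such audits rely on: the widely used Flesch-Kincaid grade-level formula with a rough vowel-group syllable heuristic. The study does not state its exact formula; this one is assumed for illustration, and the sample text is fabricated.

      # Sketch: Flesch-Kincaid grade level with a rough syllable heuristic.
      import re

      def syllables(word):
          # Rough heuristic: count vowel groups, at least one per word.
          return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

      def fk_grade(text):
          sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          syl = sum(syllables(w) for w in words)
          return 0.39 * len(words) / len(sents) + 11.8 * syl / len(words) - 15.59

      sample = ("Rhinoplasty reshapes the nose. Recovery typically requires "
                "one to two weeks, and swelling may persist for months.")
      print(f"grade level: {fk_grade(sample):.1f}")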

  1. Vibration transmission through sheet webs of hobo spiders (Eratigena agrestis) and tangle webs of western black widow spiders (Latrodectus hesperus).

    PubMed

    Vibert, Samantha; Scott, Catherine; Gries, Gerhard

    2016-11-01

    Web-building spiders construct their own vibratory signaling environments. Web architecture should affect signal design, and vice versa, such that vibratory signals are transmitted with a minimum of attenuation and degradation. However, the web is the medium through which a spider senses both vibratory signals from courting males and cues produced by captured prey. Moreover, webs function not only in vibration transmission, but also in defense from predators and the elements. These multiple functions may impose conflicting selection pressures on web design. We investigated vibration transmission efficiency and accuracy through two web types with contrasting architectures: sheet webs of Eratigena agrestis (Agelenidae) and tangle webs of Latrodectus hesperus (Theridiidae). We measured vibration transmission efficiencies by playing frequency sweeps through webs with a piezoelectric vibrator and a loudspeaker, recording the resulting web vibrations at several locations on each web using a laser Doppler vibrometer. Transmission efficiencies through both web types were highly variable, with within-web variation greater than among-web variation. There was little difference in transmission efficiencies of longitudinal and transverse vibrations. The inconsistent transmission of specific frequencies through webs suggests that parameters other than frequency are most important in allowing these spiders to distinguish between vibrations of prey and courting males.

  2. Search Interface Design Using Faceted Indexing for Web Resources.

    ERIC Educational Resources Information Center

    Devadason, Francis; Intaraksa, Neelawat; Patamawongjariya, Pornprapa; Desai, Kavita

    2001-01-01

    Describes an experimental system designed to organize and provide access to Web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. (AEF)

  3. Visual Communication in Web Design - Analyzing Visual Communication in Web Design

    NASA Astrophysics Data System (ADS)

    Thorlacius, Lisbeth

    Web sites are rapidly becoming the preferred media choice for information search, company presentation, shopping, entertainment, education, and social contacts. Along with the various forms of communication that the Web offers, aesthetic aspects have begun to play an increasingly important role. However, the design and the relevance of focusing on aesthetic aspects in planning and using Web sites have only to a small degree been the subject of theoretical reflection. For example, Miller (2000), Thorlacius (2001, 2002, 2005), Engholm (2002, 2003), and Beaird (2007) have contributed to setting a beginning agenda that addresses the aesthetic aspects. On the other hand, there is a considerable amount of literature addressing theoretical and methodological questions focused on the technical and functional aspects. In this context, the aim of this article is to introduce a model for the analysis of visual communication on websites.

  4. MODPATH-LGR; documentation of a computer program for particle tracking in shared-node locally refined grids by using MODFLOW-LGR

    USGS Publications Warehouse

    Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.

  5. WebViz: A web browser based application for collaborative analysis of 3D data

    NASA Astrophysics Data System (ADS)

    Ruegg, C. S.

    2011-12-01

    In the age of high-speed Internet, where people can interact instantly, scientific tools have lacked technology that can incorporate this concept of communication using the web. To solve this issue, a web application for geological studies has been created, tentatively titled WebViz. This web application utilizes tools provided by the Google Web Toolkit to create an AJAX web application capable of features found in non-web-based software. Using these tools, a web application can be created that acts as a piece of software usable from anywhere in the globe with a reasonably fast Internet connection. An application of this technology can be seen with data regarding the tsunami from the recent major Japan earthquakes. After constructing the appropriate data for the rendering software HVR, WebViz can request images of the tsunami data and display them to anyone who has access to the application. This convenience alone makes WebViz a viable solution, and the option to interact with the data with others around the world makes WebViz a serious computational tool. WebViz can also be used in any JavaScript-enabled browser, such as those found on modern tablets and smart phones, over a fast wireless connection. Because WebViz is currently built using the Google Web Toolkit, the portability of the application is in its most efficient form. Though many developers have been involved with the project, each person has contributed to increasing the usability and speed of the application. In the project's most recent form, a dramatic speed increase has been designed, as well as a more efficient user interface. The speed increase has been informally noticed in recent uses of the application in China and Australia, with the hosting server located at the University of Minnesota. The user interface has been improved in both appearance and functionality. Major functions of the application include rotating the 3D object using buttons

  6. MetExploreViz: web component for interactive metabolic network visualization.

    PubMed

    Chazalviel, Maxime; Frainay, Clément; Poupin, Nathalie; Vinson, Florence; Merlet, Benjamin; Gloaguen, Yoann; Cottret, Ludovic; Jourdan, Fabien

    2017-09-15

    MetExploreViz is an open source web component that can be easily embedded in any web site. It provides features dedicated to the visualization of metabolic networks and pathways and thus offers a flexible solution to analyze omics data in a biochemical context. Documentation and a link to the GIT code repository (GPL 3.0 license) are available at this URL: http://metexplore.toulouse.inra.fr/metexploreViz/doc/. Tutorial is available at this URL. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  7. SureChEMBL: a large-scale, chemically annotated patent document database

    PubMed Central

    Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A.; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P.

    2016-01-01

    SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. PMID:26582922

  8. Teaching Students about Plagiarism Using a Web-Based Module

    ERIC Educational Resources Information Center

    Stetter, Maria Earman

    2013-01-01

    The following research delivered a web-based module about plagiarism and paraphrasing to avoid plagiarism in both a blended method, with live instruction paired with web presentation for 105 students, and a separate web-only method for 22 other students. Participants were graduates and undergraduates preparing to become teachers, the majority of…

  9. Concepts and Technologies for a Comprehensive Information System for Historical Research and Heritage Documentation

    NASA Astrophysics Data System (ADS)

    Henze, F.; Magdalinski, N.; Schwarzbach, F.; Schulze, A.; Gerth, Ph.; Schäfer, F.

    2013-07-01

    Information systems play an important role in historical research as well as in heritage documentation. As part of a joint research project of the German Archaeological Institute, the Brandenburg University of Technology Cottbus and the Dresden University of Applied Sciences, a web-based documentation system is currently being developed which can easily be adapted to the needs of different projects with individual scientific concepts, methods and questions. Based on open-source and standardized technologies, it will focus on open and well-documented interfaces to ease the dissemination and re-use of its content via web services and to communicate with desktop applications for further evaluation and analysis. The core of the system is a generic data model that represents a wide range of topics and methods of archaeological work. By providing a concerted set of initial themes and attributes, cross-project analysis of research data will be possible. The development of enhanced search and retrieval functionalities will simplify the processing and handling of large heterogeneous data sets. To achieve a high degree of interoperability with existing external data, systems and applications, standardized interfaces will be integrated. The analysis of spatial data shall be possible through the integration of web-based GIS functions. As an extension to this, customized functions for the storage, processing and provision of 3D geodata are being developed. As part of this contribution, system requirements and concepts will be presented and discussed. A particular focus will be on introducing the generic data model and the derived database schema. The research work on enhanced search and retrieval capabilities will be illustrated by prototypical developments, as well as concepts and first implementations for an integrated 2D/3D Web-GIS.

  10. An Evaluative Methodology for Virtual Communities Using Web Analytics

    ERIC Educational Resources Information Center

    Phippen, A. D.

    2004-01-01

    The evaluation of virtual community usage and user behaviour has its roots in social science approaches such as interview, document analysis and survey. Little evaluation is carried out using traffic or protocol analysis. Business approaches to evaluating customer/business web site usage are more advanced, in particular using advanced web…

  11. MetaSpider: Meta-Searching and Categorization on the Web.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel

    2001-01-01

    Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…

  12. 50 CFR 300.185 - Documentation, reporting and recordkeeping requirements for consignment documents and re-export...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... recordkeeping requirements for consignment documents and re-export certificates. 300.185 Section 300.185..., reporting and recordkeeping requirements for consignment documents and re-export certificates. (a) Imports... apply only to entries for consumption. The reporting requirements of paragraph (a)(3) of this section do...

  13. 50 CFR 300.185 - Documentation, reporting and recordkeeping requirements for consignment documents and re-export...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... recordkeeping requirements for consignment documents and re-export certificates. 300.185 Section 300.185..., reporting and recordkeeping requirements for consignment documents and re-export certificates. (a) Imports... apply only to entries for consumption. The reporting requirements of paragraph (a)(3) of this section do...

  14. 50 CFR 300.185 - Documentation, reporting and recordkeeping requirements for consignment documents and re-export...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... recordkeeping requirements for consignment documents and re-export certificates. 300.185 Section 300.185..., reporting and recordkeeping requirements for consignment documents and re-export certificates. (a) Imports... apply only to entries for consumption. The reporting requirements of paragraph (a)(3) of this section do...

  15. HEP Outreach, Inreach, and Web 2.0

    NASA Astrophysics Data System (ADS)

    Goldfarb, Steven

    2011-12-01

    I report on current usage of multimedia and social networking "Web 2.0" tools for Education and Outreach in high-energy physics, and discuss their potential for internal communication within large worldwide collaborations, such as those of the LHC. Following a brief description of the history of Web 2.0 development, I present a survey of the most popular sites and describe their usage in HEP to disseminate information to students and the general public. I then discuss the potential of certain specific tools, such as document and multimedia sharing sites, for boosting the speed and effectiveness of information exchange within the collaborations. I conclude with a brief discussion of the successes and failures of these tools, and make suggestions for improved usage in the future.

  16. Earth System Documentation (ES-DOC) Preparation for CMIP6

    NASA Astrophysics Data System (ADS)

    Denvil, S.; Murphy, S.; Greenslade, M. A.; Lawrence, B.; Guilyardi, E.; Pascoe, C.; Treshanksy, A.; Elkington, M.; Hibling, E.; Hassell, D.

    2015-12-01

    During the course of 2015, the Earth System Documentation (ES-DOC) project began its preparations for CMIP6 (Coupled Model Inter-comparison Project 6) by further extending the ES-DOC tooling ecosystem in support of Earth System Model (ESM) documentation creation, search, viewing, and comparison. The ES-DOC online questionnaire, the ES-DOC desktop notebook, and the ES-DOC python toolkit will serve as multiple complementary pathways to generating CMIP6 documentation. It is envisaged that institutes will leverage these tools at different points of the CMIP6 lifecycle. Institutes will be particularly interested to know that the documentation burden will be either streamlined or completely automated. As all the tools are tightly integrated with the ES-DOC web service, institutes can be confident that the latency between documentation creation and publishing will be reduced to a minimum. Published documents will be viewable with the online ES-DOC Viewer (accessible via citable URLs). Model inter-comparison scenarios will be supported using the ES-DOC online Comparator tool. The Comparator is being extended to support comparison of both model descriptions and simulation runs, and to greatly streamline the effort involved in compiling official tables. The entire ES-DOC ecosystem is open source and built upon open standards such as the Common Information Model (CIM) (versions 1 and 2).

  17. Tobacco related bar promotions: insights from tobacco industry documents

    PubMed Central

    Katz, S; Lavack, A

    2002-01-01

    Design: Over 2000 tobacco industry documents available as a result of the Master Settlement Agreement were reviewed on the internet at several key web sites using keyword searches that included "bar", "night", "pub", "party", and "club". The majority of the documents deal with the US market, with a minor emphasis on Canadian and overseas markets. Results: The documents indicate that bar promotions are important for creating and maintaining brand image, and are generally targeted at a young adult audience. Several measures of the success of these promotions are used, including number of individuals exposed to the promotion, number of promotional items given away, and increased sales of a particular brand during and after the promotion. Conclusion: Bar promotions position cigarettes as being part of a glamorous lifestyle that includes attendance at nightclubs and bars, and appear to be highly successful in increasing sales of particular brands. PMID:11893819

  18. Fuzzy Document Clustering Approach using WordNet Lexical Categories

    NASA Astrophysics Data System (ADS)

    Gharib, Tarek F.; Fouad, Mohammed M.; Aref, Mostafa M.

    Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. This area is growing rapidly, mainly because of the strong need to analyse the huge amount of textual data that resides on internal file systems and the Web. Text document clustering provides an effective navigation mechanism to organize this large amount of data by grouping documents into a small number of meaningful classes. In this paper we propose a fuzzy text document clustering approach using WordNet lexical categories and the Fuzzy c-Means algorithm. Experiments are performed to compare the efficiency of the proposed approach with recently reported approaches. Experimental results show that fuzzy clustering leads to strong performance, and that the fuzzy c-means algorithm outperforms classical clustering algorithms such as k-means and bisecting k-means in both clustering quality and running-time efficiency.
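
    A minimal fuzzy c-means sketch over tf-idf document vectors, implementing the standard membership and centroid update equations with fuzzifier m = 2. The WordNet lexical-category step described above is omitted; raw terms stand in for it, and the documents are toy data.

      # Sketch: fuzzy c-means over tf-idf vectors (standard update equations).
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = ["stocks fell on market fears", "the market rallied as stocks rose",
              "the team won the final match", "a late goal decided the match"]
      X = TfidfVectorizer().fit_transform(docs).toarray()

      def fuzzy_cmeans(data, c=2, m=2.0, iters=50, seed=0):
          rng = np.random.default_rng(seed)
          U = rng.random((c, len(data)))
          U /= U.sum(axis=0)                    # memberships sum to 1 per document
          for _ in range(iters):
              W = U ** m
              centroids = (W @ data) / W.sum(axis=1, keepdims=True)
              d = np.linalg.norm(data[None, :, :] - centroids[:, None, :],
                                 axis=2) + 1e-9
              # u_ij = d_ij^(-2/(m-1)) / sum_k d_kj^(-2/(m-1))
              p = 2 / (m - 1)
              U = 1.0 / (d ** p * np.sum(d ** (-p), axis=0))
          return U, centroids

      U, _ = fuzzy_cmeans(X)
      for j, doc in enumerate(docs):
          print(f"{U[:, j].round(2)}  {doc}")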

  19. Web 2.0 and Pharmacy Education

    PubMed Central

    Fox, Brent I.

    2009-01-01

    New types of social Internet applications (often referred to as Web 2.0) are becoming increasingly popular within higher education environments. Although developed primarily for entertainment and social communication within the general population, applications such as blogs, social video sites, and virtual worlds are being adopted by higher education institutions. These newer applications differ from standard Web sites in that they involve the users in creating and distributing information, hence effectively changing how the Web is used for knowledge generation and dispersion. Although Web 2.0 applications offer exciting new ways to teach, they should not be the core of instructional planning, but rather selected only after learning objectives and instructional strategies have been identified. This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present. PMID:19960079

  20. Web 2.0 and pharmacy education.

    PubMed

    Cain, Jeff; Fox, Brent I

    2009-11-12

    New types of social Internet applications (often referred to as Web 2.0) are becoming increasingly popular within higher education environments. Although developed primarily for entertainment and social communication within the general population, applications such as blogs, social video sites, and virtual worlds are being adopted by higher education institutions. These newer applications differ from standard Web sites in that they involve the users in creating and distributing information, hence effectively changing how the Web is used for knowledge generation and dispersion. Although Web 2.0 applications offer exciting new ways to teach, they should not be the core of instructional planning, but rather selected only after learning objectives and instructional strategies have been identified. This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present.

  1. A Quantitative Cost Effectiveness Model for Web-Supported Academic Instruction

    ERIC Educational Resources Information Center

    Cohen, Anat; Nachmias, Rafi

    2006-01-01

    This paper describes a quantitative cost effectiveness model for Web-supported academic instruction. The model was designed for Web-supported instruction (rather than distance learning only) characterizing most of the traditional higher education institutions. It is based on empirical data (Web logs) of students' and instructors' usage…

  2. Health on the Net Foundation: assessing the quality of health web pages all over the world.

    PubMed

    Boyer, Célia; Gaudinat, Arnaud; Baujard, Vincent; Geissbühler, Antoine

    2007-01-01

    The Internet provides a great amount of information and has become one of the most widely used communication media [1]. However, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly relevant in the medical area, which has a direct impact on the well-being of citizens. In this paper, we assume that the quality of web pages can be controlled, even when a huge number of documents has to be reviewed, but that this must be supported by both specific automatic tools and human expertise. In this context, we present various initiatives of the Health on the Net Foundation for informing citizens about the reliability of medical content on the web.

  3. Health search engine with e-document analysis for reliable search results.

    PubMed

    Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine

    2006-01-01

    After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes WRAPIN, an original specialized search engine. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions, and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access to navigate through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, explanation of the original functionalities, and adaptation to different audiences. Thanks to the evaluation outcomes, WRAPIN is now in operation on the HON web site (http://www.healthonnet.org), free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user looks for trustworthy health and medical information or wants to automatically check the doubtful content of a Web page.

  4. A Model for Enhancing Internet Medical Document Retrieval with “Medical Core Metadata”

    PubMed Central

    Malet, Gary; Munoz, Felix; Appleyard, Richard; Hersh, William

    1999-01-01

    Objective: Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. Design: The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and Medline-type content descriptions. Results: The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. Conclusions: The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines. PMID:10094069

  5. Web Content Management Systems: An Analysis of Forensic Investigatory Challenges.

    PubMed

    Horsman, Graeme

    2018-02-26

    With an increase in the creation and maintenance of personal websites, web content management systems (WCMSs) are now frequently utilized. Such systems offer a low-cost and simple solution for those seeking to develop an online presence and, subsequently, a platform from which reported defamatory content, abuse, and copyright infringement have been witnessed. This article provides an introductory forensic analysis of the three currently most popular web content management systems: WordPress, Drupal, and Joomla!. Test platforms have been created, and their site structures have been examined to provide guidance for forensic practitioners facing investigations of this type. Results document available metadata for establishing site ownership, user interactions, and stored content, following analysis of artifacts including WordPress's wp_users and wp_comments tables, Drupal's "watchdog" records, and Joomla!'s _users and _content tables. Finally, investigatory limitations documenting the difficulties of investigating WCMS usage are noted, and analysis recommendations are offered. © 2018 American Academy of Forensic Sciences.
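
    A sketch of the kind of triage the article describes, assuming the seized WordPress tables have been exported into SQLite (live sites use MySQL): the wp_users and wp_comments tables yield ownership and interaction records. The evidence file name is hypothetical; the column names are WordPress's standard ones.

      # Sketch: query exported WordPress tables for ownership and interactions.
      import sqlite3

      con = sqlite3.connect("seized_wordpress.db")   # hypothetical evidence export

      print("-- registered users (site ownership candidates) --")
      for row in con.execute(
              "SELECT user_login, user_email, user_registered FROM wp_users"):
          print(row)

      print("-- visitor interactions --")
      for row in con.execute(
              """SELECT comment_author, comment_author_email,
                        comment_date, comment_content
                 FROM wp_comments ORDER BY comment_date"""):
          print(row)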

  6. Panning for Gold: Utility of the World Wide Web for Metadata and Authority Control in Special Collections.

    ERIC Educational Resources Information Center

    Ellero, Nadine P.

    2002-01-01

    Describes the use of the World Wide Web as a name authority resource and tool for special collections' analytic-level cataloging, based on experiences at The Claude Moore Health Sciences Library. Highlights include primary documents and metadata; authority control and the Web as authority source information; and future possibilities. (Author/LRW)

  7. Comparison of historical documents for writership

    NASA Astrophysics Data System (ADS)

    Ball, Gregory R.; Pu, Danjun; Stritmatter, Roger; Srihari, Sargur N.

    2010-01-01

    Over the last century forensic document science has developed progressively more sophisticated pattern recognition methodologies for ascertaining the authorship of disputed documents. These include advances not only in computer assisted stylometrics, but forensic handwriting analysis. We present a writer verification method and an evaluation of an actual historical document written by an unknown writer. The questioned document is compared against two known handwriting samples of Herman Melville, a 19th century American author who has been hypothesized to be the writer of this document. The comparison led to a high confidence result that the questioned document was written by the same writer as the known documents. Such methodology can be applied to many such questioned documents in historical writing, both in literary and legal fields.

  8. Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud

    NASA Astrophysics Data System (ADS)

    Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI), a mature application in existence for more than 10 years, is currently being adapted to recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript and XML (AJAX). Web development is considerably simplified by the introduction of a framework based on jQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.

  9. Food web complexity and stability across habitat connectivity gradients.

    PubMed

    LeCraw, Robin M; Kratina, Pavel; Srivastava, Diane S

    2014-12-01

    The effects of habitat connectivity on food webs have been studied both empirically and theoretically, yet the question of whether empirical results support theoretical predictions for any food web metric other than species richness has received little attention. Our synthesis brings together theory and empirical evidence for how habitat connectivity affects both food web stability and complexity. Food web stability is often predicted to be greatest at intermediate levels of connectivity, representing a compromise between the stabilizing effects of dispersal via rescue effects and prey switching, and the destabilizing effects of dispersal via regional synchronization of population dynamics. Empirical studies of food web stability generally support both this pattern and underlying mechanisms. Food chain length has been predicted to have both increasing and unimodal relationships with connectivity as a result of predators being constrained by the patch occupancy of their prey. Although both patterns have been documented empirically, the underlying mechanisms may differ from those predicted by models. In terms of other measures of food web complexity, habitat connectivity has been empirically found to generally increase link density but either reduce or have no effect on connectance, whereas a unimodal relationship is expected. In general, there is growing concordance between empirical patterns and theoretical predictions for some effects of habitat connectivity on food webs, but many predictions remain to be tested over a full connectivity gradient, and empirical metrics of complexity are rarely modeled. Closing these gaps will allow a deeper understanding of how natural and anthropogenic changes in connectivity can affect real food webs.

  10. Distributed data collection and supervision based on web sensor

    NASA Astrophysics Data System (ADS)

    He, Pengju; Dai, Guanzhong; Fu, Lei; Li, Xiangjun

    2006-11-01

    As a node on the Internet/intranet, the web sensor has been promoted in recent years and widely applied in remote manufacturing and in workshop measurement and control. However, the conventional scheme supports only the HTTP protocol: because of the limited resources of the sensor's microprocessor, remote users supervise and control the collected data published as web pages in a standard browser, and only one data-acquisition node can be supervised and controlled at any instant, so the requirement for centralized remote supervision, control, and data processing cannot be satisfied in some fields. In this paper, centralized remote supervision, control, and data processing with web sensors are proposed and implemented on the principle of a device driver program. The useless information in each web page embedded in a sensor is filtered out and the useful data are transmitted to a real-time database on the workstation, with different filter algorithms designed for the different sensors, each possessing an independent web page. Every sensor node has its own web filter program, called a "web data collection driver program"; the collection details are hidden, so supervision, control, and configuration software can call the web data collection driver program just as it would use an I/O driver program. The proposed technology can be applied to data acquisition where only relatively relaxed real-time performance is required.
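
    The driver-program idea is straightforward to illustrate. A minimal sketch in Python of one such "web data collection driver", assuming a hypothetical sensor page layout, URL, and workstation database schema (a real driver would carry a filter tailored to each sensor's page):

```python
# Fetch the web page a sensor embeds, filter out the markup, and push the
# reading into the workstation-side real-time database.
import re
import sqlite3
import urllib.request

SENSOR_URL = "http://192.168.1.50/status.html"  # hypothetical sensor address

def read_sensor(url: str) -> float:
    html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8")
    match = re.search(r"Temperature:\s*([-\d.]+)", html)  # filter step
    if match is None:
        raise ValueError("no reading found in sensor page")
    return float(match.group(1))

conn = sqlite3.connect("realtime.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES (datetime('now'), ?)",
             (read_sensor(SENSOR_URL),))
conn.commit()
conn.close()
```

    Supervision software then reads from the database, shielded from the page-scraping details, just as it would sit above an I/O driver.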

  11. Non-visual Web Browsing: Beyond Web Accessibility

    PubMed Central

    Ramakrishnan, I.V.; Ashok, Vikas

    2017-01-01

    People with vision impairments typically use screen readers to browse the Web. To facilitate non-visual browsing, web sites must be made accessible to screen readers, i.e., all the visible elements in the web site must be readable by the screen reader. But even if web sites are accessible, screen-reader users may not find them easy to use and/or easy to navigate. For example, they may not be able to locate the desired information without having to listen to a lot of irrelevant content. These issues go beyond web accessibility and directly impact web usability. Several techniques have been reported in the accessibility literature for making the Web usable for screen reading. This paper is a review of these techniques. Interestingly, the review reveals that understanding the semantics of the web content is the overarching theme that drives these techniques for improving web usability. PMID:29202137

  12. Non-visual Web Browsing: Beyond Web Accessibility.

    PubMed

    Ramakrishnan, I V; Ashok, Vikas; Billah, Syed Masum

    2017-07-01

    People with vision impairments typically use screen readers to browse the Web. To facilitate non-visual browsing, web sites must be made accessible to screen readers, i.e., all the visible elements in the web site must be readable by the screen reader. But even if web sites are accessible, screen-reader users may not find them easy to use and/or easy to navigate. For example, they may not be able to locate the desired information without having to listen to a lot of irrelevant content. These issues go beyond web accessibility and directly impact web usability. Several techniques have been reported in the accessibility literature for making the Web usable for screen reading. This paper is a review of these techniques. Interestingly, the review reveals that understanding the semantics of the web content is the overarching theme that drives these techniques for improving web usability.

  13. Experience of Integrating Web 2.0 Technologies

    ERIC Educational Resources Information Center

    Zdravkova, Katerina; Ivanovic, Mirjana; Putnik, Zoran

    2012-01-01

    Web users in the 21st century are no longer only passive consumers. On the contrary, they are active contributors willing to obtain, share and evolve information. In this paper we report our experience regarding the implementation of the Web 2.0 concept in several Computer Ethics related courses jointly conducted at two Universities. These courses have…

  14. A web access script language to support clinical application development.

    PubMed

    O'Kane, K C; McColligan, E E

    1998-02-01

    This paper describes the development of a script language to support the implementation of decentralized, clinical information applications on the World Wide Web (Web). The goal of this work is to facilitate construction of low-overhead, fully functional clinical information systems that can be accessed anywhere by low-cost Web browsers to search, retrieve and analyze stored patient data. The Web provides a model of network access to databases on a global scale. Although it was originally conceived as a means to exchange scientific documents, Web browsers and servers currently support access to a wide variety of audio, video, graphical and text-based data for a rapidly growing community. Access to these services is via inexpensive client software browsers that connect to servers by means of the open architecture of the Internet. In this paper, the design and implementation of a script language that supports the development of low-cost, Web-based, distributed clinical information systems for both Internet and intranet use is presented. The language is based on the Mumps language and, consequently, supports many legacy applications with few modifications. Several enhancements, however, have been made to support modern programming practices and the Web interface. The interpreter for the language also supports standalone program execution on Unix, MS-Windows, OS/2 and other operating systems.

  15. XML and its impact on content and structure in electronic health care documents.

    PubMed Central

    Sokolowski, R.; Dudeck, J.

    1999-01-01

    Worldwide information networks require that electronic documents be easily accessible, portable, flexible, and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself is about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions, that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338

  16. Automated software system for checking the structure and format of ACM SIG documents

    NASA Astrophysics Data System (ADS)

    Mirza, Arsalan Rahman; Sah, Melike

    2017-04-01

    Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents are automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents in OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation, and user study evaluations.
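
    The OOXML starting point is easy to demonstrate: a .docx file is a ZIP archive whose word/document.xml part carries the structure a checker reasons over. A minimal sketch in Python that lists paragraph styles and text (the input filename is hypothetical, and the ontology and rule layers described in the paper are not reproduced):

```python
import xml.etree.ElementTree as ET
import zipfile

# WordprocessingML namespace used throughout document.xml.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

with zipfile.ZipFile("paper.docx") as docx:      # hypothetical input file
    root = ET.fromstring(docx.read("word/document.xml"))

for para in root.iter(f"{W}p"):
    style = para.find(f"{W}pPr/{W}pStyle")       # e.g. Heading1, Title
    style_id = style.get(f"{W}val") if style is not None else "Normal"
    text = "".join(node.text or "" for node in para.iter(f"{W}t"))
    if text.strip():
        print(f"[{style_id}] {text[:60]}")
```

    Format rules (say, "the first paragraph must use the Title style") can then be checked against the extracted style sequence.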

  17. Creating a course-based web site in a university environment

    NASA Astrophysics Data System (ADS)

    Robin, Bernard R.; Mcneil, Sara G.

    1997-06-01

    The delivery of educational materials is undergoing a remarkable change from the traditional lecture method to dissemination of courses via the World Wide Web. This paradigm shift from a paper-based structure to an electronic one has profound implications for university faculty. Students are enrolling in classes with the expectation of using technology and logging on to the Internet, and professors are realizing that the potential of the Web can have a significant impact on classroom activities. An effective method of integrating electronic technologies into teaching and learning is to publish classroom materials on the World Wide Web. Already, many faculty members are creating their own home pages and Web sites for courses that include syllabi, handouts, and student work. Additionally, educators are finding value in adding hypertext links to a wide variety of related Web resources from online research and electronic journals to government and commercial sites. A number of issues must be considered when developing course-based Web sites. These include meeting the needs of a target audience, designing effective instructional materials, and integrating graphics and other multimedia components. There are also numerous technical issues that must be addressed in developing, uploading and maintaining HTML documents. This article presents a model for a university faculty who want to begin using the Web in their teaching and is based on the experiences of two College of Education professors who are using the Web as an integral part of their graduate courses.

  18. Tobacco related bar promotions: insights from tobacco industry documents.

    PubMed

    Katz, S K; Lavack, A M

    2002-03-01

    To examine the tobacco industry's use of bar promotions, including their target groups, objectives, strategies, techniques, and results. Over 2000 tobacco industry documents available as a result of the Master Settlement Agreement were reviewed on the internet at several key web sites using keyword searches that included "bar", "night", "pub", "party", and "club". The majority of the documents deal with the US market, with a minor emphasis on Canadian and overseas markets. The documents indicate that bar promotions are important for creating and maintaining brand image, and are generally targeted at a young adult audience. Several measures of the success of these promotions are used, including number of individuals exposed to the promotion, number of promotional items given away, and increased sales of a particular brand during and after the promotion. Bar promotions position cigarettes as being part of a glamorous lifestyle that includes attendance at nightclubs and bars, and appear to be highly successful in increasing sales of particular brands.

  19. A survey of World Wide Web use in middle-aged and older adults.

    PubMed

    Morrell, R W; Mayhorn, C B; Bennett, J

    2000-01-01

    We conducted a survey to document World Wide Web use patterns in middle-aged (ages 40-59), young-old (ages 60-74), and old-old adults (ages 75-92). We conducted this survey of 550 adults 40 years old and over in southeastern Michigan, and the overall response rate was approximately 71%. The results suggested that (a) there are distinct age and demographic differences in individuals who use the Web; (b) middle-aged and older Web users are similar in their use patterns; (c) the two primary predictors for not using the Web are lack of access to a computer and lack of knowledge about the Web; (d) old-old adults have the least interest in using the Web compared with middle-aged and young-old adults; and (e) the primary content areas in learning how to use the Web are learning how to use electronic mail and accessing health information and information about traveling for pleasure. This research may serve as a preliminary attempt to ascertain the issues that must be faced in order to increase use of the World Wide Web in middle-aged and older adults.

  20. Network dynamics: The World Wide Web

    NASA Astrophysics Data System (ADS)

    Adamic, Lada Ariana

    Despite its rapidly growing and dynamic nature, the Web displays a number of strong regularities which can be understood by drawing on methods of statistical physics. This thesis finds power-law distributions in website sizes, traffic, and links, and more importantly, develops a stochastic theory which explains them. Power-law link distributions are shown to lead to network characteristics which are especially suitable for scalable localized search. It is also demonstrated that the Web is a "small world": to reach one site from any other takes an average of only 4 hops, while most related sites cluster together. Additional dynamical properties of the Web graph are extracted from diffusion processes.
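
    For reference, the two regularities named here are conventionally written as follows; these are the standard forms from the statistical-physics literature, with the exponent and mean degree left symbolic rather than quoting fitted values from the thesis:

```latex
% Power-law distribution of site sizes, traffic, and link counts:
P(k) \;\propto\; k^{-\gamma}
% Small-world scaling of the mean shortest path for a random graph
% with N nodes and mean degree <k> (about 4 hops for the Web above):
\ell \;\sim\; \frac{\ln N}{\ln \langle k \rangle}
```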

  1. The Impact on Education of the World Wide Web.

    ERIC Educational Resources Information Center

    Hobbs, D. J.; Taylor, R. J.

    This paper describes a project which created a set of World Wide Web (WWW) pages documenting the state of the art in educational multimedia design; a prototype WWW-based multimedia teaching tool--a podiatry test using HTML forms, 24-bit color images and MPEG video--was also designed, developed, and evaluated. The project was conducted between…

  2. Easy Web Interfaces to IDL Code for NSTX Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W.M. Davis

    Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and not needing X-windows, accounts, passwords, etc. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.

  3. The New Frontier: Conquering the World Wide Web by Mule.

    ERIC Educational Resources Information Center

    Gresham, Morgan

    1999-01-01

    Examines effects of teaching hypertext markup language on students' perceptions of class goals in a networked composition classroom. Suggests sending documents via file transfer protocol by command line and viewing the Web with a textual browser shifted emphasis from writing to coding. Argues that helping students identify a balance between…

  4. Engineering a Multi-Purpose Test Collection for Web Retrieval Experiments.

    ERIC Educational Resources Information Center

    Bailey, Peter; Craswell, Nick; Hawking, David

    2003-01-01

    Describes a test collection that was developed as a multi-purpose testbed for experiments on the Web in distributed information retrieval, hyperlink algorithms, and conventional ad hoc retrieval. Discusses inter-server connectivity, integrity of server holdings, inclusion of documents related to a wide spread of likely queries, and distribution of…

  5. 14 CFR 302.3 - Filing of documents.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... in Washington, DC. Documents may be filed either on paper or by electronic means using the process set at the DOT Dockets Management System (DMS) internet website. (2) Such documents will be deemed to... below the space provided for signature. Electronic filers need only submit one copy of the document...

  6. 14 CFR 302.3 - Filing of documents.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... in Washington, DC. Documents may be filed either on paper or by electronic means using the process set at the DOT Dockets Management System (DMS) internet website. (2) Such documents will be deemed to... below the space provided for signature. Electronic filers need only submit one copy of the document...

  7. 14 CFR 302.3 - Filing of documents.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... in Washington, DC. Documents may be filed either on paper or by electronic means using the process set at the DOT Dockets Management System (DMS) internet website. (2) Such documents will be deemed to... below the space provided for signature. Electronic filers need only submit one copy of the document...

  8. 14 CFR 302.3 - Filing of documents.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... in Washington, DC. Documents may be filed either on paper or by electronic means using the process set at the DOT Dockets Management System (DMS) internet website. (2) Such documents will be deemed to... below the space provided for signature. Electronic filers need only submit one copy of the document...

  9. WEB-IS2: Next Generation Web Services Using Amira Visualization Package

    NASA Astrophysics Data System (ADS)

    Yang, X.; Wang, Y.; Bollig, E. F.; Kadlec, B. J.; Garbow, Z. A.; Yuen, D. A.; Erlebacher, G.

    2003-12-01

    Amira (www.amiravis.com) is a powerful 3-D visualization package and has been employed recently by the science and engineering communities to gain insight into their data. We present a new web-based interface to Amira, packaged in a Java applet. We have developed a module called WEB-IS/Amira (WEB-IS2), which provides web-based access to Amira. This tool allows earth scientists to manipulate Amira controls remotely and to analyze, render and view large datasets over the internet, without regard for time or location. This could have important ramifications for GRID computing. The design of our implementation will soon allow multiple users to visually collaborate by manipulating a single dataset through a variety of client devices. These clients will only require a browser capable of displaying Java applets. As the deluge of data continues, innovative solutions that maximize ease of use without sacrificing efficiency or flexibility will continue to gain in importance, particularly in the Earth sciences. Major initiatives, such as Earthscope (http://www.earthscope.org), which will generate at least a terabyte of data daily, stand to profit enormously from a system such as WEB-IS/Amira (WEB-IS2). We discuss our use of SOAP (Livingston, D., Advanced SOAP for Web development, Prentice Hall, 2002), a novel 2-way communication protocol, as a means of providing remote commands, and efficient point-to-point transfer of binary image data. We will present our initial experiences with the use of Naradabrokering (www.naradabrokering.org) as a means to decouple clients and servers. Information is submitted to the system as a published item, while it is retrieved through a subscription mechanism, via what are known as "topics". These topic headers, their contents, and the list of subscribers are automatically tracked by Naradabrokering. This novel approach promises a high degree of fault tolerance, flexibility with respect to client diversity, and language independence for the
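
    At the wire level, the SOAP layer mentioned above amounts to posting an XML envelope over HTTP. A minimal sketch in Python, in which the endpoint, namespace, and operation name are all hypothetical (WEB-IS2's actual WSDL is not reproduced here):

```python
# Send one remote command as a SOAP 1.1 envelope over HTTP POST.
import urllib.request

ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <rotateView xmlns="urn:web-is2">
      <datasetId>quake42</datasetId>
      <angle>15.0</angle>
    </rotateView>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://example.org/webis2/soap",        # hypothetical endpoint
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:web-is2#rotateView"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))   # status or rendered-image URL
```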

  10. SureChEMBL: a large-scale, chemically annotated patent document database.

    PubMed

    Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P

    2016-01-04

    SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Web-Writing in One Minute--and Beyond.

    ERIC Educational Resources Information Center

    Hughes, Kenneth

    This paper describes how librarians can teach patrons the basics of hypertext markup language (HTML) so that patrons can publish their own homepages on the World Wide Web. With proper use of handouts and practice time afterwards, the three basics of HTML can be conveyed in only 60 seconds. The three basics are: the basic template of Web tags, used…

  12. Hanford science and technology needs statements document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piper, L.L.

    This document is a compilation of the Hanford science and technology needs statements for FY 1998. The needs were developed by the Hanford Site Technology Coordination Group (STCG) with full participation and endorsement of site user organizations, stakeholders, and regulators. The purpose of this document is to: (a) provide a comprehensive listing of Hanford science and technology needs, and (b) identify partnering and commercialization opportunities with industry, other federal and state agencies, and the academic community. The Hanford STCG reviews and updates the needs annually. Once completed, the needs are communicated to DOE for use in the development and prioritization of their science and technology programs, including the Focus Areas, Cross-Cutting Programs, and the Environmental Management Science Program. The needs are also transmitted to DOE through the Accelerating Cleanup: 2006 Plan. The public may access the need statements on the Internet on: the Hanford Home Page (www.hanford.gov), the Pacific Rim Enterprise Center's web site (www2.pacific-rim.org/pacific rim), or the STCG web site at DOE headquarters (em-52.em.doegov/ifd/stcg/stcg.htm). This page includes links to science and technology needs for many DOE sites. Private industry is encouraged to review the need statements and contact the Hanford STCG if they can provide technologies that meet these needs. On-site points of contact are included at the ends of each need statement. The Pacific Rim Enterprise Center (206-224-9934) can also provide assistance to businesses interested in marketing technologies to the DOE.

  13. IBM techexplorer and MathML: Interactive Multimodal Scientific Documents

    NASA Astrophysics Data System (ADS)

    Diaz, Angel

    2001-06-01

    The World Wide Web provides a standard publishing platform for disseminating scientific and technical articles, books, journals, courseware, or even homework on the internet; however, the transition from paper to web-based interactive content has brought new opportunities for creating interactive content. Students, scientists, and engineers are now faced with the task of rendering the 2D presentational structure of mathematics, harnessing the wealth of scientific and technical software, and creating truly accessible scientific portals across international boundaries and markets. The recent emergence of World Wide Web Consortium (W3C) standards such as the Mathematical Markup Language (MathML), the Extensible Stylesheet Language (XSL), and Aural CSS (ACSS) provides a foundation whereby mathematics can be displayed, enlivened, computed, and audio formatted. With interoperability ensured by standards, software applications can be easily brought together to create extensible and interactive scientific content. In this presentation we will provide an overview of the IBM techexplorer Hypermedia Browser, a web browser plug-in and ActiveX control aimed at bringing interactive mathematics to the masses across platforms and applications. We will demonstrate "live" mathematics where documents that contain MathML expressions can be edited and computed right inside your favorite web browser. This demonstration will be generalized as we show how MathML can be used to enliven even PowerPoint presentations. Finally, we will close the loop by demonstrating a novel approach to spoken mathematics based on MathML, DOM, XSL, ACSS, techexplorer, and IBM ViaVoice. By making use of techexplorer as the glue that binds the rendered content to the web browser, the back-end computation software, the Java applets that augment the exposition, and voice-rendering systems such as ViaVoice, authors can indeed create truly extensible and interactive scientific content. For more information see: [http

  14. 17 CFR 232.105 - Limitation on use of HTML documents and hypertext links.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false Limitation on use of HTML... Requirements § 232.105 Limitation on use of HTML documents and hypertext links. (a) Electronic filers must... EDGAR database on the Commission's public web site (www.sec.gov). Electronic filers also may include...

  15. Taking a traditional web site to patient portal technology.

    PubMed

    Labow, Kimberly

    2010-01-01

    In this era of consumer-driven healthcare, consumers (your current and potential patients) seek healthcare information on the Internet. If your practice doesn't have a Web site, or has one that's static and uninformative, you won't be found, and the patient will move on to the next practice Web site. Why? Because only the most graphically appealing, informative, and patient-centered Web sites will drive patients to your practice. Patients are demanding improved communication with their physician. A practice Web site is a start, but the adoption of a fully functional, interactive Web site with patient portal solutions will not only improve patient-to-provider relationships but will also give the patient access to your practice from anywhere, at any time of the day. Furthermore, these solutions can help practices increase efficiencies and revenue, while reducing operating costs. With the American Recovery and Reinvestment Act of 2009 and other incentives for healthcare information technology adoption, the time is right for your practice to consider implementing technology that will bring considerable value to your practice and also increase patient satisfaction.

  16. Paper and Other Web Coating: National Emission Standards for Hazardous Air Pollutants (NESHAP)

    EPA Pesticide Factsheets

    Find information on the NESHAP for paper and other web coatings. Read the rule summary, history and supporting documents including fact sheets, responses to public comments, related rules, and compliance and applicability information for this regulation.

  17. Web Extensible Display Manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slominski, Ryan; Larrieu, Theodore L.

    Jefferson Lab's Web Extensible Display Manager (WEDM) allows staff to access EDM control system screens from a web browser in remote offices and from mobile devices. Native browser technologies are leveraged to avoid installing and managing software on remote clients such as browser plugins, tunnel applications, or an EDM environment. Since standard network ports are used, firewall exceptions are minimized. To avoid security concerns from remote users modifying a control system, WEDM exposes read-only access, and basic web authentication can be used to further restrict access. Updates of monitored EPICS channels are delivered via a Web Socket using a web gateway. The software translates EDM description files (denoted with the edl suffix) to HTML with Scalable Vector Graphics (SVG) following EDM's edl file vector drawing rules to create faithful screen renderings. The WEDM server parses edl files and creates the HTML equivalent in real-time, allowing existing screens to work without modification. Alternatively, the familiar drag-and-drop EDM screen creation tool can be used to create optimized screens sized specifically for smart phones and then rendered by WEDM.
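
    The update path is easy to picture from the client side. A minimal sketch in Python using the third-party websockets package, with a hypothetical gateway URL, JSON message shape, and process-variable name (WEDM's real wire protocol is not documented here):

```python
# Subscribe to an EPICS channel over a Web Socket and print updates.
import asyncio
import json

import websockets  # third-party: pip install websockets

async def monitor(channel: str) -> None:
    async with websockets.connect("ws://example.org/wedm/gateway") as ws:
        await ws.send(json.dumps({"type": "monitor", "pv": channel}))
        async for message in ws:                 # one update per message
            update = json.loads(message)
            # A browser client would repaint the matching SVG element here.
            print(f"{update.get('pv', channel)} = {update.get('value')}")

asyncio.run(monitor("MQA1S01.BDL"))  # hypothetical process variable
```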

  18. Land use alters the resistance and resilience of soil food webs to drought

    USGS Publications Warehouse

    de Vries, Franciska T.; Liiri, Mira E.; Bjørnlund, Lisa; Bowker, Matthew A.; Christensen, Søren; Setälä, Heikki; Bardgett, Richard D.

    2012-01-01

    Soils deliver several ecosystem services including carbon sequestration and nutrient cycling, which are of central importance to climate mitigation and sustainable food production. Soil biota play an important role in carbon and nitrogen cycling, and, although the effects of land use on soil food webs are well documented, the consequences for their resistance and resilience to climate change are not known. We compared the resistance and resilience to drought--which is predicted to increase under climate change--of soil food webs in two common land-use systems: intensively managed wheat with a bacterial-based soil food web and extensively managed grassland with a fungal-based soil food web. We found that the fungal-based food web, and the processes of C and N loss it governs, of grassland soil was more resistant, although not resilient, and better able to adapt to drought than the bacterial-based food web of wheat soil. Structural equation modelling revealed that fungal-based soil food webs and greater microbial evenness mitigated C and N loss. Our findings show that land use strongly affects the resistance and resilience of soil food webs to climate change, and that extensively managed grassland promotes more resistant, and adaptable, fungal-based soil food webs.

  19. Using ESO Reflex with Web Services

    NASA Astrophysics Data System (ADS)

    Järveläinen, P.; Savolainen, V.; Oittinen, T.; Maisala, S.; Ullgrén, M.; Hook, R.

    2008-08-01

    ESO Reflex is a prototype graphical workflow system, based on Taverna, and primarily intended to be a flexible way of running ESO data reduction recipes along with other legacy applications and user-written tools. ESO Reflex can also readily use the Taverna Web Services features that are based on the Apache Axis SOAP implementation. Taverna is a general purpose Web Service client, and requires no programming to use such services. However, Taverna also has some restrictions: for example, no numerical types such as integers. In addition, the preferred binding style is document/literal wrapped, but most astronomical services publish the Axis default WSDL using RPC/encoded style. Despite these minor limitations we have created a simple but very promising test VO workflow using the Sesame name resolver service at CDS Strasbourg, the Hubble SIAP server at the Multi-Mission Archive at Space Telescope (MAST) and the WESIX image cataloging and catalogue cross-referencing service at the University of Pittsburgh. ESO Reflex can also pass files and URIs via the PLASTIC protocol to visualisation tools and has its own viewer for VOTables. We picked these three Web Services to try to set up a realistic and useful ESO Reflex workflow. They also demonstrate ESO Reflex's ability to use many kinds of Web Services because each of them requires a different interface. We describe each of these services in turn and comment on how it was used

  20. A review of the FDA draft guidance document for software validation: guidance for industry.

    PubMed

    Keatley, K L

    1999-01-01

    A Draft Guidance Document (Version 1.1) was issued by the United States Food and Drug Administration (FDA) to address the software validation requirement of the Quality System Regulation, 21 CFR Part 820, effective June 1, 1997. The guidance document outlines validation considerations that the FDA regards as applicable to both medical device software and software used to "design, develop or manufacture" medical devices. The Draft Guidance is available at the FDA web site http://www.fda.gov/cdrh/comps/swareval++ +.html. Presented here is a review of the main features of the FDA document for Quality System Regulation (QSR), and some guidance for its implementation in industry.

  1. Wireless, Web-Based Interactive Control of Optical Coherence Tomography with Mobile Devices.

    PubMed

    Mehta, Rajvi; Nankivil, Derek; Zielinski, David J; Waterman, Gar; Keller, Brenton; Limkakeng, Alexander T; Kopper, Regis; Izatt, Joseph A; Kuo, Anthony N

    2017-01-01

    Optical coherence tomography (OCT) is widely used in ophthalmology clinics and has potential for more general medical settings and remote diagnostics. In anticipation of remote applications, we developed wireless interactive control of an OCT system using mobile devices. A web-based user interface (WebUI) was developed to interact with a handheld OCT system. The WebUI consisted of key OCT displays and controls ported to a webpage using HTML and JavaScript. Client-server relationships were created between the WebUI and the OCT system computer. The WebUI was accessed on a cellular phone mounted to the handheld OCT probe to wirelessly control the OCT system. Twenty subjects were imaged using the WebUI to assess the system. System latency was measured using different connection types (wireless 802.11n only, wireless to remote virtual private network [VPN], and cellular). Using a cellular phone, the WebUI was successfully used to capture posterior eye OCT images in all subjects. Simultaneous interactivity by a remote user on a laptop was also demonstrated. On average, use of the WebUI added only 58, 95, and 170 ms to the system latency using wireless only, wireless to VPN, and cellular connections, respectively. Qualitatively, operator usage was not affected. Using a WebUI, we demonstrated wireless and remote control of an OCT system with mobile devices. The web and open source software tools used in this project make it possible for any mobile device to potentially control an OCT system through a WebUI. This platform can be a basis for remote, teleophthalmology applications using OCT.

  2. Wireless, Web-Based Interactive Control of Optical Coherence Tomography with Mobile Devices

    PubMed Central

    Mehta, Rajvi; Nankivil, Derek; Zielinski, David J.; Waterman, Gar; Keller, Brenton; Limkakeng, Alexander T.; Kopper, Regis; Izatt, Joseph A.; Kuo, Anthony N.

    2017-01-01

    Purpose Optical coherence tomography (OCT) is widely used in ophthalmology clinics and has potential for more general medical settings and remote diagnostics. In anticipation of remote applications, we developed wireless interactive control of an OCT system using mobile devices. Methods A web-based user interface (WebUI) was developed to interact with a handheld OCT system. The WebUI consisted of key OCT displays and controls ported to a webpage using HTML and JavaScript. Client–server relationships were created between the WebUI and the OCT system computer. The WebUI was accessed on a cellular phone mounted to the handheld OCT probe to wirelessly control the OCT system. Twenty subjects were imaged using the WebUI to assess the system. System latency was measured using different connection types (wireless 802.11n only, wireless to remote virtual private network [VPN], and cellular). Results Using a cellular phone, the WebUI was successfully used to capture posterior eye OCT images in all subjects. Simultaneous interactivity by a remote user on a laptop was also demonstrated. On average, use of the WebUI added only 58, 95, and 170 ms to the system latency using wireless only, wireless to VPN, and cellular connections, respectively. Qualitatively, operator usage was not affected. Conclusions Using a WebUI, we demonstrated wireless and remote control of an OCT system with mobile devices. Translational Relevance The web and open source software tools used in this project make it possible for any mobile device to potentially control an OCT system through a WebUI. This platform can be a basis for remote, teleophthalmology applications using OCT. PMID:28138415

  3. NGNP Data Management and Analysis System Analysis and Web Delivery Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cynthia D. Gentillon

    2011-09-01

    Projects for the Very High Temperature Reactor (VHTR) Technology Development Office provide data in support of Nuclear Regulatory Commission licensing of the very high temperature reactor. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high-temperature and high-fluence environments. The NGNP Data Management and Analysis System (NDMAS) at the Idaho National Laboratory has been established to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and for data analysis to identify useful relationships among the measured quantities. The capabilities are described from the perspective of NDMAS users, starting with those who just view experimental data and analytical results on the INL NDMAS web portal. Web display and delivery capabilities are described in detail. Also, the current web pages that show Advanced Gas Reactor, Advanced Graphite Capsule, and High Temperature Materials test results are itemized. Capabilities available to NDMAS developers are more extensive, and are described using a second series of examples. Much of the data analysis effort focuses on understanding how thermocouple measurements relate to simulated temperatures and other experimental parameters. Statistical control charts and correlation monitoring provide an ongoing assessment of instrument accuracy. Data analysis capabilities are virtually unlimited for those who use the NDMAS web data download capabilities and the analysis software of their choice. Overall, the NDMAS provides convenient data analysis and web delivery capabilities for studying a very large and rapidly increasing database of well-documented, pedigreed data.
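
    The control-chart idea is simple to sketch. A minimal example in Python that flags readings drifting outside mean ± 3σ limits computed from an in-control baseline (the numbers are invented; NDMAS's actual charts and limits are not shown):

```python
# Shewhart-style check: limits from baseline data, then screen new samples.
import statistics

baseline = [998.2, 1001.5, 999.8, 1000.4, 997.9, 1000.1]  # in-control history
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
lower, upper = mean - 3 * sigma, mean + 3 * sigma

for value in [999.6, 1001.0, 1012.6]:                     # new measurements
    status = "ok" if lower <= value <= upper else "OUT OF CONTROL"
    print(f"{value:.1f} {status} (limits {lower:.1f}..{upper:.1f})")
```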

  4. 40 CFR 262.84 - Tracking document.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) STANDARDS APPLICABLE TO GENERATORS OF HAZARDOUS WASTE Transfrontier Shipments of Hazardous Waste for... (bulk shipments only) the generator must forward the tracking document with the manifest to the last... the U.S. which originate at the site of generation, the generator must forward the tracking document...

  5. Enhancing the AliEn Web Service Authentication

    NASA Astrophysics Data System (ADS)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems, and they are becoming the de facto standard for interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, which run automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, used to provide restricted proxying and delegation. An authentication framework was therefore built for the AliEn2 web services to give the Apache web server the ability to accept both X.509 certificates and proxy certificates from the client side. The framework also allows the generation of access control policies to limit access to the AliEn2 web services.
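
    From the client side, presenting a credential (whether a plain X.509 certificate or a grid proxy) is a standard TLS operation. A minimal sketch in Python with hypothetical paths and host; the server-side proxy-certificate validation that the framework adds to Apache is not reproduced here:

```python
# Open an HTTPS connection that presents a client certificate chain.
import ssl
import urllib.request

context = ssl.create_default_context(
    cafile="/etc/grid-security/ca-bundle.pem")       # trusted CAs
# A grid proxy is a short-lived certificate plus key; load_cert_chain
# sends the whole chain so the server can check the delegation path.
context.load_cert_chain(certfile="/tmp/x509up_u1000",
                        keyfile="/tmp/x509up_u1000")

with urllib.request.urlopen("https://alien.example.org:8443/status",
                            context=context) as response:
    print(response.status, response.read()[:200])
```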

  6. Exchanging the Context between OGC Geospatial Web clients and GIS applications using Atom

    NASA Astrophysics Data System (ADS)

    Maso, Joan; Díaz, Paula; Riverola, Anna; Pons, Xavier

    2013-04-01

    Currently, the discovery and sharing of geospatial information over the web still presents difficulties. News distribution through website content was simplified by the use of the Really Simple Syndication (RSS) and Atom syndication formats. This communication presents an extension of Atom for redistributing references to geospatial information in a distributed Spatial Data Infrastructure environment. A geospatial client can save the status of an application that involves several OGC services of different kinds as well as direct data, and share this status with other users who need the same information but use different client vendor products, in an interoperable way. The extensibility of the Atom format was essential for defining a format that could be used in RSS-enabled web browsers, mass-market map viewers, and emerging geospatially enabled integrated clients that support Open Geospatial Consortium (OGC) services. Since OWS Context has been designed as an Atom extension, the document can be viewed anywhere Atom documents are valid: web browsers are able to present it as a list of items with title, abstract, time, description, and download features. OWS Context uses GeoRSS so that the document can be interpreted by both Google Maps and Bing Maps as items whose extent is represented on a dynamic map. Another way to exploit an OWS Context is to develop an XSLT that transforms the Atom feed into an HTML5 document showing the exact status of the client view window that saved the context document. To accomplish this, we use the width and height of the client window and the extent of the view in world (geographic) coordinates to calculate the scale of the map. Then, we can mix elements in world coordinates (such as CF-NetCDF files or GML) with elements in pixel coordinates (such as WMS maps, WMTS tiles, and direct SVG content). A smarter map browser application called MiraMon Map Browser is able to write a context document and read
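
    The Atom extension mechanism described above is visible in even a tiny feed. A minimal sketch in Python that reads an invented OWS-Context-style entry, where standard Atom elements carry title and time and a GeoRSS element carries the extent (the real OWS Context schema defines many more offering elements):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
GEORSS = "{http://www.georss.org/georss}"

feed = ET.fromstring("""\
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:georss="http://www.georss.org/georss">
  <entry>
    <title>WMS overview layer</title>
    <updated>2013-04-01T10:00:00Z</updated>
    <georss:box>40.0 0.5 42.5 3.3</georss:box>
  </entry>
</feed>""")

for entry in feed.iter(f"{ATOM}entry"):
    title = entry.findtext(f"{ATOM}title")
    box = entry.findtext(f"{GEORSS}box")     # "south west north east"
    south, west, north, east = map(float, box.split())
    print(f"{title}: lat {south}..{north}, lon {west}..{east}")
```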

  7. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders.

    PubMed

    Gan, Wenjin; Liu, Shengjie; Yang, Xiaodong; Li, Daiqin; Lei, Chaoliang

    2015-09-24

    A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders. © 2015. Published by The Company of Biologists Ltd.

  8. Prey interception drives web invasion and spider size determines successful web takeover in nocturnal orb-web spiders

    PubMed Central

    Gan, Wenjin; Liu, Shengjie; Yang, Xiaodong; Li, Daiqin; Lei, Chaoliang

    2015-01-01

    ABSTRACT A striking feature of web-building spiders is the use of silk to make webs, mainly for prey capture. However, building a web is energetically expensive and increases the risk of predation. To reduce such costs and still have access to abundant prey, some web-building spiders have evolved web invasion behaviour. In general, no consistent patterns of web invasion have emerged and the factors determining web invasion remain largely unexplored. Here we report web invasion among conspecifics in seven nocturnal species of orb-web spiders, and examined the factors determining the probability of webs that could be invaded and taken over by conspecifics. About 36% of webs were invaded by conspecifics, and 25% of invaded webs were taken over by the invaders. A web that was built higher and intercepted more prey was more likely to be invaded. Once a web was invaded, the smaller the size of the resident spider, the more likely its web would be taken over by the invader. This study suggests that web invasion, as a possible way of reducing costs, may be widespread in nocturnal orb-web spiders. PMID:26405048

  9. Where are the parasites in food webs?

    PubMed Central

    2012-01-01

    This review explores some of the reasons why food webs seem to contain relatively few parasite species when compared to the full diversity of free living species in the system. At present, there are few coherent food web theories to guide scientific studies on parasites, and this review posits that the methods, directions and questions in the field of food web ecology are not always congruent with parasitological inquiry. For example, topological analysis (the primary tool in food web studies) focuses on only one of six important steps in trematode life cycles, each of which requires a stable community dynamic to evolve. In addition, these transmission strategies may also utilize pathways within the food web that are not considered in traditional food web investigations. It is asserted that more effort must be focused on parasite-centric models, and a central theme is that many different approaches will be required. One promising approach is the old energetic perspective, which considers energy as the critical resource for all organisms, and the currency of all food web interactions. From the parasitological point of view, energy can be used to characterize the roles of parasites at all levels in the food web, from individuals to populations to community. The literature on parasite energetics in food webs is very sparse, but the evidence suggests that parasite species richness is low in food webs because parasites are limited by the quantity of energy available to their unique lifestyles. PMID:23092160

  10. A study of Web-based instructional strategies in post-secondary sciences

    NASA Astrophysics Data System (ADS)

    Stanley, Scott A.

    There is a large demand for web-based instruction offered by post-secondary institutions (U.S. Department of Education, 2003), but only recently have post-secondary science faculty begun to develop courses for this medium (Carr, 2000). Research evaluating the effectiveness of this type of instruction suggests that there is no significant difference in grades between students in traditional and online courses (Russell, 1999; Spooner, Jordan, Agozzine, & Spooner, 1999; Verduin & Clark, 1991; Wideman & Owston, 1999). It is important to note that while grades may be similar in face-to-face (FTF) and web-based science courses, it cannot be implied that student learning is identical in both environments. Experts in web-based instruction claim that teaching practices for web-based instruction are similar to those used in an FTF environment (Bronack & Riedl, 1998; Ragan, 1999). This is troublesome when viewed in context with the data on instructional strategies used in FTF post-secondary science courses. It is well documented that undergraduate students perceive science pedagogy as ineffective (NSF, 1996; Seymour & Hewitt, 1997; Tobias, 1990). This research examined web-based instructional strategies in post-secondary science courses. Using a web-based questionnaire, this study collected data in order to examine the frequency of use of previously identified effective FTF instructional strategies, and the difference in use of instructional strategies in the different fields of science. One hundred and thirty respondents completed the web-based questionnaire. Data from faculty (N=122) who teach more than 75% of their course online were analyzed. Data analyses revealed that the frequency of use of effective face-to-face instructional strategies is variable. Science faculty do not regularly assess students' conceptual understandings prior to the presentation of new concepts. Faculty frequently made connections to the real world and incorporated problem solving using real

  11. Persistence and availability of Web services in computational biology.

    PubMed

    Schultheiss, Sebastian J; Münch, Marc-Christian; Andreeva, Gergana D; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses; only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository.

  12. Persistence and Availability of Web Services in Computational Biology

    PubMed Central

    Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar

    2011-01-01

    We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses; only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was no example data or a related problem; 13% were truly no longer working as expected; we could positively confirm functionality only for 45% of all services. Additionally, we conducted a survey among 872 Web Server Issue corresponding authors; 274 replied. 78% of all respondents indicated their services have been developed solely by students and researchers without a permanent position. Consequently, these services are in danger of falling into disrepair after the original developers move to another institution, and indeed, for 24% of services, there is no plan for maintenance, according to the respondents. We introduce a Web service quality scoring system that correlates with the number of citations: services with a high score are cited 1.8 times more often than low-scoring services. We have identified key characteristics that are predictive of a service's survival, providing reviewers, editors, and Web service developers with the means to assess or improve Web services. A Web service conforming to these criteria receives more citations and provides more reliable service for its users. The most effective way of ensuring continued access to a service is a persistent Web address, offered either by the publishing journal, or created on the authors' own initiative, for example at http://bioweb.me. The community would benefit the most from a policy requiring any source code needed to reproduce results to be deposited in a public repository. PMID:21966383
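
    The first stage of such a study (reachability of the published addresses) is easy to reproduce. A minimal sketch in Python with an illustrative URL list; functionality testing, the harder part of the survey above, is not attempted:

```python
# Probe each published service URL, following redirects, and log the outcome.
import urllib.error
import urllib.request

services = [
    "http://bioweb.me",                 # persistent-address example from the text
    "http://example.org/retired-tool",  # hypothetical dead service
]

for url in services:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            note = " (redirected)" if response.geturl() != url else ""
            print(f"{url}: HTTP {response.status}{note}")
    except (urllib.error.URLError, TimeoutError) as err:
        print(f"{url}: unavailable ({err})")
```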

  13. Second Language Acquisition: Implications of Web 2.0 and Beyond

    ERIC Educational Resources Information Center

    Chang, Ching-Wen; Pearman, Cathy; Farha, Nicholas

    2012-01-01

    Language laboratories, developed in the 1970s under the influence of the Audiolingual Method, were superseded several decades later by computer-assisted language learning (CALL) work stations (Gündüz, 2005). The World Wide Web was developed shortly thereafter. From this introduction and the well-documented and staggering growth of the Internet and…

  14. Using Open Web APIs in Teaching Web Mining

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Li, Xin; Chau, M.; Ho, Yi-Jen; Tseng, Chunju

    2009-01-01

    With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems…

  15. Cross-Cultural Language Learning and Web Design Complexity

    ERIC Educational Resources Information Center

    Park, Ji Yong

    2015-01-01

    Accepting that culture and language are interrelated in second language learning (SLL), web sites should be designed to integrate cultural aspects. Yet many SLL web sites fail to integrate cultural aspects and/or focus on language acquisition only. This study identified three issues: (1) anthropologists'…

  16. SNPversity: a web-based tool for visualizing diversity

    PubMed Central

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

    Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and served through a web browser, making it accessible worldwide. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represents data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real-time through a web browser in grids that are color-coded according to a given SNP’s allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community’s database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are uploaded to GitHub and are freely available to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387
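
    As a rough sketch of the described layering (HDF5 back-end, JSON data exchange), the snippet below reads a SNP matrix from HDF5 and serializes one locus as JSON; the file name, dataset names, and record layout are hypothetical, not SNPversity's actual schema:

        import json
        import h5py

        # Hypothetical layout: /snps is a (loci x samples) genotype matrix,
        # /positions holds the chromosomal coordinate of each locus.
        with h5py.File("snps.h5", "r") as f:
            genotypes = f["snps"][0, :]          # first locus, all samples
            position = int(f["positions"][0])

        payload = json.dumps({"position": position,
                              "genotypes": genotypes.tolist()})
        print(payload)                           # what a data-exchange layer might emit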

  17. The use of the World Wide Web by medical journals in 2003 and 2005: an observational study.

    PubMed

    Schriger, David L; Ouk, Sripha; Altman, Douglas G

    2007-01-01

    The 2- to 6-page print journal article has been the standard for 200 years, yet this format severely limits the amount of detailed information that can be conveyed. The World Wide Web provides a low-cost option for posting extended text and supplementary information. It also can enhance the experience of journal editors, reviewers, readers, and authors through added functionality (eg, online submission and peer review, postpublication critique, and e-mail notification of tables of contents). Our aim was to characterize ways that journals were using the World Wide Web in 2005 and note changes since 2003. We analyzed the Web sites of 138 high-impact print journals in 3 ways. First, we compared the print and Web versions of March 2003 and 2005 issues of 28 journals (20 of which were randomly selected from the 138) to determine how often articles were published Web only and how often print articles were augmented by Web-only supplements. Second, we examined what functions were offered by each journal Web site. Third, for journals that offered Web pages for reader commentary about each article, we analyzed the number of comments and characterized these comments. Fifty-six articles (7%) in 5 journals were Web only. Thirteen of the 28 journals had no supplementary online content. By 2005, several journals were including Web-only supplements in >20% of their papers. Supplementary methods, tables, and figures predominated. The use of supplementary material increased from 2% to 7% of articles in the 20-journal random sample between 2003 and 2005. Web sites had similar functionality with an emphasis on linking each article to related material and e-mailing readers about activity related to each article. There was little evidence of journals using the Web to provide readers an interactive experience with the data or with each other. Seventeen of the 138 journals offered rapid-response pages. Only 18% of eligible articles had any comments after 5 months. Journal Web sites offer similar

  18. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  19. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  20. Program Director as Webmaster? Analysis of 131 Anesthesiology Department Web Sites and Program Director Web Site Involvement and Opinion Survey.

    PubMed

    Daneshpayeh, Negin; Lee, Howard; Berger, Jeffrey

    2013-01-01

    The last formal review of academic anesthesiology department Web sites (ADWs) for content was conducted in 2009. ADWs have been rated as very important by medical students in researching residency training programs; however, the rapid evolution of sites requires that descriptive statistics be more current to be considered reliable. We set out to provide an updated overview of ADW content and to better understand residency program directors' (PDs') role in and comfort with ADWs. Two independent reviewers (ND and HL) analyzed all 131 Accreditation Council for Graduate Medical Education (ACGME) accredited ADWs. A binary system (Yes/No) was used to determine which features were present. Reviewer reliability was confirmed with inter-rater reliability and percentage agreement calculations. Additionally, a blinded electronic survey (Survey Monkey, Portland, OR) was sent to anesthesiology residency PDs via electronic mail, investigating the audiences for ADWs, the frequency of updates, and the degree of PD involvement. 13% of anesthesiology departments still lack a Web site homepage with links to the residency program and educational offerings (18% in 2009). Only half (55%) of Web sites contain information for medical students, including clerkship information. Furthermore, programs rarely contain up-to-date calendars (13%), accreditation cycle lengths (11%), accreditation dates (7%) or board pass rates (6%). The PD survey, completed by 42 of 131 PDs, noted a correlation (r = 0.36) between the number of years as PD and the frequency of Web site updates: less experienced PDs appear to update their sites more frequently (p = 0.03). Although 86% of PDs regarded a Web site as "very" important in recruitment, only 9% felt "very" comfortable with the skills required to advertise and market a Web site. Despite the overall increase in ADW content since 2009, privacy concerns, limited resources and time constraints may prevent PDs from providing the most up-to-date Web sites for
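
    Percentage agreement and an inter-rater statistic are simple to compute from two reviewers' binary (Yes/No) checklists. The sketch below uses invented ratings, and Cohen's kappa is one common choice of inter-rater statistic rather than necessarily the one used in this study:

        def percent_agreement(a, b):
            return sum(x == y for x, y in zip(a, b)) / len(a)

        def cohens_kappa(a, b):
            po = percent_agreement(a, b)
            pa, pb = sum(a) / len(a), sum(b) / len(b)
            pe = pa * pb + (1 - pa) * (1 - pb)   # agreement expected by chance
            return (po - pe) / (1 - pe)

        # Illustrative ratings (1 = feature present) from two reviewers.
        nd = [1, 0, 1, 1, 0, 1, 0, 0]
        hl = [1, 0, 1, 0, 0, 1, 0, 1]
        print(percent_agreement(nd, hl), cohens_kappa(nd, hl))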

  1. High Plains Regional Ground-water Study web site

    USGS Publications Warehouse

    Qi, Sharon L.

    2000-01-01

    Now available on the Internet is a web site for the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program-High Plains Regional Ground-Water Study. The purpose of the web site is to provide public access to a wide variety of information on the USGS investigation of the ground-water resources within the High Plains aquifer system. Typical pages on the web site include the following: descriptions of the High Plains NAWQA, the National NAWQA Program, the study-area setting, current and past activities, significant findings, chemical and ancillary data (which can be downloaded), a listing of and access to publications, links to other sites about the High Plains area, and links to other web sites studying High Plains ground-water resources. The High Plains aquifer is a regional aquifer system that underlies 174,000 square miles in parts of eight States (Colorado, Kansas, Nebraska, New Mexico, Oklahoma, South Dakota, Texas, and Wyoming). Because the study area is so large, the Internet is an ideal way to provide project data and information on a near real-time basis. The web site will be a collection of living documents where project data and information are updated as they become available throughout the life of the project. If you have an interest in the High Plains area, you can check this site periodically to learn how the High Plains NAWQA activities are progressing over time and access new data and publications as they become available.

  2. Web-Based Environment for Maintaining Legacy Software

    NASA Technical Reports Server (NTRS)

    Tigges, Michael; Thompson, Nelson; Orr, Mark; Fox, Richard

    2007-01-01

    Advanced Tool Integration Environment (ATIE) is the name of both a software system and a Web-based environment created by the system for maintaining an archive of legacy software and the expertise involved in developing the legacy software. ATIE can also be used in modifying legacy software and developing new software. The information that can be encapsulated in ATIE includes experts' documentation, input and output data of test cases, source code, and compilation scripts. All of this information is available within a common environment and retained in a database for ease of access and recovery by use of powerful search engines. ATIE also accommodates the embedding of supporting software that users require for their work, and even enables access to supporting commercial-off-the-shelf (COTS) software within the flow of the expert's work. The flow of work can be captured by saving the sequence of computer programs that the expert uses. A user gains access to ATIE via a Web browser. A modern Web-based graphical user interface promotes efficiency in the retrieval, execution, and modification of legacy code. Thus, ATIE saves time and money in the support of new and pre-existing programs.

  3. Electronic Document Management Using Inverted Files System

    NASA Astrophysics Data System (ADS)

    Suhartono, Derwin; Setiawan, Erwin; Irwanto, Djon

    2014-03-01

    The number of documents is increasing very fast, and these documents exist not only in paper-based but also in electronic form. This can be seen from a data sample taken from the SpringerLink publisher in 2010, which showed an increase in the number of digital document collections from 2003 to mid-2010. How to manage them well therefore becomes an important need. This paper describes a method for managing documents called the inverted files system. Applied to electronic documents, the inverted files system allows them to be searched over the Internet using a search engine. It improves both the document search mechanism and the document storage mechanism.
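
    The core of an inverted files system is a mapping from each term to the set of documents that contain it; a minimal sketch with invented documents:

        from collections import defaultdict

        docs = {1: "electronic document management",
                2: "inverted files improve document search"}

        index = defaultdict(set)                 # term -> ids of documents containing it
        for doc_id, text in docs.items():
            for term in text.lower().split():
                index[term].add(doc_id)

        # A conjunctive query is an intersection of postings sets.
        print(index["document"] & index["search"])   # -> {2}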

  4. Document similarity measures and document browsing

    NASA Astrophysics Data System (ADS)

    Ahmadullin, Ildus; Fan, Jian; Damera-Venkata, Niranjan; Lim, Suk Hwan; Lin, Qian; Liu, Jerry; Liu, Sam; O'Brien-Strain, Eamonn; Allebach, Jan

    2011-03-01

    Managing large document databases is an important task today. Being able to automatically compare document layouts and classify and search documents with respect to their visual appearance proves to be desirable in many applications. We measure single page documents' similarity with respect to distance functions between three document components: background, text, and saliency. Each document component is represented as a Gaussian mixture distribution; and distances between different documents' components are calculated as probabilistic similarities between corresponding distributions. The similarity measure between documents is represented as a weighted sum of the components' distances. Using this document similarity measure, we propose a browsing mechanism operating on a document dataset. For these purposes, we use a hierarchical browsing environment which we call the document similarity pyramid. It allows the user to browse a large document dataset and to search for documents in the dataset that are similar to the query. The user can browse the dataset on different levels of the pyramid, and zoom into the documents that are of interest.
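
    A simplified sketch of the weighted-sum similarity: each component is reduced here to a single univariate Gaussian (the paper uses full Gaussian mixtures), and components are compared with the closed-form Bhattacharyya distance; the weights and parameters below are illustrative:

        import math

        def bhattacharyya(mu1, var1, mu2, var2):
            """Closed-form Bhattacharyya distance between two 1-D Gaussians."""
            return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
                    + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2))))

        # Per-component (mean, variance) features for two documents.
        doc_a = {"background": (0.9, 0.01), "text": (0.3, 0.05), "saliency": (0.5, 0.02)}
        doc_b = {"background": (0.8, 0.02), "text": (0.4, 0.04), "saliency": (0.5, 0.02)}
        weights = {"background": 0.2, "text": 0.5, "saliency": 0.3}

        distance = sum(weights[c] * bhattacharyya(*doc_a[c], *doc_b[c]) for c in weights)
        print(distance)                          # smaller = more similar layouts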

  5. Web flexibility and I-beam torsional oscillation

    NASA Astrophysics Data System (ADS)

    Stephen, N. G.; Wang, P. J.

    1986-08-01

    Two recent theories on torsional oscillation of general doubly-symmetric non-circular cross-section beams incorporate a second order effect, that of in-plane shear deformation involving a change in cross-sectional shape, and are found to give excellent agreement with exact results for an elliptical section rod. For "technical" torsional oscillation theories of I-section beams this in-plane shear has previously been considered within the flanges only; in the present work the greater effect of shear distortion of the web is included, having previously been considered only in static analysis. The theory predicts three modes of wave propagation, one of which is essentially torsional in character; a second mode may be identified with predominantly flange bending according to the second branch of Timoshenko beam theory whilst a new mode involves individual flange torsion with asymmetric web deformation, and has the lowest phase velocity except at the longest wavelength. An alternative symmetric web deformation is also considered.

  6. Web Navigation Sequences Automation in Modern Websites

    NASA Astrophysics Data System (ADS)

    Montoto, Paula; Pan, Alberto; Raposo, Juan; Bellas, Fernando; López, Javier

    Most of today's web sources are designed to be used by humans, but they do not provide suitable interfaces for software programs. That is why a growing interest has arisen in so-called web automation applications, which are widely used for purposes such as B2B integration, automated testing of web applications, or technology and business watch. Previous proposals assume models for generating and reproducing navigation sequences that cannot correctly deal with new websites using technologies such as AJAX: on the one hand, existing systems only allow recording simple navigation actions; on the other hand, they are unable to detect the end of the effects caused by a user action. In this paper, we propose a set of new techniques to record and execute web navigation sequences that can deal with all the complexity of AJAX-based web sites. We also present an exhaustive evaluation of the proposed techniques that shows very promising results.
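
    A common approximation of "detecting the end of the effects of a user action" is an explicit wait for the element the AJAX call is expected to produce. The sketch below uses Selenium; the page URL and element ids are hypothetical, and the paper's own recording and execution techniques are more general than this:

        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support import expected_conditions as EC
        from selenium.webdriver.support.ui import WebDriverWait

        driver = webdriver.Firefox()
        driver.get("https://example.com/search")         # hypothetical AJAX page
        driver.find_element(By.ID, "search-button").click()

        # Block until the AJAX-rendered results container appears (or time out).
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.ID, "results")))
        print(driver.find_element(By.ID, "results").text)
        driver.quit()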

  7. Web-based education in anesthesiology: a critical overview.

    PubMed

    Doyle, D John

    2008-12-01

    The purpose of this review is to discuss the rise of web-based educational resources available to the anesthesiology community. Recent developments of particular importance include the growth of 'Web 2.0' resources, the development of the concepts of 'open access' and 'information philanthropy', and the expansion of web-based medical simulation software products. In addition, peer review of online educational resources has now come of age. The worldwide web has made available a large variety of valuable medical information and education resources only dreamed of two decades ago. To a large extent, these developments represent a shift in the focus of medical education resources to emphasize free access to materials and to encourage collaborative development efforts.

  8. Standard classification of software documentation

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This report presents general conceptual requirements for standard levels of documentation and for the application of these requirements to intended usages. The standards encourage a policy of producing only those forms of documentation that are needed and adequate for the purpose. Documentation standards are defined with respect to detail and format quality. Classes A through D range, in order, from the most definitive down to the least definitive, and categories 1 through 4 range, in order, from high-quality typeset down to handwritten material. Criteria for each of the classes and categories, as well as suggested selection guidelines for each, are given.

  9. The impact of web services at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Weekly, R. T.; Trabant, C. M.; Ahern, T. K.; Stults, M.; Suleiman, Y. Y.; Van Fossen, M.; Weertman, B.

    2015-12-01

    The IRIS Data Management Center (DMC) has served the seismological community for nearly 25 years. In that time we have offered data and information from our archive using a variety of mechanisms, ranging from email-based to desktop applications to web applications and web services. Of these, web services have quickly become the primary method for data extraction at the DMC. In 2011, the first full year of operation, web services accounted for over 40% of the data shipped from the DMC. In 2014, approximately 450 TB of data were delivered directly to users through web services, representing nearly 70% of all shipments from the DMC that year. In addition to handling requests directly from users, the DMC switched all data extraction methods to use web services in 2014. On average the DMC now handles between 10 and 20 million requests per day submitted to web service interfaces. The rapid adoption of web services is attributed to the many advantages they bring. For users, they provide on-demand data using an interface technology, HTTP, that is widely supported in nearly every computing environment and language. These characteristics, combined with human-readable documentation and existing tools, make integration of data access into existing workflows relatively easy. For the DMC, the web services provide an abstraction layer to internal repositories, allowing for concentrated optimization of extraction workflow and easier evolution of those repositories. Lending further support to the DMC's push in this direction, the core web services for station metadata, timeseries data and event parameters were adopted as standards by the International Federation of Digital Seismograph Networks (FDSN). We expect to continue enhancing existing services and building new capabilities for this platform. For example, the DMC has created a federation system and tools allowing researchers to discover and collect seismic data from data centers running the FDSN-standardized services. A future capability
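
    The FDSN-standardized services mentioned here are plain HTTP endpoints, so a request is a few lines in any language; a sketch of a station-metadata query (the parameter names follow the FDSN station web service specification, while the network and station values are merely illustrative):

        import requests

        resp = requests.get(
            "https://service.iris.edu/fdsnws/station/1/query",
            params={"network": "IU", "station": "ANMO",
                    "level": "station", "format": "text"},
            timeout=30)
        resp.raise_for_status()
        print(resp.text)                         # pipe-delimited station metadata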

  10. A database for TMT interface control documents

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Roberts, Scott; Brighton, Allan; Rogers, John

    2016-08-01

    The TMT Software System consists of software components that interact with one another through a software infrastructure called TMT Common Software (CSW). CSW consists of software services and library code that is used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their roles. The use of common component types and shared middleware services allows standardized software interfaces for the components. A software system called the TMT Interface Database System was constructed to support the documentation of the interfaces for components based on CSW. The programmer describes a subsystem and each of its components using JSON-style text files. A command interface file describes each command a component can receive and any commands a component sends. The event interface files describe status, alarms, and events a component publishes and status and events subscribed to by a component. A web application was created to provide a user interface for the required features. Files are ingested into the software system's database. The user interface allows browsing subsystem interfaces, publishing versions of subsystem interfaces, and constructing and publishing interface control documents that consist of the intersection of two subsystem interfaces. All published subsystem interfaces and interface control documents are versioned for configuration control and follow the standard TMT change control processes. Subsystem interfaces and interface control documents can be visualized in the browser or exported as PDF files.
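
    A command interface file of the kind described might look like the following; every field name here is a hypothetical stand-in, since the abstract does not reproduce the actual TMT schema (written as Python for consistency with the other sketches in this collection):

        import json

        # Hypothetical command-interface description for one component.
        command_interface = {
            "subsystem": "TCS",
            "component": "pointing-assembly",
            "receives": [{"name": "setTarget",
                          "parameters": [{"name": "ra", "type": "double"},
                                         {"name": "dec", "type": "double"}]}],
            "sends": ["mount.follow"],
        }

        # One JSON-style text file per component, ready for database ingestion.
        with open("tcs.pointing-assembly.command.json", "w") as f:
            json.dump(command_interface, f, indent=2)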

  11. Web Content Management and One EPA Web Factsheet

    EPA Pesticide Factsheets

    One EPA Web is a multi-year project to improve EPA’s website to better meet the needs of our Web visitors. Content is developed and managed in the WebCMS which supports One EPA Web goals by standardizing how we create and publish content.

  12. StreamStats in North Carolina: a water-resources Web application

    USGS Publications Warehouse

    Weaver, J. Curtis; Terziotti, Silvia; Kolb, Katharine R.; Wagner, Chad R.

    2012-01-01

    A statewide StreamStats application for North Carolina was developed in cooperation with the North Carolina Department of Transportation following completion of a pilot application for the upper French Broad River basin in western North Carolina (Wagner and others, 2009). StreamStats for North Carolina, available at http://water.usgs.gov/osw/streamstats/north_carolina.html, is a Web-based Geographic Information System (GIS) application developed by the U.S. Geological Survey (USGS) in consultation with Environmental Systems Research Institute, Inc. (Esri) to provide access to an assortment of analytical tools that are useful for water-resources planning and management (Ries and others, 2008). The StreamStats application provides an accurate and consistent process that allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection sites and user-selected ungaged sites. In the North Carolina application, users can compute 47 basin characteristics and peak-flow frequency statistics (Weaver and others, 2009; Robbins and Pope, 1996) for a delineated drainage basin. Selected streamflow statistics and basin characteristics for data-collection sites have been compiled from published reports and also are immediately accessible by querying individual sites from the web interface. Examples of basin characteristics that can be computed in StreamStats include drainage area, stream slope, mean annual precipitation, and percentage of forested area (Ries and others, 2008). Examples of streamflow statistics that were previously available only through published documents include peak-flow frequency, flow-duration, and precipitation data. These data are valuable for making decisions related to bridge design, floodplain delineation, water-supply permitting, and sustainable stream quality and ecology. The StreamStats application also allows users to identify stream reaches upstream and downstream from user-selected sites

  13. Duplicate document detection in DocBrowse

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien

    1998-04-01

    Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second algorithm is based on wavelet features extracted from the document image itself, and the third algorithm is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, which is currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. However, in general, the text-based method performs better when the document contains enough high-quality machine-printed text while the image-based method performs better when the document contains little or no quality machine-readable text.
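
    The text-based algorithm is not specified in detail in this abstract; a generic sketch of one standard text-feature approach, word-shingle Jaccard similarity, illustrates the flavor of duplicate detection (the documents and threshold are invented):

        def shingles(text, k=3):
            words = text.lower().split()
            return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        doc1 = "duplicate documents are frequently found in large databases"
        doc2 = "duplicate documents are frequently found in large digital libraries"
        score = jaccard(shingles(doc1), shingles(doc2))
        print(f"near-duplicate: {score > 0.4} (similarity {score:.2f})")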

  14. Leveraging Web 2.0 in the Redesign of a Graduate-Level Technology Integration Course

    ERIC Educational Resources Information Center

    Oliver, Kevin

    2007-01-01

    In the emerging era of the "read-write" web, students can not only research and collect information from existing web resources, but also collaborate and create new information on the web in a surprising number of ways. Web 2.0 is an umbrella term for many individual tools that have been created with web collaboration, sharing, and/or new…

  15. Knowledge representation and management: benefits and challenges of the semantic web for the fields of KRM and NLP.

    PubMed

    Rassinoux, A-M

    2011-01-01

    To summarize excellent current research in the field of knowledge representation and management (KRM). A synopsis of the articles selected for the IMIA Yearbook 2011 is provided, and an attempt is made to highlight the current trends in the field. Over the last decade, with the extension of the text-based web towards a semantically structured web, NLP techniques have experienced renewed interest for knowledge extraction. This trend is corroborated by the five papers selected for the KRM section of the Yearbook 2011. They all depict outstanding studies that exploit NLP technologies wherever possible in order to accurately extract meaningful information from various biomedical textual sources. Bringing semantic structure to the meaningful content of textual web pages affords the user cooperative sharing and intelligent finding of electronic data. As exemplified by the best paper selection, more and more advanced biomedical applications aim at exploiting the meaningful richness of free-text documents in order to generate semantic metadata and, recently, to learn and populate domain ontologies. These latter are becoming a key component, as they portray the semantics of the Semantic Web content. Maintaining their consistency with the documents and semantic annotations that refer to them is a crucial challenge for the Semantic Web in the coming years.

  16. Applying Analogical Reasoning Techniques for Teaching XML Document Querying Skills in Database Classes

    ERIC Educational Resources Information Center

    Mitri, Michel

    2012-01-01

    XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…
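
    Extracting relevant information from an XML document of the kind returned by Web Services takes only a few lines with Python's standard library; the document structure below is invented for illustration:

        import xml.etree.ElementTree as ET

        xml_doc = """<orders>
                       <order id="1"><customer>Ada</customer><total>120.50</total></order>
                       <order id="2"><customer>Grace</customer><total>75.00</total></order>
                     </orders>"""

        root = ET.fromstring(xml_doc)
        # Path-style query: every order's id, customer, and total.
        for order in root.findall("order"):
            print(order.get("id"), order.findtext("customer"), order.findtext("total"))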

  17. Thematic clustering of text documents using an EM-based approach

    PubMed Central

    2012-01-01

    Clustering textual contents is an important step in mining useful information on the web or other text-based resources. The common task in text clustering is to handle text in a multi-dimensional space, and to partition documents into groups, where each group contains documents that are similar to each other. However, this strategy lacks a comprehensive view for humans in general since it cannot explain the main subject of each cluster. Utilizing semantic information can solve this problem, but it needs a well-defined ontology or pre-labeled gold standard set. In this paper, we present a thematic clustering algorithm for text documents. Given text, subject terms are extracted and used for clustering documents in a probabilistic framework. An EM approach is used to ensure documents are assigned to correct subjects, hence it converges to a locally optimal solution. The proposed method is distinctive because its results are sufficiently explanatory for human understanding as well as efficient for clustering performance. The experimental results show that the proposed method provides a competitive performance compared to other state-of-the-art approaches. We also show that the extracted themes from the MEDLINE® dataset represent the subjects of clusters reasonably well. PMID:23046528
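
    A minimal EM loop for soft-assigning documents to subjects under a multinomial mixture conveys the core mechanics (this is a generic formulation; the paper's subject-term extraction step is not reproduced here):

        import numpy as np

        def em_multinomial(X, k, iters=100, seed=0):
            """Soft-cluster documents under a k-component multinomial mixture."""
            rng = np.random.default_rng(seed)
            n, m = X.shape
            pi = np.full(k, 1.0 / k)                    # cluster priors
            theta = rng.dirichlet(np.ones(m), size=k)   # per-cluster term distributions
            for _ in range(iters):
                # E-step: responsibilities r[d, c] from log pi[c] + sum_t X[d, t] log theta[c, t]
                log_r = np.log(pi) + X @ np.log(theta).T
                log_r -= log_r.max(axis=1, keepdims=True)
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: re-estimate priors and (smoothed) term distributions
                pi = r.mean(axis=0) + 1e-12
                theta = r.T @ X + 1.0                   # Laplace smoothing
                theta /= theta.sum(axis=1, keepdims=True)
            return r.argmax(axis=1)

        X = np.array([[5, 0, 1], [4, 1, 0], [0, 6, 2], [1, 5, 3]])  # toy doc-term counts
        print(em_multinomial(X, k=2))               # two thematic groups (labels may swap)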

  18. Evaluating the Informative Quality of Documents in SGML Format from Judgements by Means of Fuzzy Linguistic Techniques Based on Computing with Words.

    ERIC Educational Resources Information Center

    Herrera-Viedma, Enrique; Peis, Eduardo

    2003-01-01

    Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)

  19. Text categorization models for identifying unproven cancer treatments on the web.

    PubMed

    Aphinyanaphongs, Yin; Aliferis, Constantin

    2007-01-01

    The nature of the internet as a non-peer-reviewed (and largely unregulated) publication medium has allowed widespread promotion of inaccurate and unproven medical claims on an unprecedented scale. Patients with conditions that are not currently fully treatable are particularly susceptible to unproven and dangerous promises about miracle treatments. In extreme cases, fatal adverse outcomes have been documented. Most commonly, the costs are financial and psychological, along with delayed application of imperfect but proven scientific modalities. To help protect patients, who may be desperately ill and thus prone to exploitation, we explored the use of machine learning techniques to identify web pages that make unproven claims. This feasibility study shows that the resulting models can identify web pages that make unproven claims in a fully automatic manner, and substantially better than previous web tools and state-of-the-art search engine technology.
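
    The shape of such a page classifier can be sketched with an off-the-shelf text pipeline; the tiny training set below is invented, and the study's actual corpus, features, and learner are not specified in this abstract:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Invented examples: label 1 = page makes unproven claims.
        pages = ["miracle cure guaranteed to eliminate all tumors overnight",
                 "randomized controlled trial of adjuvant chemotherapy outcomes",
                 "secret natural remedy doctors do not want you to know",
                 "systematic review of evidence for radiotherapy efficacy"]
        labels = [1, 0, 1, 0]

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(pages, labels)
        print(model.predict(["this herbal miracle treatment cures cancer"]))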

  20. E-Portfolio Web-based for Students’ Internship Program Activities

    NASA Astrophysics Data System (ADS)

    Juhana, A.; Abdullah, A. G.; Somantri, M.; Aryadi, S.; Zakaria, D.; Amelia, N.; Arasid, W.

    2018-02-01

    Internship program is an important part in vocational education process to improve the quality of competent graduates. The complete work documentation process in electronic portfolio (e-Portfolio) platform will facilitate students in reporting the results of their work to both university and industry supervisor. The purpose of this research is to create a more easily accessed e-Portfolio which is appropriate for students and supervisors’ need in documenting their work and monitoring process. The method used in this research is fundamental research. This research is focused on the implementation of internship e-Portfolio features by demonstrating them to students who have conducted internship program. The result of this research is to create a proper web-based e-Portfolio which can be used to facilitate students in documenting the results of their work and aid supervisors in monitoring process during internship.

  1. Revealing the Cosmic Web-dependent Halo Bias

    NASA Astrophysics Data System (ADS)

    Yang, Xiaohu; Zhang, Youcai; Lu, Tianhuan; Wang, Huiyuan; Shi, Feng; Tweed, Dylan; Li, Shijie; Luo, Wentao; Lu, Yi; Yang, Lei

    2017-10-01

    Halo bias is one of the key ingredients of halo models. At a given redshift it was shown to depend, to first order, only on halo mass. In this study, four types of cosmic web environments—clusters, filaments, sheets, and voids—are defined within a state-of-the-art high-resolution N-body simulation. Within these environments, we use both halo-dark matter cross-correlation and halo-halo autocorrelation functions to probe the clustering properties of halos. The nature of the halo bias differs strongly between the four cosmic web environments described here. With respect to the overall population, halos in clusters have significantly lower biases in the 10^{11.0}-10^{13.5} h^{-1} M_⊙ mass range. In other environments, however, halos show strongly enhanced biases, up to a factor of 10 in voids for halos of mass ~10^{12.0} h^{-1} M_⊙. Such a strong cosmic web environment dependence in the halo bias may play an important role in future cosmological and galaxy formation studies. Within this cosmic web framework, the age dependence of halo bias is found to be significant only in clusters and filaments, for relatively small halos ≲ 10^{12.5} h^{-1} M_⊙.
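
    For reference, the bias probed by these two estimators is conventionally defined from the correlation functions (standard textbook definitions; the authors' exact estimators may differ in detail):

        \[
        b^2(r) = \frac{\xi_{hh}(r)}{\xi_{mm}(r)}, \qquad
        b(r) = \frac{\xi_{hm}(r)}{\xi_{mm}(r)},
        \]

    where \(\xi_{hh}\), \(\xi_{hm}\), and \(\xi_{mm}\) are the halo-halo auto-, halo-matter cross-, and matter-matter autocorrelation functions, respectively.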

  2. Parasites in food webs: the ultimate missing links

    PubMed Central

    Lafferty, Kevin D; Allesina, Stefano; Arim, Matias; Briggs, Cherie J; De Leo, Giulio; Dobson, Andrew P; Dunne, Jennifer A; Johnson, Pieter T J; Kuris, Armand M; Marcogliese, David J; Martinez, Neo D; Memmott, Jane; Marquet, Pablo A; McLaughlin, John P; Mordecai, Erin A; Pascual, Mercedes; Poulin, Robert; Thieltges, David W

    2008-01-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists. PMID:18462196

  3. Parasites in food webs: the ultimate missing links.

    PubMed

    Lafferty, Kevin D; Allesina, Stefano; Arim, Matias; Briggs, Cherie J; De Leo, Giulio; Dobson, Andrew P; Dunne, Jennifer A; Johnson, Pieter T J; Kuris, Armand M; Marcogliese, David J; Martinez, Neo D; Memmott, Jane; Marquet, Pablo A; McLaughlin, John P; Mordecai, Erin A; Pascual, Mercedes; Poulin, Robert; Thieltges, David W

    2008-06-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.

  4. Parasites in food webs: the ultimate missing links

    USGS Publications Warehouse

    Lafferty, Kevin D.; Allesina, Stefano; Arim, Matias; Briggs, Cherie J.; De Leo, Giulio A.; Dobson, Andrew P.; Dunne, Jennifer A.; Johnson, Pieter T.J.; Kuris, Armand M.; Marcogliese, David J.; Martinez, Neo D.; Memmott, Jane; Marquet, Pablo A.; McLaughlin, John P.; Mordecai, Eerin A.; Pascual, Mercedes; Poulin, Robert; Thieltges, David W.

    2008-01-01

    Parasitism is the most common consumer strategy among organisms, yet only recently has there been a call for the inclusion of infectious disease agents in food webs. The value of this effort hinges on whether parasites affect food-web properties. Increasing evidence suggests that parasites have the potential to uniquely alter food-web topology in terms of chain length, connectance and robustness. In addition, parasites might affect food-web stability, interaction strength and energy flow. Food-web structure also affects infectious disease dynamics because parasites depend on the ecological networks in which they live. Empirically, incorporating parasites into food webs is straightforward. We may start with existing food webs and add parasites as nodes, or we may try to build food webs around systems for which we already have a good understanding of infectious processes. In the future, perhaps researchers will add parasites while they construct food webs. Less clear is how food-web theory can accommodate parasites. This is a deep and central problem in theoretical biology and applied mathematics. For instance, is representing parasites with complex life cycles as a single node equivalent to representing other species with ontogenetic niche shifts as a single node? Can parasitism fit into fundamental frameworks such as the niche model? Can we integrate infectious disease models into the emerging field of dynamic food-web modelling? Future progress will benefit from interdisciplinary collaborations between ecologists and infectious disease biologists.

  5. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences field is entering an era of big data with the breakthroughs of science and technology. More and more big data-related projects and activities are being performed in the world. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. To ensure that big data has a major influence in the life sciences, comprehensive data analysis across multiple data sources and even across disciplines is indispensable. The increasing volume of data and the heterogeneous, complex varieties of data are the two principal issues mainly discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web, aiming to provide information for not only humans but also computers to semantically process large-scale data. The paper presents a survey of big data in the life sciences, big data-related projects, and Semantic Web technologies. The paper introduces the main Semantic Web technologies and their current situation, and provides a detailed analysis of how Semantic Web technologies address the heterogeneous variety of life sciences big data. The paper helps readers understand the role of Semantic Web technologies in the big data era and how they provide a promising solution for big data in the life sciences.

  6. Government documents and the online catalog.

    PubMed

    Lynch, F H; Lasater, M C

    1990-01-01

    Prior to planning for implementing the NOTIS system, the Vanderbilt Medical Center Library had not fully cataloged its government publications, and records for these materials were not in machine-readable format. A decision was made that patrons should need to look in only one place for all library materials, including the Health and Human Services Department publications received each year from the central library's Government Documents Unit. Beginning in 1985, these publications were added to the library's database, and the entire 7,200-piece collection is now in the online catalog. Working with these publications has taught the library much about the advantages and disadvantages of cataloging government documents in an online environment. It was found that OCLC cataloging copy is eventually available for most titles, although only about 10% of the records have MeSH headings. Staff time is the major expenditure; problems are caused by documents' irregular nature, frequent format changes, and difficult authority work. Since their addition to the online catalog, documents are used more and the library has better control.

  7. Government documents and the online catalog.

    PubMed Central

    Lynch, F H; Lasater, M C

    1990-01-01

    Prior to planning for implementing the NOTIS system, the Vanderbilt Medical Center Library had not fully cataloged its government publications, and records for these materials were not in machine-readable format. A decision was made that patrons should need to look in only one place for all library materials, including the Health and Human Services Department publications received each year from the central library's Government Documents Unit. Beginning in 1985, these publications were added to the library's database, and the entire 7,200-piece collection is now in the online catalog. Working with these publications has taught the library much about the advantages and disadvantages of cataloging government documents in an online environment. It was found that OCLC cataloging copy is eventually available for most titles, although only about 10% of the records have MeSH headings. Staff time is the major expenditure; problems are caused by documents' irregular nature, frequent format changes, and difficult authority work. Since their addition to the online catalog, documents are used more and the library has better control. PMID:2295010

  8. Enriching a document collection by integrating information extraction and PDF annotation

    NASA Astrophysics Data System (ADS)

    Powley, Brett; Dale, Robert; Anisimoff, Ilya

    2009-01-01

    Modern digital libraries offer all the hyperlinking possibilities of the World Wide Web: when a reader finds a citation of interest, in many cases she can now click on a link to be taken to the cited work. This paper presents work aimed at providing the same ease of navigation for legacy PDF document collections that were created before the possibility of integrating hyperlinks into documents was ever considered. To achieve our goal, we need to carry out two tasks: first, we need to identify and link citations and references in the text with high reliability; and second, we need the ability to determine physical PDF page locations for these elements. We demonstrate the use of a high-accuracy citation extraction algorithm which significantly improves on earlier reported techniques, and a technique for integrating PDF processing with a conventional text-stream based information extraction pipeline. We demonstrate these techniques in the context of a particular document collection, this being the ACL Anthology; but the same approach can be applied to other document sets.
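
    The first task, spotting in-text citations, is often bootstrapped with pattern matching; a toy sketch far simpler than the paper's high-accuracy algorithm (the patterns and sample text are illustrative):

        import re

        TEXT = ("This improves on earlier techniques (Powley and Dale, 2007) "
                "and extends Councill et al. (2005).")

        # Very rough patterns for parenthetical and textual author-year citations.
        CITATION = re.compile(
            r"\(([A-Z][a-z]+(?: (?:and|et al\.) [A-Z][a-z]+)?),? (\d{4})\)"
            r"|([A-Z][a-z]+ et al\.) \((\d{4})\)")

        for m in CITATION.finditer(TEXT):
            print([g for g in m.groups() if g])   # [author(s), year] per citation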

  9. UnCover on the Web: search hints and applications in library environments.

    PubMed

    Galpern, N F; Albert, K M

    1997-01-01

    Among the huge maze of resources available on the Internet, UnCoverWeb stands out as a valuable tool for medical libraries. This up-to-date, free-access, multidisciplinary database of periodical references is searched through an easy-to-learn graphical user interface that is a welcome improvement over the telnet version. This article reviews the basic and advanced search techniques for UnCoverWeb, as well as providing information on the document delivery functions and table of contents alerting service called Reveal. UnCover's currency is evaluated and compared with other current awareness resources. System deficiencies are discussed, with the conclusion that although UnCoverWeb lacks the sophisticated features of many commercial database search services, it is nonetheless a useful addition to the repertoire of information sources available in a library.

  10. Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.

    PubMed

    Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin

    2012-01-01

    Semantic interoperability is an unsolved problem that occurs while working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool which enables a domain expert to perform semi-automated coding of ODM files. For each item it is possible to query web services that return unique concept codes, without leaving the context of the document. Although fully automated coding was not feasible, we implemented a dialog-based method to perform efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies.
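
    The per-item lookup pattern can be sketched as follows; the endpoint URL, parameters, and response shape are entirely hypothetical stand-ins for whatever terminology services such a tool queries:

        import requests

        def suggest_codes(item_label):
            """Ask a (hypothetical) terminology web service for concept codes."""
            resp = requests.get("https://terminology.example.org/search",
                                params={"term": item_label}, timeout=10)
            resp.raise_for_status()
            return resp.json()["concepts"]       # assumed response field

        # One ODM item; a domain expert would pick from the returned candidates.
        for concept in suggest_codes("systolic blood pressure"):
            print(concept["code"], concept["label"])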

  11. Does teaching of documentation of shoulder dystocia delivery through simulation result in improved documentation in real life?

    PubMed

    Comeau, Robyn; Craig, Catherine

    2014-03-01

    Documentation of deliveries complicated by shoulder dystocia is a valuable communication skill necessary for residents to attain during residency training. Our objective was to determine whether the teaching of documentation of shoulder dystocia in a simulation environment would translate to improved documentation of the event in an actual clinical situation. We conducted a cohort study involving obstetrics and gynaecology residents in years 2 to 5 between November 2010 and December 2012. Each resident participated in a shoulder dystocia simulation teaching session and was asked to write a delivery note immediately afterwards. They were given feedback regarding their performance of the delivery and their documentation of the events. Following this, dictated records of shoulder dystocia deliveries immediately before and after the simulation session were identified through the Meditech system. An itemized checklist was used to assess the quality of residents' dictated documentation before and after the simulation session. All 18 eligible residents enrolled in the study, and 17 met the inclusion criteria. For 10 residents (59%), documentation of a delivery with shoulder dystocia was present both before and after the simulation session; for five residents (29%) it was present only before the session, and for two residents (12%) it was present only after the session. When residents were assessed as a group, there were no differences in the proportion of residents recording items on the checklist before and after the simulation session (P > 0.05 for all). Similarly, analysis of the performance of the 10 residents who had dictated documentation both before and after the session showed no differences in the number of elements recorded on dictations done before and after the simulation session (P > 0.05 for all). The teaching of shoulder dystocia documentation through simulation did not result in a measurable improvement in the quality of documentation of shoulder dystocia in

  12. A Relationship-Building Model for the Web Retail Marketplace.

    ERIC Educational Resources Information Center

    Wang, Fang; Head, Milena; Archer, Norm

    2000-01-01

    Discusses the effects of the Web on marketing practices. Introduces the concept and theory of relationship marketing. The relationship network concept, which typically is only applied to the business-to-business market, is discussed within the business-to-consumer market, and a new relationship-building model for the Web marketplace is proposed.…

  13. Establishing Transportation Framework Services Using the Open Geospatial Consortium Web Feature Service Specification

    NASA Astrophysics Data System (ADS)

    Yang, C.; Wong, D. W.; Phillips, T.; Wright, R. A.; Lindsey, S.; Kafatos, M.

    2005-12-01

    As a teamed partnership of the Center for Earth Observing and Space Research (CEOSR) at George Mason University (GMU), the Virginia Department of Transportation (VDOT), the Bureau of Transportation Statistics at the Department of Transportation (BTS/DOT), and Intergraph, we established Transportation Framework Data Services using the Open Geospatial Consortium (OGC) Web Feature Service (WFS) Specification to enable the sharing of transportation data at the federal level (with data from BTS/DOT), the state level (through VDOT), and in industry (through Intergraph). CEOSR develops WFS solutions using Intergraph software. Relevant technical documents are also developed and disseminated through the partners. The WFS is integrated with operational geospatial systems at CEOSR and VDOT. CEOSR works with Intergraph on developing WFS solutions and technical documents. The GeoMedia WebMap WFS toolkit is used, with software and technical support from Intergraph. The ESRI ArcIMS WFS connector is used with GMU's campus license of ESRI products. Tested solutions are integrated with framework data service operational systems, including 1) CEOSR's interoperable geospatial information services, FGDC clearinghouse Node, Geospatial One Stop (GOS) portal, and WMS services, 2) VDOT's state transportation data and GIS infrastructure, and 3) BTS/DOT's national transportation data. The project: 1) develops and deploys an operational OGC WFS 1.1 interface at CEOSR for registering with the FGDC/GOS Portal and responding to Web "POST" requests for transportation Framework data as listed in Table 1; 2) builds the WFS service so that it can return data conforming to the drafted ANSI/INCITS L1 Standard (when available) for each identified theme in the format given by OGC Geography Markup Language (GML) Version 3.0 or higher; 3) integrates the OGC WFS with CEOSR's clearinghouse nodes; and 4) establishes a formal partnership to develop and share WFS-based geospatial interoperability technology among GMU, VDOT, BTS
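
    A WFS GetFeature call is an ordinary HTTP request with OGC-defined parameters; a sketch against a hypothetical endpoint (the parameter names follow the WFS 1.1 specification, while the URL and type name do not refer to a real service):

        import requests

        resp = requests.get("https://gis.example.org/wfs", params={
            "service": "WFS",
            "version": "1.1.0",
            "request": "GetFeature",
            "typeName": "framework:roads",       # hypothetical transportation layer
            "maxFeatures": "10",
        }, timeout=60)
        resp.raise_for_status()
        print(resp.text[:500])                   # GML (XML) feature collection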

  14. 17 CFR 232.401 - XBRL-Related Document submissions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS Xbrl-Related Documents § 232.401 XBRL-Related Document submissions. (a) Only an electronic filer that is an investment company registered under... XBRL-Related Documents relate; or, if the electronic filer is eligible to file a Form 8-K (§ 249.308 of...

  15. 17 CFR 232.401 - XBRL-Related Document submissions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS Xbrl-Related Documents § 232.401 XBRL-Related Document submissions. (a) Only an electronic filer that is an investment company registered under... XBRL-Related Documents relate; or, if the electronic filer is eligible to file a Form 8-K (§ 249.308 of...

  16. 17 CFR 232.401 - XBRL-Related Document submissions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS Xbrl-Related Documents § 232.401 XBRL-Related Document submissions. (a) Only an electronic filer that is an investment company registered under... XBRL-Related Documents relate; or, if the electronic filer is eligible to file a Form 8-K (§ 249.308 of...

  17. 17 CFR 232.401 - XBRL-Related Document submissions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS Xbrl-Related Documents § 232.401 XBRL-Related Document submissions. (a) Only an electronic filer that is an investment company registered under... XBRL-Related Documents relate; or, if the electronic filer is eligible to file a Form 8-K (§ 249.308 of...

  18. 17 CFR 232.401 - XBRL-Related Document submissions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS Xbrl-Related Documents § 232.401 XBRL-Related Document submissions. (a) Only an electronic filer that is an investment company registered under... XBRL-Related Documents relate; or, if the electronic filer is eligible to file a Form 8-K (§ 249.308 of...

  19. The Food Web of Potter Cove (Antarctica): complexity, structure and function

    NASA Astrophysics Data System (ADS)

    Marina, Tomás I.; Salinas, Vanesa; Cordone, Georgina; Campana, Gabriela; Moreira, Eugenia; Deregibus, Dolores; Torre, Luciana; Sahade, Ricardo; Tatián, Marcos; Barrera Oro, Esteban; De Troch, Marleen; Doyle, Santiago; Quartino, María Liliana; Saravia, Leonardo A.; Momo, Fernando R.

    2018-01-01

    Knowledge of food web structure and complexity is central to a better understanding of ecosystem functioning. A food-web approach includes both species and the energy flows among them, providing a natural framework for characterizing species' ecological roles and the mechanisms through which biodiversity influences ecosystem dynamics. Here we present for the first time a high-resolution food web for a marine ecosystem at Potter Cove (northern Antarctic Peninsula). Eleven food web properties were analyzed in order to document network complexity, structure and topology. We found a low linkage density (3.4), connectance (0.04) and omnivory percentage (45), as well as a short path length (1.8) and a low clustering coefficient (0.08). Furthermore, relating the structure of the food web to its dynamics, an exponential degree distribution (in- and out-links) was found. This suggests that the Potter Cove food web may be vulnerable if the most connected species become locally extinct. For two of the three most connected functional groups, competition overlap graphs imply strong trophic interaction among demersal fish and niche specialization according to feeding strategies in amphipods. On the other hand, the prey overlap graph also shows that multiple energy pathways of carbon flux exist across benthic and pelagic habitats in the Potter Cove ecosystem. Although alternative food sources might add robustness to the web, network properties (low linkage density, connectance and omnivory) suggest fragility and potential trophic cascade effects.
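
    Several of the reported properties follow directly from the web's directed graph of S species and L trophic links; a sketch with networkx on a toy web (the species and links are invented, and clustering and path length are computed on the undirected projection here, which may differ from the authors' exact conventions):

        import networkx as nx

        # Toy directed food web: an edge (prey, predator) is one trophic link.
        web = nx.DiGraph([("algae", "amphipod"), ("algae", "limpet"),
                          ("amphipod", "fish"), ("limpet", "fish")])

        S, L = web.number_of_nodes(), web.number_of_edges()
        undirected = web.to_undirected()
        print("linkage density L/S:", L / S)                 # 1.0 for this toy web
        print("connectance L/S^2:", L / S ** 2)              # 0.25 for this toy web
        print("clustering coefficient:", nx.average_clustering(undirected))
        print("characteristic path length:",
              nx.average_shortest_path_length(undirected))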

  20. Using Web Server Logs in Evaluating Instructional Web Sites.

    ERIC Educational Resources Information Center

    Ingram, Albert L.

    2000-01-01

    Web server logs contain a great deal of information about who uses a Web site and how they use it. This article discusses the analysis of Web logs for instructional Web sites; reviews the data stored in most Web server logs; demonstrates what further information can be gleaned from the logs; and discusses analyzing that information for the…
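
    As a minimal sketch of the kind of analysis the article discusses, the fragment below parses NCSA Common Log Format lines and tallies the most-requested pages; the log file name and format are assumptions, not taken from the article.

        import re
        from collections import Counter

        # NCSA Common Log Format: host ident user [time] "request" status bytes
        CLF = re.compile(
            r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
            r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
        )

        hits = Counter()
        with open("access.log") as log:          # hypothetical log file
            for line in log:
                m = CLF.match(line)
                if m and m.group("status").startswith("2"):
                    hits[m.group("path")] += 1

        for path, n in hits.most_common(10):     # ten most-requested pages
            print(n, path)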

  1. Available, intuitive and free! Building e-learning modules using web 2.0 services.

    PubMed

    Tam, Chun Wah Michael; Eastwood, Anne

    2012-01-01

    E-learning is part of the mainstream in medical education and often provides the most efficient and effective means of engaging learners in a particular topic. However, translating design and content ideas into a useable product can be technically challenging, especially in the absence of information technology (IT) support. There is little published literature on the use of web 2.0 services to build e-learning activities. To describe the web 2.0 tools and solutions employed to build the GP Synergy evidence-based medicine and critical appraisal online course. We used and integrated a number of free web 2.0 services including: Prezi, a web-based presentation platform; YouTube, a video sharing service; Google Docs, an online document platform; Tiny.cc, a URL shortening service; and Wordpress, a blogging platform. The course, consisting of five multimedia-rich, tutorial-like modules, was built without IT specialist assistance or specialised software. The web 2.0 services used were free. The course can be accessed with a modern web browser. Modern web 2.0 services remove many of the technical barriers for creating and sharing content on the internet. When used synergistically, these services can be a flexible and low-cost platform for building e-learning activities. They were a pragmatic solution in our context.

  2. Web accessibility and open source software.

    PubMed

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration are complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of the Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve the accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse project called the Accessibility Tools Framework (ACTF), which aims to develop an extensible infrastructure upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  3. How To Succeed in Promoting Your Web Site: The Impact of Search Engine Registration on Retrieval of a World Wide Web Site.

    ERIC Educational Resources Information Center

    Tunender, Heather; Ervin, Jane

    1998-01-01

    Character strings were planted in a World Wide Web site (Project Whistlestop) to test indexing and retrieval rates of five Web search tools (Lycos, Infoseek, AltaVista, Yahoo, Excite). It was found that search tools indexed few of the planted character strings, none indexed the META descriptor tag, and only Excite indexed into the 3rd-4th site…

  4. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    PubMed

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    Bioinformatics web-based services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services alongside locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  5. Geomorphology and the World Wide Web

    NASA Astrophysics Data System (ADS)

    Shroder, John F.; Bishop, Michael P.; Olsenholler, Jeffrey; Craiger, J. Philip

    2002-10-01

    The Internet and the World Wide Web have brought many dimensions of new technology to education and research in geomorphology. As with other disciplines on the Web, Web-based geomorphology has become an eclectic mix of whatever material an individual deems worthy of presentation, and in many cases is without quality control. Nevertheless, new electronic media can facilitate education and research in geomorphology. For example, virtual field trips can be developed and accessed to reinforce concepts in class. Techniques for evaluating Internet references help students to write traditional term papers, but professional presentations can also involve student papers that are published on the Web. Faculty can also address plagiarism issues by using search engines. Because of the lack of peer review of much of the content on the Web, care must be exercised in using it for reference searches. Today, however, refereed journals are going online and can be accessed through subscription or payment per article viewed. Library reference desks regularly use the Web for searches of refereed articles. Research on the Web ranges from communication between investigators, data acquisition, scientific visualization, and comprehensive searches of refereed sources to interactive analyses of remote data sets. The Nanga Parbat and the Global Land Ice Measurements from Space (GLIMS) projects are two examples of geomorphologic research that are achieving full potential through use of the Web. Teaching and research in geomorphology are undergoing a beneficial, but sometimes problematic, transition with the new technology. The learning curve is steep for some users, but the view from the top is bright. Geomorphology can only prosper from the benefits offered by computer technologies.

  6. Sharing on Web 3d Models of Ancient Theatres. a Methodological Workflow

    NASA Astrophysics Data System (ADS)

    Scianna, A.; La Guardia, M.; Scaduto, M. L.

    2016-06-01

    In the last few years, the need to share knowledge of Cultural Heritage (CH) on the Web through navigable 3D models has increased. This need requires the availability of Web-based virtual reality systems and 3D WEBGIS. In order to make the information available to all stakeholders, these instruments should be powerful and at the same time very user-friendly. However, research and experiments carried out so far show that a standardized methodology doesn't exist. This is due both to the complexity and size of the geometric models to be published, on the one hand, and to the excessive costs of hardware and software tools, on the other. In light of this background, the paper describes a methodological approach for creating 3D models of CH, freely exportable on the Web, based on HTML5 and free and open source software. HTML5, supporting the WebGL standard, allows the exploration of 3D spatial models using the most widely used Web browsers, such as Chrome, Firefox, Safari and Internet Explorer. The methodological workflow described here has been tested in the construction of a multimedia geo-spatial platform developed for the three-dimensional exploration and documentation of the ancient theatres of Segesta and Carthage and their surrounding landscapes. The experimental application has allowed us to explore the potential and limitations of sharing 3D CH models on the Web based on the WebGL standard. Sharing capabilities could be extended by defining suitable geospatial Web services based on the capabilities of HTML5 and WebGL technology.

  7. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    NASA Astrophysics Data System (ADS)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but several problems remain. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to score web pages, to score the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to improve the quality of the summary results automatically.
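
    As a minimal sketch of the graph-ranking idea, the fragment below builds a sentence-similarity graph and scores its nodes with networkx implementations of PageRank (as in TextRank) and HITS; the naive word-overlap measure and the sentences are illustrative assumptions, not the paper's method.

        import networkx as nx

        def overlap(a, b):
            wa, wb = set(a.lower().split()), set(b.lower().split())
            return len(wa & wb) / (1 + min(len(wa), len(wb)))

        sentences = [
            "The system summarizes a single text document.",
            "A document index graph links words across sentences.",
            "Graph ranking scores select the summary sentences.",
        ]

        g = nx.Graph()
        for i, si in enumerate(sentences):
            for j in range(i + 1, len(sentences)):
                w = overlap(si, sentences[j])
                if w > 0:
                    g.add_edge(i, j, weight=w)

        pr = nx.pagerank(g, weight="weight")   # TextRank-style scores
        hubs, auth = nx.hits(nx.DiGraph(g))    # HITS on the directed version
        best = max(pr, key=pr.get)             # highest-ranked sentence
        print("top sentence:", sentences[best])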

  8. Wind speed affects prey-catching behaviour in an orb web spider.

    PubMed

    Turner, Joe; Vollrath, Fritz; Hesselberg, Thomas

    2011-12-01

    Wind has previously been shown to influence the location and orientation of spider web sites and also the geometry and material composition of constructed orb webs. We now show that wind also influences components of prey-catching behaviour within the web. A small wind tunnel was used to generate different wind speeds. Araneus diadematus ran more slowly towards entangled Drosophila melanogaster in windy conditions, which took less time to escape the web. This indicates a lower capture probability and a diminished overall predation efficiency for spiders at higher wind speeds. We conclude that spiders' behaviour of taking down their webs as wind speed increases may therefore not be a response only to possible web damage.

  9. Wind speed affects prey-catching behaviour in an orb web spider

    NASA Astrophysics Data System (ADS)

    Turner, Joe; Vollrath, Fritz; Hesselberg, Thomas

    2011-12-01

    Wind has previously been shown to influence the location and orientation of spider web sites and also the geometry and material composition of constructed orb webs. We now show that wind also influences components of prey-catching behaviour within the web. A small wind tunnel was used to generate different wind speeds. Araneus diadematus ran more slowly towards entangled Drosophila melanogaster in windy conditions, which took less time to escape the web. This indicates a lower capture probability and a diminished overall predation efficiency for spiders at higher wind speeds. We conclude that spiders' behaviour of taking down their webs as wind speed increases may therefore not be a response only to possible web damage.

  10. 77 FR 70449 - Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-26

    ... of guidance documents that the Center for Devices and Radiological Health (CDRH) is intending to... notice announces the Web site location of the two lists of guidance documents which CDRH is intending to... list. FDA and CDRH priorities are subject to change at any time. Topics on this and past guidance...

  11. 76 FR 61367 - Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-04

    ... the Agency will post a list of guidance documents the Center for Devices and Radiological Health (CDRH... guidance documents that CDRH is considering for development and providing stakeholders an opportunity to.... This notice announces the Web site location of the list of guidances on which CDRH is intending to work...

  12. Electronic document management systems: an overview.

    PubMed

    Kohn, Deborah

    2002-08-01

    For over a decade, most health care information technology (IT) professionals erroneously learned that document imaging, which is one of the many component technologies of an electronic document management system (EDMS), is the only technology of an EDMS. In addition, many health care IT professionals erroneously believed that EDMSs have either a limited role or no place in IT environments. As a result, most health care IT professionals do not understand documents and unstructured data and their value as structured data partners in most aspects of transaction and information processing systems.

  13. Evolution of health web certification through the HONcode experience.

    PubMed

    Boyer, Célia; Baujard, Vincent; Geissbuhler, Antoine

    2011-01-01

    Today, the Web is a medium of ever-increasing pervasiveness around the world. Its use is constantly growing and the medical field is no exception. With this large amount of information, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly relevant in the medical area, which has a direct impact on the well-being of citizens, and in the Web 2.0 context, where information publishing is easier than ever. To address the quality of the medical Internet, the HONcode certification proposed by the Health On the Net Foundation (HON) is certainly the most successful initiative. The aims of this paper are to present certification activity through the HONcode experience and to show that certification is more complex than a simple code of conduct. Therefore, we first present the HONcode, its application and its current evolutions. Following that, we give some quantitative results and describe how the final user can access the certified information.

  14. Do sign language videos improve Web navigation for Deaf Signer users?

    PubMed

    Fajardo, Inmaculada; Parra, Elena; Cañas, José J

    2010-01-01

    The efficacy of video-based sign language (SL) navigation aids to improve Web search for Deaf Signers was tested by two experiments. Experiment 1 compared 2 navigation aids based on text hyperlinks linked to embedded SL videos, which differed in the spatial contiguity between the text hyperlink and SL video (contiguous vs. distant). Deaf Signers' performance was similar in Web search using both aids, but a positive correlation between their word categorization abilities and search efficiency appeared in the distant condition. In Experiment 2, the contiguous condition was compared with a text-only hyperlink condition. Deaf Signers became less disorientated (used shorter paths to find the target) in the text plus SL condition than in the text-only condition. In addition, the positive correlation between word categorization abilities and search only appeared in the text-only condition. These findings suggest that SL videos added to text hyperlinks improve Web search efficiency for Deaf Signers.

  15. The design and implementation of web mining in web sites security

    NASA Astrophysics Data System (ADS)

    Li, Jian; Zhang, Guo-Yin; Gu, Guo-Chang; Li, Jian-Li

    2003-06-01

    Backdoors and information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data, enhancing server security and avoiding the damage of illegal access. Firstly, a system for discovering patterns of information leakage in CGI scripts from Web log data is proposed. Secondly, those patterns are provided to system administrators so they can modify their code and enhance Web site security. Two aspects are described: the first combines the Web application log with the Web log to extract more information, so that Web data mining can discover information that a firewall or intrusion detection system cannot find; the second proposes an operational module for the Web site to enhance its security. For clustering server sessions, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
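
    For the session-clustering step, a minimal sketch using scikit-learn's DBSCAN on per-session features is shown below; the feature choice (request count, error ratio, bytes sent) and values are assumptions for illustration, not the paper's data.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # One row per server session: request count, 4xx/5xx ratio, kbytes sent.
        sessions = np.array([
            [12, 0.00, 340], [15, 0.07, 410], [11, 0.00, 300],
            [14, 0.05, 380], [210, 0.62, 90],   # last row: suspicious outlier
        ], dtype=float)

        # Standardize so no single feature dominates the distance metric.
        scaled = (sessions - sessions.mean(axis=0)) / sessions.std(axis=0)
        labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(scaled)
        print(labels)   # label -1 marks noise, i.e. sessions worth inspecting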

  16. Web-based remote monitoring of infant incubators in the ICU.

    PubMed

    Shin, D I; Huh, S J; Lee, T S; Kim, I Y

    2003-09-01

    A web-based real-time operating, management, and monitoring system for checking temperature and humidity within infant incubators using the Intranet has been developed and installed in the infant Intensive Care Unit (ICU). We have created a pilot system which has a temperature and humidity sensor and a measuring module in each incubator, which is connected to a web-server board via an RS485 port. The system transmits signals using standard web-based TCP/IP so that users can access the system from any Internet-connected personal computer in the hospital. Using this method, the system gathers temperature and humidity data transmitted from the measuring modules via the RS485 port on the web-server board and creates a web document containing these data. The system manager can maintain centralized supervisory monitoring of the situations in all incubators while sitting within the infant ICU at a work space equipped with a personal computer. The system can be set to monitor unusual circumstances and to emit an alarm signal expressed as a sound or a light on a measuring module connected to the related incubator. If the system is configured with a large number of incubators connected to a centralized supervisory monitoring station, it will improve convenience and assure meaningful improvement in response to incidents that require intervention.
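
    As a minimal sketch of the serving side of such a system, the fragment below exposes the latest readings as a web document over HTTP; the read_sensors() stub stands in for the hardware-specific RS485 measuring modules, and the port and field names are hypothetical.

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        def read_sensors():
            # Hypothetical stand-in for polling incubator modules over RS485.
            return {"incubator_1": {"temp_c": 36.5, "humidity_pct": 60.2}}

        class Monitor(BaseHTTPRequestHandler):
            def do_GET(self):
                body = json.dumps(read_sensors()).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("", 8080), Monitor).serve_forever()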

  17. Web Mining: Machine Learning for Web Applications.

    ERIC Educational Resources Information Center

    Chen, Hsinchun; Chau, Michael

    2004-01-01

    Presents an overview of machine learning research and reviews methods used for evaluating machine learning systems. Ways that machine-learning algorithms were used in traditional information retrieval systems in the "pre-Web" era are described, and the field of Web mining and how machine learning has been used in different Web mining…

  18. Intelligent web image retrieval system

    NASA Astrophysics Data System (ADS)

    Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook

    2001-07-01

    Recently, web sites such as e-business and shopping mall sites have come to deal with large amounts of image information. To find a specific image from these image sources, we usually use web search engines or image database engines, which rely on keyword-only retrieval or color-based retrieval with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, the texture- and color-based image classification and indexing techniques, and representation schemes for user usage patterns. A query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of users' preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, experimental results showing recall and precision are also presented.
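
    As a minimal sketch of the positional color-block matching described above, the fragment below reduces each image to a grid of mean block colors and ranks catalog images by distance to the query's blocks; the grid size and random data are illustrative assumptions.

        import numpy as np

        def block_colors(img, grid=4):
            # Mean RGB color of each cell in a grid x grid partition.
            h, w, _ = img.shape
            bh, bw = h // grid, w // grid
            return np.array([
                img[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean(axis=(0, 1))
                for r in range(grid) for c in range(grid)
            ])

        rng = np.random.default_rng(1)
        catalog = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
        index = [block_colors(img) for img in catalog]

        query = catalog[2]              # pretend this is the user's query
        dists = [np.linalg.norm(block_colors(query) - sig) for sig in index]
        print("best match: image", int(np.argmin(dists)))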

  19. Paper and Other Web Coating National Emission Standards for Hazardous Air Pollutants (NESHAP) Questions and Answers

    EPA Pesticide Factsheets

    This May 2003 document contains questions and answers on the Paper and Web Coating National Emission Standards for Hazardous Air Pollutants (NESHAP) regulation. The questions cover topics such as compliance, applicability, and initial notification.

  20. Semantic Document Library: A Virtual Research Environment for Documents, Data and Workflows Sharing

    NASA Astrophysics Data System (ADS)

    Kotwani, K.; Liu, Y.; Myers, J.; Futrelle, J.

    2008-12-01

    The Semantic Document Library (SDL) was driven by use cases from the environmental observatory communities and is designed to provide conventional document repository features of uploading, downloading, editing and versioning of documents as well as value-adding features of tagging, querying, sharing, annotating, ranking, provenance, social networking and geo-spatial mapping services. It allows users to organize a catalogue of watershed observation data, model output and workflows, as well as publications and documents related to the same watershed study, through the tagging capability. Users can tag all relevant materials using the same watershed name and easily find all of them later using this tag. The underpinning semantic content repository can store materials from other cyberenvironments such as workflow or simulation tools, and SDL provides an effective interface to query and organize materials from various sources. Advanced features of the SDL allow users to visualize the provenance of the materials, such as the source and how the output data were derived. Other novel features include visualizing all geo-referenced materials on a geospatial map. SDL, as a component of a cyberenvironment portal (the NCSA Cybercollaboratory), has the goal of efficient management of information and relationships between published artifacts (validated models, vetted data, workflows, annotations, best practices, reviews and papers) produced from raw research artifacts (data, notes, plans etc.) through agents (people, sensors etc.). The tremendous scientific potential of artifacts is achieved through mechanisms of sharing, reuse and collaboration - empowering scientists to spread their knowledge and protocols and to benefit from the knowledge of others. SDL successfully implements web 2.0 technologies and design patterns along with a semantic content management approach that enables the use of multiple ontologies and dynamic evolution (e.g. folksonomies) of terminology. Scientific documents involved with

  1. Affordances of students' using the World Wide Web as a publishing medium in project-based learning environments

    NASA Astrophysics Data System (ADS)

    Bos, Nathan Daniel

    This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curricula. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas, (2) hypermedia design can demand high levels of analysis and synthesis of science content, (3) students can learn to think about science content representation through engagement with challenging design tasks, and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance. Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing content of Web documents, identifying their publishing

  2. Dealing with extreme data diversity: extraction and fusion from the growing types of document formats

    NASA Astrophysics Data System (ADS)

    David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro

    2015-05-01

    The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in terms of file formats, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery using a feature set based on the geometry and appearance of images of documents achieves a 60% greater F1-score than a baseline random classifier.
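
    As a minimal sketch of a classifier of this kind, the fragment below trains on bounding-box geometry features and reports an F1-score; the features, labels and data are invented for illustration (and it evaluates on its own training data purely to show the API), so it demonstrates the shape of the approach, not the authors' model.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score

        # Features per page region: x, y, width, height, mean ink density.
        X = np.array([
            [0.1, 0.02, 0.8, 0.05, 0.9],   # wide box near top    -> title
            [0.1, 0.10, 0.4, 0.03, 0.7],   # short box            -> heading
            [0.1, 0.15, 0.8, 0.60, 0.4],   # large box            -> body
            [0.1, 0.95, 0.8, 0.03, 0.2],   # thin box near bottom -> footer
        ] * 10)                            # replicated toy training set
        y = ["title", "heading", "body", "footer"] * 10

        clf = RandomForestClassifier(n_estimators=50).fit(X, y)
        print(f1_score(y, clf.predict(X), average="macro"))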

  3. Combining infobuttons and semantic web rules for identifying patterns and delivering highly-personalized education materials.

    PubMed

    Hulse, Nathan C; Long, Jie; Tao, Cui

    2013-01-01

    Infobuttons have been established to be an effective resource for addressing information needs at the point of care, as evidenced by recent research and their inclusion in government-based electronic health record incentive programs in the United States. Yet they have seen wide success only in a specific set of domains (lab data, medication orders, and problem lists) and only for discrete, singular concepts that are already documented in the electronic medical record. In this manuscript, we present an effort to broaden their utility by connecting a semantic web-based phenotyping engine with an infobutton framework in order to identify and address broader issues in patient data, derived from multiple data sources. We have tested these patterns by defining and testing semantic definitions of pre-diabetes and metabolic syndrome. We intend to carry forward relevant information to the infobutton framework to present timely, relevant education resources to patients and providers.

  4. Cosmic Web of Galaxies in the COSMOS Field

    NASA Astrophysics Data System (ADS)

    Darvish, Behnam; Martin, Christopher D.; Mobasher, Bahram; Scoville, Nicholas; Sobral, David; COSMOS science Team

    2017-01-01

    We use a mass-complete sample of galaxies with accurate photometric redshifts in the COSMOS field to estimate the density field and to extract the components of the cosmic web. The cosmic web extraction algorithm relies on the signs and ratios of the eigenvalues of the Hessian matrix and is able to segment the density field into clusters, filaments and the field. We show that at z < 0.8, the median star-formation rate in the cosmic web gradually declines from the field to clusters, and this decline is especially sharp for satellite galaxies (~1 dex vs. ~0.4 dex for centrals). However, at z > 0.8, the trend flattens out. For star-forming galaxies only, the median star-formation rate declines by ~0.3-0.4 dex from the field to clusters for both satellites and centrals, only at z < 0.5. We argue that for satellite galaxies, the main role of the cosmic web environment is to control their star-forming/quiescent fraction, whereas for centrals, it is mainly to control their overall star-formation rate. Given these results, we suggest that most satellite galaxies experience a rapid quenching mechanism as they fall from the field into clusters through the channel of filaments, whereas for central galaxies quenching is mostly due to a slow process. Our preliminary results highlight the importance of the large-scale cosmic web for the evolution of galaxies.
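
    As a minimal sketch of Hessian-based classification as the abstract describes it, the fragment below smooths a toy density field, forms the Hessian at each cell, and counts negative eigenvalues to label knots, filaments, sheets and voids; the smoothing scale, zero threshold and random field are assumptions, not the authors' pipeline.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)
        density = ndimage.gaussian_filter(rng.random((32, 32, 32)), sigma=2.0)

        # Second derivatives of the smoothed field along each pair of axes.
        grads = np.gradient(density)
        hessian = np.empty(density.shape + (3, 3))
        for i in range(3):
            for j, gij in enumerate(np.gradient(grads[i])):
                hessian[..., i, j] = gij

        eigvals = np.linalg.eigvalsh(hessian)   # per-cell, sorted ascending
        n_neg = (eigvals < 0).sum(axis=-1)      # 3: knot, 2: filament,
                                                # 1: sheet, 0: void
        print(np.bincount(n_neg.ravel(), minlength=4))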

  5. [Consensus document for the detection and management of chronic kidney disease].

    PubMed

    Martínez-Castelao, Alberto; Górriz, José L; Bover, Jordi; Segura-de la Morena, Julián; Cebollada, Jesús; Escalada, Javier; Esmatjes, Enric; Fácila, Lorenzo; Gamarra, Javier; Gràcia, Silvia; Hernández-Moreno, Julio; Llisterri-Caro, José L; Mazón, Pilar; Montañés, Rosario; Morales-Olivas, Francisco; Muñoz-Torres, Manuel; de Pablos-Velasco, Pedro; de Santiago, Ana; Sánchez-Celaya, Marta; Suárez, Carmen; Tranche, Salvador

    2014-11-01

    Chronic kidney disease (CKD) is an important global health problem, affecting 10% of the Spanish population and causing high morbidity and mortality for the patient as well as an elevated consumption of the National Health System's total health resources. This is a summary of an executive consensus document of ten scientific societies involved in the care of the renal patient, which updates the consensus document published in 2007. The full extended document can be consulted on the web page of each society. The aspects covered in the document are: concept, epidemiology and risk factors for CKD; diagnostic criteria, evaluation and stages of CKD, albuminuria and glomerular filtration rate estimation; progression factors for renal damage; patient referral criteria; follow-up and control objectives for each speciality; nephrotoxicity prevention; cardiovascular damage detection; diet, lifestyle and treatment attitudes (hypertension, dyslipidaemia, hyperglycemia, smoking, obesity, hyperuricemia, anemia, mineral and bone disorders); multidisciplinary management in Primary Care, other specialities and Nephrology; integrated management of the CKD patient on haemodialysis, peritoneal dialysis or with a renal transplant; and management of the uremic patient in palliative care. We hope that this document may help in the multidisciplinary management of CKD patients by summarizing the most up-to-date recommendations. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  6. [Consensus document for the detection and management of chronic kidney disease].

    PubMed

    Martínez-Castelao, Alberto; Górriz, José L; Bover, Jordi; Segura-de la Morena, Julián; Cebollada, Jesús; Escalada, Javier; Esmatjes, Enric; Fácila, Lorenzo; Gamarra, Javier; Gràcia, Silvia; Hernández-Moreno, Julio; Llisterri-Caro, José L; Mazón, Pilar; Montañés, Rosario; Morales-Olivas, Francisco; Muñoz-Torres, Manuel; de Pablos-Velasco, Pedro; de Santiago, Ana; Sánchez-Celaya, Marta; Suárez, Carmen; Tranche, Salvador

    2014-11-01

    Chronic kidney disease (CKD) is an important global health problem, affecting 10% of the Spanish population and causing high morbidity and mortality for the patient as well as an elevated consumption of the National Health System's total health resources. This is a summary of an executive consensus document of ten scientific societies involved in the care of the renal patient, which updates the consensus document published in 2007. The full extended document can be consulted on the web page of each society. The aspects covered in the document are: concept, epidemiology and risk factors for CKD; diagnostic criteria, evaluation and stages of CKD, albuminuria and glomerular filtration rate estimation; progression factors for renal damage; patient referral criteria; follow-up and control objectives for each speciality; nephrotoxicity prevention; cardiovascular damage detection; diet, lifestyle and treatment attitudes (hypertension, dyslipidaemia, hyperglycemia, smoking, obesity, hyperuricemia, anemia, mineral and bone disorders); multidisciplinary management in Primary Care, other specialities and Nephrology; integrated management of the CKD patient on haemodialysis, peritoneal dialysis or with a renal transplant; and management of the uremic patient in palliative care. We hope that this document may help in the multidisciplinary management of CKD patients by summarizing the most up-to-date recommendations. Copyright © 2014. Published by Elsevier España.

  7. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    NASA Astrophysics Data System (ADS)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL for the representation of resources and SPARQL for their querying. It relies on an automatic, unsupervised and ontology-driven approach for extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds in an incremental manner in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experiment with it on an HTML corpus related to calls for papers in computer science, and the results we obtain are very promising. These results show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with the ontology.
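
    As a minimal sketch of the RDF/SPARQL side of such a system, the fragment below stores annotated document parts as triples with rdflib and retrieves the parts aligned with a given concept; the namespace, properties and data are hypothetical, not SHIRI's actual vocabulary.

        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/shiri#")   # hypothetical namespace
        g = Graph()
        part = URIRef("http://example.org/doc1#section2")
        g.add((part, EX.alignedWith, EX.Deadline))
        g.add((part, EX.text, Literal("Submission deadline: 1 March")))

        q = """
        PREFIX ex: <http://example.org/shiri#>
        SELECT ?part ?text WHERE {
            ?part ex:alignedWith ex:Deadline ;
                  ex:text ?text .
        }"""
        for row in g.query(q):
            print(row.part, row.text)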

  8. Food-web stability signals critical transitions in temperate shallow lakes

    PubMed Central

    Kuiper, Jan J.; van Altena, Cassandra; de Ruiter, Peter C.; van Gerven, Luuk P. A.; Janse, Jan H.; Mooij, Wolf M.

    2015-01-01

    A principal aim of ecologists is to identify critical levels of environmental change beyond which ecosystems undergo radical shifts in their functioning. Both food-web theory and alternative stable states theory provide fundamental clues to mechanisms conferring stability to natural systems. Yet, it is unclear how the concept of food-web stability is associated with the resilience of ecosystems susceptible to regime change. Here, we use a combination of food web and ecosystem modelling to show that impending catastrophic shifts in shallow lakes are preceded by a destabilizing reorganization of interaction strengths in the aquatic food web. Analysis of the intricate web of trophic interactions reveals that only few key interactions, involving zooplankton, diatoms and detritus, dictate the deterioration of food-web stability. Our study exposes a tight link between food-web dynamics and the dynamics of the whole ecosystem, implying that trophic organization may serve as an empirical indicator of ecosystem resilience. PMID:26173798
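
    As a minimal sketch of the stability notion involved: a web is locally stable when every eigenvalue of the community (Jacobian) matrix of interaction strengths has a negative real part. The 3x3 matrix below, loosely labelled with the paper's key groups, is invented purely for illustration.

        import numpy as np

        # J[i, j]: effect of a small increase in group j on group i's growth.
        # Rows/columns: zooplankton, diatoms, detritus (hypothetical values).
        J = np.array([
            [-0.8,  0.5,  0.1],
            [-0.6, -0.4,  0.0],
            [ 0.3,  0.2, -0.5],
        ])

        leading = max(np.linalg.eigvals(J).real)
        print("stable" if leading < 0 else "unstable", leading)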

  9. User Documentation for Multiple Software Releases

    NASA Technical Reports Server (NTRS)

    Humphrey, R.

    1982-01-01

    In a proposed solution to the problems of frequent software releases and updates, documentation would be divided into smaller packages, each of which contains data relating to only one of several software components. Changes would not affect the entire document. Concept would improve dissemination of information regarding changes and would improve quality of data supporting packages. Would help to ensure both timeliness and more thorough scrutiny of changes.

  10. The Propulsive-Only Flight Control Problem

    NASA Technical Reports Server (NTRS)

    Blezad, Daniel J.

    1996-01-01

    Attitude control of aircraft using only the throttles is investigated. The long time constants of both the engines and of the aircraft dynamics, together with the coupling between longitudinal and lateral aircraft modes make piloted flight with failed control surfaces hazardous, especially when attempting to land. This research documents the results of in-flight operation using simulated failed flight controls and ground simulations of piloted propulsive-only control to touchdown. Augmentation control laws to assist the pilot are described using both optimal control and classical feedback methods. Piloted simulation using augmentation shows that simple and effective augmented control can be achieved in a wide variety of failed configurations.

  11. Can Embedded Annotations Help High School Students Perform Problem Solving Tasks Using a Web-Based Historical Document?

    ERIC Educational Resources Information Center

    Lee, John K.; Calandra, Brendan

    2004-01-01

    Two versions of a Web site on the United States Constitution were used by students in separate high school history classes to solve problems that emerged from four constitutional scenarios. One site contained embedded conceptual scaffolding devices in the form of textual annotations; the other did not. The results of our study demonstrated the…

  12. The Full Monty: Locating Resources, Creating, and Presenting a Web Enhanced History Course.

    ERIC Educational Resources Information Center

    Bazillion, Richard J.; Braun, Connie L.

    2001-01-01

    Discusses how to develop a history course using the World Wide Web; course development software; full text digitized articles, electronic books, primary documents, images, and audio files; and computer equipment such as LCD projectors and interactive whiteboards. Addresses the importance of support for faculty using technology in teaching. (PAL)

  13. Agency-wide Quality System Documents

    EPA Pesticide Factsheets

    Quality specifications for EPA organizations as defined by EPA Directives are internal policy documents that apply only to EPA organizations. The Code of Federal Regulations defines specifications for extramural agreements with non-EPA organizations.

  14. Documentation of surgical specimens using digital video technology.

    PubMed

    Melín-Aldana, Héctor; Carter, Barbara; Sciortino, Debra

    2006-09-01

    Digital technology is commonly used for documentation of specimens in anatomic pathology and has been mainly limited to still photographs. Technologic innovations, such as digital video, provide additional, in some cases better, options for documentation. To demonstrate the applicability of digital video to the documentation of surgical specimens. A Canon Elura MC40 digital camcorder was used, and the unedited movies were transferred to a Macintosh PowerBook G4 computer. Both the camcorder and specimens were hand-held during filming. The movies were edited using the software iMovie. Annotations and histologic photographs may be easily incorporated into movies when editing, if desired. The finished movies are best viewed in computers which contain the free program QuickTime Player. Movies may also be incorporated onto DVDs, for viewing in standard DVD players or appropriately equipped computers. The final movies are on average 2 minutes in duration, with a file size between 2 and 400 megabytes, depending on the intended use. Because of file size, distribution is more practical via CD or DVD, but movies may be compressed for distribution through the Internet (e-mail, Web sites) or through internal hospital networks. Digital video is a practical, easy, and affordable methodology for specimen documentation, permitting a better 3-dimensional understanding of the specimens. Discussions with colleagues, student education, presentation at conferences, and other educational activities can be enhanced with the implementation of digital video technology.

  15. Self-authentication of value documents

    NASA Astrophysics Data System (ADS)

    Hayosh, Thomas D.

    1998-04-01

    To prevent fraud it is critical to distinguish an authentic document from a counterfeit or altered document. Most current technologies rely on difficult-to-print human detectable features which are added to a document to prevent illegal reproduction. Fraud detection is mostly accomplished by human observation and is based upon the examiner's knowledge, experience and time allotted for examination of a document. Another approach to increasing the security of a value document is to add a unique property to each document. Data about that property is then encoded on the document itself and finally secured using a public key based digital signature. In such a scheme, machine readability of authenticity is possible. This paper describes a patent-applied-for methodology using the unique property of magnetic ink printing, magnetic remanence, that provides for full self-authentication when used with a recordable magnetic stripe for storing a digital signature and other document data. Traditionally the authenticity of a document is determined by physical examination for color, background printing, paper texture, printing resolution, and ink characteristics. On an initial level, there may be numerous security features present on a value document but only a few can be detected and evaluated by the untrained individual. Because security features are normally not standardized except on currency, training tellers and cashiers to do extensive security evaluation is not practical, even though these people are often the only people who get a chance to closely examine the document in a payment system which is back-end automated. In the context of this paper, one should be thinking about value documents such as commercial and personal checks although the concepts presented here can easily be applied to travelers cheques, credit cards, event tickets, passports, driver's licenses, motor vehicle titles, and even currency. For a practical self-authentication system, the false alarms
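
    As a minimal sketch of the public-key step described above, the fragment below signs a document record (including a measured remanence value) and verifies it with ECDSA from the Python cryptography package; the record layout and field names are hypothetical illustrations.

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        # Hypothetical document data, including the unique magnetic property.
        record = b"check#0123|amount=100.00|remanence=0.87"

        issuer_key = ec.generate_private_key(ec.SECP256R1())
        signature = issuer_key.sign(record, ec.ECDSA(hashes.SHA256()))

        # At acceptance: re-read the document data, then verify the signature.
        try:
            issuer_key.public_key().verify(signature, record,
                                           ec.ECDSA(hashes.SHA256()))
            print("authentic")
        except InvalidSignature:
            print("counterfeit or altered")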

  16. CoP Sensing Framework on Web-Based Environment

    NASA Astrophysics Data System (ADS)

    Mustapha, S. M. F. D. Syed

    Web technologies and Web applications have shown similarly high growth rates in terms of daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also encroached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for a person on the move, through PDAs or mobile phones connected using Wi-Fi, HSDPA, or other communication protocols. These two phenomena are the driving factors behind the need to build Web-based systems as supporting tools for many everyday activities. In doing this, one of the many focuses in research has been to look at the implementation challenges in building Web-based support systems in different types of environment. This chapter describes the implementation issues in building a community learning framework that can be supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system in order to capture the presence of learning activities. The chapter details the characteristics of the CoP, to understand the inherent intricacies of modeling it in a Web-based environment; the evidence of CoP activity that needs to be traced automatically, in a slick manner such that the evidence-capturing process is unobtrusive; and the technologies needed to embrace full adoption of a Web-based support system for the community learning framework.

  17. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web has rapidly guided the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost all available information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  18. Web2Quests: Updating a Popular Web-Based Inquiry-Oriented Activity

    ERIC Educational Resources Information Center

    Kurt, Serhat

    2009-01-01

    WebQuest is a popular inquiry-oriented activity in which learners use Web resources. Since the creation of the innovation, almost 15 years ago, the Web has changed significantly, while the WebQuest technique has changed little. This article examines possible applications of new Web trends on WebQuest instructional strategy. Some possible…

  19. Compliance Options Diagrams for the Paper and Other Web Coating National Emission Standards for Hazardous Air Pollutants (NESHAP)

    EPA Pesticide Factsheets

    This January 2004 document contains 14 diagrams illustrating the different compliance options available for those facilities that fall under the Paper and Web Coating Maximum Achievable control Technology (MACT).

  20. Exposing the structure of an Arctic food web.

    PubMed

    Wirta, Helena K; Vesterinen, Eero J; Hambäck, Peter A; Weingartner, Elisabeth; Rasmussen, Claus; Reneerkens, Jeroen; Schmidt, Niels M; Gilg, Olivier; Roslin, Tomas

    2015-09-01

    How food webs are structured has major implications for their stability and dynamics. While poorly studied to date, arctic food webs are commonly assumed to be simple in structure, with few links per species. If this is the case, then different parts of the web may be weakly connected to each other, with populations and species united by only a low number of links. We provide the first highly resolved description of trophic link structure for a large part of a high-arctic food web. For this purpose, we apply a combination of recent techniques to describing the links between three predator guilds (insectivorous birds, spiders, and lepidopteran parasitoids) and their two dominant prey orders (Diptera and Lepidoptera). The resultant web shows a dense link structure and no compartmentalization or modularity across the three predator guilds. Thus, both individual predators and predator guilds tap heavily into the prey community of each other, offering versatile scope for indirect interactions across different parts of the web. The current description of a first but single arctic web may serve as a benchmark toward which to gauge future webs resolved by similar techniques. Targeting an unusual breadth of predator guilds, and relying on techniques with a high resolution, it suggests that species in this web are closely connected. Thus, our findings call for similar explorations of link structure across multiple guilds in both arctic and other webs. From an applied perspective, our description of an arctic web suggests new avenues for understanding how arctic food webs are built and function and of how they respond to current climate change. It suggests that to comprehend the community-level consequences of rapid arctic warming, we should turn from analyses of populations, population pairs, and isolated predator-prey interactions to considering the full set of interacting species.

  1. A Java viewer to publish Digital Imaging and Communications in Medicine (DICOM) radiologic images on the World Wide Web.

    PubMed

    Setti, E; Musumeci, R

    2001-06-01

    The world wide web is an exciting service that allows one to publish electronic documents made of text and images on the internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Expert Group (JPEG) and Graphic Interchange Format (GIF). Currently, neither browser can display radiologic images in native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnifying lens. The applet is compatible with some web browsers, even older versions. The software is free and available from the author.
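
    As a minimal sketch of the windowing function such a viewer provides, the fragment below maps stored pixel values inside a window (given by center and width) linearly onto display gray levels, clamping values outside; the sample values are illustrative, not from the paper.

        import numpy as np

        def apply_window(pixels, center, width):
            # Linear map of [center - width/2, center + width/2] onto 0..255.
            lo, hi = center - width / 2.0, center + width / 2.0
            scaled = (pixels.astype(float) - lo) / (hi - lo) * 255.0
            return np.clip(scaled, 0, 255).astype(np.uint8)

        ct_slice = np.array([[-1000, 40], [80, 1200]])   # Hounsfield-like values
        print(apply_window(ct_slice, center=40, width=400))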

  2. Toward a Web Based Environment for Evaluation and Design of Pedagogical Hypermedia

    ERIC Educational Resources Information Center

    Trigano, Philippe C.; Pacurar-Giacomini, Ecaterina

    2004-01-01

    We are working on a method, called CEPIAH. We propose a web based system used to help teachers to design multimedia documents and to evaluate their prototypes. Our current research objectives are to create a methodology to sustain the educational hypermedia design and evaluation. A module is used to evaluate multimedia software applied in…

  3. World Wide Web Indexes and Hierarchical Lists: Finding Tools for the Internet.

    ERIC Educational Resources Information Center

    Munson, Kurt I.

    1996-01-01

    In World Wide Web indexing: (1) the creation process is automated; (2) the indexes are merely descriptive, not analytical of document content; (3) results may be sorted differently depending on the search engine; and (4) indexes link directly to the resources. This article compares the indexing methods and querying options of the search engines…

  4. WebEAV

    PubMed Central

    Nadkarni, Prakash M.; Brandt, Cynthia M.; Marenco, Luis

    2000-01-01

    The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples. PMID:10887163
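
    As a minimal sketch of the entity-attribute-value pattern behind such front ends, the fragment below keeps all facts in one narrow table, so new attributes require no schema change; the table layout and data are hypothetical, not WebEAV's actual schema.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE eav (
            entity    INTEGER,   -- e.g. a patient or study id
            attribute TEXT,      -- e.g. 'heart_rate'
            value     TEXT       -- stored as text; typing is layered on later
        )""")
        db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
            (1, "heart_rate", "72"), (1, "diagnosis", "hypertension"),
            (2, "heart_rate", "88"),
        ])

        # Pivot one entity's attributes back into a row-like view.
        for attr, val in db.execute(
                "SELECT attribute, value FROM eav WHERE entity = 1"):
            print(attr, val)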

  5. Bipolar disorder research 2.0: Web technologies for research capacity and knowledge translation.

    PubMed

    Michalak, Erin E; McBride, Sally; Barnes, Steven J; Wood, Chanel S; Khatri, Nasreen; Balram Elliott, Nusha; Parikh, Sagar V

    2017-12-01

    Current Web technologies offer bipolar disorder (BD) researchers many untapped opportunities for conducting research and for promoting knowledge exchange. In the present paper, we document our experiences with a variety of Web 2.0 technologies in the context of an international BD research network: The Collaborative RESearch Team to Study psychosocial issues in BD (CREST.BD). Three technologies were used as tools for enabling research within CREST.BD and for encouraging the dissemination of the results of our research: (1) the crestbd.ca website, (2) social networking tools (ie, Facebook, Twitter), and (3) several sorts of file sharing (ie YouTube, FileShare). For each Web technology, we collected quantitative assessments of their effectiveness (in reach, exposure, and engagement) over a 6-year timeframe (2010-2016). In general, many of our strategies were deemed successful for promoting knowledge exchange and other network goals. We discuss how we applied our Web analytics to inform adaptations and refinements of our Web 2.0 platforms to maximise knowledge exchange with people with BD, their supporters, and health care providers. We conclude with some general recommendations for other mental health researchers and research networks interested in pursuing Web 2.0 strategies. © 2017 John Wiley & Sons, Ltd.

  6. Barcoding a quantified food web: crypsis, concepts, ecology and hypotheses.

    PubMed

    Smith, M Alex; Eveleigh, Eldon S; McCann, Kevin S; Merilo, Mark T; McCarthy, Peter C; Van Rooyen, Kathleen I

    2011-01-01

    The efficient and effective monitoring of individuals and populations is critically dependent on correct species identification. While this point may seem obvious, identifying the majority of the more than 100 natural enemies involved in the spruce budworm (Choristoneura fumiferana--SBW) food web remains a non-trivial endeavor. Insect parasitoids play a major role in the processes governing the population dynamics of SBW throughout eastern North America. However, these species are at the leading edge of the taxonomic impediment and integrating standardized identification capacity into existing field programs would provide clear benefits. We asked to what extent DNA barcoding the SBW food web would alter our understanding of the diversity and connectence of the food web and the frequency of generalists vs. specialists in different forest habitats. We DNA barcoded over 10% of the insects collected from the SBW food web in three New Brunswick forest plots from 1983 to 1993. For 30% of these specimens, we amplified at least one additional nuclear region. When the nodes of the food web were estimated based on barcode divergences (using molecular operational taxonomic units (MOTU) or phylogenetic diversity (PD)--the food web became much more diverse and connectence was reduced. We tested one measure of food web structure (the "bird feeder effect") and found no difference compared to the morphologically based predictions. Many, but not all, of the presumably polyphagous parasitoids now appear to be morphologically-cryptic host-specialists. To our knowledge, this project is the first to barcode a food web in which interactions have already been well-documented and described in space, time and abundance. It is poised to be a system in which field-based methods permit the identification capacity required by forestry scientists. Food web barcoding provided an effective tool for the accurate identification of all species involved in the cascading effects of future budworm

  7. Barcoding a Quantified Food Web: Crypsis, Concepts, Ecology and Hypotheses

    PubMed Central

    Smith, M. Alex; Eveleigh, Eldon S.; McCann, Kevin S.; Merilo, Mark T.; McCarthy, Peter C.; Van Rooyen, Kathleen I.

    2011-01-01

    The efficient and effective monitoring of individuals and populations is critically dependent on correct species identification. While this point may seem obvious, identifying the majority of the more than 100 natural enemies involved in the spruce budworm (Choristoneura fumiferana – SBW) food web remains a non-trivial endeavor. Insect parasitoids play a major role in the processes governing the population dynamics of SBW throughout eastern North America. However, these species are at the leading edge of the taxonomic impediment and integrating standardized identification capacity into existing field programs would provide clear benefits. We asked to what extent DNA barcoding the SBW food web would alter our understanding of the diversity and connectance of the food web and the frequency of generalists vs. specialists in different forest habitats. We DNA barcoded over 10% of the insects collected from the SBW food web in three New Brunswick forest plots from 1983 to 1993. For 30% of these specimens, we amplified at least one additional nuclear region. When the nodes of the food web were estimated based on barcode divergences (using molecular operational taxonomic units (MOTU) or phylogenetic diversity (PD)), the food web became much more diverse and connectance was reduced. We tested one measure of food web structure (the “bird feeder effect”) and found no difference compared to the morphologically based predictions. Many, but not all, of the presumably polyphagous parasitoids now appear to be morphologically-cryptic host-specialists. To our knowledge, this project is the first to barcode a food web in which interactions have already been well-documented and described in space, time and abundance. It is poised to be a system in which field-based methods permit the identification capacity required by forestry scientists. Food web barcoding provided an effective tool for the accurate identification of all species involved in the cascading effects of future

  8. The Effectiveness of Commercial Internet Web Sites: A User's Perspective.

    ERIC Educational Resources Information Center

    Bell, Hudson; Tang, Nelson K. H.

    1998-01-01

    A user survey of 60 company Web sites (electronic commerce, entertainment and leisure, financial and banking services, information services, retailing and travel, and tourism) determined that 30% had facilities for conducting online transactions and only 7% charged for site access. Overall, Web sites were rated high in ease of access, content, and…

  9. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    NASA Astrophysics Data System (ADS)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings, is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.

  10. Jflow: a workflow management system for web applications.

    PubMed

    Mariette, Jérôme; Escudié, Frédéric; Bardou, Philippe; Nabihoudine, Ibouniyamine; Noirot, Céline; Trotard, Marie-Stéphane; Gaspin, Christine; Klopp, Christophe

    2016-02-01

    Biologists produce large data sets and need rich and simple web portals in which they can upload and analyze their files. Providing such tools requires masking the complexity of the underlying High Performance Computing (HPC) environment. The connection between interface and computing infrastructure is usually specific to each portal. With Jflow, we introduce a Workflow Management System (WMS) composed of jQuery plug-ins, which can easily be embedded in any web application, and a Python library providing all the features required to set up, run and monitor workflows. Jflow is available under the GNU General Public License (GPL) at http://bioinfo.genotoul.fr/jflow. The package comes with full documentation, a quick start and a running test portal. Jerome.Mariette@toulouse.inra.fr. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
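
    The abstract does not reproduce Jflow's actual API, so the following is only a hypothetical sketch of the general shape of a workflow definition in such a Python WMS; every class, method and file name here is illustrative rather than Jflow's real interface (see http://bioinfo.genotoul.fr/jflow for the documented package):

    ```python
    # Hypothetical sketch of a Jflow-style workflow; the real Jflow API may
    # differ in every detail.

    class Workflow:  # stand-in base class, not Jflow's
        def __init__(self, name):
            self.name, self.steps = name, []

        def add_step(self, func, **params):
            self.steps.append((func, params))

        def run(self):
            for func, params in self.steps:
                func(**params)  # a real WMS would submit jobs to an HPC scheduler

    def quality_control(fastq):
        print(f"QC on {fastq}")

    def align(fastq, genome):
        print(f"aligning {fastq} against {genome}")

    wf = Workflow("rnaseq")
    wf.add_step(quality_control, fastq="sample.fastq")
    wf.add_step(align, fastq="sample.fastq", genome="hg38.fa")
    wf.run()
    ```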

  11. Unifying Access to National Hydrologic Data Repositories via Web Services

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Jennings, B.; Zaslavsky, I.; Maidment, D. R.

    2006-12-01

    The CUAHSI hydrologic information system (HIS) is designed to be a live, multiscale web portal system for accessing, querying, visualizing, and publishing distributed hydrologic observation data and models for any location or region in the United States. The HIS design follows the principles of open service oriented architecture, i.e. system components are represented as web services with well-defined standard service APIs. WaterOneFlow web services are the main component of the design. The currently available services have been completely re-written compared to the previous version, and provide programmatic access to USGS NWIS (stream flow, groundwater and water quality repositories), DAYMET daily observations, NASA MODIS, and Unidata NAM streams, with several additional web service wrappers being added (EPA STORET, NCDC, and others). Different repositories of hydrologic data use different vocabularies, and support different types of query access. Resolving semantic and structural heterogeneities across different hydrologic observation archives and distilling a generic set of service signatures is one of the main scalability challenges in this project, and a requirement in our web service design. To accomplish the uniformity of the web services API, data repositories are modeled following the CUAHSI Observation Data Model. The web service responses are document-based, and use an XML schema to express the semantics in a standard format. Access to station metadata is provided via the web service methods GetSites, GetSiteInfo and GetVariableInfo. These methods form the foundation of the CUAHSI HIS discovery interface and may execute over locally-stored metadata or request the information from remote repositories directly. Observation values are retrieved via a generic GetValues method which is executed against national data repositories. The service is implemented in ASP.NET, and other providers are implementing WaterOneFlow services in Java. Reference implementation of
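
    A minimal sketch of consuming a WaterOneFlow-style SOAP service from Python with the zeep library; the method names (GetSiteInfo, GetValues) come from the abstract, while the WSDL URL, argument order and site/variable codes are placeholders that a real endpoint's WSDL would define:

    ```python
    # Sketch only: endpoint and argument conventions are assumptions.
    from zeep import Client

    client = Client("http://example.org/cuahsi_1_0.asmx?WSDL")  # hypothetical WSDL

    site_info = client.service.GetSiteInfo("NWIS:01646500", "")  # site code, token
    values = client.service.GetValues(
        "NWIS:01646500",  # location
        "NWIS:00060",     # variable (stream flow)
        "2006-01-01",     # start date
        "2006-01-31",     # end date
        "",               # auth token
    )
    print(values)
    ```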

  12. Web-based rehabilitation interventions for people with rheumatoid arthritis: A systematic review.

    PubMed

    Srikesavan, Cynthia; Bryer, Catherine; Ali, Usama; Williamson, Esther

    2018-01-01

    Background Rehabilitation approaches for people with rheumatoid arthritis include joint protection, exercises and self-management strategies. Health interventions delivered via the web have the potential to improve access to health services, overcoming time constraints, physical limitations, and socioeconomic and geographic barriers. The objective of this review is to determine the effects of web-based rehabilitation interventions in adults with rheumatoid arthritis. Methods Randomised controlled trials that compared web-based rehabilitation interventions with usual care, waiting list, no treatment or another web-based intervention in adults with rheumatoid arthritis were included. The outcomes were pain, function, quality of life, self-efficacy, rheumatoid arthritis knowledge, physical activity and adverse effects. Methodological quality was assessed using the Cochrane Risk of Bias tool and quality of evidence with the Grading of Recommendations Assessment, Development and Evaluation approach. Results Six source documents from four trials (n = 567) focusing on self-management, health information or physical activity were identified. The effects of web-based rehabilitation interventions on pain, function, quality of life, self-efficacy, rheumatoid arthritis knowledge and physical activity are uncertain because of the very low quality of evidence, mostly from small single trials. Adverse effects were not reported. Conclusion Large, well-designed trials are needed to evaluate the clinical and cost-effectiveness of web-based rehabilitation interventions in rheumatoid arthritis.

  13. Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.

    ERIC Educational Resources Information Center

    Pathak, Praveen; Gordon, Michael

    1999-01-01

    Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…

  14. Web sites for postpartum depression: convenient, frustrating, incomplete, and misleading.

    PubMed

    Summers, Audra L; Logsdon, M Cynthia

    2005-01-01

    To evaluate the content and the technology of Web sites providing information on postpartum depression. Eleven search engines were queried using the words "Postpartum Depression." The top 10 sites in each search engine were evaluated for correct content and technology using the Web Depression Tool, based on the Technology Assessment Model. Of the 36 unique Web sites located, 34 were available to review. Only five Web sites provided >75% correct responses to questions that summarized the current state of the science for postpartum depression. Eleven of the Web sites contained little or no useful information about postpartum depression, despite being among the first 10 Web sites listed by the search engine. Some Web sites contained possibly harmful suggestions for treatment of postpartum depression. In addition, there are many problems with the technology of Web sites providing information on postpartum depression. A better Web site for postpartum depression is necessary if we are to meet the needs of consumers for accurate and current information using technology that enhances learning. Since patient education is a core competency for nurses, it is essential that nurses understand how their patients are using the World Wide Web for learning and how we can assist our patients to find appropriate sites containing correct information.

  15. Tobacco-prevention messages online: social marketing via the Web.

    PubMed

    Lin, Carolyn A; Hullman, Gwen A

    2005-01-01

    Antitobacco groups have joined millions of other commercial or noncommercial entities in developing a presence on the Web. These groups primarily represent the following different sponsorship categories: grassroots, medical, government, and corporate. To obtain a better understanding of the strengths and weaknesses in the message design of antitobacco Web sites, this project analyzed 100 antitobacco Web sites ranging across these four sponsorship categories. The results show that the tobacco industry sites posted just enough antismoking information to appease the antismoking publics. Medical organizations designed their Web sites as specialty sites and offered mostly scientific information. While the government sites resembled a clearinghouse for antitobacco related information, the grassroots sites represented the true advocacy outlets. In general, the industry sites provided the weakest persuasive messages and medical sites fared only slightly better. Government and grassroots sites rated most highly in presenting their antitobacco campaign messages on the Web.

  16. Method and apparatus for filtering visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E. (Inventor); Shelton, Robert O. (Inventor)

    1993-01-01

    A method and apparatus for producing an abstract or condensed version of a visual document is presented. The frames comprising the visual document are first sampled to reduce the number of frames required for processing. The frames are then subjected to a structural decomposition process that reduces all information in each frame to a set of values. These values are in turn normalized and further combined to produce only one information content value per frame. The information content values of these frames are then compared to a selected distribution cutoff point. This effectively selects those values at the tails of a normal distribution, thus filtering key frames from their surrounding frames. The value for each frame is then compared with the value from the previous frame, and the respective frame is finally stored only if the values are significantly different. The method filters or compresses a visual document with a reduction in digital storage at ratios of up to 700 to 1 or more, depending on the content of the visual document being filtered.
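
    A rough sketch of the described pipeline, with per-frame intensity variance standing in for the patent's structural-decomposition value:

    ```python
    import numpy as np

    def filter_key_frames(frames, sample_step=5, cutoff=1.5, min_delta=0.25):
        sampled = frames[::sample_step]              # 1. sample frames
        info = np.array([f.var() for f in sampled])  # 2. one value per frame
        z = (info - info.mean()) / info.std()        # 3. normalize
        in_tails = np.abs(z) > cutoff                # 4. keep distribution tails
        kept, last = [], None
        for frame, value, is_key in zip(sampled, z, in_tails):
            if is_key and (last is None or abs(value - last) > min_delta):
                kept.append(frame)                   # 5. differs from previous frame
                last = value
        return kept

    frames = [np.random.default_rng(i).random((64, 64)) for i in range(500)]
    print(len(filter_key_frames(frames)), "key frames kept")
    ```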

  17. WebSat--a web software for microsatellite marker development.

    PubMed

    Martins, Wellington Santos; Lucas, Divino César Soares; Neves, Kelligton Fabricio de Souza; Bertioli, David John

    2009-01-01

    Simple sequence repeats (SSR), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. We have developed a simple-to-use web software, called WebSat, for microsatellite molecular marker prediction and development. WebSat is accessible through the Internet, requiring no program installation. Although a web solution, it makes use of Ajax techniques, providing a rich, responsive user interface. WebSat allows the submission of sequences, visualization of microsatellites and the design of primers suitable for their amplification. The program allows full control of parameters and the easy export of the resulting data, thus facilitating the development of microsatellite markers. The web tool may be accessed at http://purl.oclc.org/NET/websat/
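
    WebSat's own code is not shown here, but the underlying task, locating simple sequence repeats, can be illustrated with a back-referencing regular expression; the motif length of 1-6 bp and the minimum of three copies are assumed defaults:

    ```python
    import re

    # group 2 is the motif (lazily the shortest), group 1 the whole repeat tract
    SSR = re.compile(r"(([ACGT]{1,6}?)\2{2,})")

    def find_ssrs(sequence):
        for match in SSR.finditer(sequence.upper()):
            repeat, motif = match.group(1), match.group(2)
            yield match.start(), motif, len(repeat) // len(motif)

    for position, motif, copies in find_ssrs("GGATATATATATCCAGAGAGAGTT"):
        print(f"pos {position}: ({motif}) x {copies}")
    ```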

  18. Microsoft or Google Web 2.0 Tools for Course Management

    ERIC Educational Resources Information Center

    Rienzo, Thomas; Han, Bernard

    2009-01-01

    While Web 2.0 has no universal definition, it always refers to online interactions in which user groups both provide and receive content with the aim of collective intelligence. Since 2005, online software has provided Web 2.0 collaboration technologies, for little or no charge, that were formerly available only to wealthy organizations. Academic…

  19. Dynamic Web Pages: Performance Impact on Web Servers.

    ERIC Educational Resources Information Center

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
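
    A sketch of the modeling idea, fitting a multivariate linear regression of server latency on request features; the predictors and data below are synthetic, not the paper's measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    script_size_kb = rng.uniform(1, 50, n)
    db_queries = rng.integers(0, 10, n)
    concurrency = rng.integers(1, 64, n)

    # hypothetical "true" model plus noise
    latency_ms = (5 + 0.8 * script_size_kb + 12 * db_queries
                  + 1.5 * concurrency + rng.normal(0, 5, n))

    X = np.column_stack([np.ones(n), script_size_kb, db_queries, concurrency])
    coef, *_ = np.linalg.lstsq(X, latency_ms, rcond=None)
    print("intercept and coefficients:", np.round(coef, 2))

    new_request = np.array([1, 20.0, 3, 16])  # a typical dynamic request
    print("predicted latency (ms):", float(new_request @ coef))
    ```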

  20. The world wide web: exploring a new advertising environment.

    PubMed

    Johnson, C R; Neath, I

    1999-01-01

    The World Wide Web currently boasts millions of users in the United States alone and is likely to continue to expand both as a marketplace and as an advertising environment. Three experiments explored advertising in the Web environment, in particular memory for ads as they appear in everyday use across the Web. Experiments 1 and 2 examined the effect of advertising repetition on the retention of familiar and less familiar brand names, respectively. Experiment 1 demonstrated that repetition of a banner ad within multiple web pages can improve recall of familiar brand names, and Experiment 2 demonstrated that repetition can improve recognition of less familiar brand names. Experiment 3 directly compared the retention of familiar and less familiar brand names that were promoted by static and dynamic ads and demonstrated that the use of dynamic advertising can increase brand name recall, though only for familiar brand names. This study also demonstrated that, in the Web environment, much as in other advertising environments, familiar brand names possess a mnemonic advantage not possessed by less familiar brand names. Finally, data regarding Web usage gathered from all experiments confirm reports that Web usage among males tends to exceed that among females.

  1. Caught in the Web: A Review of Web-Based Suicide Prevention

    PubMed Central

    Lai, Mee Huong; Maniam, Thambu; Ravindran, Arun V

    2014-01-01

    Background Suicide is a serious and increasing problem worldwide. The emergence of the digital world has had a tremendous impact on people’s lives, both negative and positive, including an impact on suicidal behaviors. Objective Our aim was to perform a review of the published literature on Web-based suicide prevention strategies, focusing on their efficacy, benefits, and challenges. Methods The EBSCOhost (Medline, PsycINFO, CINAHL), OvidSP, the Cochrane Library, and ScienceDirect databases were searched for literature regarding Web-based suicide prevention strategies from 1997 to 2013 according to the modified PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. The selected articles were subjected to quality rating and data extraction. Results Good quality literature was surprisingly sparse, with only 15 fulfilling criteria for inclusion in the review, and most were rated as being medium to low quality. Internet-based cognitive behavior therapy (iCBT) reduced suicidal ideation in the general population in two randomized controlled trials (effect sizes, d=0.04-0.45) and in a clinical audit of depressed primary care patients. Descriptive studies reported improved accessibility and reduced barriers to treatment with the Internet among students. Besides automated iCBT, preventive strategies were mainly interactive (email communication, online individual or supervised group support) or information-based (website postings). The benefits and potential challenges of accessibility, anonymity, and text-based communication as key components for Web-based suicide prevention strategies were emphasized. Conclusions There is preliminary evidence that suggests the probable benefit of Web-based strategies in suicide prevention. Future larger systematic research is needed to confirm the effectiveness and risk-benefit ratio of such strategies. PMID:24472876

  2. GASS-WEB: a web server for identifying enzyme active sites based on genetic algorithms.

    PubMed

    Moraes, João P A; Pappa, Gisele L; Pires, Douglas E V; Izidoro, Sandro C

    2017-07-03

    Enzyme active sites are important and conserved functional regions of proteins whose identification can be an invaluable step toward protein function prediction. Most of the existing methods for this task are based on active site similarity and present limitations, including performing only exact matches on template residues and restricting template size, as well as being unable to find inter-domain active sites. To fill this gap, we proposed GASS-WEB, a user-friendly web server that uses GASS (Genetic Active Site Search), a method based on an evolutionary algorithm to search for similar active sites in proteins. GASS-WEB can be used under two different scenarios: (i) given a protein of interest, to match a set of specific active site templates; or (ii) given an active site template, looking for it in a database of protein structures. The method has been shown to be very effective on a range of experiments and was able to correctly identify >90% of the catalogued active sites from the Catalytic Site Atlas. It also achieved a Matthews correlation coefficient of 0.63 on the Critical Assessment of protein Structure Prediction (CASP 10) dataset. In our analysis, GASS ranked fourth among 18 methods. GASS-WEB is freely available at http://gass.unifei.edu.br/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
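
    The Matthews correlation coefficient cited above is a standard confusion-matrix statistic; for reference, it is computed as below (the counts here are hypothetical):

    ```python
    from math import sqrt

    def mcc(tp, tn, fp, fn):
        denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return (tp * tn - fp * fn) / denom if denom else 0.0

    # hypothetical counts for an active-site prediction run
    print(round(mcc(tp=80, tn=90, fp=20, fn=15), 3))
    ```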

  3. WebVR: an interactive web browser for virtual environments

    NASA Astrophysics Data System (ADS)

    Barsoum, Emad; Kuester, Falko

    2005-03-01

    The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static as well as streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.

  4. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such uncertain database management systems (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining the PXML-specific technique with a rather simple generic DAG-compression technique.
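
    A minimal sketch of the generic DAG-compression technique mentioned above: identical subtrees are hash-consed so each distinct subtree is stored only once, turning the tree into a smaller DAG.

    ```python
    from xml.etree import ElementTree as ET

    def compress_to_dag(element, table):
        """Return a canonical node id; identical subtrees share one id."""
        child_ids = tuple(compress_to_dag(child, table) for child in element)
        key = (element.tag, (element.text or "").strip(), child_ids)
        return table.setdefault(key, len(table))

    doc = ET.fromstring(
        "<p><person><name>An</name></person><person><name>An</name></person></p>"
    )
    table = {}
    compress_to_dag(doc, table)
    print(sum(1 for _ in doc.iter()), "tree nodes ->", len(table), "DAG nodes")
    ```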

  5. Vietnamese Document Representation and Classification

    NASA Astrophysics Data System (ADS)

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English, and little research has been done on Vietnamese document classification, or indeed on any kind of Vietnamese language processing; only a few small corpora are available for research. We created a large Vietnamese text corpus of about 18,000 documents and manually classified them based on different criteria, such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that the best performance can be achieved using a syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and information gain with an external dictionary for feature selection.
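
    A sketch of the reported best-performing combination in scikit-learn terms, with syllable pairs as bigrams over whitespace-separated syllables and mutual information standing in for the paper's information-gain selection; the four-document corpus is a toy stand-in:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    docs = ["bóng đá việt nam", "thị trường chứng khoán",
            "đội tuyển bóng đá", "giá cổ phiếu tăng"]
    labels = ["sport", "finance", "sport", "finance"]

    pipeline = Pipeline([
        # token_pattern keeps whole syllables; ngram_range adds syllable pairs
        ("features", CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")),
        ("select", SelectKBest(mutual_info_classif, k=10)),
        ("svm", SVC(kernel="poly", degree=2)),
    ])
    pipeline.fit(docs, labels)
    print(pipeline.predict(["bóng đá"]))
    ```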

  6. MyWEST: my Web Extraction Software Tool for effective mining of annotations from web-based databanks.

    PubMed

    Masseroli, Marco; Stella, Andrea; Meani, Natalia; Alcalay, Myriam; Pinciroli, Francesco

    2004-12-12

    High-throughput technologies create the necessity to mine large amounts of gene annotations from diverse databanks, and to integrate the resulting data. Most databanks can be interrogated only via Web, for a single gene at a time, and query results are generally available only in the HTML format. Although some databanks provide batch retrieval of data via FTP, this requires expertise and resources for locally reimplementing the databank. We developed MyWEST, a tool aimed at researchers without extensive informatics skills or resources, which exploits user-defined templates to easily mine selected annotations from different Web-interfaced databanks, and aggregates and structures results in an automatically updated database. Using microarray results from a model system of retinoic acid-induced differentiation, MyWEST effectively gathered relevant annotations from various biomolecular databanks, highlighted significant biological characteristics and supported a global approach to the understanding of complex cellular mechanisms. MyWEST is freely available for non-profit use at http://www.medinfopoli.polimi.it/MyWEST/

  7. 10 CFR 2.305 - Service of documents, methods, proof.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... optical storage media containing the electronic document. (3) A participant granted an exemption under § 2... certificate of service. (i) If a document is served on participants through only the E-filing system, then the certificate of service must state that the document has been filed through the E-Filing system. (ii) If a...

  8. 10 CFR 2.305 - Service of documents, methods, proof.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... optical storage media containing the electronic document. (3) A participant granted an exemption under § 2... certificate of service. (i) If a document is served on participants through only the E-filing system, then the certificate of service must state that the document has been filed through the E-Filing system. (ii) If a...

  9. Bibliometric analysis of nutrition and dietetics research activity in Arab countries using ISI Web of Science database.

    PubMed

    Sweileh, Waleed M; Al-Jabi, Samah W; Sawalha, Ansam F; Zyoud, Sa'ed H

    2014-01-01

    Reducing nutrition-related health problems in Arab countries requires an understanding of the performance of Arab countries in the field of nutrition and dietetics research. Assessment of research activity from a particular country or region can be achieved through bibliometric analysis. This study was carried out to investigate research activity in "nutrition and dietetics" in Arab countries. Original and review articles published from Arab countries in the "nutrition and dietetics" Web of Science category up until 2012 were retrieved and analyzed using the ISI Web of Science database. The total number of documents published in the "nutrition and dietetics" category from Arab countries was 2062. This constitutes 1% of worldwide research activity in the field. Annual research productivity showed a significant increase after 2005. Approximately 60% of published documents originated from three Arab countries, particularly Egypt, the Kingdom of Saudi Arabia, and Tunisia. However, Kuwait has the highest research productivity per million inhabitants. The main research areas of published documents were "Food Science/Technology" and "Chemistry", which constituted 75% of published documents, compared with 25% for worldwide documents in nutrition and dietetics. A total of 329 (15.96%) nutrition-related diabetes, obesity, or cancer documents were published from Arab countries, compared with 21% for worldwide published documents. Interest in nutrition and dietetics research is relatively recent in Arab countries. The focus of nutrition research is mainly on food technology and chemistry, with less activity in nutrition-related health research. International cooperation in nutrition research will definitely help Arab researchers in implementing nutrition research that will lead to better national policies regarding nutrition.

  10. SWS: accessing SRS sites contents through Web Services.

    PubMed

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres, and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking these technologies into account, but many existing databases do not allow programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, also managing access to alternative sites in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. The development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.

  11. Publishing high-quality climate data on the semantic web

    NASA Astrophysics Data System (ADS)

    Woolf, Andrew; Haller, Armin; Lefort, Laurent; Taylor, Kerry

    2013-04-01

    The effort over more than a decade to establish the semantic web [Berners-Lee et al., 2001] has received a major boost in recent years through the Open Government movement. Governments around the world are seeking technical solutions to enable more open and transparent access to Public Sector Information (PSI) they hold. Existing technical protocols and data standards tend to be domain specific, and so limit the ability to publish and integrate data across domains (health, environment, statistics, education, etc.). The web provides a domain-neutral platform for information publishing, and has proven itself beyond expectations for publishing and linking human-readable electronic documents. Extending the web pattern to data (often called Web 3.0) offers enormous potential. The semantic web applies the basic web principles to data [Berners-Lee, 2006]: using URIs as identifiers (for data objects and real-world 'things' instead of documents); making the URIs actionable by providing useful information via HTTP; using a common exchange standard (serialised RDF for data instead of HTML for documents); and establishing typed links between information objects to enable linking and integration. Leading examples of 'linked data' for publishing PSI may be found in both the UK (http://data.gov.uk/linked-data) and US (http://www.data.gov/page/semantic-web). The Bureau of Meteorology (BoM) is Australia's national meteorological agency, and has a new mandate to establish a national environmental information infrastructure (under the National Plan for Environmental Information, NPEI [BoM, 2012a]). While the initial approach is based on the existing best practice Spatial Data Infrastructure (SDI) architecture, linked-data is being explored as a technological alternative that shows great promise for the future. We report here the first trial of government linked-data in Australia under data.gov.au. In this initial pilot study, we have taken BoM's new high-quality reference surface
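
    A minimal sketch of this linked-data pattern using the rdflib library; the URIs and vocabulary below are illustrative placeholders, not BoM's actual scheme:

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    BOM = Namespace("http://example.data.gov.au/def/climate#")        # hypothetical
    station = URIRef("http://example.data.gov.au/id/station/070351")  # hypothetical

    g = Graph()
    g.bind("bom", BOM)
    g.add((station, RDF.type, BOM.WeatherStation))  # typed link
    g.add((station, BOM.name, Literal("Canberra Airport")))
    g.add((station, BOM.meanAnnualRainfallMm, Literal(615.4)))

    print(g.serialize(format="turtle"))             # serialised RDF
    ```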

  12. Cataloguing and displaying Web feeds from French language health sites: a Web 2.0 add-on to a health gateway.

    PubMed

    Kerdelhué, G; Thirion, B; Dahamna, B; Darmoni, S J

    2008-01-01

    Among the numerous new functionalities of the Internet, commonly called Web 2.0, Web syndication illustrates the trend towards better and faster information sharing. Web feeds (a.k.a. RSS feeds), which were at first used mostly on weblogs, are now also widely used on academic, scientific and institutional websites such as PubMed. As very few French-language feeds were listed or catalogued in the health field by 2007, it was decided to implement them in the quality-controlled health gateway CISMeF ([French] acronym for Catalogue and Index of French Language Health Resources on the Internet). Furthermore, making full use of the nature of Web syndication, a Web feed aggregator was put online to provide a dynamic news gateway called "CISMeF actualités" (http://www.chu-rouen.fr/actualites/). This article describes the process of retrieving and implementing the Web feeds in the catalogue and how its terminology was adjusted to describe this new content. It also describes how the aggregator was put online and the features of this news gateway. CISMeF actualités was built according to the editorial policy of CISMeF. Only a subset of the Web feeds in the catalogue was included, to display the most authoritative sources. Web feeds were also grouped by medical specialty and by country using the prior indexing of websites with MeSH terms and the so-called metaterms. CISMeF actualités now displays 131 Web feeds across 40 different medical specialities, coming from 5 different countries. It is one example, among many, of how static hypertext links can now easily and beneficially be completed, or replaced, by dynamic display of Web content using syndication feeds.
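
    A sketch of the aggregation step behind such a news gateway, using the feedparser library; the feed URLs are placeholders:

    ```python
    import feedparser

    FEEDS = [
        "https://example.org/cardiology.rss",     # hypothetical catalogued feeds
        "https://example.org/public-health.rss",
    ]

    entries = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            entries.append((entry.get("published_parsed"), entry.get("title"), url))

    # newest items first; undated entries sort last
    entries.sort(key=lambda item: item[0] or (0,), reverse=True)
    for published, title, source in entries[:20]:
        print(title, "-", source)
    ```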

  13. Towards a Simple and Efficient Web Search Framework

    DTIC Science & Technology

    2014-11-01

    …any useful information about the various aspects of a topic. For example, for the query “raspberry pi”, it covers topics such as “what is raspberry pi” and “making a raspberry pi”. However, the topics generated by the LDA topic model for the query “raspberry pi” based on the 10 top-ranked documents do not make much sense to us in terms of their keywords; one simple explanation is that web texts are too noisy and unfocused for the LDA process.

  14. 75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ...] Electronic Filing System--Web (EFS-Web) Contingency Option AGENCY: United States Patent and Trademark Office... availability of its patent electronic filing system, Electronic Filing System--Web (EFS-Web) by providing a new contingency option when the primary portal to EFS-Web has an unscheduled outage. Previously, the entire EFS...

  15. Designing Effective Web Forms for Older Web Users

    ERIC Educational Resources Information Center

    Li, Hui; Rau, Pei-Luen Patrick; Fujimura, Kaori; Gao, Qin; Wang, Lin

    2012-01-01

    This research aims to provide insight for web form design for older users. The effects of task complexity and information structure of web forms on older users' performance were examined. Forty-eight older participants with abundant computer and web experience were recruited. The results showed significant differences in task time and error rate…

  16. Global Connections: Web Conferencing Tools Help Educators Collaborate Anytime, Anywhere

    ERIC Educational Resources Information Center

    Forrester, Dave

    2009-01-01

    Web conferencing tools help educators from around the world collaborate in real time. Teachers, school counselors, and administrators need only to put on their headsets, check the time zone, and log on to meet and learn from educators across the globe. In this article, the author discusses how educators can use Web conferencing at their schools.…

  17. Creating Effective Web-Based Learning Environments: Relevant Research and Practice

    ERIC Educational Resources Information Center

    Wijekumar, Kay

    2005-01-01

    Web-based learning environments are a great asset only if they are designed well and used as intended. The urgency to create courses in response to the growing demand for online learning has resulted in a hurried push to drop PowerPoint notes into Web-based course management systems (WBCMSs), devise an electronic quiz, put together a few…

  18. The Web: Creating and Changing Jobs. Trends and Issues Alerts.

    ERIC Educational Resources Information Center

    Brown, Bettina Lankard

    The World Wide Web is changing not only how individuals locate jobs but also the ways existing jobs are performed. Individuals seeking work will need to know how to use the Web as a tool for enhancing their job performance. The enhanced global communication made possible through Internet technology and the increase of marketing plans combining…

  19. Using Web 2.0 Technology to Enhance, Scaffold and Assess Problem-Based Learning

    ERIC Educational Resources Information Center

    Hack, Catherine

    2013-01-01

    Web 2.0 technologies, such as social networks, wikis, blogs, and virtual worlds provide a platform for collaborative working, facilitating sharing of resources and joint document production. They can act as a stimulus to promote active learning and provide an engaging and interactive environment for students, and as such align with the philosophy…

  20. Web-based versus traditional paper questionnaires: a mixed-mode survey with a Nordic perspective.

    PubMed

    Hohwü, Lena; Lyshol, Heidi; Gissler, Mika; Jonsson, Stefan Hrafn; Petzold, Max; Obel, Carsten

    2013-08-26

    Survey response rates have been declining over the past decade. The more widespread use of the Internet and Web-based technologies among potential health survey participants suggests that Web-based questionnaires may be an alternative to paper questionnaires in future epidemiological studies. To compare response rates in a population of parents by using 4 different modes of data collection for a questionnaire survey, 1 of which involved a nonmonetary incentive. A random sample of 3148 parents of Danish children aged 2-17 years were invited to participate in the Danish part of the NordChild 2011 survey on their children's health and welfare. NordChild was conducted in 1984 and 1996 in collaboration with Finland, Iceland, Norway, and Sweden using mailed paper questionnaires only. In 2011, all countries used conventional paper versions only except Denmark, where the parents were randomized into 4 groups: (1) 789 received a paper questionnaire only (paper), (2) 786 received the paper questionnaire and a log-in code to the Web-based questionnaire (paper/Web), (3) 787 received a log-in code to the Web-based questionnaire (Web), and (4) 786 received log-in details to the Web-based questionnaire and were given an incentive consisting of a chance to win a tablet computer (Web/tablet). In connection with the first reminder, the nonresponders in the paper, paper/Web, and Web groups were also presented with the opportunity to win a tablet computer as a means of motivation. Descriptive analysis was performed using chi-square tests. Odds ratios were used to estimate differences in response rates between the 4 modes. In 2011, 1704 of 3148 (54.13%) respondents answered the Danish questionnaire. The highest response rate was with the paper mode (n=443, 56.2%). The other groups had similar response rates: paper/Web (n=422, 53.7%), Web (n=420, 53.4%), and Web/tablet (n=419, 53.3%) modes. Compared to the paper mode, the odds for response rate in the paper/Web decreased by 9% (OR 0.91, 95
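
    For reference, the reported odds ratio can be reproduced from the counts in the abstract (422 of 786 responses for paper/Web versus 443 of 789 for paper):

    ```python
    def odds_ratio(responders_a, total_a, responders_b, total_b):
        odds_a = responders_a / (total_a - responders_a)
        odds_b = responders_b / (total_b - responders_b)
        return odds_a / odds_b

    print(round(odds_ratio(422, 786, 443, 789), 2))  # 0.91, as reported
    ```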

  1. Web-Based Versus Traditional Paper Questionnaires: A Mixed-Mode Survey With a Nordic Perspective

    PubMed Central

    Lyshol, Heidi; Gissler, Mika; Jonsson, Stefan Hrafn; Petzold, Max; Obel, Carsten

    2013-01-01

    Background Survey response rates have been declining over the past decade. The more widespread use of the Internet and Web-based technologies among potential health survey participants suggests that Web-based questionnaires may be an alternative to paper questionnaires in future epidemiological studies. Objective To compare response rates in a population of parents by using 4 different modes of data collection for a questionnaire survey, 1 of which involved a nonmonetary incentive. Methods A random sample of 3148 parents of Danish children aged 2-17 years were invited to participate in the Danish part of the NordChild 2011 survey on their children’s health and welfare. NordChild was conducted in 1984 and 1996 in collaboration with Finland, Iceland, Norway, and Sweden using mailed paper questionnaires only. In 2011, all countries used conventional paper versions only except Denmark, where the parents were randomized into 4 groups: (1) 789 received a paper questionnaire only (paper), (2) 786 received the paper questionnaire and a log-in code to the Web-based questionnaire (paper/Web), (3) 787 received a log-in code to the Web-based questionnaire (Web), and (4) 786 received log-in details to the Web-based questionnaire and were given an incentive consisting of a chance to win a tablet computer (Web/tablet). In connection with the first reminder, the nonresponders in the paper, paper/Web, and Web groups were also presented with the opportunity to win a tablet computer as a means of motivation. Descriptive analysis was performed using chi-square tests. Odds ratios were used to estimate differences in response rates between the 4 modes. Results In 2011, 1704 of 3148 (54.13%) respondents answered the Danish questionnaire. The highest response rate was with the paper mode (n=443, 56.2%). The other groups had similar response rates: paper/Web (n=422, 53.7%), Web (n=420, 53.4%), and Web/tablet (n=419, 53.3%) modes. Compared to the paper mode, the odds for response rate in the

  2. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  3. Awareness and action for eliminating health care disparities in pain care: Web-based resources.

    PubMed

    Fan, Ling; Thomas, Melissa; Deitrick, Ginna E; Polomano, Rosemary C

    2008-01-01

    Evidence shows that disparities in pain care exist, and this problem spans all health care settings. Health care disparities are complex, and stem from the health system climate, limitations imposed by laws and regulations, and discriminatory practices that are deep-seated in biases, stereotypes, and uncertainties surrounding communication and decision-making processes. A search of the Internet identified thousands of Web sites, documents, reports, and educational materials pertaining to health and pain disparities. Web sites for federal agencies, private foundations, and professional and consumer-oriented organizations provide useful information on disparities related to age, race, ethnicity, geography, socioeconomic status, and specific populations. The contents of 10 Web sites are examined for resources to assist health professionals and consumers in better understanding health and pain disparities and ways to overcome them in practice.

  4. Growing and navigating the small world Web by local content

    PubMed Central

    Menczer, Filippo

    2002-01-01

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues. PMID:12381792

  5. Growing and navigating the small world Web by local content

    NASA Astrophysics Data System (ADS)

    Menczer, Filippo

    2002-10-01

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.

  6. Growing and navigating the small world Web by local content.

    PubMed

    Menczer, Filippo

    2002-10-29

    Can we model the scale-free distribution of Web hypertext degree under realistic assumptions about the behavior of page authors? Can a Web crawler efficiently locate an unknown relevant page? These questions are receiving much attention due to their potential impact for understanding the structure of the Web and for building better search engines. Here I investigate the connection between the linkage and content topology of Web pages. The relationship between a text-induced distance metric and a link-based neighborhood probability distribution displays a phase transition between a region where linkage is not determined by content and one where linkage decays according to a power law. This relationship is used to propose a Web growth model that is shown to accurately predict the distribution of Web page degree, based on textual content and assuming only local knowledge of degree for existing pages. A qualitatively similar phase transition is found between linkage and semantic distance, with an exponential decay tail. Both relationships suggest that efficient paths can be discovered by decentralized Web navigation algorithms based on textual and/or categorical cues.

  7. Actionable, long-term stable and semantic web compatible identifiers for access to biological collection objects

    PubMed Central

    Hyam, Roger; Hagedorn, Gregor; Chagnoux, Simon; Röpert, Dominik; Casino, Ana; Droege, Gabi; Glöckler, Falko; Gödderz, Karsten; Groom, Quentin; Hoffmann, Jana; Holleman, Ayco; Kempa, Matúš; Koivula, Hanna; Marhold, Karol; Nicolson, Nicky; Smith, Vincent S.; Triebel, Dagmar

    2017-01-01

    With biodiversity research activities being increasingly shifted to the web, the need for a system of persistent and stable identifiers for physical collection objects becomes increasingly pressing. The Consortium of European Taxonomic Facilities agreed on a common system of HTTP-URI-based stable identifiers which is now rolled out to its member organizations. The system follows Linked Open Data principles and implements redirection mechanisms to human-readable and machine-readable representations of specimens facilitating seamless integration into the growing semantic web. The implementation of stable identifiers across collection organizations is supported with open source provider software scripts, best practices documentations and recommendations for RDF metadata elements facilitating harmonized access to collection information in web portals. Database URL: http://cetaf.org/cetaf-stable-identifiers PMID:28365724
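
    A sketch of how such identifiers are typically consumed: one stable HTTP URI redirects to a human-readable page or to machine-readable RDF depending on content negotiation. The specimen URI below is a placeholder:

    ```python
    import requests

    uri = "http://example.org/specimen/B1008750"  # hypothetical stable identifier

    html = requests.get(uri, headers={"Accept": "text/html"})
    rdf = requests.get(uri, headers={"Accept": "application/rdf+xml"})

    print("HTML representation from:", html.url)  # final URL after redirection
    print("RDF representation from:", rdf.url)
    print("RDF content type:", rdf.headers.get("Content-Type"))
    ```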

  8. Informatics in radiology (infoRAD): HTML and Web site design for the radiologist: a primer.

    PubMed

    Ryan, Anthony G; Louis, Luck J; Yee, William C

    2005-01-01

    A Web site has enormous potential as a medium for the radiologist to store, present, and share information in the form of text, images, and video clips. With a modest amount of tutoring and effort, designing a site can be as painless as preparing a Microsoft PowerPoint presentation. The site can then be used as a hub for the development of further offshoots (eg, Web-based tutorials, storage for a teaching library, publication of information about one's practice, and information gathering from a wide variety of sources). By learning the basics of hypertext markup language (HTML), the reader will be able to produce a simple and effective Web page that permits display of text, images, and multimedia files. The process of constructing a Web page can be divided into five steps: (a) creating a basic template with formatted text, (b) adding color, (c) importing images and multimedia files, (d) creating hyperlinks, and (e) uploading one's page to the Internet. This Web page may be used as the basis for a Web-based tutorial comprising text documents and image files already in one's possession. Finally, there are many commercially available packages for Web page design that require no knowledge of HTML.
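
    As an illustration, the five steps can be condensed into a short script that writes a minimal page with formatted text, color, an image and a hyperlink (uploading the resulting file to a server remains the final step); a sketch, with placeholder file names:

    ```python
    # Writes a minimal teaching-file page; file names are placeholders.
    page = """<!DOCTYPE html>
    <html>
      <head>
        <title>Teaching file</title>
        <style>body { background-color: #f0f4f8; } h1 { color: navy; }</style>
      </head>
      <body>
        <h1>Chest radiograph tutorial</h1>
        <p><b>Case 1:</b> frontal chest radiograph.</p>
        <img src="case1.jpg" alt="Chest radiograph" width="400">
        <p><a href="cases.html">More cases</a></p>
      </body>
    </html>
    """

    with open("index.html", "w", encoding="utf-8") as f:
        f.write(page)
    ```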

  9. Optimal foraging, not biogenetic law, predicts spider orb web allometry.

    PubMed

    Gregorič, Matjaž; Kiesbüy, Heine C; Lebrón, Shakira G Quiñones; Rozman, Alenka; Agnarsson, Ingi; Kuntner, Matjaž

    2013-03-01

    The biogenetic law posits that the ontogeny of an organism recapitulates the pattern of evolutionary changes. Morphological evidence has offered some support for, but also considerable evidence against, the hypothesis. However, biogenetic law in behavior remains underexplored. As physical manifestation of behavior, spider webs offer an interesting model for the study of ontogenetic behavioral changes. In orb-weaving spiders, web symmetry often gets distorted through ontogeny, and these changes have been interpreted to reflect the biogenetic law. Here, we test the biogenetic law hypothesis against the alternative, the optimal foraging hypothesis, by studying the allometry in Leucauge venusta orb webs. These webs range in inclination from vertical through tilted to horizontal; biogenetic law predicts that allometry relates to ontogenetic stage, whereas optimal foraging predicts that allometry relates to gravity. Specifically, pronounced asymmetry should only be seen in vertical webs under optimal foraging theory. We show that, through ontogeny, vertical webs in L. venusta become more asymmetrical in contrast to tilted and horizontal webs. Biogenetic law thus cannot explain L. venusta web allometry, but our results instead support optimization of foraging area in response to spider size.

  10. Now That We've Found the "Hidden Web," What Can We Do with It?

    ERIC Educational Resources Information Center

    Cole, Timothy W.; Kaczmarek, Joanne; Marty, Paul F.; Prom, Christopher J.; Sandore, Beth; Shreeves, Sarah

    The Open Archives Initiative (OAI) Protocol for Metadata Harvesting (PMH) is designed to facilitate discovery of the "hidden web" of scholarly information, such as that contained in databases, finding aids, and XML documents. OAI-PMH supports standardized exchange of metadata describing items in disparate collections, such as those…
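
    A minimal sketch of an OAI-PMH harvest: the protocol is plain HTTP with a 'verb' parameter and XML responses. The repository base URL below is a placeholder:

    ```python
    import requests
    import xml.etree.ElementTree as ET

    BASE = "https://example.org/oai"  # hypothetical OAI-PMH endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    response = requests.get(
        BASE, params={"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    )
    root = ET.fromstring(response.content)

    for record in root.iter(OAI + "record"):
        title = record.find(".//" + DC + "title")
        if title is not None:
            print(title.text)
    ```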

  11. Pride on the Other Side: The Emergence of LGBT Web Sites for Prospective Students

    ERIC Educational Resources Information Center

    Mathis, Daniel; Tremblay, Christopher

    2010-01-01

    For several decades, colleges have maintained an LGBT Web presence for currently enrolled students. These Web sites inform students about resources, services, events, and staff . They serve as a way to communicate a school's inclusivity and commitment to the LGBT population. Only recently have Web sites specifically targeted for the prospective…

  12. Web-Based Interactive Electronic Technical Manual (IETM) Common User Interface Style Guide, Version 2.0

    DTIC Science & Technology

    2003-07-01

    Technical report: Web-Based Interactive Electronic Technical Manual (IETM) Common User Interface Style Guide, Version 2.0, July 2003, by L. John Junod. … The principal authors of this document were: John Junod (NSWC, Carderock Division), Phil Deuell (AMSEC LLC), Kathleen Moore

  13. Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture

    DOEpatents

    Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA

    2009-12-22

    Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.

  14. A dynamical classification of the cosmic web

    NASA Astrophysics Data System (ADS)

    Forero-Romero, J. E.; Hoffman, Y.; Gottlöber, S.; Klypin, A.; Yepes, G.

    2009-07-01

    In this paper, we propose a new dynamical classification of the cosmic web. Each point in space is classified as one of four possible web types: voids, sheets, filaments and knots. The classification is based on the evaluation of the deformation tensor (i.e. the Hessian of the gravitational potential) on a grid, counting the number of eigenvalues above a certain threshold, λth, at each grid point; zero, one, two or three such eigenvalues correspond to a void, sheet, filament or knot grid point, respectively. The collection of neighbouring grid points (friends of friends) of the same web type constitutes voids, sheets, filaments and knots as extended web objects. A simple dynamical consideration of the emergence of the web suggests that the threshold should not be null, as in previous implementations of the algorithm. A detailed dynamical analysis would have found different threshold values for the collapse of sheets, filaments and knots. Short of such an analysis, a phenomenological approach has been opted for, looking for a single threshold to be determined by analysing numerical simulations. Our cosmic web classification has been applied and tested against a suite of large (dark matter only) cosmological N-body simulations. In particular, the dependence of the volume and mass filling fractions on λth and on the resolution has been calculated for the four web types. We also study the percolation properties of voids and filaments. Our main findings are as follows. (i) Already at λth = 0.1 the resulting web classification reproduces the visual impression of the cosmic web. (ii) Between 0.2 <~ λth <~ 0.4, a system of percolated voids coexists with a net of interconnected filaments. This suggests a reasonable choice for λth as the parameter that defines the cosmic web. (iii) The dynamical nature of the suggested classification provides a robust framework for incorporating environmental information into galaxy formation models
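
    A sketch of the classification rule on a grid of (here randomly generated) symmetric tensors: count the eigenvalues above λth at each point and map zero/one/two/three to void/sheet/filament/knot.

    ```python
    import numpy as np

    WEB_TYPES = ["void", "sheet", "filament", "knot"]

    def classify_web(hessians, lambda_th=0.2):
        """hessians: array of shape (..., 3, 3), symmetric at each grid point."""
        eigenvalues = np.linalg.eigvalsh(hessians)     # batched eigenvalues
        return (eigenvalues > lambda_th).sum(axis=-1)  # 0..3 per grid point

    rng = np.random.default_rng(1)
    a = rng.normal(size=(16, 16, 16, 3, 3))
    grid = (a + np.swapaxes(a, -1, -2)) / 2            # stand-in deformation tensors

    types = classify_web(grid, lambda_th=0.2)
    for code, name in enumerate(WEB_TYPES):
        print(f"{name}: {(types == code).mean():.2%} of grid points")
    ```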

  15. Adding Processing Functionality to the Sensor Web

    NASA Astrophysics Data System (ADS)

    Stasch, Christoph; Pross, Benjamin; Jirka, Simon; Gräler, Benedikt

    2017-04-01

    The Sensor Web allows discovering, accessing and tasking different kinds of environmental sensors on the Web, ranging from simple in-situ sensors to remote sensing systems. However, (geo-)processing functionality needs to be applied to integrate data from different sensor sources and to generate higher-level information products. Yet, a common standardized approach for processing sensor data in the Sensor Web is still missing, and the integration differs from application to application. Standardizing not only the provision of sensor data but also its processing facilitates sharing and re-use of processing modules, enables reproducibility of processing results, and provides a common way to integrate external scalable processing facilities or legacy software. In this presentation, we provide an overview of ongoing research projects that develop concepts for coupling standardized geoprocessing technologies with Sensor Web technologies. First, different architectures for coupling sensor data services with geoprocessing services are presented. Afterwards, profiles of the OGC Web Processing Service for linear regression and spatio-temporal interpolation are introduced, which consume sensor data from, and upload predictions to, Sensor Observation Services. The profiles are implemented in processing services for the hydrological domain. Finally, we illustrate how the R software can be coupled with existing OGC Sensor Web and Geoprocessing Services and present an example of how a Web app can be built that allows exploring the results of environmental models in an interactive way using the R Shiny framework. All of the software presented is available as Open Source Software.
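
    As an indication of how such a coupling looks at the protocol level, the following hypothetical sketch pulls observations from a Sensor Observation Service and hands them to a Web Processing Service using plain OGC key-value-pair requests. The service URLs, the observed property and the process identifier are placeholders, not endpoints or profiles from the projects above.

    # Hypothetical sketch: feed SOS observations into a WPS process using
    # plain OGC KVP requests. URLs and identifiers are placeholders.
    import requests

    SOS_URL = "https://example.org/sos"   # placeholder Sensor Observation Service
    WPS_URL = "https://example.org/wps"   # placeholder Web Processing Service

    # 1. Fetch observations from the SOS (KVP GetObservation).
    obs = requests.get(SOS_URL, params={
        "service": "SOS", "version": "2.0.0", "request": "GetObservation",
        "observedProperty": "water_level",   # hypothetical property
    })

    # 2. Hand the observation document to a WPS process (KVP Execute).
    # The data are passed by reference via the resolved SOS request URL;
    # the exact KVP encoding of complex inputs varies between services.
    result = requests.get(WPS_URL, params={
        "service": "WPS", "version": "1.0.0", "request": "Execute",
        "identifier": "interpolate.idw",      # hypothetical process id
        "datainputs": "observations=@href=" + obs.url,
    })
    print(result.status_code)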

  16. Flow Webs: Mechanism and Architecture for the Implementation of Sensor Webs

    NASA Astrophysics Data System (ADS)

    Gorlick, M. M.; Peng, G. S.; Gasster, S. D.; McAtee, M. D.

    2006-12-01

    The sensor web is a distributed, federated infrastructure much like its predecessors, the internet and the world wide web. It will be a federation of many sensor webs, large and small, under many distinct spans of control, that loosely cooperate and share information for many purposes. Realistically, it will grow piecemeal as distinct, individual systems are developed and deployed, some expressly built for a sensor web while many others were created for other purposes. The architecture of the sensor web is therefore of fundamental import, and architectural strictures that inhibit innovation, experimentation, sharing or scaling may prove fatal. Drawing upon the architectural lessons of the world wide web, we offer a novel system architecture, the flow web, that elevates flows (sequences of messages over a domain of interest, constrained in both time and space) to a position of primacy as a dynamic, real-time medium of information exchange for computational services. The flow web captures, in a single, uniform architectural style, the conflicting demands of the sensor web, including dynamic adaptation to changing conditions, ease of experimentation, rapid recovery from the failures of sensors and models, automated command and control, incremental development and deployment, and integration at multiple levels (in many cases, at different times). Our conception of sensor webs (dynamic amalgamations of sensor webs, each constructed within a flow web infrastructure) holds substantial promise for earth science missions in general, and for weather, air quality, and disaster management in particular. Flow webs are, by philosophy, design and implementation, a dynamic infrastructure that permits massive adaptation in real time. Flows may be attached to and detached from services at will, even while information is in transit through the flow. This concept, flow mobility, permits dynamic integration of earth science products and modeling resources in response to real

  17. [Consensus document for the detection and management of chronic kidney disease].

    PubMed

    Martínez-Castelao, Alberto; Górriz, José L; Bover, Jordi; Segura-de la Morena, Julián; Cebollada, Jesús; Escalada, Javier; Esmatjes, Enric; Fácila, Lorenzo; Gamarra, Javier; Gràcia, Silvia; Hernández-Moreno, Julio; Llisterri-Caro, José L; Mazón, Pilar; Montañés, Rosario; Morales-Olivas, Francisco; Muñoz-Torres, Manuel; de Pablos-Velasco, Pedro; de Santiago, Ana; Sánchez-Celaya, Marta; Suárez, Carmen; Tranche, Salvador

    2014-01-01

    Chronic kidney disease (CKD) is an important global health problem, affecting about 10% of the Spanish population, causing high morbidity and mortality for the patient and a high consumption of total health resources for the National Health System. This is a summary of an executive consensus document of ten scientific societies involved in the care of renal patients, which updates the consensus document published in 2007. The full extended document can be consulted on the web page of each society. The aspects included in the document are: concept, epidemiology and risk factors for CKD; diagnostic criteria, evaluation and stages of CKD, albuminuria and estimation of the glomerular filtration rate; progression factors for renal damage; criteria for patient referral; follow-up and control objectives for each specialty; prevention of nephrotoxicity; detection of cardiovascular damage; diet, lifestyle and treatment attitudes: hypertension, dyslipidaemia, hyperglycaemia, smoking, obesity, hyperuricaemia, anaemia, mineral and bone disorders; multidisciplinary management across Primary Care, other specialties and Nephrology; integrated management of CKD patients on haemodialysis and peritoneal dialysis and of renal transplant patients; and management of the uraemic patient in palliative care. We hope that this document may be of help for the multidisciplinary management of CKD patients by summarizing the most up-to-date recommendations. Copyright © 2014 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Published by Elsevier España. All rights reserved.

  18. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web Service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a means to support business-to-business (B2B) or enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Web Ontology Language for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to show that the process model is a better choice than the service profile for service discovery; second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works carried out in this area have proposed solutions only for the composition of atomic services, whereas this article proposes a solution for the composition of composite semantic web services.
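
    The idea of matching at the atomic-process level, rather than against one profile for a whole service, can be illustrated with a toy sketch: a composite service is represented by its atomic processes with typed inputs and outputs, and a request is matched against each process, allowing subsumption over a miniature concept hierarchy. Everything here (the hierarchy, the service, the matching rule) is invented for illustration and is not the article's algorithm.

    # Toy sketch of fine-grained matching at the atomic-process level.
    from dataclasses import dataclass

    # Minimal concept hierarchy standing in for an ontology.
    SUBCLASS = {"CreditCard": "Payment", "Payment": "Thing", "Invoice": "Thing"}

    def subsumes(general, specific):
        while specific is not None:
            if specific == general:
                return True
            specific = SUBCLASS.get(specific)
        return False

    @dataclass
    class AtomicProcess:
        name: str
        inputs: list
        outputs: list

    # Hypothetical composite service, exposed as its atomic processes.
    composite_service = [
        AtomicProcess("CheckInvoice", ["Invoice"], ["Payment"]),
        AtomicProcess("ChargeCard", ["CreditCard"], ["Invoice"]),
    ]

    def match(requested_input, requested_output, processes):
        # Return atomic processes whose signature fits the request,
        # allowing ontology subsumption on the types.
        return [p for p in processes
                if any(subsumes(i, requested_input) for i in p.inputs)
                and any(subsumes(requested_output, o) for o in p.outputs)]

    print(match("CreditCard", "Thing", composite_service))  # -> ChargeCard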

  19. Increased capture of pediatric surgical complications utilizing a novel case-log web application to enhance quality improvement.

    PubMed

    Fisher, Jason C; Kuenzler, Keith A; Tomita, Sandra S; Sinha, Prashant; Shah, Paresh; Ginsburg, Howard B

    2017-01-01

    Documenting surgical complications is limited by multiple barriers and is not fostered in the electronic health record. Tracking complications is essential for quality improvement (QI) and required for board certification. Current registry platforms do not facilitate meaningful complication reporting. We developed a novel web application that improves accuracy and reduces barriers to documenting complications. We deployed a custom web application that allows pediatric surgeons to maintain case logs. The program includes a module for entering complication data in real time, with reminders to enter outcome data at key postoperative intervals to optimize recall of events. Between October 1, 2014, and March 31, 2015, frequencies of surgical complications captured by the existing hospital reporting system were compared with data aggregated by our application. A total of 780 cases were captured by the web application, compared with 276 cases registered by the hospital system. We observed an increase in the capture of major complications when compared to the hospital dataset (14 events vs. 4 events). This web application improved real-time reporting of surgical complications, exceeding the accuracy of administrative datasets. Custom informatics solutions may help reduce barriers to self-reporting of adverse events and improve the data that presently inform pediatric surgical QI. Diagnostic study/retrospective study. Level III, case-control study. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. 77 FR 26304 - Federal Housing Administration (FHA) Healthcare Facility Documents: Proposed Revisions and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ...Consistent with the Paperwork Reduction Act of 1995 (PRA), HUD is publishing for public comment a comprehensive set of closing and other documents used in connection with transactions involving healthcare facilities (excluding hospitals) that are insured pursuant to section 232 of the National Housing Act (Section 232). In addition to meeting PRA requirements, this notice seeks public comment for the purpose of enlisting input from the lending industry and other interested parties in the development, updating, and adoption of a set of instruments (collectively, healthcare facility documents) that offer the requisite protection to all parties in these FHA-insured mortgage transactions, consistent with modern real estate and mortgage lending laws and practices. The healthcare facility documents, which are the subject of this notice, can be viewed on HUD's Web site: www.hud.gov/232forms. HUD is also publishing today a proposed rule that will submit for public comment certain revisions to FHA's Section 232 regulations for the purpose of ensuring consistency between the program regulations and the revised healthcare facility documents.

  1. A Semantic Approach for Geospatial Information Extraction from Unstructured Documents

    NASA Astrophysics Data System (ADS)

    Sallaberry, Christian; Gaio, Mauro; Lesbegueries, Julien; Loustau, Pierre

    Local cultural heritage document collections are characterized by their content, which is strongly attached to a territory and its land history (i.e., geographical references). Our contribution aims at making the content retrieval process more efficient whenever a query includes geographic criteria. We propose a core model for a formal representation of geographic information. It takes into account characteristics of different modes of expression, such as written language, captures of drawings, maps, photographs, etc. We have developed a prototype that fully implements geographic information extraction (IE) and geographic information retrieval (IR) processes. All PIV prototype processing resources are designed as Web Services. We propose a geographic IE process based on semantic treatment as a supplement to classical IE approaches. We implement geographic IR by using intersection computing algorithms that seek out any intersection between formal geocoded representations of geographic information in a user query and similar representations in document collection indexes.
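
    The intersection step can be pictured with a small sketch, assuming document index entries and the query have already been geocoded to simple footprints (the PIV prototype's actual representations are richer than bounding boxes). All coordinates and document identifiers below are hypothetical.

    # Minimal sketch of intersection-based geographic IR with Shapely.
    from shapely.geometry import box

    # Hypothetical geocoded footprints: (min_lon, min_lat, max_lon, max_lat).
    doc_index = {
        "doc-A": box(-0.60, 43.25, -0.30, 43.40),
        "doc-B": box(1.35, 43.55, 1.50, 43.65),
    }
    query_footprint = box(-0.70, 43.20, -0.25, 43.45)

    # Keep documents whose footprint intersects the query, ranked by overlap.
    hits = {doc_id: fp.intersection(query_footprint).area
            for doc_id, fp in doc_index.items()
            if fp.intersects(query_footprint)}
    print(sorted(hits, key=hits.get, reverse=True))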

  2. Web data mining

    NASA Astrophysics Data System (ADS)

    Wibonele, Kasanda J.; Zhang, Yanqing

    2002-03-01

    A web data mining system using granular computing and ASP programming is proposed. This is a web-based application which allows web users to submit survey data for many different companies. The survey is a collection of questions that helps these companies develop and improve their business and customer service by analyzing the survey data submitted by their clients. The application allows users to submit data from anywhere, and all survey data are collected into a database for further analysis. An administrator can log in to the system and view all the data submitted. The web application resides on a web server, and the database resides on the MS SQL server.

  3. Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.

    ERIC Educational Resources Information Center

    Eagan, Ann; Bender, Laura

    Searching on the world wide web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…

  4. Assimilation of Diazotrophic Nitrogen into Pelagic Food Webs

    PubMed Central

    Woodland, Ryan J.; Holland, Daryl P.; Beardall, John; Smith, Jonathan; Scicluna, Todd; Cook, Perran L. M.

    2013-01-01

    The fate of diazotrophic nitrogen (N(D)) fixed by planktonic cyanobacteria in pelagic food webs remains unresolved, particularly for toxic cyanophytes that are selectively avoided by most herbivorous zooplankton. Current theory suggests that N(D) fixed during cyanobacterial blooms can enter planktonic food webs contemporaneously with peak bloom biomass via direct grazing of zooplankton on cyanobacteria or via the uptake of bioavailable N(D) (exuded from viable cyanobacterial cells) by palatable phytoplankton or microbial consortia. Alternatively, N(D) can enter planktonic food webs post-bloom following the remineralization of bloom detritus. Although the relative contribution of these processes to planktonic nutrient cycles is unknown, we hypothesized that assimilation of bioavailable N(D) (e.g., nitrate, ammonium) by palatable phytoplankton and subsequent grazing by zooplankton (either during or after the cyanobacterial bloom) would be the primary pathway by which N(D) was incorporated into the planktonic food web. Instead, in situ stable isotope measurements and grazing experiments clearly documented that the assimilation of N(D) by zooplankton outpaced assimilation by palatable phytoplankton during a bloom of toxic Nodularia spumigena Mertens. We identified two distinct temporal phases in the trophic transfer of N(D) from N. spumigena to the plankton community. The first phase was a highly dynamic transfer of N(D) to zooplankton with rates that covaried with bloom biomass while bypassing other phytoplankton taxa; a trophic transfer that we infer was routed through bloom-associated bacteria. The second phase was a slowly accelerating assimilation of the dissolved-N(D) pool by phytoplankton that was decoupled from contemporaneous variability in N. spumigena concentrations. These findings provide empirical evidence that N(D) can be assimilated and transferred rapidly throughout natural plankton communities and yield insights into the specific processes underlying the propagation of N(D)

  5. Assimilation of diazotrophic nitrogen into pelagic food webs.

    PubMed

    Woodland, Ryan J; Holland, Daryl P; Beardall, John; Smith, Jonathan; Scicluna, Todd; Cook, Perran L M

    2013-01-01

    The fate of diazotrophic nitrogen (N(D)) fixed by planktonic cyanobacteria in pelagic food webs remains unresolved, particularly for toxic cyanophytes that are selectively avoided by most herbivorous zooplankton. Current theory suggests that N(D) fixed during cyanobacterial blooms can enter planktonic food webs contemporaneously with peak bloom biomass via direct grazing of zooplankton on cyanobacteria or via the uptake of bioavailable N(D) (exuded from viable cyanobacterial cells) by palatable phytoplankton or microbial consortia. Alternatively, N(D) can enter planktonic food webs post-bloom following the remineralization of bloom detritus. Although the relative contribution of these processes to planktonic nutrient cycles is unknown, we hypothesized that assimilation of bioavailable N(D) (e.g., nitrate, ammonium) by palatable phytoplankton and subsequent grazing by zooplankton (either during or after the cyanobacterial bloom) would be the primary pathway by which N(D) was incorporated into the planktonic food web. Instead, in situ stable isotope measurements and grazing experiments clearly documented that the assimilation of N(D) by zooplankton outpaced assimilation by palatable phytoplankton during a bloom of toxic Nodularia spumigena Mertens. We identified two distinct temporal phases in the trophic transfer of N(D) from N. spumigena to the plankton community. The first phase was a highly dynamic transfer of N(D) to zooplankton with rates that covaried with bloom biomass while bypassing other phytoplankton taxa; a trophic transfer that we infer was routed through bloom-associated bacteria. The second phase was a slowly accelerating assimilation of the dissolved-N(D) pool by phytoplankton that was decoupled from contemporaneous variability in N. spumigena concentrations. These findings provide empirical evidence that N(D) can be assimilated and transferred rapidly throughout natural plankton communities and yield insights into the specific processes underlying

  6. WebGL for Rosetta Science Planning

    NASA Astrophysics Data System (ADS)

    Schmidt, Albrecht; Völk, Stefan; Grieger, Björn

    2013-04-01

    Rosetta is a mission of the European Space Agency (ESA) to rendezvous with comet Churyumov-Gerasimenko in 2014. The trajectory and operations of the mission are particularly complex, have many free parameters and are novel to the community. To support science planning, communicate operational ideas and disseminate operational scenarios to the scientific community, the science ground segment makes use of Web-based visualisation technologies. Using the recent standard WebGL, static pages of time-dependent three-dimensional views of the spacecraft and the fields of view of the instruments are generated directly from the operational files. These can then be viewed in modern Web browsers for understanding or verification, and be analysed and correlated with other studies. Variable timesteps make it possible to provide both overviews and detailed animated scenes. The technical challenges that are particular to Web-based environments include: (1) In traditional OpenGL, it is much easier to compute needed data on demand, since the visualisation runs natively on a usually quite powerful computer; in a WebGL application, requests for additional data have to be passed through a Web server, so they are more complex and also require a more complex infrastructure. (2) The volume of data that can be kept in a browser environment is limited and has to be transferred over often slow network links; thus, careful design and reduction of data is required. (3) Although browser support for WebGL has improved since the authors started using it, it is often not well supported on mobile and small devices. (4) Web browsers often only support limited end-user interactions with a mouse or keyboard. While some of the challenges can be expected to become less important as technological progress continues, others seem to be more inherent to the approach. On the positive side, the authors' experiences include: (1) low threshold in the community to using the visualisations, (2), thus, cooperative use

  7. Creating Web-Based Scientific Applications Using Java Servlets

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Arnold, James O. (Technical Monitor)

    2001-01-01

    There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently, and the application can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades are simplified since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines: the servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper discusses are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process is demonstrated by building a web-based application to compute stagnation-point heat transfer.
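
    Although the paper's examples are Java servlets, the request-response pattern it describes (accept parameters, run a computation on the server, return the result) can be sketched with any server-side toolkit; the following minimal Python handler illustrates the same pattern, with a placeholder computation standing in for a real solver.

    # Minimal illustration of the servlet-style request/response pattern,
    # using Python's standard library (the paper itself uses Java servlets).
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class ComputeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Accept input data from the client as query parameters.
            params = parse_qs(urlparse(self.path).query)
            velocity = float(params.get("velocity", ["0"])[0])
            # Run the computation on the server (placeholder formula).
            heat = 0.5 * velocity ** 3
            body = f"heating estimate: {heat}\n".encode()
            # Send the output back to the client machine.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ComputeHandler).serve_forever()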

  8. Working with WebQuests: Making the Web Accessible to Students with Disabilities.

    ERIC Educational Resources Information Center

    Kelly, Rebecca

    2000-01-01

    This article describes how students with disabilities in regular classes are using the WebQuest lesson format to access the Internet. It explains essential WebQuest principles, creating a draft Web page, and WebQuest components. It offers an example of a WebQuest about salvaging the sunken ships, Titanic and Lusitania. A WebQuest planning form is…

  9. Increasing the availability and usability of terrestrial ecology data through geospatial Web services and visualization tools (Invited)

    NASA Astrophysics Data System (ADS)

    Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.

    2010-12-01

    Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data, and limit data users’ abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create the data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web Service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools allows for publicizing the data sets and makes them more available across the Internet. The ORNL DAAC has also created a Web
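
    As an indication of how clients consume such standardized endpoints, the sketch below requests a map through a WMS using the OWSLib client; the service URL and layer name are placeholders, not ORNL DAAC endpoints.

    # Hypothetical sketch: fetch a rendered map from an OGC WMS with OWSLib.
    from owslib.wms import WebMapService

    wms = WebMapService("https://example.org/wms", version="1.1.1")
    print(list(wms.contents))          # layers advertised by the service

    img = wms.getmap(
        layers=["vegetation_index"],   # hypothetical layer name
        srs="EPSG:4326",
        bbox=(-180, -90, 180, 90),
        size=(600, 300),
        format="image/png",
    )
    with open("map.png", "wb") as f:
        f.write(img.read())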

  10. Include Your Patrons in Web Design. Computers in Small Libraries

    ERIC Educational Resources Information Center

    Roberts, Gary

    2005-01-01

    Successful Web publishing requires not only technical skills but also a refined sense of taste, a good understanding of design, and strong writing abilities. When designing a library Web page, a person must possess all of these talents and be able to market to a broad spectrum of patrons. As a result, library sites vary widely in their style and…

  11. Visual Links in the World-Wide Web: The Uses and Limitations of Image Maps.

    ERIC Educational Resources Information Center

    Cochenour, John J.; And Others

    As information delivery systems on the Internet increasingly evolve into World Wide Web browsers, understanding key graphical elements of the browser interface is critical to the design of effective information display and access tools. Image maps are one such element, and this document describes a pilot study that collected, reviewed, and…

  12. Food-web models predict species abundances in response to habitat change.

    PubMed

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-10-01

    Plant and animal population sizes inevitably change following habitat loss, but the mechanisms underlying these changes are poorly understood. We experimentally altered habitat volume and eliminated top trophic levels of the food web of invertebrates that inhabit rain-filled leaves of the carnivorous pitcher plant Sarracenia purpurea. Path models that incorporated food-web structure better predicted population sizes of food-web constituents than did simple keystone species models, models that included only autecological responses to habitat volume, or models including both food-web structure and habitat volume. These results provide the first experimental confirmation that trophic structure can determine species abundances in the face of habitat loss.

  13. Food-Web Models Predict Species Abundances in Response to Habitat Change

    PubMed Central

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-01-01

    Plant and animal population sizes inevitably change following habitat loss, but the mechanisms underlying these changes are poorly understood. We experimentally altered habitat volume and eliminated top trophic levels of the food web of invertebrates that inhabit rain-filled leaves of the carnivorous pitcher plant Sarracenia purpurea. Path models that incorporated food-web structure better predicted population sizes of food-web constituents than did simple keystone species models, models that included only autecological responses to habitat volume, or models including both food-web structure and habitat volume. These results provide the first experimental confirmation that trophic structure can determine species abundances in the face of habitat loss. PMID:17002518

  14. BPELPower—A BPEL execution engine for geospatial web services

    NASA Astrophysics Data System (ADS)

    Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi

    2012-10-01

    The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows, especially in its capabilities for handling the Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios are discussed in detail to demonstrate the capabilities of BPELPower. The study shows a standards-compliant, Web-based approach for properly supporting geospatial processing, with the only enhancements at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high-performance parallel processing and broader Web paradigms.
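
    The kind of coordination such an engine provides declaratively can be pictured with a hand-rolled sketch: a two-step workflow that fetches features from a WFS and hands them to a WPS process, with crude fault handling. The endpoints, feature type and process identifier are placeholders, and a real BPEL engine would express the sequencing, data flow and fault handlers in the workflow document rather than in code.

    # Hand-rolled sketch of what a BPEL engine automates declaratively.
    import requests

    WFS = "https://example.org/wfs"   # placeholder service endpoints
    WPS = "https://example.org/wps"

    def invoke(url, params, retries=3):
        """One workflow activity: call a service, with crude fault handling."""
        for _ in range(retries):
            resp = requests.get(url, params=params, timeout=30)
            if resp.ok:
                return resp
        raise RuntimeError(f"service at {url} kept failing")

    # Step 1: fetch features (GML) from the WFS.
    features = invoke(WFS, {"service": "WFS", "version": "1.1.0",
                            "request": "GetFeature", "typeName": "watersheds"})

    # Step 2: hand the feature document to a WPS process by reference.
    # (Exact KVP encoding of complex inputs varies between implementations.)
    result = invoke(WPS, {"service": "WPS", "version": "1.0.0",
                          "request": "Execute", "identifier": "buffer",
                          "datainputs": "features=@href=" + features.url})
    print(result.status_code)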

  15. Ondex Web: web-based visualization and exploration of heterogeneous biological networks.

    PubMed

    Taubert, Jan; Hassani-Pak, Keywan; Castells-Brooke, Nathalie; Rawlings, Christopher J

    2014-04-01

    Ondex Web is a new web-based implementation of the network visualization and exploration tools from the Ondex data integration platform. New features such as context-sensitive menus and annotation tools provide users with intuitive ways to explore and manipulate the appearance of heterogeneous biological networks. Ondex Web is open source, written in Java and can be easily embedded into Web sites as an applet. Ondex Web supports loading data from a variety of network formats, such as XGMML, NWB, Pajek and OXL. http://ondex.rothamsted.ac.uk/OndexWeb.

  16. 28 CFR 0.180 - Documents designated as orders.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Orders of the Attorney General § 0.180 Documents designated as orders. All documents relating to the... by the Attorney General or to general departmental policy shall be designated as orders and shall be issued only by the Attorney General in a separate, numbered series. Classified orders shall be identified...

  17. WebAlchemist: a Web transcoding system for mobile Web access in handheld devices

    NASA Astrophysics Data System (ADS)

    Whang, Yonghyun; Jung, Changwoo; Kim, Jihong; Chung, Sungkwon

    2001-11-01

    In this paper, we describe the design and implementation of WebAlchemist, a prototype web transcoding system which automatically converts a given HTML page into a sequence of equivalent HTML pages that can be properly displayed on a handheld device. The WebAlchemist system is based on a set of HTML transcoding heuristics managed by the Transcoding Manager (TM) module. In order to tackle difficult-to-transcode pages, such as ones with large or complex table structures, we have developed several new transcoding heuristics that extract partial semantics from syntactic information such as the table width, font size and cascading style sheets. Subjective evaluation results using popular HTML pages (such as the CNN home page) show that WebAlchemist generates readable, structure-preserving transcoded pages which can be properly displayed on handheld devices.
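
    One heuristic of the kind described (restructuring pages around oversized tables) can be sketched as follows with BeautifulSoup: any table declared wider than a handheld screen is flattened into a linear sequence of blocks. This is an illustration of the approach, not code from WebAlchemist, and the width threshold is invented.

    # Illustrative table-linearization heuristic (not WebAlchemist code).
    from bs4 import BeautifulSoup

    MAX_WIDTH = 240  # hypothetical handheld screen width in pixels

    def linearize_wide_tables(html):
        soup = BeautifulSoup(html, "html.parser")
        for table in soup.find_all("table"):
            width = table.get("width")
            if width and width.isdigit() and int(width) > MAX_WIDTH:
                # Replace the table by its cells in reading order.
                for cell in table.find_all(["td", "th"]):
                    cell.name = "div"      # flatten cells to block elements
                for row in table.find_all("tr"):
                    row.unwrap()           # drop row wrappers
                table.unwrap()             # drop the table wrapper itself
        return str(soup)

    print(linearize_wide_tables(
        '<table width="600"><tr><td>a</td><td>b</td></tr></table>'))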

  18. WebSat ‐ A web software for microsatellite marker development

    PubMed Central

    Martins, Wellington Santos; Soares Lucas, Divino César; de Souza Neves, Kelligton Fabricio; Bertioli, David John

    2009-01-01

    Simple sequence repeats (SSR), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. We have developed a simple-to-use web software, called WebSat, for microsatellite molecular marker prediction and development. WebSat is accessible through the Internet, requiring no program installation. Although a web solution, it makes use of Ajax techniques, providing a rich, responsive user interface. WebSat allows the submission of sequences, visualization of microsatellites and the design of primers suitable for their amplification. The program allows full control of parameters and the easy export of the resulting data, thus facilitating the development of microsatellite markers. Availability: The web tool may be accessed at http://purl.oclc.org/NET/websat/ PMID:19255650
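
    The core prediction task (finding tandem repeats of short motifs) is easy to sketch; the following minimal detector reports perfect repeats of 1-6 bp motifs with at least four copies. The thresholds and the reporting format are illustrative, not WebSat's defaults.

    # Minimal SSR (microsatellite) detector: perfect tandem repeats of
    # 1-6 bp motifs, at least 4 copies. Thresholds are illustrative.
    import re

    SSR = re.compile(r"(([ACGT]{1,6}?)\2{3,})")  # motif repeated 4+ times

    def find_ssrs(sequence):
        hits = []
        for match in SSR.finditer(sequence.upper()):
            repeat, motif = match.group(1), match.group(2)
            hits.append({
                "motif": motif,
                "copies": len(repeat) // len(motif),
                "start": match.start() + 1,   # 1-based, as in most marker tools
                "end": match.end(),
            })
        return hits

    print(find_ssrs("ggcATATATATATgcgTTGTTGTTGTTGcc"))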

  19. Using the World Wide Web for GIDEP Problem Data Processing at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    McPherson, John W.; Haraway, Sandra W.; Whirley, J. Don

    1999-01-01

    Since April 1997, Marshall Space Flight Center has been using electronic transfer and the web to support our processing of Government-Industry Data Exchange Program (GIDEP) and NASA ALERT information. Specific aspects include: (1) extraction of ASCII text information from GIDEP for loading into Word documents for e-mail to ALERT actionees; (2) downloading of GIDEP form images in Adobe Acrobat (.pdf) format for internal storage and display on the MSFC ALERT web page; (3) linkage of stored GIDEP problem forms with summary information for access from the MSFC ALERT Distribution Summary Chart or from an HTML table of released MSFC ALERTs; (4) archival of historic ALERTs for reference by GIDEP ID, MSFC ID, or MSFC release date; (5) on-line tracking of ALERT response status using a Microsoft Access database and the web; and (6) on-line response to ALERTs from MSFC actionees through interactive web forms. The technique, benefits, effort, coordination, and lessons learned for each aspect are covered herein.

  20. A kinematic classification of the cosmic web

    NASA Astrophysics Data System (ADS)

    Hoffman, Yehuda; Metuki, Ofer; Yepes, Gustavo; Gottlöber, Stefan; Forero-Romero, Jaime E.; Libeskind, Noam I.; Knebe, Alexander

    2012-09-01

    A new approach for the classification of the cosmic web is presented. In extension of the previous work of Hahn et al. and Forero-Romero et al., the new algorithm is based on the analysis of the velocity shear tensor rather than the gravitational tidal tensor. The procedure consists of the construction of the shear tensor at each (grid) point in space and the evaluation of its three eigenvectors. A given point is classified as a void, sheet, filament or knot according to the number of eigenvalues above a certain threshold: 0, 1, 2 or 3, respectively. The threshold is treated as a free parameter that defines the web. The algorithm has been applied to a dark-matter-only simulation of a box of side length 64 h⁻¹ Mpc and N = 1024³ particles within the framework of the 5-year Wilkinson Microwave Anisotropy Probe/Λ cold dark matter (ΛCDM) model. The resulting velocity-based cosmic web resolves structures down to ≲0.1 h⁻¹ Mpc scales, as opposed to the ≈1 h⁻¹ Mpc scale of the tidal-based web. The underdense regions are made of extended voids bisected by planar sheets, whose density is also below the mean. The overdense regions are vastly dominated by the linear filaments and knots. The resolution achieved by the velocity-based cosmic web provides a platform for studying the formation of haloes and galaxies within the framework of the cosmic web.
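
    As a companion to the tidal-tensor case, the shear tensor itself is straightforward to build from a gridded velocity field; the sketch below follows the definition Σij = -(∂vi/∂xj + ∂vj/∂xi)/(2 H0), after which the eigenvalue counting proceeds exactly as in the tidal-based classification. The grid spacing, H0, the threshold and the random test field are all placeholders.

    # Build the normalized velocity shear tensor on a grid; eigenvalue
    # counting then proceeds as in the tidal-tensor classification.
    import numpy as np

    def shear_tensor(vx, vy, vz, dx=1.0, H0=1.0):
        """Sigma_ij = -(d v_i / d x_j + d v_j / d x_i) / (2 H0)."""
        v = [vx, vy, vz]
        sigma = np.empty(vx.shape + (3, 3))
        for i in range(3):
            dvi = np.gradient(v[i], dx)            # [d v_i / d x_j]
            for j in range(3):
                sigma[..., i, j] = dvi[j]
        sigma = -(sigma + np.swapaxes(sigma, -1, -2)) / (2.0 * H0)
        return np.linalg.eigvalsh(sigma)           # three eigenvalues per point

    # Toy usage on a random velocity field with a placeholder threshold:
    rng = np.random.default_rng(1)
    eig = shear_tensor(*rng.normal(size=(3, 16, 16, 16)))
    web_type = (eig > 0.3).sum(axis=-1)            # 0=void ... 3=knot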