ERIC Educational Resources Information Center
Larson, Ray R.
1996-01-01
Examines the bibliometrics of the World Wide Web based on analysis of Web pages collected by the Inktomi "Web Crawler" and on the use of the DEC AltaVista search engine for cocitation analysis of a set of Earth Science related Web sites. Looks at the statistical characteristics of Web documents and their hypertext links, and the…
2011-03-28
particular topic of interest. Paper-based documents require the availability of a physical instance of a document, involving the transport of documents... repository of documents via the World Wide Web and search engines offer support in locating documents that are likely to contain relevant information. The... Web, with news agencies, newspapers, various organizations, and individuals as sources. Clearly the analysis, interpretation, and integration of
Vogel, Markus; Kaisers, Wolfgang; Wassmuth, Ralf; Mayatepek, Ertan
2015-11-03
Clinical documentation has undergone a change due to the usage of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time on the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as the satisfaction of health professionals while accomplishing clinical documentation, but few studies in this area have been published to date. This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system of medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser's text area, and the time to complete the documentation including all necessary corrections, the correction effort, the number of characters, and the participant's mood were stored in a database. The underlying time comprised text entering, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. The number of clinical reports eligible for further analysis was 1455; of these, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not. Average documentation speed without ASR was 173 (SD 101) characters per minute, while it was 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04). Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted by ASR. Participants' average mood rating was 1.3 (SD 0.6) using ASR assistance compared to 1.6 (SD 0.7) without ASR assistance (P<.001). We conclude that medical documentation with the assistance of Web-based speech recognition leads to an increase in documentation speed, document length, and participant mood when compared to self-typing. Speech recognition is a meaningful and effective tool for the clinical documentation process.
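The speed comparison above was tested with permutation tests. A minimal sketch of such a test, using simulated draws that match the reported means and standard deviations rather than the study's data:

```python
# Two-sided permutation test for a difference in mean documentation
# speed. The samples are simulated to match the reported summary
# statistics; they are not the study data.
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a, b, n_perm=10_000):
    observed = np.mean(a) - np.mean(b)
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = np.mean(pooled[:len(a)]) - np.mean(pooled[len(a):])
        hits += abs(diff) >= abs(observed)
    return hits / n_perm

asr = rng.normal(217, 120, 718)      # characters/min with ASR
control = rng.normal(173, 101, 737)  # characters/min without ASR
print(f"P = {permutation_test(asr, control):.4f}")
```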
NASA Astrophysics Data System (ADS)
Fume, Kosei; Ishitani, Yasuto
2008-01-01
We propose a document categorization method based on a document model that can be defined externally for each task and that categorizes Web content or business documents into a target category in accordance with the similarity of the model. The main feature of the proposed method consists of two aspects of semantics extraction from an input document. The semantics of terms are extracted by the semantic pattern analysis and implicit meanings of document substructure are specified by a bottom-up text clustering technique focusing on the similarity of text line attributes. We have constructed a system based on the proposed method for trial purposes. The experimental results show that the system achieves more than 80% classification accuracy in categorizing Web content and business documents into 15 or 70 categories.
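The bottom-up clustering of text lines by attribute similarity can be pictured with a small sketch; the attribute set (indentation, font size, boldness) and the distance threshold are illustrative assumptions, not the paper's exact features:

```python
# Bottom-up grouping of text lines by layout-attribute similarity.
# Attribute names and the threshold are illustrative assumptions.
import numpy as np

def cluster_lines(attrs, threshold=1.5):
    """attrs: one vector per line, e.g. (indent, font_size, is_bold)."""
    clusters, centroids = [], []
    for vec in np.asarray(attrs, dtype=float):
        if centroids:
            dists = [np.linalg.norm(vec - c) for c in centroids]
            j = int(np.argmin(dists))
            if dists[j] < threshold:
                clusters[j].append(vec)
                centroids[j] = np.mean(clusters[j], axis=0)
                continue
        clusters.append([vec])
        centroids.append(vec)
    return clusters

lines = [(0, 12, 0), (0, 12, 0), (4, 10, 0), (0, 18, 1)]
print(len(cluster_lines(lines)))  # -> 3 distinct substructures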
Content Recognition and Context Modeling for Document Analysis and Retrieval
ERIC Educational Resources Information Center
Zhu, Guangyu
2009-01-01
The nature and scope of available documents are changing significantly in many areas of document analysis and retrieval as complex, heterogeneous collections become accessible to virtually everyone via the web. The increasing level of diversity presents a great challenge for document image content categorization, indexing, and retrieval.…
Development of Innovative Design Processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Y.S.; Park, C.O.
2004-07-01
The nuclear design analysis requires time-consuming and error-prone model-input preparation, code runs, output analysis, and quality assurance. To reduce human effort and improve design quality and productivity, the Innovative Design Processor (IDP) is being developed. The two basic principles of IDP are document-oriented design and web-based design. Under document-oriented design, the designer writes a design document called an active document and feeds it to a special program, which automatically produces the final document with the complete analysis, tables, and plots. The active documents can be written with ordinary HTML editors or created automatically on the web, which is another framework of IDP. Using a proper mix of server-side and client-side programming under the LAMP (Linux/Apache/MySQL/PHP) environment, the design process on the web is modeled in a design-wizard style so that even a novice designer can produce the design document easily. This automation using the IDP is now being implemented for all reload designs of Korea Standard Nuclear Power Plant (KSNP) type PWRs. The introduction of this process will allow a large reduction in all KSNP reload design efforts and provide a platform for design and R&D tasks of KNFC. (authors)
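The active-document idea can be sketched as a template pass in which placeholders embedded in the HTML design document are replaced by computed results; the placeholder syntax and the run_analysis() helper below are hypothetical, not IDP's actual mechanism:

```python
# Sketch of active-document rendering: placeholders such as
# {{table:power_map}} are replaced by the output of design code.
# The placeholder syntax and run_analysis() are hypothetical.
import re

def run_analysis(name):
    # Stand-in for a design-code run that returns an HTML fragment.
    return f"<table><caption>{name}</caption><tr><td>42</td></tr></table>"

def render_active_document(html):
    return re.sub(r"\{\{table:(\w+)\}\}",
                  lambda m: run_analysis(m.group(1)), html)

source = "<h1>Reload design</h1><p>Results:</p>{{table:power_map}}"
print(render_active_document(source))
```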
Characteristics of Food Industry Web Sites and "Advergames" Targeting Children
ERIC Educational Resources Information Center
Culp, Jennifer; Bell, Robert A.; Cassady, Diana
2010-01-01
Objective: To assess the content of food industry Web sites targeting children by describing strategies used to prolong their visits and foster brand loyalty; and to document health-promoting messages on these Web sites. Design: A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A…
Semantic Similarity between Web Documents Using Ontology
NASA Astrophysics Data System (ADS)
Chahal, Poonam; Singh Tomer, Manjeet; Kumar, Suresh
2018-06-01
The World Wide Web is a source of information available in the form of interlinked web pages. However, extracting significant information with the assistance of a search engine is extremely difficult, because web information is written mainly in natural language intended for human readers. Several efforts have been made to compute semantic similarity between documents using words, concepts, and concept relationships, but the available results still fall short of user requirements. This paper proposes a novel technique for computing semantic similarity between documents that takes into account not only the concepts present in the documents but also the relationships between those concepts. In our approach, documents are processed by constructing an ontology for each document using a base ontology and a dictionary of concept records, where each record lists the probable words that represent a given concept. Finally, the document ontologies are compared to find their semantic similarity, taking the relationships among concepts into account. Relevant concepts and relations between the concepts are explored by capturing author and user intention. The proposed semantic analysis technique provides improved results as compared to the existing techniques.
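One way to picture the combination of concept overlap and relationship overlap is a weighted Jaccard score; the equal weighting below is an assumption for illustration, not the authors' exact formula:

```python
# Sketch: similarity = alpha * concept overlap + (1 - alpha) * relation
# overlap. The weighting scheme is an assumption for illustration.
def semantic_similarity(doc_a, doc_b, alpha=0.5):
    ca, cb = set(doc_a["concepts"]), set(doc_b["concepts"])
    ra, rb = set(doc_a["relations"]), set(doc_b["relations"])
    jc = len(ca & cb) / len(ca | cb) if ca | cb else 0.0
    jr = len(ra & rb) / len(ra | rb) if ra | rb else 0.0
    return alpha * jc + (1 - alpha) * jr

a = {"concepts": {"web", "page", "ontology"},
     "relations": {("web", "contains", "page")}}
b = {"concepts": {"web", "page"},
     "relations": {("web", "contains", "page")}}
print(semantic_similarity(a, b))  # -> 0.833...
```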
An Evaluative Methodology for Virtual Communities Using Web Analytics
ERIC Educational Resources Information Center
Phippen, A. D.
2004-01-01
The evaluation of virtual community usage and user behaviour has its roots in social science approaches such as interview, document analysis and survey. Little evaluation is carried out using traffic or protocol analysis. Business approaches to evaluating customer/business web site usage are more advanced, in particular using advanced web…
WEBCAP: Web Scheduler for Distance Learning Multimedia Documents with Web Workload Considerations
ERIC Educational Resources Information Center
Habib, Sami; Safar, Maytham
2008-01-01
In many web applications, such as the distance learning, the frequency of refreshing multimedia web documents places a heavy burden on the WWW resources. Moreover, the updated web documents may encounter inordinate delays, which make it difficult to retrieve web documents in time. Here, we present an Internet tool called WEBCAP that can schedule…
Web Content Management Systems: An Analysis of Forensic Investigatory Challenges.
Horsman, Graeme
2018-02-26
With an increase in the creation and maintenance of personal websites, web content management systems are now frequently utilized. Such systems offer a low-cost and simple solution for those seeking to develop an online presence, and subsequently, a platform from which reported defamatory content, abuse, and copyright infringement has been witnessed. This article provides an introductory forensic analysis of the three most popular web content management systems currently available: WordPress, Drupal, and Joomla! Test platforms have been created, and their site structures have been examined to provide guidance for forensic practitioners facing investigations of this type. Results document the metadata available for establishing site ownership, user interactions, and stored content following analysis of artifacts including WordPress's wp_users and wp_comments tables, Drupal's "watchdog" records, and Joomla!'s _users and _content tables. Finally, investigatory limitations documenting the difficulties of investigating WCMS usage are noted, and analysis recommendations are offered. © 2018 American Academy of Forensic Sciences.
Girelli, Carlos Magno Alves
2016-05-01
Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared between themselves and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Some image-editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal, and tonal reversal. A discussion of the detection of lateral reversals is presented in this article, as well as a suggestion for reducing errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in the three levels of detail of the ACE-V methodology, some visual features of the fingerprint images can be helpful in identifying sources of forgeries and modus operandi, such as: limits and image contours, failures in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Therefore, fingerprints have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
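A crude way to flag a laterally reversed copy is to compare an image against both a candidate and its mirror; a sketch with Pillow and NumPy, noting that real AFIS matching works on minutiae rather than raw pixel correlation:

```python
# Flag possible lateral reversal: correlate image A with image B and
# with B mirrored. A high mirrored score suggests a reversed copy.
# Pixel correlation is a crude stand-in for minutiae-based matching.
import numpy as np
from PIL import Image, ImageOps

def reversal_scores(path_a, path_b, size=(256, 256)):
    a = np.asarray(Image.open(path_a).convert("L").resize(size), float)
    b_img = Image.open(path_b).convert("L").resize(size)
    b = np.asarray(b_img, float)
    b_mir = np.asarray(ImageOps.mirror(b_img), float)
    corr = lambda x, y: float(np.corrcoef(x.ravel(), y.ravel())[0, 1])
    return corr(a, b), corr(a, b_mir)

# direct, mirrored = reversal_scores("doc1_print.png", "doc2_print.png")
```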
SMART (Shop floor Modeling, Analysis and Reporting Tool Project
NASA Technical Reports Server (NTRS)
Centeno, Martha A.; Garcia, Maretys L.; Mendoza, Alicia C.; Molina, Louis A.; Correa, Daisy; Wint, Steve; Doice, Gregorie; Reyes, M. Florencia
1999-01-01
This document summarizes the design and prototype of the Shop floor Modeling, Analysis, and Reporting Tool (S.M.A.R.T.). A detailed description is found in the full documentation given to the NASA liaison. This documentation is also found on the A.R.I.S.E. Center web site, under a protected directory. Only authorized users can gain access to this site.
A usability evaluation exploring the design of American Nurses Association state web sites.
Alexander, Gregory L; Wakefield, Bonnie J; Anbari, Allison B; Lyons, Vanessa; Prentice, Donna; Shepherd, Marilyn; Strecker, E Bradley; Weston, Marla J
2014-08-01
National leaders are calling for opportunities to facilitate the Future of Nursing. Opportunities can be encouraged through state nurses association Web sites, which are part of the American Nurses Association, that are well designed, with appropriate content, and in a language professional nurses understand. The American Nurses Association and constituent state nurses associations provide information about nursing practice, ethics, credentialing, and health on Web sites. We conducted usability evaluations to determine compliance with heuristic and ethical principles for Web site design. We purposefully sampled 27 nursing association Web sites and used 68 heuristic and ethical criteria to perform systematic usability assessments of nurse association Web sites. Web site analysis included seven double experts who were all RNs trained in usability analysis. The extent to which heuristic and ethical criteria were met ranged widely from one state that met 0% of the criteria for "help and documentation" to states that met greater than 92% of criteria for "visibility of system status" and "aesthetic and minimalist design." Suggested improvements are simple yet make an impact on a first-time visitor's impression of the Web site. For example, adding internal navigation and tracking features and providing more details about the application process through help and frequently asked question documentation would facilitate better use. Improved usability will improve effectiveness, efficiency, and consumer satisfaction with these Web sites.
ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis
Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas
2016-01-01
Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to incorporate easily a context-sensitive image gallery into their documents.
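The inline-gallery step reduces to rendering retrieved image records as linked thumbnails; the result-record fields below are hypothetical placeholders, not the actual GoldMiner response format:

```python
# Build an "inline" thumbnail gallery from image-search results.
# The result fields (thumb_url, full_url, caption) are hypothetical
# placeholders, not the actual GoldMiner API.
def build_gallery(results, max_items=6):
    items = [
        f'<a href="{r["full_url"]}">'
        f'<img src="{r["thumb_url"]}" alt="{r["caption"]}"></a>'
        for r in results[:max_items]
    ]
    return '<div class="inline-gallery">' + "".join(items) + "</div>"

results = [{"thumb_url": "t1.jpg", "full_url": "f1.jpg",
            "caption": "chest radiograph"}]
print(build_gallery(results))
```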
ERIC Educational Resources Information Center
Mousley, Judith A.
2010-01-01
The MERGA website has a list of the titles of the last 10 years of Australasian mathematics education Masters and Doctoral theses, with linked abstracts. After a discussion about the socially-determined nature of document analysis, this paper reports the results of an interpretive document analysis of the web page and the pages of abstracts, with…
NGNP Data Management and Analysis System Analysis and Web Delivery Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cynthia D. Gentillon
2011-09-01
Projects for the Very High Temperature Reactor (VHTR) Technology Development Office provide data in support of Nuclear Regulatory Commission licensing of the very high temperature reactor. Fuel and materials to be used in the reactor are tested and characterized to quantify performance in high-temperature and high-fluence environments. The NGNP Data Management and Analysis System (NDMAS) at the Idaho National Laboratory has been established to ensure that VHTR data are (1) qualified for use, (2) stored in a readily accessible electronic form, and (3) analyzed to extract useful results. This document focuses on the third NDMAS objective. It describes capabilities for displaying the data in meaningful ways and for data analysis to identify useful relationships among the measured quantities. The capabilities are described from the perspective of NDMAS users, starting with those who just view experimental data and analytical results on the INL NDMAS web portal. Web display and delivery capabilities are described in detail. Also the current web pages that show Advanced Gas Reactor, Advanced Graphite Capsule, and High Temperature Materials test results are itemized. Capabilities available to NDMAS developers are more extensive, and are described using a second series of examples. Much of the data analysis efforts focus on understanding how thermocouple measurements relate to simulated temperatures and other experimental parameters. Statistical control charts and correlation monitoring provide an ongoing assessment of instrument accuracy. Data analysis capabilities are virtually unlimited for those who use the NDMAS web data download capabilities and the analysis software of their choice. Overall, the NDMAS provides convenient data analysis and web delivery capabilities for studying a very large and rapidly increasing database of well-documented, pedigreed data.
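The statistical control charts mentioned above amount to flagging readings outside mean ± k·sigma limits; a minimal sketch with hypothetical thermocouple values:

```python
# Shewhart-style control limits for instrument-accuracy monitoring.
# The temperature readings are hypothetical.
import numpy as np

def control_limits(readings, k=3.0):
    mu = np.mean(readings)
    sigma = np.std(readings, ddof=1)
    return mu - k * sigma, mu + k * sigma

temps = [612.1, 611.8, 612.4, 613.0, 611.5, 618.9]
low, high = control_limits(temps)
flagged = [t for t in temps if not low <= t <= high]
print(f"limits: ({low:.1f}, {high:.1f}); flagged: {flagged}")
```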
An open annotation ontology for science on web 3.0.
Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim
2011-05-17
There is currently a gap between the rich and expressive collection of published biomedical ontologies, and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating results of biomedical text mining. Analysis of strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools was then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents the Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/ . The Annotation Ontology meets critical requirements for an open, freely shareable model in OWL, of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors.
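A stand-off annotation in the spirit of AO can be sketched with rdflib; the property names below are illustrative, not the exact AO vocabulary terms:

```python
# Minimal stand-off annotation: metadata anchored to a position in a
# web document, without modifying the document itself. Property names
# are illustrative, not the exact AO vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef

AO = Namespace("http://purl.org/ao/")
g = Graph()
ann = URIRef("http://example.org/annotation/1")
g.add((ann, AO.annotatesResource, URIRef("http://example.org/paper.html")))
g.add((ann, AO.body, Literal("GPCR")))                # asserted concept
g.add((ann, AO.context, Literal("chars=1042,1046")))  # position anchor
print(g.serialize(format="turtle"))
```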
Hydrogen Financial Analysis Scenario Tool (H2FAST) Documentation
Documentation is provided for the web and spreadsheet versions of H2FAST: the H2FAST Web Tool User's Manual and the H2FAST Spreadsheet Tool User's Manual (DRAFT). Questions or feedback about H2FAST may be sent to H2FAST@nrel.gov.
ICCE/ICCAI 2000 Full & Short Papers (Methodologies).
ERIC Educational Resources Information Center
2000
This document contains the full text of the following full and short papers on methodologies from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Methodology for Learning Pattern Analysis from Web Logs by Interpreting Web Page Contents" (Chih-Kai Chang and…
Search Interface Design Using Faceted Indexing for Web Resources.
ERIC Educational Resources Information Center
Devadason, Francis; Intaraksa, Neelawat; Patamawongjariya, Pornprapa; Desai, Kavita
2001-01-01
Describes an experimental system designed to organize and provide access to Web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. (AEF)
Sweileh, Waleed M; Al-Jabi, Samah W; Sawalha, Ansam F; Zyoud, Sa'ed H
2014-01-01
Reducing nutrition-related health problems in Arab countries requires an understanding of the performance of Arab countries in the field of nutrition and dietetics research. Assessment of research activity from a particular country or region can be achieved through bibliometric analysis. This study was carried out to investigate research activity in "nutrition and dietetics" in Arab countries. Original and review articles published from Arab countries in the "nutrition and dietetics" Web of Science category up until 2012 were retrieved and analyzed using the ISI Web of Science database. The total number of documents published in the "nutrition and dietetics" category from Arab countries was 2062. This constitutes 1% of worldwide research activity in the field. Annual research productivity showed a significant increase after 2005. Approximately 60% of published documents originated from three Arab countries, particularly Egypt, the Kingdom of Saudi Arabia, and Tunisia. However, Kuwait has the highest research productivity per million inhabitants. The main research areas of published documents were "Food Science/Technology" and "Chemistry," which constituted 75% of published documents, compared with 25% for worldwide documents in nutrition and dietetics. A total of 329 (15.96%) documents related to nutrition and diabetes, obesity, or cancer were published from Arab countries, compared with 21% for worldwide published documents. Interest in nutrition and dietetics research is relatively recent in Arab countries. The focus of nutrition research is mainly toward food technology and chemistry, with less activity toward nutrition-related health research. International cooperation in nutrition research will definitely help Arab researchers in implementing nutrition research that will lead to better national policies regarding nutrition.
NASA Astrophysics Data System (ADS)
Henze, F.; Magdalinski, N.; Schwarzbach, F.; Schulze, A.; Gerth, Ph.; Schäfer, F.
2013-07-01
Information systems play an important role in historical research as well as in heritage documentation. As part of a joint research project of the German Archaeological Institute, the Brandenburg University of Technology Cottbus and the Dresden University of Applied Sciences, a web-based documentation system is currently being developed which can easily be adapted to the needs of different projects with individual scientific concepts, methods and questions. Based on open source and standardized technologies, it will focus on open and well-documented interfaces to ease the dissemination and re-use of its content via web services and to communicate with desktop applications for further evaluation and analysis. The core of the system is a generic data model that represents a wide range of topics and methods of archaeological work. By providing a concerted set of initial themes and attributes, cross-project analysis of research data will be possible. The development of enhanced search and retrieval functionalities will simplify the processing and handling of large heterogeneous data sets. To achieve a high degree of interoperability with existing external data, systems and applications, standardized interfaces will be integrated. The analysis of spatial data shall be possible through the integration of web-based GIS functions. As an extension to this, customized functions for storage, processing and provision of 3D geodata are being developed. As part of this contribution, system requirements and concepts will be presented and discussed. A particular focus will be on introducing the generic data model and the derived database schema. The research work on enhanced search and retrieval capabilities will be illustrated by prototypical developments, as well as concepts and first implementations for an integrated 2D/3D Web-GIS.
Going, going, still there: using the WebCite service to permanently archive cited web pages.
Eysenbach, Gunther; Trudel, Mathieu
2005-12-30
Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its "instructions for authors" accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) "prospectively" before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. WebCite can process publisher-submitted "citing articles" (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research assessment exercises, being able to measure the impact of Web services and published Web documents through access and Web citation metrics.
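The prospective-caching step can be sketched as collecting cited URLs from a manuscript and building archiving requests; the endpoint and query pattern below are hypothetical placeholders, not a documented WebCite API:

```python
# Collect cited URLs from a manuscript and build archiving requests.
# The endpoint/query pattern is a hypothetical placeholder for
# WebCite's actual submission interface.
import re
from urllib.parse import quote

def archive_requests(manuscript_text, notify_email):
    urls = sorted(set(re.findall(r"https?://[^\s\"'<>]+", manuscript_text)))
    return [f"http://www.webcitation.org/archive?url={quote(u, safe='')}"
            f"&email={quote(notify_email)}" for u in urls]

text = "See http://example.org/report and http://example.org/data for details."
for req in archive_requests(text, "author@example.org"):
    print(req)
```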
NASA Astrophysics Data System (ADS)
Domenico, B.; Weber, J.
2012-04-01
For some years now, the authors have developed examples of online documents that allowed the reader to interact directly with datasets, but there were limitations that restricted the interaction to specific desktop analysis and display tools that were not generally available to all readers of the documents. Recent advances in web service technology and related standards are making it possible to develop systems for publishing online documents that enable readers to access, analyze, and display the data discussed in the publication from the perspective and in the manner from which the author wants it to be represented. By clicking on embedded links, the reader accesses not only the usual textual information in a publication, but also data residing on a local or remote web server as well as a set of processing tools for analyzing and displaying the data. With the option of having the analysis and display processing provided on the server (or in the cloud), there are now a broader set of possibilities on the client side where the reader can interact with the data via a thin web client, a rich desktop application, or a mobile platform "app." The presentation will outline the architecture of data interactive publications along with illustrative examples.
Data Interactive Publications Revisited
NASA Astrophysics Data System (ADS)
Domenico, B.; Weber, W. J.
2011-12-01
A few years back, the authors presented examples of online documents that allowed the reader to interact directly with datasets, but there were limitations that restricted the interaction to specific desktop analysis and display tools that were not generally available to all readers of the documents. Recent advances in web service technology and related standards are making it possible to develop systems for publishing online documents that enable readers to access, analyze, and display the data discussed in the publication from the perspective and in the manner from which the author wants it to be represented. By clicking on embedded links, the reader accesses not only the usual textual information in a publication, but also data residing on a local or remote web server as well as a set of processing tools for analyzing and displaying the data. With the option of having the analysis and display processing provided on the server, there are now a broader set of possibilities on the client side where the reader can interact with the data via a thin web client, a rich desktop application, or a mobile platform "app." The presentation will outline the architecture of data interactive publications along with illustrative examples.
ER2OWL: Generating OWL Ontology from ER Diagram
NASA Astrophysics Data System (ADS)
Fahad, Muhammad
Ontology is the fundamental part of the Semantic Web. The goal of the W3C is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented in structured analysis and structured design (SASD), especially in simple or Extended ER Diagrams (ERD). Such systems need upgrading to become part of the semantic web. In this paper, we present ERD-to-OWL-DL ontology transformation rules at the concrete level. These rules facilitate an easy and understandable transformation from ERD to OWL. The set of transformation rules is tested on a structured analysis and design example. The framework provides OWL ontologies as a semantic web foundation, and it helps software engineers in upgrading the structured analysis and design artifact, the ERD, into components of the semantic web. Moreover, our transformation tool, ER2OWL, reduces the cost and time of building OWL ontologies by reusing existing entity relationship models.
A Cross-Case Analysis of the Use of Web-Based ePortfolios in Higher Education
ERIC Educational Resources Information Center
McWhorter, Rochell R.; Delello, Julie A.; Roberts, Paul B.; Raisor, Cindy M.; Fowler, Debra A.
2013-01-01
Higher education is mandated to document student learning outcomes and ePortfolios have been offered as a panacea for assessment, evaluation, and accreditation. However, the student voice regarding the value students construct from building and utilizing web-based electronic portfolios (ePortfolios) in higher education has been sparse or…
Documenting clinical pharmacist intervention before and after the introduction of a web-based tool.
Nurgat, Zubeir A; Al-Jazairi, Abdulrazaq S; Abu-Shraie, Nada; Al-Jedai, Ahmed
2011-04-01
To develop a database for documenting pharmacist interventions through a web-based application. The secondary endpoint was to determine whether the new, web-based application provides any benefits with regard to documentation compliance by clinical pharmacists and ease of calculating cost savings, compared with our previous method of documenting pharmacist interventions. A tertiary care hospital in Saudi Arabia. The documentation of interventions using a web-based documentation application was retrospectively compared with a previous method of documenting clinical pharmacists' interventions (multi-user PC software). The number and types of interventions recorded by pharmacists, data mining of archived data, efficiency, cost savings, and the accuracy of the data generated. The number of documented clinical interventions increased from 4,926, using the multi-user PC software, to 6,840 for the web-based application. On average, we observed 653 interventions per clinical pharmacist using the web-based application, an increase over the average of 493 interventions using the old multi-user PC software; however, a paired Student's t-test showed no statistically significant difference between the two means (P = 0.201). Using a χ² test that captured management level and the type of system used, we found a strong effect of management level (P < 2.2 × 10⁻¹⁶) on the number of documented interventions. We also found a moderately significant relationship between educational level and the number of interventions documented (P = 0.045). The mean ± SD time required to document an intervention using the web-based application was 66.55 ± 8.98 s. Using the web-based application, 29.06% of documented interventions resulted in cost savings, while using the multi-user PC software only 4.75% of interventions did so. The majority of cost savings across both platforms resulted from the discontinuation of unnecessary drugs and changes in dosage regimens. Data collection using the web-based application was consistently more complete than with the multi-user PC software. The web-based application is an efficient system for documenting pharmacist interventions. Its flexibility and accessibility, as well as its detailed report functionality, make it a useful tool that will hopefully encourage other primary and secondary care facilities to adopt similar applications.
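The χ² analysis above cross-tabulates intervention counts; a sketch with SciPy using illustrative counts, not the study's data:

```python
# Chi-square test of independence between system used and management
# level. The counts are illustrative, not the study's data.
from scipy.stats import chi2_contingency

#             junior  senior
table = [[1200, 1800],   # multi-user PC software
         [2100, 1740]]   # web-based application
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```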
KernPaeP - a web-based pediatric palliative documentation system for home care.
Hartz, Tobias; Verst, Hendrik; Ueckert, Frank
2009-01-01
KernPaeP is a new web-based online and offline documentation system developed for pediatric palliative care teams, supporting patient documentation and communication among health care professionals. It provides a reliable system that makes fast and secure home care documentation possible. KernPaeP is accessible online by registered users using any web browser. Home care teams use an offline version of KernPaeP running on a netbook for patient documentation on site. Identifying and medical patient data are strictly separated and stored on two database servers. The system offers a stable, enhanced two-way algorithm for synchronization between the offline component and the central database servers. KernPaeP is implemented meeting the highest security standards while still maintaining high usability. The web-based documentation system allows ubiquitous and immediate access to patient data. Cumbersome paperwork is replaced by secure and comprehensive electronic documentation. KernPaeP helps save time and improve the quality of documentation. Due to development in close cooperation with pediatric palliative professionals, KernPaeP fulfils the broad needs of home care documentation. The technique of web-based online and offline documentation is applicable in general to arbitrary home care scenarios.
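The two-way synchronization can be pictured as a last-writer-wins merge keyed on modification time; a minimal sketch that ignores conflict resolution and deletions, which the real algorithm must handle:

```python
# Last-writer-wins two-way sync between offline and server stores.
# A sketch only: conflicts, deletions, and clock skew are ignored.
def sync(local, remote):
    """local/remote: dict record_id -> (modified_ts, payload)."""
    for rid in set(local) | set(remote):
        l, r = local.get(rid), remote.get(rid)
        if l is None or (r is not None and r[0] > l[0]):
            local[rid] = r      # remote copy is newer (or only copy)
        elif r is None or l[0] > r[0]:
            remote[rid] = l     # local copy is newer (or only copy)
    return local, remote

local = {"p1": (10, "visit note v1")}
remote = {"p1": (12, "visit note v2"), "p2": (5, "medication plan")}
print(sync(local, remote))
```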
Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction
ERIC Educational Resources Information Center
Sun, Chong
2012-01-01
More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…
A Cost Analysis of Web-Enhanced Training to Reduce Alcohol Sales to Intoxicated Bar Patrons
ERIC Educational Resources Information Center
Page, Timothy F.; Nederhoff, Dawn M.; Ecklund, Alexandra M.; Horvath, Keith J.; Nelson, Toben F.; Erickson, Darin J.; Toomey, Traci L.
2015-01-01
Objective: The purpose of this study was to document the development and testing costs of the Enhanced Alcohol Risk Management (eARM) intervention, a web enhanced training program to prevent alcohol sales to intoxicated bar patrons and to estimate its implementation costs in a "real world", non-research setting. Methods: Data for this…
Lawrence; Giles
1998-04-03
The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the "indexable Web," the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages.
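The lower-bound estimate follows capture-recapture logic: if two engines index independently, the indexable Web is at least n_a·n_b divided by their overlap. A worked sketch with illustrative counts:

```python
# Capture-recapture lower bound on the indexable Web:
# size >= n_a * n_b / |overlap|. The counts are illustrative.
def web_size_lower_bound(n_a, n_b, overlap):
    return n_a * n_b / overlap

est = web_size_lower_bound(110e6, 100e6, 34.4e6)
print(f"{est / 1e6:.0f} million pages")  # ~320 million
```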
Semantic Metadata for Heterogeneous Spatial Planning Documents
NASA Astrophysics Data System (ADS)
Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.
2016-09-01
Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.
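Reading such RDFa-annotated XHTML back into triples needs only a small parser pass; a sketch with the standard library, where the plan: vocabulary prefix is hypothetical and a real pipeline would use a complete RDFa 1.1 processor:

```python
# Extract (subject, property, value) triples from RDFa-style
# attributes in XHTML. The plan: prefix is hypothetical; a real
# pipeline would use a complete RDFa 1.1 processor.
from html.parser import HTMLParser

class RDFaScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.triples = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a:
            self.triples.append(
                (a.get("about", ""), a["property"], a.get("content", "")))

s = RDFaScanner()
s.feed('<div about="#zone1" property="plan:landUse" '
       'content="residential"></div>')
print(s.triples)  # [('#zone1', 'plan:landUse', 'residential')]
```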
Croatian Medical Journal citation score in Web of Science, Scopus, and Google Scholar.
Sember, Marijan; Utrobicić, Ana; Petrak, Jelka
2010-04-01
To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006, based on data from the Web of Science, Scopus, and Google Scholar. Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlap and unique citations were compared and analyzed. Proportions were compared using the χ²-test. Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%) and Web of Science (38%). Almost half (49%) of the articles had no citations, and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found in Google Scholar (n=86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of the citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Google Scholar shares a total of 42% of the citations returned by the two other, more influential bibliographic resources. The list of unique citations in Google Scholar is predominantly journal based, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals, but Google Scholar may serve as an alternative bibliometric tool for an orientational insight into citations.
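The overlap figures reduce to set arithmetic over per-article citing-document sets; a toy sketch, not the study's data:

```python
# Overlap and unique citations across the three sources (toy data).
wos    = {"d1", "d2", "d3"}
scopus = {"d2", "d3", "d4"}
gs     = {"d3", "d4", "d5", "d6"}

def overlap(a, b):
    return len(a & b) / len(a | b)

print(f"WoS/Scopus overlap: {overlap(wos, scopus):.0%}")
print("unique to Google Scholar:", gs - (wos | scopus))
```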
Haunschild, Robin; Bornmann, Lutz
2017-01-01
In this short communication, we provide an overview of a relatively new source of altmetrics data which could possibly be used for societal impact measurements in scientometrics. Recently, Altmetric, a start-up providing publication-level metrics, started to make data available for publications which have been mentioned in policy-related documents. Using data from Altmetric, we study how many papers indexed in the Web of Science (WoS) are mentioned in policy-related documents. We find that less than 0.5% of the papers published in different subject categories are mentioned at least once in policy-related documents. Based on our results, we recommend that the analysis of (WoS) publications with at least one policy-related mention be repeated regularly (annually) in order to check the usefulness of the data. Mentions in policy-related documents should not be used for impact measurement until new policy-related sites are tracked.
Web document ranking via active learning and kernel principal component analysis
NASA Astrophysics Data System (ADS)
Cai, Fei; Chen, Honghui; Shu, Zhen
2015-09-01
Web document ranking arises in many information retrieval (IR) applications, such as search engines, recommendation systems and online advertising. A challenging issue is how to select representative query-document pairs and informative features for better learning, and to explore new ranking models that produce an acceptable ranking list of the candidate documents of each query. In this study, we propose an active sampling (AS) plus kernel principal component analysis (KPCA) based ranking model, viz. AS-KPCA Regression, to study document ranking for a retrieval system, i.e. how to choose representative query-document pairs and features for learning. More precisely, we gradually fill the training set by AS with those documents each of which would incur the highest expected DCG loss if unselected. Then, KPCA is performed by projecting the selected query-document pairs onto p principal components in the feature space to complete the regression. Hence, we can cut down the computational overhead and suppress the impact of noise simultaneously. To the best of our knowledge, we are the first to perform document ranking via dimension reduction in two dimensions simultaneously, namely, the number of documents and the number of features. Our experiments demonstrate that the performance of our approach is better than that of the baseline methods on the public LETOR 4.0 datasets. Our approach yields an improvement of nearly 20% over RankBoost and other baselines in terms of the MAP metric, and smaller improvements for P@K and NDCG@K. Moreover, our approach is particularly suitable for document ranking on noisy datasets in practice.
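The KPCA-then-regress step (with the active sampling omitted) can be sketched with scikit-learn; the feature dimensionality and the ridge regressor are illustrative stand-ins for the paper's regression model:

```python
# Project query-document feature vectors onto kernel principal
# components, then regress relevance on the projection. Active
# sampling is omitted; the regressor is an illustrative stand-in.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 46))                  # LETOR-style features
y = rng.integers(0, 3, size=200).astype(float)  # relevance labels

Z = KernelPCA(n_components=10, kernel="rbf").fit_transform(X)
scores = Ridge().fit(Z, y).predict(Z)
ranking = np.argsort(-scores)   # candidate documents, best first
```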
Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.
Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao
2016-12-01
In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most existing research has focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way toward video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.
Demonstration of Data Interactive Publications
NASA Astrophysics Data System (ADS)
Domenico, B.; Weber, J.
2012-04-01
This is a demonstration version of the talk given in session ESSI2.4 "Full lifecycle of data." For some years now, the authors have developed examples of online documents that allowed the reader to interact directly with datasets, but there were limitations that restricted the interaction to specific desktop analysis and display tools that were not generally available to all readers of the documents. Recent advances in web service technology and related standards are making it possible to develop systems for publishing online documents that enable readers to access, analyze, and display the data discussed in the publication from the perspective and in the manner from which the author wants it to be represented. By clicking on embedded links, the reader accesses not only the usual textual information in a publication, but also data residing on a local or remote web server as well as a set of processing tools for analyzing and displaying the data. With the option of having the analysis and display processing provided on the server (or in the cloud), there are now a broader set of possibilities on the client side where the reader can interact with the data via a thin web client, a rich desktop application, or a mobile platform "app." The presentation will outline the architecture of data interactive publications along with illustrative examples.
Web Prep: How to Prepare NAS Reports For Publication on the Web
NASA Technical Reports Server (NTRS)
Walatka, Pamela; Balakrishnan, Prithika; Clucas, Jean; McCabe, R. Kevin; Felchle, Gail; Brickell, Cristy
1996-01-01
This document contains specific advice and requirements for NASA Ames Code IN authors of NAS reports. Much of the information may be of interest to other authors writing for the Web. WebPrep has a graphic Table of Contents in the form of a WebToon, which simulates a discussion between a scientist and a Web publishing consultant. In the WebToon, frequently asked questions about preparing reports for the Web are linked to relevant text in the body of this document. We also provide a text-only Table of Contents. The text for this document is divided into chapters; each chapter corresponds to one frame of the WebToon. The chapter topics are: converting text to HTML, converting 2D graphic images to GIF, creating imagemaps and tables, converting movie and audio files to Web formats, supplying 3D interactive data, and (briefly) Java capabilities. The last chapter is specifically for NAS staff authors. The Glossary-Index lists Web-related words and links to topics covered in the main text.
Mapping the Themes, Impact, and Cohesion of Creativity Research over the Last 25 Years
ERIC Educational Resources Information Center
Williams, Rich; Runco, Mark A.; Berlow, Eric
2016-01-01
This article describes the themes found in the past 25 years of creativity research. Computational methods and network analysis were used to map keyword theme development across ~1,400 documents and ~5,000 unique keywords from 1990 (the first year keywords are available in Web of Science) to 2015. Data were retrieved from Web of Science using the…
49 CFR 229.211 - Processing of petitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Docket Management System and posted on its Web site at http://www.regulations.gov. (3) In the event FRA..., FRA will consider proper documentation of competent engineering analysis, or practical demonstrations...
49 CFR 229.211 - Processing of petitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Docket Management System and posted on its Web site at http://www.regulations.gov. (3) In the event FRA..., FRA will consider proper documentation of competent engineering analysis, or practical demonstrations...
Review of Web-Based Technical Documentation Processes. FY07 NAEP-QA Special Study Report. TR-08-17
ERIC Educational Resources Information Center
Gribben, Monica; Wise, Lauress; Becker, D. E.
2008-01-01
Beginning with the 2000 and 2001 National Assessment of Educational Progress (NAEP) assessments, the National Center for Education Statistics (NCES) has made technical documentation available on the worldwide web at http://nces.ed.gov/nationsreportcard/tdw/. The web-based documentation is designed to be less dense and more accessible than prior…
Lee, Eunjoo; Noh, Hyun Kyung
2016-01-01
To examine the effects of a web-based nursing process documentation system on the stress and anxiety of nursing students during their clinical practice. A quasi-experimental design was employed. The experimental group (n = 110) used a web-based nursing process documentation program for their case reports as part of assignments for a clinical practicum, whereas the control group (n = 106) used traditional paper-based case reports. Stress and anxiety levels were measured with a numeric rating scale before, 2 weeks after, and 4 weeks after using the web-based nursing process documentation program during the clinical practicum. The data were analyzed using descriptive statistics, t tests, chi-square tests, and repeated-measures analyses of variance. Nursing students who used the web-based nursing process documentation program showed significantly lower levels of stress and anxiety than the control group. A web-based nursing process documentation program could be used to reduce the stress and anxiety of nursing students during a clinical practicum, which would ultimately benefit nursing students by increasing the satisfaction with, and effectiveness of, the clinical practicum. © 2015 NANDA International, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowen, Benjamin; Ruebel, Oliver; Fischer, Curt R.
BASTet is an advanced software library written in Python. BASTet serves as the analysis and storage library for the OpenMSI project. BASTet is an integrated framework for: i) storage of spectral imaging data, ii) storage of derived analysis data, iii) provenance of analyses, and iv) integration and execution of analyses via complex workflows. BASTet implements the API for the HDF5 storage format used by OpenMSI. Analyses developed using BASTet benefit from direct integration with the storage format, automatic tracking of provenance, and direct integration with command-line and workflow execution tools. BASTet also defines interfaces that enable developers to integrate their analyses directly with OpenMSI's web-based viewing infrastructure without having to know OpenMSI. In addition, BASTet provides numerous helper classes and tools that assist with the conversion of data files, ease parallel implementation of analysis algorithms, ease interaction with web-based functions, and describe methods for data reduction. BASTet also includes detailed developer documentation, user tutorials, iPython notebooks, and other supporting documents.
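As an illustration of the storage idea described above (and not BASTet's actual API), the following sketch stores a derived analysis next to raw spectral-imaging data in one HDF5 file, with simple provenance recorded as attributes:

```python
# Illustrative sketch only, not BASTet's API: raw spectral image and a
# derived analysis live in one HDF5 file, with minimal provenance attrs.
import numpy as np
import h5py

raw = np.random.rand(64, 64, 1000)   # x, y, m/z spectral image (toy data)
reduced = raw.mean(axis=2)           # a toy "analysis" result

with h5py.File("experiment.h5", "w") as f:
    f.create_dataset("msi/raw", data=raw, compression="gzip")
    ana = f.create_dataset("analysis/0/mean_image", data=reduced)
    # Minimal provenance: what produced this dataset, from which input.
    ana.attrs["analysis_type"] = "mean_over_mz"
    ana.attrs["input"] = "/msi/raw"
    ana.attrs["parameters"] = "axis=2"
```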
Analysis of co-occurrence toponyms in web pages based on complex networks
NASA Astrophysics Data System (ADS)
Zhong, Xiang; Liu, Jiajun; Gao, Yong; Wu, Lun
2017-01-01
A large number of geographical toponyms exist in web pages and other documents, providing abundant geographical resources for GIS. It is very common for toponyms to co-occur in the same documents. To investigate the relations associated with these geographic entities, a novel complex network model for co-occurring toponyms is proposed. Twelve toponym co-occurrence networks are then constructed from the toponym sets extracted from the People's Daily Paper documents of 2010. It is found that two toponyms have a high co-occurrence probability if they are at the same administrative level or if they possess a part-whole relationship. By applying complex network analysis methods to the toponym co-occurrence networks, we find the following characteristics. (1) The navigation vertices of the co-occurrence networks can be found by degree centrality analysis. (2) The networks exhibit strong clustering, and it takes only a few steps to reach one vertex from another, implying that the networks are small-world graphs. (3) The degree distribution follows a power law with an exponent of 1.7, so the networks are scale-free. (4) The networks are disassortative and have similar assortative modes, with assortative exponents of approximately 0.18 and assortative indexes less than 0. (5) The frequency of toponym co-occurrence is weakly negatively correlated with geographic distance, but more strongly negatively correlated with administrative hierarchical distance. Considering the toponym frequencies and co-occurrence relationships, a novel method based on link analysis is presented to extract the core toponyms from web pages. This method is suitable and effective for geographical information retrieval.
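A minimal sketch of how such a co-occurrence network can be built and its navigation vertices found by degree centrality, assuming toponyms have already been extracted per document:

```python
# Sketch of a toponym co-occurrence network; input format is an assumption
# (toponyms already extracted per document). Uses networkx.
import itertools
import networkx as nx

docs = [
    ["Beijing", "China", "Shanghai"],
    ["Beijing", "Haidian"],
    ["China", "Shanghai", "Pudong"],
]

G = nx.Graph()
for toponyms in docs:
    # Every pair of toponyms co-occurring in a document gets an edge;
    # repeated co-occurrence increments the edge weight.
    for a, b in itertools.combinations(set(toponyms), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# "Navigation vertices": the most central toponyms by degree centrality.
centrality = nx.degree_centrality(G)
print(sorted(centrality, key=centrality.get, reverse=True)[:3])
```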
Characteristics of food industry web sites and "advergames" targeting children.
Culp, Jennifer; Bell, Robert A; Cassady, Diana
2010-01-01
To assess the content of food industry Web sites targeting children by describing the strategies used to prolong their visits and foster brand loyalty, and to document health-promoting messages on these Web sites. A content analysis was conducted of Web sites advertised on 2 children's networks, Cartoon Network and Nickelodeon. A total of 290 Web pages and 247 unique games on 19 Internet sites were examined. Games, found on 81% of Web sites, were the most predominant promotion strategy used. All games had at least 1 brand identifier, with logos being most frequently used. On average, Web sites contained 1 "healthful" message for every 45 exposures to brand identifiers. Food companies use Web sites to extend their television advertising and promote brand loyalty among children. These sites almost exclusively promoted food items high in sugar and fat. Health professionals need to monitor food industry marketing practices used in "new media." Published by Elsevier Inc.
Web-Based Predictive Analytics to Improve Patient Flow in the Emergency Department
NASA Technical Reports Server (NTRS)
Buckler, David L.
2012-01-01
The Emergency Department (ED) simulation project was established to demonstrate how requirements-driven analysis and process simulation can help improve the quality of patient care for the Veterans Health Administration's (VHA) Veterans Affairs Medical Centers (VAMC). This project developed a web-based simulation prototype of patient flow in EDs, validated the performance of the simulation against operational data, and documented IT requirements for the ED simulation.
Morphosyntactic Neural Analysis for Generalized Lexical Normalization
ERIC Educational Resources Information Center
Leeman-Munk, Samuel Paul
2016-01-01
The phenomenal growth of social media, web forums, and online reviews has spurred a growing interest in automated analysis of user-generated text. At the same time, a proliferation of voice recordings and efforts to archive culture heritage documents are fueling demand for effective automatic speech recognition (ASR) and optical character…
Data warehousing as a basis for web-based documentation of data mining and analysis.
Karlsson, J; Eklund, P; Hallgren, C G; Sjödin, J G
1999-01-01
In this paper, we present a case study of data warehousing intended to support data mining and analysis. We also describe a prototype for data retrieval. Furthermore, we discuss some technical issues related to a particular choice of patient record environment.
Chang, Hsiao-Ting; Lin, Ming-Hwai; Chen, Chun-Ku; Hwang, Shinn-Jang; Hwang, I-Hsuan; Chen, Yu-Chun
2016-01-01
Academic publications are important for developing a medical specialty or discipline and for improving quality of care. As hospice palliative care medicine is a rapidly growing medical specialty in Taiwan, this study aimed to analyze hospice palliative care-related publications from 1993 through 2013, both worldwide and in Taiwan, using the Web of Science database. Academic articles published on topics including "hospice", "palliative care", "end of life care", and "terminal care" were retrieved and analyzed from the Web of Science database, which includes documents published in Science Citation Index-Expanded and Social Science Citation Index journals from 1993 to 2013. Compound annual growth rates (CAGRs) were calculated to evaluate publication trends. A total of 27,788 documents were published worldwide during the years 1993 to 2013. The top five most prolific countries/areas were the United States (11,419 documents, 41.09%), England (3620 documents, 13.03%), Canada (2428 documents, 8.74%), Germany (1598 documents, 5.75%), and Australia (1580 documents, 5.69%). Three hundred and ten documents (1.12%) were published from Taiwan, which ranked second among Asian countries (after Japan, with 594 documents, 2.14%) and 16th in the world. During this 21-year period, the number of hospice palliative care-related publications increased rapidly. The worldwide CAGR for hospice palliative care publications during 1993 through 2013 was 12.9%; for Taiwan, the CAGR during 1999 through 2013 was 19.4%. The majority of these documents were submitted from universities or hospitals affiliated with universities. The number of hospice palliative care-related publications increased rapidly from 1993 to 2013, both worldwide and in Taiwan; however, the number of publications from Taiwan is still far below that of several other countries. Further research is needed to identify, and try to reduce, the barriers to hospice palliative care research and publication in Taiwan. Copyright © 2015. Published by Elsevier Taiwan LLC.
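For reference, the CAGR used above is computed as (end/start)^(1/years) − 1; here is a minimal sketch with illustrative numbers, not the study's underlying counts:

```python
# Compound annual growth rate as used in bibliometric trend analyses.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """CAGR = (end/start) ** (1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# E.g., a hypothetical rise from 500 to 5000 annual publications over
# 20 years corresponds to roughly 12.2% compound annual growth.
print(f"{cagr(500, 5000, 20):.1%}")
```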
Croatian Medical Journal Citation Score in Web of Science, Scopus, and Google Scholar
Šember, Marijan; Utrobičić, Ana; Petrak, Jelka
2010-01-01
Aim To analyze the 2007 citation count of articles published by the Croatian Medical Journal in 2005-2006, based on data from the Web of Science, Scopus, and Google Scholar. Methods Web of Science and Scopus were searched for the articles published in 2005-2006. As all articles returned by Scopus were included in Web of Science, the latter list was the sample for further analysis. Total citation counts for each article on the list were retrieved from Web of Science, Scopus, and Google Scholar. The overlapping and unique citations were compared and analyzed. Proportions were compared using the χ² test. Results Google Scholar returned the greatest proportion of articles with citations (45%), followed by Scopus (42%) and Web of Science (38%). Almost half (49%) of the articles had no citations, and 11% had an equal number of identical citations in all 3 databases. The greatest overlap was found between Web of Science and Scopus (54%), followed by Scopus and Google Scholar (51%), and Web of Science and Google Scholar (44%). The greatest number of unique citations was found by Google Scholar (n = 86). The majority of these citations (64%) came from journals, followed by books and PhD theses. Approximately 55% of all citing documents were full-text resources in open access. The language of citing documents was mostly English, but as many as 25 citing documents (29%) were in Chinese. Conclusion Google Scholar shares 42% of the citations returned by the two other, more influential bibliographic resources. The unique citations found in Google Scholar come predominantly from journals, but these journals are mainly of local character. Citations received by internationally recognized medical journals are crucial for increasing the visibility of small medical journals, but Google Scholar may serve as an alternative bibliometric tool for a first orientation on citations. PMID:20401951
Home Page, Sweet Home Page: Creating a Web Presence.
ERIC Educational Resources Information Center
Falcigno, Kathleen; Green, Tim
1995-01-01
Focuses primarily on design issues and practical concerns involved in creating World Wide Web documents for use within an organization. Concerns for those developing Web home pages are: learning HyperText Markup Language (HTML); defining customer group; allocating staff resources for maintenance of documents; providing feedback mechanism for…
MAGMA: analysis of two-channel microarrays made easy.
Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph
2007-07-01
The web application MAGMA provides a simple and intuitive interface for identifying differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user-friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow, consisting of data upload, annotation, normalization, and statistical analysis. It automatically generates R scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in a local R installation. The implementation of MAGMA follows the model-view-controller design pattern, which strictly separates the R-based statistical data processing, the web representation, and the application logic. This modular design makes the application flexible and easily extendible by experts in any of the relevant fields: statistical microarray analysis, web design, or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields, and demonstrates that modern Java technology is also suitable for small, concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.
ICTNET at Web Track 2009 Diversity task
2009-11-01
performance. On the World Wide Web, there exist many documents that represent several implicit subtopics. We used commercial search engines to gather those... documents. In this task, our work can be divided into five steps. First, we collect documents returned by commercial search engines, and considered
The Number of Scholarly Documents on the Public Web
Khabsa, Madian; Giles, C. Lee
2014-01-01
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%. PMID:24817403
Rains, Stephen A; Bosch, Leslie A
2009-07-01
This article reports a content analysis of the privacy policy statements (PPSs) from 97 general reference health Web sites that was conducted to examine the ways in which visitors' privacy is constructed by health organizations. PPSs are formal documents created by the Web site owner to describe how information regarding site visitors and their behavior is collected and used. The results show that over 80% of the PPSs in the sample indicated automatically collecting or requesting that visitors voluntarily provide information about themselves, and only 3% met all five of the Federal Trade Commission's Fair Information Practices guidelines. Additionally, the results suggest that the manner in which PPSs are framed and the use of justifications for collecting information are tropes used by health organizations to foster a secondary exchange of visitors' personal information for access to Web site content.
RSAT 2015: Regulatory Sequence Analysis Tools
Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A.; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M.; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques
2015-01-01
RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. PMID:25904632
Detecting people of interest from internet data sources
NASA Astrophysics Data System (ADS)
Cardillo, Raymond A.; Salerno, John J.
2006-04-01
In previous papers, we have documented success in determining the key people of interest from a large corpus of real-world evidence. Our recent efforts focus on exploring additional domains and data sources. Internet data sources such as email, web pages, and news feeds make it easier to gather a large corpus of documents for various domains, but detecting people of interest in these sources introduces new challenges. Analyzing these massive sources magnifies entity resolution problems, and demands a storage management strategy that supports efficient algorithmic analysis and visualization techniques. This paper discusses the techniques we used in order to analyze the ENRON email repository, which are also applicable to analyzing web pages returned from our "Buddy" meta-search engine.
Results of a Citation Analysis of Knowledge Management in Education
ERIC Educational Resources Information Center
Uzunboylu, Huseyin; Eris, Hasan; Ozcinar, Zehra
2011-01-01
The purpose of this study was to examine research and trends in knowledge management in education (KME) published in selected professional sources during the period 1990-2008. Citation analysis was used in this study to investigate documents related to KME, which were indexed in the Web of Science, Education Researches Information Center and…
2013-11-26
Combination with Simple Features," IEE European Workshop on Handwriting Analysis and Recognition, pp. 6/1-6, Brussels, Jul. 1994. Bock, J., et a... Document Analysis and Recognition, pp. 147-150, Oct. 1993. Starner, T., et al., "On-Line Cursive Handwriting Recognition Using Speech Recognition Methods
Schultz, Michael; Seo, Steven Bohwan; Holt, Alec; Regenbrecht, Holger
2015-11-18
Colorectal cancer (CRC) has a high incidence, especially in New Zealand. The reasons for this are unknown. While most cancers develop sporadically, a positive family history, determined by the number of affected first- and second-degree relatives with CRC and their age at diagnosis, is one of the major factors that may increase an individual's lifetime risk. Before a patient can be enrolled in a surveillance program, a detailed assessment and documentation of the family history is important, but this is time-consuming, often inaccurate, and usually paper-based. Our aim was therefore to develop and validate the usability and efficacy of a web-based family history assessment tool for CRC suitable for the general population. The tool was also to calculate the risk and make a recommendation for surveillance. Two versions of an electronic assessment tool, diagram-based and questionnaire-based, were developed, with the risk analysis and recommendations for surveillance based on the New Zealand Guidelines Group recommendations. The accuracy of our tool was tested prior to the study by comparing risk calculations based on family history by experienced gastroenterologists with the electronic assessment. Members of the general public visiting a local science fair were asked to use and comment on the usability of the two interfaces. Ninety people assessed and commented on the two interfaces. Both interfaces were effective in assessing the risk of developing CRC from the familial history of CRC. However, the questionnaire-based interface performed with significantly better satisfaction (p = 0.001) than the diagram-based interface; there was no difference in efficacy. We conclude that a web-based questionnaire tool can assist in the accurate documentation and analysis of the family history relevant to determining the individual risk of CRC based on local guidelines. The calculator is now implemented and accessible through the web page of a local colorectal cancer awareness charity, and is an integral part of the local general practitioners' e-referral system for colonic imaging.
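The following sketch illustrates the kind of rule-based risk stratification such a tool performs; the thresholds are hypothetical placeholders, not the actual New Zealand Guidelines Group criteria implemented by the authors:

```python
# Hypothetical sketch of rule-based CRC risk stratification from family
# history. The thresholds below are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class Relative:
    degree: int            # 1 = first-degree, 2 = second-degree
    age_at_diagnosis: int

def crc_risk_category(relatives: list[Relative]) -> str:
    first = [r for r in relatives if r.degree == 1]
    early = [r for r in first if r.age_at_diagnosis < 55]
    if len(first) >= 2 or early:
        return "increased risk: refer for surveillance"  # placeholder rule
    if len(first) == 1:
        return "slightly increased risk"                 # placeholder rule
    return "average risk"

print(crc_risk_category([Relative(degree=1, age_at_diagnosis=48)]))
```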
Readability of ASPS and ASAPS educational web sites: an analysis of consumer impact.
Aliu, Oluseyi; Chung, Kevin C
2010-04-01
Patients use the Internet to educate themselves about health-related topics, and learning about plastic surgery is a common activity for enthusiastic consumers in the United States. How to educate consumers about plastic surgical procedures is a continuing concern for plastic surgeons, given the growing portion of the American population with relatively low health care literacy. The usefulness of health-related education materials on the Internet depends largely on their comprehensibility and understandability for all who visit the Web sites. The authors studied the readability of patient education materials related to common plastic surgery procedures from the American Society of Plastic Surgeons (ASPS) and the American Society for Aesthetic Plastic Surgery (ASAPS) Web sites and compared them with materials on similar topics from 10 popular health information-providing sites. The authors found that all analyzed consumer-targeted documents on the ASPS and ASAPS Web sites were rated as more difficult than the recommended reading grade level for most American adults, and these documents were consistently among the most difficult to read compared with the other health information Web sites. The Internet is an increasingly popular avenue for patients to educate themselves about plastic surgery procedures. Patient education material provided on the ASPS and ASAPS Web sites should be written at recommended reading grade levels to ensure that it is readable and comprehensible to the targeted audience.
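The abstract does not state the exact readability tooling, but grade-level scoring of this kind is typically done with formulas such as the Flesch-Kincaid grade level; here is a minimal sketch using a rough vowel-group syllable heuristic:

```python
# Sketch of Flesch-Kincaid grade-level scoring (the study's exact formula
# and tooling are assumptions). Syllable counting is a crude heuristic.
import re

def syllables(word: str) -> int:
    # Approximate syllables as runs of vowels; always at least one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

print(round(fk_grade("Rhinoplasty reshapes the nose. Recovery takes weeks."), 1))
```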
In Search of a Better Search Engine
ERIC Educational Resources Information Center
Kolowich, Steve
2009-01-01
Early this decade, the number of Web-based documents stored on the servers of the University of Florida hovered near 300,000. By the end of 2006, that number had leapt to four million. Two years later, the university hosts close to eight million Web documents. Web sites for colleges and universities everywhere have become repositories for data…
Parents on the web: risks for quality management of cough in children.
Pandolfini, C; Impicciatore, P; Bonati, M
2000-01-01
Health information on the Internet regarding common, self-limited childhood illnesses has been found to be unreliable. Parents navigating the Internet therefore risk finding advice that is incomplete or, more importantly, not evidence-based. The potential importance of a resource such as the Internet as a source of quality health information for consumers should, however, be taken into consideration. For this reason, studies need to be performed regarding the quality of the material provided. Various strategies have been proposed that would allow parents to distinguish trustworthy web documents from unreliable ones. One of these strategies is the use of a checklist for the appraisal of web pages based on their technical aspects. The purpose of this study was to assess the quality of information present on the Internet regarding the home management of cough in children and to examine the applicability of a checklist strategy that would allow consumers to select more trustworthy web pages. The Internet was searched for web pages regarding the home treatment of cough in children using different search engines. Medline and the Cochrane database were searched for available evidence concerning the management of cough in children. Three checklists were created to assess different aspects of the web documents. The first checklist was designed to allow a technical appraisal of the web pages and was based on components such as the name of the author and the references used. The second was constructed to examine the completeness of the health information contained in the documents, such as the causes and mechanism of cough, and pharmacological and nonpharmacological treatment. The third checklist assessed the quality of the information by measuring it against a gold standard document, created by combining the policy statement issued by the American Academy of Pediatrics regarding the pharmacological treatment of cough in children with the World Health Organization guide on drugs for children. For each checklist, the web page contents were analyzed and quantitative measurements were assigned. Of the 19 web pages identified, 9 explained the purpose and/or mechanism of cough and 14 the causes. The most frequently mentioned pharmacological treatments were single-ingredient suppressant preparations, followed by single-ingredient expectorants. Dextromethorphan was the most commonly mentioned suppressant and guaifenesin the most common expectorant. No documents discouraged the use of suppressants, although 4 of the 10 web documents that addressed expectorants discouraged their use. Sixteen web pages addressed nonpharmacological treatment, 14 of which suggested exposure to a humid environment and/or extra fluid. In most cases, the criteria in the technical appraisal checklist were not present in the web documents; moreover, 2 web pages did not provide any of the items. Regarding content completeness, 3 web pages satisfied all the requirements considered in the checklist and 2 documents did not meet any of the criteria. Of the 3 web pages that scored highest in technical aspect, 2 also supplied complete information. No relationship was found, however, between the technical aspect and content completeness. Concerning the quality of the health information supplied, 10 pages received a negative score because they contained more incorrect than correct information, and 1 web page received a high score.
This document was 1 of the 2 that also scored high in technical aspect and content completeness. No relationship was found, however, among quality of information, technical aspect, and content completeness. As the results of this study show, a parent navigating the Internet for information on the home management of cough in children will no doubt find incorrect advice among the search results. (ABSTRACT TRUNCATED)
Features: Real-Time Adaptive Feature and Document Learning for Web Search.
ERIC Educational Resources Information Center
Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai
2001-01-01
Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…
Font adaptive word indexing of modern printed documents.
Marinai, Simone; Marino, Emanuele; Soda, Giovanni
2006-08-01
We propose an approach for the word-level indexing of modern printed documents that are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or by relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles, without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self-Organizing Maps (SOM) to perform unsupervised character clustering, the definition of a suitable vector-based word representation whose size depends on the word's aspect ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are in processing modern printed documents (17th to 19th centuries), where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.
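The run-time alignment step can be illustrated with classic edit-distance alignment over sequences of SOM cluster IDs (an assumed encoding, not the paper's exact algorithm); insertions and deletions absorb touching and broken characters:

```python
# Sketch of aligning a query word to an indexed word, both encoded as
# sequences of character-cluster IDs (assumed encoding). Standard
# edit-distance dynamic programming; lower cost = better match.
def align_cost(query: list[int], indexed: list[int]) -> int:
    m, n = len(query), len(indexed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if query[i - 1] == indexed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion (e.g. broken char)
                          d[i][j - 1] + 1,        # insertion (e.g. touching chars)
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]

print(align_cost([3, 7, 7, 1], [3, 7, 1]))  # -> 1 (one extra cluster absorbed)
```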
RSAT 2018: regulatory sequence analysis tools 20th anniversary.
Nguyen, Nga Thi Thuy; Contreras-Moreira, Bruno; Castro-Mondragon, Jaime A; Santana-Garcia, Walter; Ossio, Raul; Robles-Espinoza, Carla Daniela; Bahin, Mathieu; Collombet, Samuel; Vincens, Pierre; Thieffry, Denis; van Helden, Jacques; Medina-Rivera, Alejandra; Thomas-Chollier, Morgane
2018-05-02
RSAT (Regulatory Sequence Analysis Tools) is a suite of modular tools for the detection and the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, including from genome-wide datasets like ChIP-seq/ATAC-seq, (ii) motif scanning, (iii) motif analysis (quality assessment, comparisons and clustering), (iv) analysis of regulatory variations, (v) comparative genomics. Six public servers jointly support 10 000 genomes from all kingdoms. Six novel or refactored programs have been added since the 2015 NAR Web Software Issue, including updated programs to analyse regulatory variants (retrieve-variation-seq, variation-scan, convert-variations), along with tools to extract sequences from a list of coordinates (retrieve-seq-bed), to select motifs from motif collections (retrieve-matrix), and to extract orthologs based on Ensembl Compara (get-orthologs-compara). Three use cases illustrate the integration of new and refactored tools to the suite. This Anniversary update gives a 20-year perspective on the software suite. RSAT is well-documented and available through Web sites, SOAP/WSDL (Simple Object Access Protocol/Web Services Description Language) web services, virtual machines and stand-alone programs at http://www.rsat.eu/.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Reflecting Warfighter Needs in Air Force Programs: Prototype Analysis
2010-01-01
representation of RAND intellectual property is provided for non -commercial use only. Unauthorized posting of RAND PDFs to a non -RAND Web site is prohibited...duplicated for commercial purposes. Unauthorized posting of RAND documents to a non -RAND website is prohibited. RAND documents are protected under...Secretary of Defense for Acqui- sition, Technology, and Logistics (USD/AT&L). Step One is to sharpen understanding of the mission area and its seams
NASA Astrophysics Data System (ADS)
García Castro, Alexander; García-Castro, Leyla Jael; Labarga, Alberto; Giraldo, Olga; Montaña, César; O'Neil, Kieran; Bateman, John A.
Rather than a document that is being constantly re-written as in the wiki approach, the Living Document (LD) is one that acts as a document router, operating by means of structured and organized social tagging and existing ontologies. It offers an environment where users can manage papers and related information, share their knowledge with their peers and discover hidden associations among the shared knowledge. The LD builds upon both the Semantic Web, which values the integration of well-structured data, and the Social Web, which aims to facilitate interaction amongst people by means of user-generated content. In this vein, the LD is similar to a social networking system, with users as central nodes in the network, with the difference that interaction is focused on papers rather than people. Papers, with their ability to represent research interests, expertise, affiliations, and links to web based tools and databanks, represent a central axis for interaction amongst users. To begin to show the potential of this vision, we have implemented a novel web prototype that enables researchers to accomplish three activities central to the Semantic Web vision: organizing, sharing and discovering. Availability: http://www.scientifik.info/
Airline Quarterly Financial Review - Fourth Quarter 1997 Majors
DOT National Transportation Integrated Search
1997-01-01
This report contains staff comments, tables, and charts on the financial condition of the U.S. major airlines. An electronic version of this document can be obtained via the World Wide Web at: http://dms.dot.gov/ost/aviation/analysis.html The data...
75 FR 30100 - FY 2010 Discretionary Livability Funding Opportunity: Alternatives Analysis Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
..., Alternatives Analysis Program, Office of Planning and Environment, by phone at (202) 493-0512 or by e-mail at... transportation planning process which includes: (1) An assessment of a wide range of public transportation... descriptions of the documents produced, can be found on FTA's Web site at http://www.fta.dot.gov/planning...
Personalization of Rule-based Web Services.
Choi, Okkyung; Han, Sang Yong
2008-04-04
Web users have clearly expressed the wish to receive personalized services directly. Personalization is the tailoring of services directly to the immediate requirements of the user. However, current Web Services systems do not provide features supporting this, such as personalization of services and intelligent matchmaking. In this research, a flexible, personalized Rule-based Web Services System is proposed to address these problems and to enable efficient search, discovery, and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers', and users' preferences using a Rule-based Search Method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system has been developed based on the current work.
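A minimal sketch of the matchmaking idea, under the assumption that services and preferences are represented as attribute-value pairs (the paper's actual rule language is not reproduced):

```python
# Sketch of rule-based matchmaking: candidate services are ranked by how
# many of the user's preference rules they satisfy. Representation assumed.
services = {
    "WeatherSvc": {"domain": "weather", "format": "xml", "cost": "free"},
    "GeoSvc": {"domain": "maps", "format": "json", "cost": "paid"},
}
preferences = [("format", "xml"), ("cost", "free")]  # the user's rules

def score(svc: dict) -> int:
    # One point per satisfied preference rule.
    return sum(1 for key, want in preferences if svc.get(key) == want)

ranked = sorted(services, key=lambda name: score(services[name]), reverse=True)
print(ranked)  # WeatherSvc satisfies both rules, GeoSvc none
```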
A cross disciplinary study of link decay and the effectiveness of mitigation techniques.
Hennessey, Jason; Ge, Steven
2013-01-01
The dynamic, decentralized world-wide-web has become an essential part of scientific research and communication. Researchers create thousands of web sites every year to share software, data and services. These valuable resources tend to disappear over time. The problem has been documented in many subject areas. Our goal is to conduct a cross-disciplinary investigation of the problem and test the effectiveness of existing remedies. We accessed 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index that were published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years with 62% of them being archived. Survival analysis and logistic regression were used to find significant predictors of URL lifespan. The availability of a web page is most dependent on the time it is published and the top-level domain names. Similar statistical analysis revealed biases in current solutions: the Internet Archive favors web pages with fewer layers in the Universal Resource Locator (URL) while WebCite is significantly influenced by the source of publication. We also created a prototype for a process to submit web pages to the archives and increased coverage of our list of scientific webpages in the Internet Archive and WebCite by 22% and 255%, respectively. Our results show that link decay continues to be a problem across different disciplines and that current solutions for static web pages are helping and can be improved.
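The survival-analysis step can be illustrated with a Kaplan-Meier estimator; the sketch below uses the lifelines package and toy data (lifespans in years, with event = 1 if the page disappeared and 0 if still alive, i.e. censored), not the study's actual 14,489-page dataset:

```python
# Sketch of Kaplan-Meier survival analysis of URL lifespans (toy data).
from lifelines import KaplanMeierFitter

lifespans = [2.0, 9.3, 4.5, 12.0, 9.3, 1.1, 15.0]  # years each URL was observed
died =      [1,   1,   1,   0,    1,   1,   0]     # 0 = still alive (censored)

kmf = KaplanMeierFitter()
kmf.fit(lifespans, event_observed=died)
print(kmf.median_survival_time_)  # estimated median URL lifespan
```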
Document-Centred Discourse on the Web: A Publishing Tool for Students, Tutors and Researchers.
ERIC Educational Resources Information Center
Shum, Simon Buckingham; Sumner, Tamara
This paper describes how the authors are exploiting the potential of interactive World Wide Web media to support a central part of academic life--the publishing, critiquing, and discussion of documents. The paper begins with an overview of documents in academic life and a discussion of paper-based or "papyrocentric" print and scholarly…
MCM generator: a Java-based tool for generating medical metadata.
Munoz, F; Hersh, W
1998-01-01
In a previous paper we introduced the need to implement a mechanism to facilitate the discovery of relevant Web medical documents. We maintained that the use of META tags, specifically ones that define the medical subject and resource type of a document, help towards this goal. We have now developed a tool to facilitate the generation of these tags for the authors of medical documents. Written entirely in Java, this tool makes use of the SAPHIRE server, and helps the author identify the Medical Subject Heading terms that most appropriately describe the subject of the document. Furthermore, it allows the author to generate metadata tags for the 15 elements that the Dublin Core considers as core elements in the description of a document. This paper describes the use of this tool in the cataloguing of Web and non-Web medical documents, such as images, movie, and sound files.
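A minimal sketch of the output such a tool might generate, emitting Dublin Core META tags for an HTML document head; the tool's exact output format is not specified above, and the field values here are hypothetical:

```python
# Sketch of Dublin Core META tag generation for an HTML head. The
# "DC." naming convention follows common Dublin Core embedding practice.
def dc_meta_tags(fields: dict[str, str]) -> str:
    return "\n".join(
        f'<meta name="DC.{name}" content="{value}">'
        for name, value in fields.items()
    )

print(dc_meta_tags({
    "title": "Management of Acute Otitis Media",
    "subject": "Otitis Media",          # e.g. a MeSH heading
    "type": "clinical practice guideline",
}))
```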
Ramos, José Manuel; González-Alcaide, Gregorio; Gutiérrez, Félix
2016-03-01
The bibliometric analysis of the production and impact of documents by knowledge area is a quantitative and qualitative indicator of research activity in a field. The aim of this article is to determine the contribution of Spanish research institutions in Infectious Diseases and Microbiology in recent years. Documents published in the journals included in the categories "Infectious Diseases" and "Microbiology" of the Web of Science (Science Citation Index Expanded) of the ISI Web of Knowledge from 2000 to 2013 were analysed. In Infectious Diseases, Spain ranked fourth worldwide and contributed 5.7% of the 233,771 documents published in this specialty. In Microbiology, Spain was in sixth place, with a production rate of 5.8% of the 149,269 documents in this category. Spanish production increased over the study period in both Infectious Diseases and Microbiology, from 325 and 619 documents in 2000 to 756 and 1245 documents in 2013, with growth rates of 131% and 45.8%, respectively. The journal with the largest number of documents published was Enfermedades Infecciosas y Microbiología Clínica, with 8.6% and 8.2% of the papers published in the categories of Infectious Diseases and Microbiology, respectively; many documents were the result of international collaboration, especially with institutions in the United States. The h-index was 116 in Infectious Diseases and 139 in Microbiology, placing Spain in fifth place in both categories among countries of the European Union. In recent years, Spanish research in Infectious Diseases and Microbiology has reached a good level of production and international visibility, attaining a position of global leadership. Copyright © 2015. Published by Elsevier España, S.L.U.
Web-based X-ray quality control documentation.
David, George; Burnett, Lou Ann; Schenkel, Robert
2003-01-01
The Department of Radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management, and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. Second, these various computer files had to be converted to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had previously spent years fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of converting survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing, so it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program, which is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. Although no confidential patient information is available on the web site, our legal department recommended that we secure the site in order to keep out those wishing to make mischief. Our interim solution has been not to password-protect the page, which we feared would hinder access for occasional legitimate users, but also not to provide links to it from other hospital and department pages. Utility and productivity were improved, and time and money were saved, by making radiological equipment quality control documentation instantly available online.
Online CTE in the Community College
ERIC Educational Resources Information Center
Garza Mitchell, Regina L.; Etshim, Rachal; Dietz, Brian T.
2016-01-01
This single-site case study explored how one community college integrated online education into CTE courses and programs. Through semi-structured interviews and document analysis, the study explores how one college integrated online education (fully online, hybrid, and web-enhanced) into areas typically considered "hands-on".…
RSAT 2015: Regulatory Sequence Analysis Tools.
Medina-Rivera, Alejandra; Defrance, Matthieu; Sand, Olivier; Herrmann, Carl; Castro-Mondragon, Jaime A; Delerce, Jeremy; Jaeger, Sébastien; Blanchet, Christophe; Vincens, Pierre; Caron, Christophe; Staines, Daniel M; Contreras-Moreira, Bruno; Artufel, Marie; Charbonnier-Khamvongsa, Lucie; Hernandez, Céline; Thieffry, Denis; Thomas-Chollier, Morgane; van Helden, Jacques
2015-07-01
RSAT (Regulatory Sequence Analysis Tools) is a modular software suite for the analysis of cis-regulatory elements in genome sequences. Its main applications are (i) motif discovery, appropriate to genome-wide data sets like ChIP-seq, (ii) transcription factor binding motif analysis (quality assessment, comparisons and clustering), (iii) comparative genomics and (iv) analysis of regulatory variations. Nine new programs have been added to the 43 described in the 2011 NAR Web Software Issue, including a tool to extract sequences from a list of coordinates (fetch-sequences from UCSC), novel programs dedicated to the analysis of regulatory variants from GWAS or population genomics (retrieve-variation-seq and variation-scan), a program to cluster motifs and visualize the similarities as trees (matrix-clustering). To deal with the drastic increase of sequenced genomes, RSAT public sites have been reorganized into taxon-specific servers. The suite is well-documented with tutorials and published protocols. The software suite is available through Web sites, SOAP/WSDL Web services, virtual machines and stand-alone programs at http://www.rsat.eu/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
M-Learning and Augmented Reality: A Review of the Scientific Literature on the WoS Repository
ERIC Educational Resources Information Center
Fombona, Javier; Pascual-Sevillano, Maria-Angeles; González-Videgara, MariCarmen
2017-01-01
Augmented reality emerges as a tool, on which it is necessary to examine its real educational value. This paper shows the results of a bibliometric analysis performed on documents collected from the Web of Science repository, an Internet service that concentrates bibliographic information from more than 7,000 institutions. Our analysis included an…
NASA Astrophysics Data System (ADS)
Roganov, E. A.; Roganova, N. A.; Aleksandrov, A. I.; Ukolova, A. V.
2017-01-01
We implement a web portal that dynamically creates documents in more than 30 different formats, including HTML, PDF, and DOCX, from a single original source. It is built using a number of free software tools, including Markdown (a markup language), Pandoc (a document converter), MathJax (a library for displaying mathematical notation in web browsers), and the Ruby on Rails framework. The portal enables the creation of documents with high-quality visualization of mathematical formulas, is compatible with mobile devices, and allows one to search documents by text or formula fragments. Moreover, it gives professors the ability to develop up-to-date educational materials without the assistance of qualified technicians, thus improving the quality of the whole educational process.
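The single-source conversion idea can be sketched with Pandoc's command-line interface; the file names below, and the exact flags of the portal's own pipeline, are assumptions:

```python
# Sketch of one-source, many-formats conversion via the pandoc CLI.
# Pandoc infers the output format from the file extension; --mathjax
# makes formulas render in browsers for HTML output. PDF output
# additionally requires a LaTeX engine to be installed.
import subprocess

source = "lecture.md"  # the single Markdown original
for target in ["lecture.html", "lecture.pdf", "lecture.docx"]:
    subprocess.run(["pandoc", source, "--mathjax", "-o", target], check=True)
```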
Globe Teachers Guide and Photographic Data on the Web
NASA Technical Reports Server (NTRS)
Kowal, Dan
2004-01-01
The task of managing the GLOBE Online Teacher's Guide during this period focused on transforming the technology behind the delivery system of this document. The web application was transformed from a flat-file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also includes updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application, including instructions for modifying the JSP templates and managing database content; it was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. During this same period, 333 study site photo images were added to the GLOBE database and posted on the web for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications, such as the Cloud and LandSat quizzes and the Earth Systems Online Poster, from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.
Case Studies in Describing Scientific Research Efforts as Linked Data
NASA Astrophysics Data System (ADS)
Gandara, A.; Villanueva-Rosales, N.; Gates, A.
2013-12-01
The Web is growing with numerous scientific resources, prompting increased efforts in information management to consider the integration and exchange of scientific resources. Scientists have many options for sharing scientific resources on the Web; however, existing options provide limited support for annotating and relating the resources resulting from a scientific research effort. Moreover, there is no systematic approach to documenting scientific research and sharing it on the Web. This research proposes the Collect-Annotate-Refine-Publish (CARP) Methodology as an approach for guiding the documentation of scientific research on the Semantic Web as scientific collections. Scientific collections are structured descriptions of scientific research that make scientific results accessible based on context. In addition, scientific collections enhance the Linked Data data space and can be queried by machines. Three case studies were conducted on research efforts at the Cyber-ShARE Research Center of Excellence to assess the effectiveness of the methodology in creating scientific collections. The case studies exposed the challenges and benefits of leveraging the Semantic Web and the Linked Data data space to facilitate access, integration, and processing of Web-accessible scientific resources and research documentation. We present the case study findings and lessons learned in documenting scientific research using CARP.
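A minimal sketch of describing a research effort as Linked Data with the rdflib library; the vocabulary terms and URIs below are illustrative, not CARP's own:

```python
# Sketch of a "scientific collection" as RDF triples. The example
# namespace, class name, and resource URIs are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/research/")
g = Graph()
effort = URIRef(EX["gravity-model-2013"])

g.add((effort, RDF.type, EX.ScientificCollection))
g.add((effort, DCTERMS.title, Literal("Gravity model inversion study")))
g.add((effort, DCTERMS.hasPart, URIRef(EX["dataset/velocity-grid"])))

# Serialize as Turtle so other Linked Data tools (or SPARQL) can consume it.
print(g.serialize(format="turtle"))
```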
UW Inventory of Freight Emissions (WIFE3) heavy duty diesel vehicle web calculator methodology.
DOT National Transportation Integrated Search
2013-09-01
This document serves as an overview and technical documentation for the University of Wisconsin Inventory of Freight Emissions (WIFE3) calculator. The WIFE3 web calculator rapidly estimates future heavy duty diesel vehicle (HDDV) roadway emission...
BioTextQuest: a web-based biomedical text mining suite for concept discovery.
Papanikolaou, Nikolas; Pafilis, Evangelos; Nikolaou, Stavros; Ouzounis, Christos A; Iliopoulos, Ioannis; Promponas, Vasilis J
2011-12-01
BioTextQuest combines automated discovery of significant terms in article clusters with structured knowledge annotation, via Named Entity Recognition services, offering interactive, user-friendly visualization. The terms labeling each document cluster are illustrated as a tag cloud and semantically annotated according to biological entity type, and a list of document titles enables users to compare terms and documents of each cluster simultaneously, facilitating concept association and hypothesis generation. BioTextQuest allows customization of analysis parameters, e.g., clustering/stemming algorithms and exclusion of documents/significant terms, to better match the biological question addressed. http://biotextquest.biol.ucy.ac.cy vprobon@ucy.ac.cy; iliopj@med.uoc.gr Supplementary data are available at Bioinformatics online.
Skyline: an open source document editor for creating and analyzing targeted proteomics experiments.
MacLean, Brendan; Tomazela, Daniela M; Shulman, Nicholas; Chambers, Matthew; Finney, Gregory L; Frewen, Barbara; Kern, Randall; Tabb, David L; Liebler, Daniel C; MacCoss, Michael J
2010-04-01
Skyline is a Windows client application for targeted proteomics method creation and quantitative data analysis. It is open source and freely available for academic and commercial use. The Skyline user interface simplifies the development of mass spectrometer methods and the analysis of data from targeted proteomics experiments performed using selected reaction monitoring (SRM). Skyline supports using and creating MS/MS spectral libraries from a wide variety of sources to choose SRM filters and verify results based on previously observed ion trap data. Skyline exports transition lists to and imports the native output files from Agilent, Applied Biosystems, Thermo Fisher Scientific and Waters triple quadrupole instruments, seamlessly connecting mass spectrometer output back to the experimental design document. The fast and compact Skyline file format is easily shared, even for experiments requiring many sample injections. A rich array of graphs displays results and provides powerful tools for inspecting data integrity as data are acquired, helping instrument operators to identify problems early. The Skyline dynamic report designer exports tabular data from the Skyline document model for in-depth analysis with common statistical tools. Single-click, self-updating web installation is available at http://proteome.gs.washington.edu/software/skyline. This web site also provides access to instructional videos, a support board, an issues list and a link to the source code project.
Web Mining for Web Image Retrieval.
ERIC Educational Resources Information Center
Chen, Zheng; Wenyin, Liu; Zhang, Feng; Li, Mingjing; Zhang, Hongjiang
2001-01-01
Presents a prototype system for image retrieval from the Internet using Web mining. Discusses the architecture of the Web image retrieval prototype; document space modeling; user log mining; and image retrieval experiments to evaluate the proposed system. (AEF)
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2017-06-01
AWARD NUMBER: W81XWH-14-1-0143. TITLE: Comparing Web, Group and Telehealth Formats of a Military Parenting Program. The views expressed should not be construed as an official Department of the Army position, policy, or decision unless so designated by other documentation.
Scale-free characteristics of random networks: the topology of the world-wide web
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Albert, Réka; Jeong, Hawoong
2000-06-01
The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
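The flavor of such a model can be captured in a few lines: grow the graph one vertex at a time and attach new edges preferentially to vertices that already have high degree. This is a minimal sketch of preferential attachment, not the paper's exact formulation.

```python
import random
from collections import Counter

def preferential_attachment(n, m, seed=0):
    """Grow a graph where each new vertex attaches m edges to existing
    vertices chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    stubs = list(range(m))  # one entry per edge endpoint; seeds start with one each
    edges = []
    for v in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))  # degree-proportional sampling
        for t in targets:
            edges.append((v, t))
            stubs += [v, t]
    return edges

edges = preferential_attachment(10_000, 2)
degree = Counter(endpoint for edge in edges for endpoint in edge)
print(degree.most_common(5))  # early vertices dominate: the scale-free signature
```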
Using the web to validate document recognition results: experiments with business cards
NASA Astrophysics Data System (ADS)
Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea
2004-12-01
The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.
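Step (3) ultimately reduces to comparing recognized text against retrieved page text. Below is a toy corroboration score under that assumption; the function name and example strings are invented, and a real system would compare field by field rather than as bags of tokens.

```python
import re

def token_overlap(recognized_text, web_page_text):
    """Fraction of OCR-recognized tokens also present on the retrieved page.
    A low score may indicate OCR errors -- or stale information."""
    tokens = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    rec, page = tokens(recognized_text), tokens(web_page_text)
    return len(rec & page) / len(rec) if rec else 0.0

print(token_overlap("Dorothea Blostein Queens University",
                    "Dorothea Blostein, School of Computing, Queen's University"))
```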
Automatic generation of Web mining environments
NASA Astrophysics Data System (ADS)
Cibelli, Maurizio; Costagliola, Gennaro
1999-02-01
The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.
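As a rough illustration of what a web engine filter (WEF) might do, the sketch below re-ranks a search engine's answer set by overlap with a domain vocabulary; the scoring rule is invented for illustration, not the paper's actual heuristic.

```python
def filter_answer_set(results, domain_terms):
    """Re-rank (url, snippet) pairs by counting domain-term occurrences
    in the snippet, dropping pages that match no term at all."""
    def score(snippet):
        words = snippet.lower().split()
        return sum(words.count(term) for term in domain_terms)
    scored = sorted(((score(s), url) for url, s in results), reverse=True)
    return [url for sc, url in scored if sc > 0]

hits = [("http://a.example", "tide tables and marine weather"),
        ("http://b.example", "celebrity news roundup")]
print(filter_answer_set(hits, ["marine", "tide"]))
```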
A Methodology for the Development of RESTful Semantic Web Services for Gene Expression Analysis
Guardia, Gabriela D. A.; Pires, Luís Ferreira; Vêncio, Ricardo Z. N.; Malmegrim, Kelen C. R.; de Farias, Cléver R. G.
2015-01-01
Gene expression studies are generally performed through multi-step analysis processes, which require the integrated use of a number of analysis tools. In order to facilitate tool/data integration, an increasing number of analysis tools have been developed as or adapted to semantic web services. In recent years, some approaches have been defined for the development and semantic annotation of web services created from legacy software tools, but these approaches still present many limitations. In addition, to the best of our knowledge, no suitable approach has been defined for the functional genomics domain. Therefore, this paper aims at defining an integrated methodology for the implementation of RESTful semantic web services created from gene expression analysis tools and the semantic annotation of such services. We have applied our methodology to the development of a number of services to support the analysis of different types of gene expression data, including microarray and RNASeq. All developed services are publicly available in the Gene Expression Analysis Services (GEAS) Repository at http://dcm.ffclrp.usp.br/lssb/geas. Additionally, we have used a number of the developed services to create different integrated analysis scenarios to reproduce parts of two gene expression studies documented in the literature. The first study involves the analysis of one-color microarray data obtained from multiple sclerosis patients and healthy donors. The second study comprises the analysis of RNA-Seq data obtained from melanoma cells to investigate the role of the remodeller BRG1 in the proliferation and morphology of these cells. Our methodology provides concrete guidelines and technical details in order to facilitate the systematic development of semantic web services. Moreover, it encourages the development and reuse of these services for the creation of semantically integrated solutions for gene expression analysis. PMID:26207740
Poster — Thur Eve — 52: A Web-based Platform for Collaborative Document Management in Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kildea, J.; Joseph, A.
We describe DepDocs, a web-based platform that we have developed to manage the committee meetings, policies, procedures, and other documents within our otherwise paperless radiotherapy clinic. DepDocs is essentially a document management system based on the popular Drupal content management software. For security and confidentiality, it is hosted on a Linux server internal to our hospital network, so documents are never sent to the cloud or outside the hospital firewall. We used Drupal's built-in role-based user rights management system to assign a role, and associated document editing rights, to each user. Documents are accessed for viewing using either a simple Google-like search or a list of related documents generated from a taxonomy of categorization terms. Our system provides document revision tracking and a document review and approval mechanism for all official policies and procedures. Committee meeting schedules, agendas, and minutes are maintained by committee chairs and are restricted to committee members. DepDocs has been operational within our department for over six months and already has 45 unique users and an archive of over 1000 documents, mostly policies and procedures. Documents are easily retrievable from the system using any web browser within our hospital's network.
Software Project Management and Measurement on the World-Wide-Web (WWW)
NASA Technical Reports Server (NTRS)
Callahan, John; Ramakrishnan, Sudhaka
1996-01-01
We briefly describe a system for forms-based, work-flow management that helps members of a software development team overcome geographical barriers to collaboration. Our system, called the Web Integrated Software Environment (WISE), is implemented as a World-Wide-Web service that allows for management and measurement of software development projects based on dynamic analysis of change activity in the workflow. WISE tracks issues in a software development process, provides informal communication between the users with different roles, supports to-do lists, and helps in software process improvement. WISE minimizes the time devoted to metrics collection and analysis by providing implicit delivery of messages between users based on the content of project documents. The use of a database in WISE is hidden from the users who view WISE as maintaining a personal 'to-do list' of tasks related to the many projects on which they may play different roles.
Computational knowledge integration in biopharmaceutical research.
Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim
2003-09-01
An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.
Wu, Bob J; Dietz, Patrick A; Bordley, James; Borgstrom, David C
2009-01-01
Practice-Based Learning and Improvement (PBLI) is 1 of 6 integral competencies required by the Accreditation Council for Graduate Medical Education (ACGME) for proof of adequate resident training and accreditation of residency programs. Moreover, the Outcome Project of the ACGME is beginning to enforce the provision of documented, objective evidence of resident PBLI. Current assessment tools, such as resident portfolios and performance evaluations by faculty, tend to be qualitative in nature, and few objective, outcome-based, quantitative evaluation tools have been developed. A web-based application was designed to assess every consultation performed by senior residents at a university-affiliated general surgery residency. In real time, residents documented patient presentations along with their initial impression and plan. As patient outcomes became available, they were also documented in this application, which allowed residents to self-assess whether their impressions and plans were correct. A running "batting average" (BA) is then calculated as the percentage correct. Seven senior residents participated in this study, which included a total of 459 consults: 222 documented by PGY4 residents and 237 documented by PGY5 residents. The average BA of PGY4 residents in their first 3 months was 82.9%, followed by 85.9%, 88.7%, and 94.3% for each of the next 3 quarters. For PGY5 residents, the corresponding results were 96.4%, 94.4%, 93.8%, and 96.4%, respectively. A web-based outcome-tracking program is useful for conducting rapid and ongoing evaluation of residents' practice-based learning, generating data for analysis of individual resident knowledge gaps, stimulating self-assessment and targeted learning, and providing objective data on PBLI for accreditation purposes.
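The "batting average" itself is plain arithmetic: the running percentage of consults whose documented outcome matched the resident's initial impression. A minimal sketch with invented data:

```python
def batting_average(records):
    """Running BA: percentage of consults where the resident's initial
    impression/plan matched the final outcome. `records` holds booleans
    (True = correct); the sample values below are illustrative only."""
    if not records:
        return 0.0
    return 100.0 * sum(records) / len(records)

print(f"{batting_average([True, True, False, True]):.1f}%")  # 75.0%
```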
JPL, NASA and the Historical Record: Key Events/Documents in Lunar and Mars Exploration
NASA Technical Reports Server (NTRS)
Hooks, Michael Q.
1999-01-01
This document represents a presentation about the Jet Propulsion Laboratory (JPL) historical archives in the area of lunar and Martian exploration. The JPL archives document the history of JPL's flight projects, research and development activities, and administrative operations. The archives are in a variety of formats. The presentation reviews the information available through the JPL archives web site, the information available through the Regional Planetary Image Facility web site, and the information on past missions available through these web sites. The presentation also reviews the NASA historical resources at the NASA History Office and the National Archives and Records Administration.
Going, going, still there: using the WebCite service to permanently archive cited Web pages.
Eysenbach, Gunther
2006-01-01
Scholars are increasingly citing electronic "web references" which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To "webcite" a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page.
78 FR 68100 - Luminant Generation Company, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-13
... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID.../adams.html . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room (PDR) reference...
Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data
2013-01-01
Background: The World Wide Web has become a dissemination platform for scientific and non-scientific publications. However, most of the information remains locked up in discrete documents that are not always interconnected or machine-readable. The connectivity tissue provided by RDF technology has not yet been widely used to support the generation of self-describing, machine-readable documents. Results: In this paper, we present our approach to the generation of self-describing machine-readable scholarly documents. We understand the scientific document as an entry point and interface to the Web of Data. We have semantically processed the full-text, open-access subset of PubMed Central. Our RDF model and resulting dataset make extensive use of existing ontologies and semantic enrichment services. We expose our model, services, prototype, and datasets at http://biotea.idiginfo.org/ Conclusions: The semantic processing of biomedical literature presented in this paper embeds documents within the Web of Data and facilitates the execution of concept-based queries against the entire digital library. Our approach delivers a flexible and adaptable set of tools for metadata enrichment and semantic processing of biomedical documents. Our model delivers a semantically rich and highly interconnected dataset with self-describing content so that software can make effective use of it. PMID:23734622
Documenting pharmacist interventions on an intranet.
Simonian, Armen I
2003-01-15
The process of developing and implementing an intranet Web site for clinical intervention documentation is described. An inpatient pharmacy department initiated an organizationwide effort to improve documentation of interventions by pharmacists at its seven hospitals to achieve real-time capture of meaningful benchmarking data. Standardization of intervention types would allow the health system to contrast and compare medication use, process improvement, and patient care initiatives among its hospitals. After completing a needs assessment and reviewing current methodologies, a computerized tracking tool was developed in-house and integrated with the organization's intranet. Representatives from all hospitals agreed on content and functionality requirements for the Web site. The site was completed and activated in February 2002. Before this Web site was established, the most documented intervention types were Renal Adjustment and Clarify Dose, with a daily average of four and three, respectively. After site activation, daily averages for Renal Adjustment remained unchanged, but Clarify Dose is now documented nine times per day. Drug Information and i.v.-to-p.o. intervention types, which previously averaged less than one intervention per day, are now documented an average of four times daily. Approximately 91% of staff pharmacists are using this site. Future plans for this site include enhanced accessibility to the site with wireless personal digital assistants. The design and implementation of an intranet Web site to document pharmacists' interventions doubled the rate of intervention documentation and standardized the intervention types among hospitals in the health system.
NASA Astrophysics Data System (ADS)
Bos, Nathan Daniel
This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curricula. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas, (2) hypermedia design can demand high levels of analysis and synthesis of science content, (3) students can learn to think about science content representation through engagement with challenging design tasks, and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance. Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing the content of Web documents, identifying their publishing source, and evaluating their organizational features; however, students struggled to identify scientific evidence, bias, or sophisticated use of media in Web pages. These shortcomings were shown to be partly due to deficiencies in the Web pages themselves and partly due to students' inexperience with the medium or lack of critical evaluation skills. Future directions of this idea are discussed, including discussion of how students' reviews have been integrated into a current digital library development project.
2016-07-21
Today's internet has multiple webs. The surface web is what Google and other search engines index and pull based on links. Essentially, the surface...financial records, research and development), and personal data (medical records or legal documents). These are all deep web. Standard search engines don't
Creating Polyphony with Exploratory Web Documentation in Singapore
ERIC Educational Resources Information Center
Lim, Sirene; Hoo, Lum Chee
2012-01-01
We introduce and reflect on "Images of Teaching", an ongoing web documentation research project on preschool teaching in Singapore. This paper discusses the project's purpose, methodological process, and our learning points as researchers who aim to contribute towards inquiry-based professional learning. The website offers a window into…
E-Texts, Mobile Browsing, and Rich Internet Applications
ERIC Educational Resources Information Center
Godwin-Jones, Robert
2007-01-01
Online reading is evolving beyond the perusal of static documents with Web pages inviting readers to become commentators, collaborators, and critics. The much-ballyhooed Web 2.0 is essentially a transition from online consumer to consumer/producer/participant. An online document may well include embedded multimedia or contain other forms of…
Improving health care proxy documentation using a web-based interview through a patient portal
Crotty, Bradley H; Kowaloff, Hollis B; Safran, Charles; Slack, Warner V
2016-01-01
Objective: Health care proxy (HCP) documentation is suboptimal. To improve rates of proxy selection and documentation, we sought to develop and evaluate a web-based interview to guide patients in their selection, and to capture their choices in their electronic health record (EHR). Methods: We developed and implemented a HCP interview within the patient portal of a large academic health system. We analyzed the experience, together with demographic and clinical factors, of the first 200 patients who used the portal to complete the interview. We invited users to comment about their experience and analyzed their comments using established qualitative methods. Results: From January 20, 2015 to March 13, 2015, 139 of the 200 patients who completed the interview submitted their HCP information for their clinician to review in the EHR. These patients had a median age of 57 years (interquartile range (IQR) 45-67) and most were healthy. The 99 patients who did not previously have HCP information in their EHR were more likely to complete and then submit their information than the 101 patients who previously had a proxy in their health record (odds ratio 2.4, P = .005). Qualitative analysis identified several ways in which the portal-based interview reminded, encouraged, and facilitated patients to complete their HCP. Conclusions: Patients found our online interview convenient and helpful in facilitating selection and documentation of an HCP. Our study demonstrates that a web-based interview to collect and share a patient's HCP information is both feasible and useful. PMID:26568608
Results from a Web Impact Factor Crawler.
ERIC Educational Resources Information Center
Thelwall, Mike
2001-01-01
Discusses Web impact factors (WIFs), Web versions of the impact factors for journals, and how they can be calculated by using search engines. Highlights include HTML and document indexing; Web page links; a Web crawler designed for calculating WIFs; and WIFs for United Kingdom universities that measured research profiles or capability. (Author/LRW)
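In the WIF literature the basic quantity is a ratio of link counts obtainable from search engine queries, commonly the number of pages linking to a site divided by the number of pages within the site. A sketch of that arithmetic with made-up counts:

```python
def web_impact_factor(inlinking_pages, site_pages):
    """Link-based impact ratio: pages linking to a site divided by pages
    within the site. Counts would come from search engine queries; the
    example values below are invented."""
    return inlinking_pages / site_pages if site_pages else float("nan")

# e.g., 12,400 external pages link to a university site of 3,100 pages
print(round(web_impact_factor(12_400, 3_100), 2))  # 4.0
```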
An Educational Tool for Browsing the Semantic Web
ERIC Educational Resources Information Center
Yoo, Sujin; Kim, Younghwan; Park, Seongbin
2013-01-01
The Semantic Web is an extension of the current Web where information is represented in a machine processable way. It is not separate from the current Web and one of the confusions that novice users might have is where the Semantic Web is. In fact, users can easily encounter RDF documents that are components of the Semantic Web while they navigate…
Indexing and Retrieval for the Web.
ERIC Educational Resources Information Center
Rasmussen, Edie M.
2003-01-01
Explores current research on indexing and ranking as retrieval functions of search engines on the Web. Highlights include measuring search engine stability; evaluation of Web indexing and retrieval; Web crawlers; hyperlinks for indexing and ranking; ranking for metasearch; document structure; citation indexing; relevance; query evaluation;…
Introducing Text Analytics as a Graduate Business School Course
ERIC Educational Resources Information Center
Edgington, Theresa M.
2011-01-01
Text analytics refers to the process of analyzing unstructured data from documented sources, including open-ended surveys, blogs, and other types of web dialog. Text analytics has enveloped the concept of text mining, an analysis approach influenced heavily from data mining. While text mining has been covered extensively in various computer…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
... defect. ($17,335). Web-based Document Management System: Funding was provided to continue to provide a web-based document management system to better enable the handling of thousands of recreational... program strategy support to the nation-wide RBS effort. The goal is to coordinate the RBS outreach...
12 CFR 611.1216 - Public availability of documents related to the termination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... termination. 611.1216 Section 611.1216 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM ORGANIZATION Termination of System Institution Status § 611.1216 Public availability of documents related to the termination. (a) We may post on our Web site, or require you to post on your Web site: (1) Results...
World-Wide Web: The Information Universe.
ERIC Educational Resources Information Center
Berners-Lee, Tim; And Others
1992-01-01
Describes the World-Wide Web (W3) project, which is designed to create a global information universe using techniques of hypertext, information retrieval, and wide area networking. Discussion covers the W3 data model, W3 architecture, the document naming scheme, protocols, document formats, comparison with other systems, experience with the W3…
Experimenting with semantic web services to understand the role of NLP technologies in healthcare.
Jagannathan, V
2006-01-01
NLP technologies can play a significant role in healthcare where a predominant segment of the clinical documentation is in text form. In a graduate course focused on understanding semantic web services at West Virginia University, a class project was designed with the purpose of exploring potential use for NLP-based abstraction of clinical documentation. The role of NLP-technology was simulated using human abstractors and various workflows were investigated using public domain workflow and semantic web service technologies. This poster explores the potential use of NLP and the role of workflow and semantic web technologies in developing healthcare IT environments.
Semantic annotation of Web data applied to risk in food.
Hignette, Gaëlle; Buche, Patrice; Couvert, Olivier; Dibie-Barthélemy, Juliette; Doussot, David; Haemmerlé, Ollivier; Mettler, Eric; Soler, Lydie
2008-11-30
A preliminary step to risk in food assessment is the gathering of experimental data. In the framework of the Sym'Previus project (http://www.symprevius.org), a complete data integration system has been designed, grouping data provided by industrial partners and data extracted from papers published in the main scientific journals of the domain. Those data have been classified by means of a predefined vocabulary, called ontology. Our aim is to complement the database with data extracted from the Web. In the framework of the WebContent project (www.webcontent.fr), we have designed a semi-automatic acquisition tool, called @WEB, which retrieves scientific documents from the Web. During the @WEB process, data tables are extracted from the documents and then annotated with the ontology. We focus on the data tables as they contain, in general, a synthesis of data published in the documents. In this paper, we explain how the columns of the data tables are automatically annotated with data types of the ontology and how the relations represented by the table are recognised. We also give the results of our experimentation to assess the quality of such an annotation.
Polar Domain Discovery with Sparkler
NASA Astrophysics Data System (ADS)
Duerr, R.; Khalsa, S. J. S.; Mattmann, C. A.; Ottilingam, N. K.; Singh, K.; Lopez, L. A.
2017-12-01
The scientific web is vast and ever growing. It encompasses millions of textual, scientific, and multimedia documents describing research in a multitude of scientific streams. Most of these documents are hidden behind forms which require user action to retrieve them, and thus cannot be directly accessed by content crawlers. These documents are hosted on web servers across the world, most often on outdated hardware and network infrastructure. Hence it is difficult and time-consuming to aggregate documents from the scientific web, especially those relevant to a specific domain, and generating meaningful domain-specific insights is currently difficult. We present an automated discovery system (Figure 1) using Sparkler, an open-source, extensible, horizontally scalable crawler which facilitates high-throughput, focused crawling of documents pertinent to a particular domain, such as information about polar regions. With this set of highly domain-relevant documents, we show that it is possible to answer analytical questions about that domain. Our domain discovery algorithm leverages prior domain knowledge and commercial/scientific search engines to generate seed URLs. Subject matter experts then annotate these seed URLs manually on a scale from highly relevant to irrelevant. We use this annotated dataset to train a machine learning model which predicts the "domain relevance" of a given document, and we extend Sparkler with this model to focus crawling on documents relevant to that domain. Sparkler avoids disruption of service by (1) partitioning URLs by hostname, such that every node gets a different host to crawl, and (2) inserting delays between subsequent requests. With Wrangler, an NSF-funded supercomputer, we scaled our domain discovery pipeline to crawl about 200k polar-specific documents from the scientific web within a day.
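The "domain relevance" model can be approximated by any text classifier trained on the annotated seed pages. Below is a minimal sketch using scikit-learn, with invented training examples standing in for the expert annotations; the paper does not specify this particular model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated page texts: 1 = domain-relevant (polar), 0 = not.
texts = ["sea ice extent in the arctic", "polar bear population survey",
         "celebrity gossip roundup", "stock market closes higher"]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A focused crawler would keep only pages whose predicted relevance is high.
print(model.predict_proba(["permafrost thaw measurements"])[0][1])
```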
NASA Astrophysics Data System (ADS)
Saint-Béat, Blanche; Maps, Frédéric; Babin, Marcel
2018-01-01
The extreme and variable environment shapes the functioning of Arctic ecosystems and the life cycles of its species. This delicate balance is now threatened by the unprecedented pace and magnitude of global climate change and anthropogenic pressure. Understanding the long-term consequences of these changes remains an elusive, yet pressing, goal. Our work was specifically aimed at identifying which biological processes impact Arctic planktonic ecosystem functioning, and how. Ecological Network Analysis (ENA) indices reveal emergent ecosystem properties that are not accessible through simple in situ observation. These indices are based on the architecture of carbon flows within food webs. But, despite the recent increase in in situ measurements from Arctic seas, many flow values remain unknown. Linear inverse modeling (LIM) allows missing flow values to be estimated from existing flow observations and, subsequent reconstruction of ecosystem food webs. Through a sensitivity analysis on a LIM model of the Amundsen Gulf in the Canadian Arctic, we were able to determine which processes affected the emergent properties of the planktonic ecosystem. The analysis highlighted the importance of an accurate knowledge of the various processes controlling bacterial production (e.g. bacterial growth efficiency and viral lysis). More importantly, a change in the fate of the microzooplankton within the food web can be monitored through the trophic level of mesozooplankton. It can be used as a "canary in the coal mine" signal, a forewarner of larger ecosystem change.
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001;17:509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry, and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance-stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple-test correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
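The core of the regularized t-test is the variance estimate blending empirical and prior information. The sketch below writes the weighting in the form commonly cited from Baldi and Long (2001); verify against the original paper before relying on it, and note that the replicate values and prior are invented.

```python
from statistics import variance

def regularized_variance(xs, prior_var, nu0):
    """Blend a probe's empirical variance with a neighborhood-derived prior,
    weighting the prior as nu0 pseudo-observations:
        sigma^2 = (nu0 * prior_var + (n - 1) * s^2) / (nu0 + n - 2)
    Form reproduced from the commonly cited Baldi & Long (2001) result."""
    n = len(xs)
    s2 = variance(xs)  # sample variance; requires n >= 2 replicates
    return (nu0 * prior_var + (n - 1) * s2) / (nu0 + n - 2)

# Three hypothetical replicate measurements and a pooled prior variance
print(regularized_variance([7.1, 7.9, 7.4], prior_var=0.30, nu0=10))
```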
mORCA: sailing bioinformatics world with mobile devices.
Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo
2018-03-01
Nearly 10 years have passed since the first mobile apps appeared. Given that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transition from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs, and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. This client is able to browse, select, automatically compose interface parameters, invoke services, and monitor the execution of Web Services using the service metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in the App Store by Apple and the Play Store by Google. The software will be available for at least 2 years. ortrelles@uma.es. Source code, final web-app, training material and documentation are available at http://bitlab-es.com/morca.
ERIC Educational Resources Information Center
Adler, Steve
2000-01-01
Explains the use of Adobe Acrobat's Portable Document Format (PDF) for school Web sites and Intranets. Explains the PDF workflow; components for Web-based PDF delivery, including the Web server, preparing content of the PDF files, and the browser; incorporating PDFs into the Web site; incorporating multimedia; and software. (LRW)
HTML5 PivotViewer: high-throughput visualization and querying of image data on the web.
Taylor, Stephen; Noble, Roger
2014-09-15
Visualization and analysis of large numbers of biological images has created a bottleneck in research. We present HTML5 PivotViewer, a novel, open-source, platform-independent viewer making use of the latest web technologies, which allows seamless access to images and associated metadata for each image. This provides a powerful method for end users to mine their data. Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2.
A document centric metadata registration tool constructing earth environmental data infrastructure
NASA Astrophysics Data System (ADS)
Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.
2009-12-01
DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data, such as observation data, numerical model outputs, and socio-economic data, provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity, and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes, such as the covering geographic scope or the creation date. The metadata standards covering these common attributes are published by the geographic information technical committee (TC211) of ISO (the International Organization for Standardization) as specifications ISO 19115:2003 and ISO 19139:2007. Accordingly, DIAS metadata is developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners, and nonprofessionals. From the viewpoint of data providers, however, two problems were pointed out after discussions. One is that data providers prefer to minimize the extra tasks and time spent creating metadata. The other is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and that there is no extra cost for data providers to generate metadata. The tool is developed as a web application, so it demands no additional software from data providers as long as they have a web browser. The interface of the tool provides the section titles of the documents, and by filling out the content of each section, the documents for the data sets are automatically published in PDF and HTML format. Furthermore, a metadata XML file compliant with ISO 19115 and ISO 19139 is created at the same time. The generated metadata are managed in the metadata database of the DIAS project and will be used in various ISO 19139-compliant metadata management tools, such as GeoNetwork.
The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents
NASA Astrophysics Data System (ADS)
Gunawan, D.; Sembiring, C. A.; Budiman, M. A.
2018-03-01
Rapidly increasing numbers of web pages and documents lead to topic-specific filtering in order to find web pages or documents efficiently. This is preliminary research that uses cosine similarity to implement text relevance in order to find topic-specific documents. The research is divided into three parts. The first part is text preprocessing, in which the punctuation in a document is removed, the document is converted to lower case, stopword removal is applied, and root words are extracted using the Porter stemming algorithm. The second part is keyword weighting, which is used by the third part, the text relevance calculation. The text relevance calculation yields a value between 0 and 1; the closer the value is to 1, the more related the two documents are, and vice versa.
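The pipeline described above is easy to sketch end to end. The snippet below follows the three parts with a toy stopword list and simple term-frequency weights; Porter stemming is noted in a comment but omitted to keep the example dependency-free.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "on"}  # toy list

def preprocess(doc):
    """Part 1: strip punctuation, lowercase, remove stopwords. A full run
    would also apply Porter stemming (e.g., NLTK's PorterStemmer) here."""
    words = re.findall(r"[a-z]+", doc.lower())
    return [w for w in words if w not in STOPWORDS]

def cosine_similarity(doc_a, doc_b):
    """Parts 2 and 3: weight keywords by term frequency, then return the
    cosine of the angle between the two vectors (0 = unrelated, 1 = identical
    term distribution)."""
    a, b = Counter(preprocess(doc_a)), Counter(preprocess(doc_b))
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("Web pages grow rapidly on the web",
                        "Rapidly growing web pages need topic filtering"))
```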
The Use of Supporting Documentation for Information Architecture by Australian Libraries
ERIC Educational Resources Information Center
Hider, Philip; Burford, Sally; Ferguson, Stuart
2009-01-01
This article reports the results of an online survey that examined the development of information architecture of Australian library Web sites with reference to documented methods and guidelines. A broad sample of library Web managers responded from across the academic, public, and special sectors. A majority of libraries used either in-house or…
Publishing Accessible Materials on the Web and CD-ROM.
ERIC Educational Resources Information Center
Federal Resource Center for Special Education, Washington, DC.
While it is generally simple to make electronic content accessible, it is also easy inadvertently to make it inaccessible. This guide covers the many formats of electronic documents and points out what to keep in mind and what procedures to follow to make documents accessible to all when disseminating information via the World Wide Web and on…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-21
... any of the following methods: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search.../reading-rm/adams.html . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room (PDR...
eCDRweb User Guide–Primary Support
This document presents the user guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDR web tool. E-CDRweb is the electronic, web-based tool provided by the Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document is the user guide for the Primary Support user of the e-CDRweb tool.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-06
... protected through www.regulations.gov or e-mail. The www.regulations.gov Web site is an ``anonymous access... Can I Get Copies of This Document and Other Related Information? This Federal Register notice and.... EPA-HQ-SFUND-2009-0834. All documents in the docket are listed on the http://www.regulations.gov Web...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-17
.... EPA-HQ-OAR-2002-0037. All documents in the docket are listed on the http://www.regulations.gov Web... voluntary consensus standards VOC volatile organic compound WWW World Wide Web Organization of This Document. The following outline is provided to aid in locating information in this preamble. I. General...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
...--specific Web page at http://www.nrc.gov/reactors/new-reactors/col/fermi.html . The Ellis Library and... possesses and are publicly-available, using any of the following methods: Federal Rulemaking Web site: Go to... Documents Access and Management System (ADAMS): You may access publicly-available documents online in the...
eCDRweb User Guide–Secondary Support
This document presents the user guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDR web tool. E-CDRweb is the electronic, web-based tool provided by the Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document is the user guide for the Secondary Support user of the e-CDRweb tool.
Scheduled webinars can help you better manage EPA web content. Class topics include Drupal basics, creating different types of pages in the WebCMS such as document pages and forms, using Google Analytics, and best practices for metadata and accessibility.
NASA Technical Reports Server (NTRS)
Garcia, Joseph A.; Smith, Charles A. (Technical Monitor)
1998-01-01
The document consists of a publicly available web site (george.arc.nasa.gov) for Joseph A. Garcia's personal web pages in the AI division. Only general information will be posted and no technical material. All the information is unclassified.
How far has The Korean Journal of Internal Medicine advanced in terms of journal metrics?
Huh, Sun
2013-11-01
The Korean Journal of Internal Medicine has already been valued as an international journal, according to a citation analysis in 2011. Now, 2 years later, I would like to confirm how much the Journal has advanced from the point of view of journal metrics by looking at the impact factor, cites per document (2 years), SCImago Journal Rank (SJR), and the Hirsch index. These were obtained from a variety of databases, such as the Korean Medical Citation Index, KoreaMed Synapse, Web of Science, JCR Web, and SCImago Journal & Country Rank. The manually calculated 2012 impact factor was 1.252 in the Web of Science, with a ranking of 70/151 (46.4%) in the category of general and internal medicine. Cites per documents (2 years) for 2012 was 1.619, with a ranking of 267/1,588 (16.8%) in the category of medicine (miscellaneous). The 2012 SJR was 0.464, with a ranking of 348/1,588 (21.9%) in the category of medicine (miscellaneous). The Hirsch index from KoreaMed Synapse, Web of Science, and SCImago Journal & Country Rank were 12, 15, and 19, respectively. In comparison with data from 2010, the values of all the journal metrics increased consistently. These results reflect favorably on the increased competency of editors and authors of The Korean Journal of Internal Medicine.
Mavrikakis, I; Mantas, J; Diomidous, M
2007-01-01
This paper is based on research into the possible structure of an information system for occupational health and safety management. We initiated a questionnaire to gauge the interest of potential users in the subject of occupational health and safety; depicting this interest is vital both for the software analysis cycle and for development according to previous models. Evaluation of the results is intended to lead to pilot applications in different enterprises, built on documentation and process improvements, assured quality of services, operational support, and occupational health and safety advice. Another target of the survey is communication and the sharing of codified health information among interested parties, services that computer networks can offer. The network will consist of nodes responsible for informing executives on occupational health and safety. A web database has been installed for inserting and searching documents. The submission of files to a server and the answers to questionnaires through the web help the experts perform their activities. Based on the requirements of enterprises, we have constructed a web file server to which files are submitted so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects.
Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N
2016-08-05
ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structure-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app.
An Efficient Approach for Web Indexing of Big Data through Hyperlinks in Web Crawling
Devi, R. Suganya; Manjula, D.; Siddharth, R. K.
2015-01-01
Web Crawling has acquired tremendous significance in recent times, and it is aptly associated with the substantial development of the World Wide Web. Web search engines face new challenges due to the availability of vast amounts of web documents, which makes the retrieved results less relevant to the analysers. Recently, however, Web Crawling has focused solely on obtaining the links of the corresponding documents. Today, various algorithms and software are used to crawl links from the web, and these links then have to be processed further for future use, thereby increasing the analyser's workload. This paper concentrates on crawling the links and retrieving all information associated with them to facilitate easy processing for other uses. First, the links are crawled from the specified uniform resource locator (URL) using a modified version of the Depth First Search algorithm, which allows complete hierarchical scanning of the corresponding web links. The links are then accessed via the source code, and metadata such as title, keywords, and description are extracted. This content is essential for any analysis to be carried out on the Big Data obtained as a result of Web Crawling. PMID:26137592
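The workflow described above (depth-first traversal of links followed by extraction of title, keywords, and description from each page's source) can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' implementation; the requests/BeautifulSoup stack, the depth limit, and the stored fields are assumptions. A production crawler would also add politeness delays, robots.txt handling, and persistent storage.

    # Sketch: DFS link crawling with metadata extraction (illustrative, not the paper's code).
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def crawl(url, depth=2, seen=None):
        seen = seen if seen is not None else set()
        if depth < 0 or url in seen:
            return []
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            return []
        soup = BeautifulSoup(html, "html.parser")
        record = {
            "url": url,
            "title": soup.title.string.strip() if soup.title and soup.title.string else "",
            "keywords": (soup.find("meta", attrs={"name": "keywords"}) or {}).get("content", ""),
            "description": (soup.find("meta", attrs={"name": "description"}) or {}).get("content", ""),
        }
        records = [record]
        for a in soup.find_all("a", href=True):  # depth-first descent into each link
            records += crawl(urljoin(url, a["href"]), depth - 1, seen)
        return records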
Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text
NASA Astrophysics Data System (ADS)
Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.
2015-12-01
We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, text) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation led us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We will describe our experience and implementation of our system and share lessons learned from our development. We will also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org
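As a rough illustration of the concept-highlighting step (the actual system relies on Apache cTAKES and UMLS, not on code like this), a dictionary-based tagger can wrap recognized terms in markup; the mini-lexicon below is invented and far smaller than UMLS.

    # Toy dictionary-based concept highlighter; illustrative only.
    import re

    CONCEPTS = {  # hypothetical mini-lexicon standing in for UMLS
        "aspirin": "drug",
        "migraine": "disease",
        "nausea": "symptom",
    }

    def highlight(text):
        pattern = re.compile(r"\b(" + "|".join(map(re.escape, CONCEPTS)) + r")\b", re.I)
        return pattern.sub(
            lambda m: f"<mark class='{CONCEPTS[m.group(1).lower()]}'>{m.group(0)}</mark>", text)

    print(highlight("Aspirin may relieve migraine but can cause nausea."))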
Benefits and Challenges of Architecture Frameworks
2011-06-01
systems and identify emerging and obsolete standards. • The NATO Capability View (NCV) serves the analysis and optimization of military capabilities... NCVs show the dependencies between different capabilities and allow detecting gaps and overlaps of capabilities. NCVs deliver indirectly requirements... Email (possibly with vendor-specific extensions/modifications) • Proprietary, and possibly not well-documented, message formats • Web services
ERIC Educational Resources Information Center
Liu, Shuyan; Oakland, Thomas
2016-01-01
The objective of this current study is to identify the growth and development of scholarly literature that specifically references the term "school psychology" in the Science Citation Index from 1907 through 2014. Documents from Web of Science were accessed and analyzed through the use of scientometric analyses, including HistCite and…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... of the revised draft High Winds Guidance document, the EPA identifies example technical analyses that... identified analyses and any additional technical analyses that air agencies could use to demonstrate that the... Web site at http://www.epa.gov/ttn/analysis/exevents.htm for additional details on the draft non...
Role of Theories in the Design of Web-Based Person-Centered Support: A Critical Analysis
Ranerup, Agneta; Sparud-Lundin, Carina; Koinberg, Ingalill; Skärsäter, Ingela; Jenholt-Nolbris, Margaretha; Berg, Marie
2014-01-01
Objective. The aim of this study was to provide a critical understanding of the role of theories and their compatibility with a person-centered approach in the design and evaluation of web-based support for the management of chronic illness. Methods. Exploration of web-based support research projects focusing on four cases: (1) preschool children aged 4–6 with bladder dysfunction and urogenital malformation; (2) young adults aged 16–25 living with mental illness; (3) women with type 1 diabetes who are pregnant or in early motherhood; and (4) women who have undergone surgery for breast cancer. Data comprised interviews with research leaders and documented plans. Analysis was performed by means of a cross-case methodology. Results. The theories used concerned design, learning, health and well-being, or transition. All web support products had been developed using a participatory design (PD). Fundamental to the technology design and evaluation of outcomes were theories focusing on learning and on health and well-being. All theories were compatible with a person-centered approach. However, a notable exception was the relatively collective character of PD and Communities of Practice. Conclusion. Our results illustrate multifaceted ways for theories to be used in the design and evaluation of web-based support. PMID:26464860
Improving health care proxy documentation using a web-based interview through a patient portal.
Bajracharya, Adarsha S; Crotty, Bradley H; Kowaloff, Hollis B; Safran, Charles; Slack, Warner V
2016-05-01
Health care proxy (HCP) documentation is suboptimal. To improve rates of proxy selection and documentation, we sought to develop and evaluate a web-based interview to guide patients in their selection, and to capture their choices in their electronic health record (EHR). We developed and implemented an HCP interview within the patient portal of a large academic health system. We analyzed the experience, together with demographic and clinical factors, of the first 200 patients who used the portal to complete the interview. We invited users to comment about their experience and analyzed their comments using established qualitative methods. From January 20, 2015 to March 13, 2015, 139 of the 200 patients who completed the interview submitted their HCP information for their clinician to review in the EHR. These patients had a median age of 57 years (interquartile range [IQR] 45-67) and most were healthy. The 99 patients who did not previously have HCP information in their EHR were more likely to complete and then submit their information than the 101 patients who previously had a proxy in their health record (odds ratio 2.4, P = .005). Qualitative analysis identified several ways in which the portal-based interview reminded, encouraged, and facilitated patients to complete their HCP. Patients found our online interview convenient and helpful in facilitating selection and documentation of an HCP. Our study demonstrates that a web-based interview to collect and share a patient's HCP information is both feasible and useful. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Kortsch, Susanne; Primicerio, Raul; Fossheim, Maria; Dolgov, Andrey V; Aschan, Michaela
2015-09-07
Climate-driven poleward shifts, leading to changes in species composition and relative abundances, have been recently documented in the Arctic. Among the fastest moving species are boreal generalist fish, which are expected to affect arctic marine food web structure and ecosystem functioning substantially. Here, we address structural changes at the food web level induced by poleward shifts via topological network analysis of highly resolved boreal and arctic food webs of the Barents Sea. We detected considerable differences in structural properties and link configuration between the boreal and the arctic food webs, the latter being more modular and less connected. We found that a main characteristic of the boreal fish moving poleward into the arctic region of the Barents Sea is high generalism, a property that increases connectance and reduces modularity in the arctic marine food web. Our results reveal that habitats form natural boundaries for food web modules, and that generalists play an important functional role in coupling pelagic and benthic modules. We posit that these habitat couplers have the potential to promote the transfer of energy and matter between habitats, but also the spread of perturbations, thereby changing arctic marine food web structure considerably, with implications for ecosystem dynamics and functioning. © 2015 The Authors.
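For readers unfamiliar with the metrics involved, connectance and modularity can be computed on a toy food web with networkx; the species and links below are invented, and the analysis is far simpler than the paper's.

    # Toy food web metrics: connectance and modularity (illustrative data).
    import networkx as nx
    from networkx.algorithms import community

    links = [("phytoplankton", "zooplankton"), ("zooplankton", "herring"),
             ("herring", "cod"), ("benthos", "cod"), ("detritus", "benthos")]
    G = nx.Graph(links)

    S, L = G.number_of_nodes(), G.number_of_edges()
    connectance = L / (S * (S - 1) / 2)  # realized fraction of possible links

    modules = community.greedy_modularity_communities(G)
    Q = community.modularity(G, modules)
    print(f"S={S} L={L} connectance={connectance:.2f} modularity={Q:.2f}")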
Environmental Models as a Service: Enabling Interoperability ...
Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantage of streamlined deployment processes and affordable cloud access to move algorithms and data to the web for discoverability and consumption. In these deployments, environmental models can become available to end users through RESTful web services and consistent application program interfaces (APIs) that consume, manipulate, and store modeling data. RESTful modeling APIs also promote discoverability and guide usability through self-documentation. Embracing the RESTful paradigm allows models to be accessible via a web standard, and the resulting endpoints are platform- and implementation-agnostic while simultaneously presenting significant computational capabilities for spatial and temporal scaling. RESTful APIs present data in a simple verb-noun web request interface: the verb dictates how a resource is consumed using HTTP methods (e.g., GET, POST, and PUT) and the noun represents the URL reference of the resource on which the verb will act. The RESTful API can self-document in both the HTTP response and an interactive web page using the Open API standard. This lets models function as an interoperable service that promotes sharing, documentation, and discoverability. Here, we discuss the
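A minimal sketch of such a verb-noun endpoint, assuming Flask and invented route and payload names, might look like this; a real deployment would back the store with a database and publish Open API documentation.

    # Sketch of a RESTful model-as-a-service endpoint (hypothetical routes/fields).
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    RUNS = {}  # in-memory store standing in for a real backend

    @app.post("/runs")  # POST /runs: create a model run resource
    def create_run():
        run_id = str(len(RUNS) + 1)
        RUNS[run_id] = {"inputs": request.get_json(), "status": "queued"}
        return jsonify(id=run_id, **RUNS[run_id]), 201

    @app.get("/runs/<run_id>")  # GET /runs/<id>: read a run resource
    def get_run(run_id):
        run = RUNS.get(run_id)
        return (jsonify(run), 200) if run else (jsonify(error="not found"), 404)

    if __name__ == "__main__":
        app.run()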
Code of Federal Regulations, 2010 CFR
2010-07-01
... filed through the Office's web site, at http://www.uspto.gov. Paper documents and cover sheets to be... trademark documents can be ordered through the Office's web site at www.uspto.gov. Paper requests for...: Madrid Processing Unit, 600 Dulany Street, MDE-7B87, Alexandria, VA 22314-5793. [68 FR 48289, Aug. 13...
e-CDRweb User Guide – Secondary Authorized Official
This document presents the user guide for the Office of Pollution Prevention and Toxics' (OPPT) e-CDRweb tool. e-CDRweb is the electronic, web-based tool provided by the Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document is the user guide for the Secondary Authorized Official (AO) user of the e-CDRweb tool.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-26
... of guidance documents that the Center for Devices and Radiological Health (CDRH) is intending to... notice announces the Web site location of the two lists of guidance documents which CDRH is intending to... list. FDA and CDRH priorities are subject to change at any time. Topics on this and past guidance...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-04
... the Agency will post a list of guidance documents the Center for Devices and Radiological Health (CDRH... guidance documents that CDRH is considering for development and providing stakeholders an opportunity to.... This notice announces the Web site location of the list of guidances on which CDRH is intending to work...
Hart, Laura M; Jorm, Anthony F; Paxton, Susan J; Cvetkovski, Stefan
2012-11-01
Mental health first aid guidelines provide the public with consensus-based information about how to assist someone who is developing a mental illness or experiencing a mental health crisis. The aim of the current study was to evaluate the usefulness and impact of the guidelines on web users who download them. Web users who downloaded the documents were invited to respond to an initial demographic questionnaire, then a follow-up about how the documents had been used, their perceived usefulness, whether first-aid situations had been encountered, and whether these were influenced by the documents. Over 9.8 months, 706 web users responded to the initial questionnaire and 154 responded to the second. A majority reported downloading the document because their job involved contact with people with mental illness. Sixty-three web users reported providing first aid, 44 of whom reported that the person they were assisting had sought professional care as a result of their suggestion. Twenty-three web users reported seeking care themselves. A majority of those who provided first aid reported feeling that they had been successful in helping the person, that they had been able to assist in a way that was more knowledgeable, skilful and supportive, and that the guidelines had contributed to these outcomes. Information made freely available on the Internet, about how to provide mental health first aid to someone who is developing a mental health problem or experiencing a mental health crisis, is associated with more positive, empathic and successful helping behaviours. © 2012 Wiley Publishing Asia Pty Ltd.
ERIC Educational Resources Information Center
Sun, Yanyan; Gao, Fei
2014-01-01
Web annotation is a Web 2.0 technology that allows learners to work collaboratively on web pages or electronic documents. This study explored the use of Web annotation as an online discussion tool by comparing it to a traditional threaded discussion forum. Ten graduate students participated in the study. Participants had access to both a Web…
Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S
2007-01-01
Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453
SureChEMBL: a large-scale, chemically annotated patent document database.
Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P
2016-01-04
SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Object-Oriented Approach for 3d Archaeological Documentation
NASA Astrophysics Data System (ADS)
Valente, R.; Brumana, R.; Oreni, D.; Banfi, F.; Barazzetti, L.; Previtali, M.
2017-08-01
Documentation on archaeological fieldwork needs to be accurate and time-effective. Many features unveiled during excavations can be recorded just once, since the archaeological workflow physically removes most of the stratigraphic elements. Some of them have peculiar characteristics which make them hardly recognizable as objects and prevent a full 3D documentation. The paper presents a suitable feature-based method to carry out archaeological documentation with a three-dimensional approach, tested on the archaeological site of S. Calocero in Albenga (Italy). The method is based on structure-from-motion techniques for on-site recording and on 3D modelling to represent the three-dimensional complexity of stratigraphy. The entire documentation workflow is carried out through digital tools, assuring better accuracy and interoperability. Outputs can be used in GIS to perform spatial analysis; moreover, a more effective dissemination of fieldwork results can be assured by spreading the datasets and other information through web services.
Web-based routing assistance tool to reduce pavement damage by overweight and oversize vehicles.
DOT National Transportation Integrated Search
2016-10-30
This report documents the results of a completed project titled Web-Based Routing Assistance Tool to Reduce Pavement Damage by Overweight and Oversize Vehicles. The tasks involved developing a Web-based GIS routing assistance tool and evaluate ...
Information Retrieval and Graph Analysis Approaches for Book Recommendation.
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models, probabilistic (InL2, a Divergence from Randomness model) and language modeling, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach over a network of related documents built from social links. We call this network, constructed from documents and the social information provided by each of them, the Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrates that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments.
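One plausible reading of the approach (interpolate the two retrieval scores, then rerank using PageRank computed over the DGD) can be sketched as follows; the scores, interpolation weight, toy graph, and multiplicative fusion are assumptions, not the authors' exact formulation.

    # Sketch: interpolated retrieval scores reranked by PageRank over a document graph.
    import networkx as nx

    lm_scores = {"d1": 0.7, "d2": 0.5, "d3": 0.2}    # language-model scores (invented)
    inl2_scores = {"d1": 0.4, "d2": 0.6, "d3": 0.3}  # InL2 scores (invented)
    alpha = 0.6                                      # interpolation weight, tuned in practice

    combined = {d: alpha * lm_scores[d] + (1 - alpha) * inl2_scores[d] for d in lm_scores}

    DGD = nx.DiGraph([("d1", "d2"), ("d2", "d3"), ("d3", "d1"), ("d1", "d3")])
    pr = nx.pagerank(DGD)                            # authority derived from social links

    final = {d: combined[d] * pr[d] for d in combined}  # one simple fusion choice
    print(sorted(final, key=final.get, reverse=True))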
36 CFR 219.54 - Filing an objection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... or regulation. (2) Forest Service Directive System documents and land management plans or other... the objection process. (b) Including documents by reference is not allowed, except for the following... relevant section of the cited document. All other documents or Web links to those documents, or both must...
36 CFR 219.54 - Filing an objection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... or regulation. (2) Forest Service Directive System documents and land management plans or other... the objection process. (b) Including documents by reference is not allowed, except for the following... relevant section of the cited document. All other documents or Web links to those documents, or both must...
Araia, Makda H; Potter, Beth K
2011-09-01
The Internet is a potentially important medium for communication about public health programs including newborn screening. This study explores whether the information available on official newborn screening program websites is consistent with existing guidelines regarding educational content for parents. We conducted a systematic search of the public websites of newborn screening programs in the US and Canada, identifying web pages and downloadable brochures that contained educational information. Two researchers independently reviewed all documents to determine the extent to which they included 14 key recommended educational messages. We identified 85 documents containing educational information on 46 US and 6 Canadian newborn screening program websites. The documents contained from 1 to 14 of the recommended messages. The majority of identified materials emphasized the importance and benefits of screening. The differences between US and Canadian materials were related to the importance of parental involvement in follow-up and issues of consent and storage of blood spots. Our findings are consistent with studies of non-web-based newborn screening education materials. The results emphasize the need for further evaluation of newborn screening education, including internet-based resources, particularly in terms of the impact of particular messages on parental attitudes and behaviors.
Human rights abuses, transparency, impunity and the Web.
Miles, Steven H
2007-01-01
This paper reviews how human rights advocates during the "war-on-terror" have found new ways to use the World Wide Web (Web) to combat human rights abuses. These include posting of human rights reports; creating large, open-access and updated archives of government documents and other data, tracking CIA rendition flights and maintaining blogs, e-zines, list-serves and news services that rapidly distribute information between journalists, scholars and human rights advocates. The Web is a powerful communication tool for human rights advocates. It is international, instantaneous, and accessible for uploading, archiving, locating and downloading information. For its human rights potential to be fully realized, international law must be strengthened to promote the declassification of government documents, as is done by various freedom of information acts. It is too early to assess the final impact of the Web on human rights abuses in the "war-on-terror". Wide dissemination of government documents and human rights advocates' reports has put the United States government on the defensive and some of its policies have changed in response to public pressure. Even so, the essential elements of secret prisons, detention without charges or trials, and illegal rendition remain intact.
32 CFR 701.119 - Privacy and the web.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 5 2010-07-01 2010-07-01 false Privacy and the web. 701.119 Section 701.119... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON activities shall consult SECNAVINST 5720.47B for guidance on what may be posted on a Navy Web site. ...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
... representative, already holds an NRC- issued digital ID certificate). Based upon this information, the Secretary... online, Web-based submission form. In order to serve documents through EIE, users will be required to install a Web browser plug-in from the NRC Web site. Further information on the Web- based submission form...
Della Seta, Maurella; Sellitri, Cinzia
2004-01-01
The research project "Collection and dissemination of bioethical information through an integrated electronic system", started in 2001 by the Istituto Superiore di Sanità (ISS), had among its objectives, the realization of an integrated system for data collection and exchange of documents related to bioethics. The system should act as a reference tool for those research activities impacting on citizens' health and welfare. This paper aims at presenting some initiatives, developed in the project framework, in order to establish an Italian documentation network, among which: a) exchange of ISS publications with Italian institutions active in this field; b) survey through a questionnaire aimed at assessing Italian informative resources, state-of-the-art and holdings of documentation centres and ethical committees; c) Italian Internet resources analysis. The results of the survey, together with the analysis of web sites, show that at present in Italy there are many interesting initiatives for collecting and spreading of documentation in the bioethical fields, but there is an urgent need for an integration of such resources. Ethical committees generally speaking need a larger availability of documents, while there are good potentialities for the establishment of an electronic network for document retrieval and delivery.
Analysis of Technique to Extract Data from the Web for Improved Performance
NASA Astrophysics Data System (ADS)
Gupta, Neena; Singh, Manish
2010-11-01
The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost all the information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.
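The gist of ontology-guided labeling can be illustrated with a toy example; the mini-ontology and record below are invented and vastly simpler than OBDE.

    # Toy ontology-guided labeling of extracted values (illustrative only).
    ONTOLOGY = {
        "price": {"type": float, "synonyms": {"price", "cost", "amount"}},
        "title": {"type": str, "synonyms": {"title", "name"}},
    }

    def label_fields(raw_record):
        labeled = {}
        for key, value in raw_record.items():
            for concept, spec in ONTOLOGY.items():
                if key.lower() in spec["synonyms"] and isinstance(value, spec["type"]):
                    labeled[concept] = value
        return labeled

    print(label_fields({"name": "Used Laptop", "cost": 249.99}))
    # -> {'title': 'Used Laptop', 'price': 249.99}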
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2017-06-01
directed approaches. Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and web-facilitated relative... Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and individualized facilitated relative to self-directed... documents for review and approval. 1a. Finalize human subjects protocol and consent documents for pilot group (N=5 families), and randomized controlled
ERIC Educational Resources Information Center
Whang, Michael
2007-01-01
Measuring website success is critical not only to the web development process but also to demonstrate the value of library services to the institution. This article documents one library's approach to the measurement of website success. LibQUAL+[TM] results and strategic-planning documents indicated a need for a new type of measurement. The…
Documentation systems for educators seeking academic promotion in U.S. medical schools.
Simpson, Deborah; Hafler, Janet; Brown, Diane; Wilkerson, LuAnn
2004-08-01
To explore the state and use of teaching portfolios in promotion and tenure in U.S. medical schools. A two-phase qualitative study using a Web-based search procedure and telephone interviews was conducted. The first phase assessed the penetration of teaching portfolio-like systems in U.S. medical schools using a keyword search of medical school Web sites. The second phase examined the current use of teaching portfolios in 16 U.S. medical schools that reported their use in a survey in 1992. The individual designated as having primary responsibility for faculty appointments/promotions was contacted to participate in a 30-60 minute interview. The Phase 1 search of U.S. medical schools' Web sites revealed that 76 medical schools have Web-based access to information on documenting educational activities for promotion. A total of 16 of 17 medical schools responded to Phase 2. All 16 continued to use a portfolio-like system in 2003. Two documentation categories, honors/awards and philosophy/personal statement regarding education, were included by six more of these schools than used these categories in 1992. Dissemination of work to colleagues is now a key inclusion at 15 of the Phase 2 schools. The most common type of evidence used to document education was learner and/or peer ratings with infrequent use of outcome measures and internal/external review. The number of medical schools whose promotion packets include portfolio-like documentation associated with a faculty member's excellence in education has increased by more than 400% in just over ten years. Among early-responder schools the types of documentation categories have increased, but students' ratings of teaching remain the primary evidence used to document the quality or outcomes of the educational efforts reported.
BioRuby: bioinformatics software for the Ruby programming language.
Goto, Naohisa; Prins, Pjotr; Nakao, Mitsuteru; Bonnal, Raoul; Aerts, Jan; Katayama, Toshiaki
2010-10-15
The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it supports many widely used data formats and provides easy access to databases, external programs and public web services, including BLAST, KEGG, GenBank, MEDLINE and GO. BioRuby comes with a tutorial, documentation and an interactive environment, which can be used in the shell, and in the web browser. BioRuby is free and open source software, made available under the Ruby license. BioRuby runs on all platforms that support Ruby, including Linux, Mac OS X and Windows. And, with JRuby, BioRuby runs on the Java Virtual Machine. The source code is available from http://www.bioruby.org/. katayama@bioruby.org
A novel architecture for information retrieval system based on semantic web
NASA Astrophysics Data System (ADS)
Zhang, Hui
2011-12-01
Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats, suitable for presentation, but machines cannot understand the meaning of a document. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines. It provides new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when there is not enough knowledge in such an information retrieval system, the system will return a large number of meaningless results to users due to the huge amount of information available. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
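The routing idea (pose a query to the semantic engine only when the ontology recognizes its concepts, otherwise fall back to keyword search) might be sketched as follows, assuming rdflib and an invented two-line ontology.

    # Sketch: route a query to a semantic or keyword engine via an ontology check.
    from rdflib import Graph

    ontology = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:Semantic_Web a ex:Concept .
    """, format="turtle")

    def known_concept(term):
        q = ("ASK { ?s a <http://example.org/Concept> "
             "FILTER(CONTAINS(STR(?s), '%s')) }" % term)
        return bool(ontology.query(q).askAnswer)

    def route(query):
        return "semantic-engine" if known_concept(query.replace(" ", "_")) else "keyword-engine"

    print(route("Semantic Web"))  # -> semantic-engine
    print(route("cat videos"))    # -> keyword-engine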
Chemical markup, XML and the World-Wide Web. 3. Toward a signed semantic chemical web of trust.
Gkoutos, G V; Murray-Rust, P; Rzepa, H S; Wright, M
2001-01-01
We describe how a collection of documents expressed in XML-conforming languages such as CML and XHTML can be authenticated and validated against digital signatures which make use of established X.509 certificate technology. These can be associated either with specific nodes in the XML document or with the entire document. We illustrate this with two examples. An entire journal article expressed in XML has its individual components digitally signed by separate authors, and the collection is placed in an envelope and again signed. The second example involves using a software robot agent to acquire a collection of documents from a specified URL, to perform various operations and transformations on the content, including expressing molecules in CML, and to automatically sign the various components and deposit the result in a repository. We argue that these operations can be used as components for building what we term an authenticated and semantic chemical web of trust.
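The signing-and-verification step, reduced to a detached signature over a single document fragment, can be sketched with the Python cryptography package; real XML-DSig with X.509 certificates additionally involves canonicalization and certificate chains, which this sketch omits.

    # Sketch: detached signing and verification of a document fragment.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    fragment = b"<molecule id='m1'><formula>C6H6</formula></molecule>"

    signature = key.sign(fragment, padding.PKCS1v15(), hashes.SHA256())

    # verify() raises InvalidSignature if the fragment was altered.
    key.public_key().verify(signature, fragment, padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")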
Web application and database modeling of traffic impact analysis using Google Maps
NASA Astrophysics Data System (ADS)
Yulianto, Budi; Setiono
2017-06-01
Traffic impact analysis (TIA) is a traffic study that aims at identifying the impact of traffic generated by development or change in land use. In addition to identifying the traffic impact, TIA is also equipped with mitigation measures to minimize the arising traffic impact. TIA has become increasingly important since it was defined in the act as one of the requirements in the proposal for a Building Permit. The act has encouraged a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in the implementation of the TIA standard document and multimodal modeling. This includes TIA standardization for technical guidelines, a database, and inspection by providing TIA checklists, monitoring, and evaluation. The research was undertaken by collecting historical data on junctions, modeling the data as a relational database, and building a user interface for CRUD (Create, Read, Update and Delete) operations on the TIA data as a web application with Google Maps libraries. The result of the research is a system that provides information that helps make the improvement and revision of today's TIA documents more transparent, reliable, and credible.
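A toy version of the relational side (junction records that a Google Maps front end could plot and edit through CRUD operations) might look like the following; the table, columns, and sample row are invented.

    # Toy relational model for junction records behind a TIA tool (illustrative).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE junction (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        lat REAL, lon REAL,           -- for plotting markers on Google Maps
        peak_hour_volume INTEGER      -- historical traffic data
    )""")
    con.execute("INSERT INTO junction (name, lat, lon, peak_hour_volume) VALUES (?,?,?,?)",
                ("Jl. Slamet Riyadi x Jl. Gajah Mada", -7.567, 110.818, 3200))
    for row in con.execute("SELECT name, peak_hour_volume FROM junction"):
        print(row)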
78 FR 5838 - NRC Enforcement Policy
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-28
... submit comments by any of the following methods: Federal Rulemaking Web site: Go to http://www... of the following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search... the search, select ``ADAMS Public Documents'' and then select ``Begin Web-based ADAMS Search.'' For...
The Technical Work Plan Tracking Tool
NASA Technical Reports Server (NTRS)
Chullen, Cinda; Leighton, Adele; Weller, Richard A.; Woodfill, Jared; Parkman, William E.; Ellis, Glenn L.; Wilson, Marilyn M.
2003-01-01
The Technical Work Plan Tracking Tool is a web-based application that enables interactive communication and approval of contract requirements that pertain to the administration of the Science, Engineering, Analysis, and Test (SEAT) contract at Johnson Space Center. The implementation of the application has (1) shortened the Technical Work Plan approval process, (2) facilitated writing and documenting requirements in a performance-based environment with associated surveillance plans, (3) simplified the contractor's estimate of the cost for the required work, and (4) allowed for the contractor to document how they plan to accomplish the work. The application is accessible to over 300 designated NASA and contractor employees via two Web sites. For each employee, the application regulates access according to the employee's authority to enter, view, and/or print out diverse information, including reports, work plans, purchase orders, and financial data. Advanced features of this application include on-line approval capability, automatic e-mail notifications requesting review by subsequent approvers, and security inside and outside the firewall.
The Document Management Alliance.
ERIC Educational Resources Information Center
Fay, Chuck
1998-01-01
Describes the Document Management Alliance, a standards effort for document management systems that manages and tracks changes to electronic documents created and used by collaborative teams, provides secure access, and facilitates online information retrieval via the Internet and World Wide Web. Future directions are also discussed. (LRW)
[Analysis of the web pages of the intensive care units of Spain].
Navarro-Arnedo, J M
2009-01-01
In order to determine which intensive care units (ICU) of Spanish hospitals had a web site, to analyze the information they offered, and to establish what information they should offer according to a sample of ICU nurses, a cross-sectional, observational, descriptive study was carried out between January and September 2008. For each ICU website, an analysis was made of the information available on the unit and on its care, teaching and research activity, and on nursing. Simultaneously, based on a sample of intensive care nurses, the information that should be contained on an ICU website was determined. The results, expressed in absolute numbers and percentages, showed that 66 of the 292 hospitals with an ICU (22.6%) had a web site; 50.7% of the sites showed the number of beds, 19.7% the activity report, 11.3% the published articles/studies and the research lines followed, and 9.9% the training courses organized. Fourteen sites (19.7%) displayed images of nurses. However, only 1 (1.4%) offered guides on the procedures followed. No web site offered a navigation section for nursing, the e-mail address of the head of nursing, the nursing documentation used, or whether any nursing model of its own was followed. It is concluded that only one-fourth of the Spanish hospitals with an ICU have a web site; the number of beds was the data offered by most sites, whereas information on care, educational and research activities was very limited, and that on nursing was practically absent from the web pages of intensive care units.
Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.
2006-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.
NASA Astrophysics Data System (ADS)
Suzuki, Izumi; Mikami, Yoshiki; Ohsato, Ario
A technique is introduced that acquires documents in the same category as a given short text. Regarding the given text as a training document, the system marks up the most similar document, or sufficiently similar documents, from the document domain (or the entire Web). The system then adds the marked documents to the training set, learns from the set, and repeats this process until no more documents are marked. Imposing a monotone increasing property on the similarity as the system learns enables it to 1) detect the correct moment at which no more documents remain to be marked and 2) decide the threshold value that the classifier uses. In addition, under the condition that normalization is limited to dividing term weights by a p-norm of the weights, the linear classifier in which training documents are indexed in a binary manner is the only instance that satisfies the monotone increasing property. The feasibility of the proposed technique was confirmed through an examination of binary similarity, using English and German documents randomly selected from the Web.
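The acquisition loop described above (binary term vectors, weights divided by a p-norm, repeatedly marking the most similar unmarked document until none exceeds the threshold) can be sketched as follows; the corpus, vocabulary, tokenizer, and threshold are placeholders.

    # Sketch of the iterative mark-and-learn loop (illustrative parameters).
    import numpy as np

    def vectorize(text, vocab):
        tokens = text.lower().split()
        return np.array([1.0 if t in tokens else 0.0 for t in vocab])  # binary indexing

    def acquire(seed, corpus, vocab, threshold=0.5, p=2):
        training = [vectorize(seed, vocab)]
        marked = set()
        while True:
            w = np.sum(training, axis=0)
            w = w / np.linalg.norm(w, ord=p)            # divide weights by a p-norm
            best, best_sim = None, threshold
            for i, doc in enumerate(corpus):
                if i in marked:
                    continue
                sim = float(w @ vectorize(doc, vocab))  # linear classifier score
                if sim > best_sim:
                    best, best_sim = i, sim
            if best is None:                            # nothing similar enough: stop
                return [corpus[i] for i in marked]
            marked.add(best)
            training.append(vectorize(corpus[best], vocab))

    vocab = ["web", "crawler", "food", "recipe"]
    corpus = ["web crawler tutorial", "crawler design for the web", "pasta recipe"]
    print(acquire("building a web crawler", corpus, vocab))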
A web-based referral system for neurosurgery--a solution to our problems?
Choo, Melissa C; Thennakon, Shyamica; Shapey, Jonathan; Tolias, Christos M
2011-06-01
Accurate handover is very important in the running of all modern neurosurgical units. Referrals are notoriously difficult to track and review due to poor quality of written paper-based recorded information for handover (illegibility, incomplete paper trail, repetition of information and loss of patients). We have recently introduced a web-based referral system to three of our referring hospitals. To review the experience of a tertiary neurosurgical unit in using the UK's first real time online referral system and to discuss its strengths and weaknesses in comparison to the currently used written paper-based referral system. A retrospective analysis of all paper-based referrals made to our unit in March 2009, compared to 14 months' referrals through the web system. Patterns of information recorded in both systems were investigated and advantages and disadvantages of each identified. One hundred ninety-six patients were referred using the online system, 483 using the traditional method. Significant problems of illegibility and missing information were identified with the paper-based referrals. In comparison, 100% documentation was achieved with the online referral system. Only 63% penetrance in the best performing trust was found using the online system, with significant delays in responding to referrals. Traditional written paper-based referrals do not provide an acceptable level of documentation. We present our experience and difficulties implementing a web-based system to address this. Although our data are unable to show improved patient care, we believe the potential benefits of a fully integrated system may offer a solution.
12 CFR 611.1216 - Public availability of documents related to the termination.
Code of Federal Regulations, 2012 CFR
2012-01-01
... the termination. (a) We may post on our Web site, or require you to post on your Web site: (1) Results... related transactions. (b) We will not post confidential information on our Web site and will not require you to post it on your Web site. (c) You may request that we treat specific information as...
12 CFR 611.1216 - Public availability of documents related to the termination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... the termination. (a) We may post on our Web site, or require you to post on your Web site: (1) Results... related transactions. (b) We will not post confidential information on our Web site and will not require you to post it on your Web site. (c) You may request that we treat specific information as...
12 CFR 611.1216 - Public availability of documents related to the termination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... the termination. (a) We may post on our Web site, or require you to post on your Web site: (1) Results... related transactions. (b) We will not post confidential information on our Web site and will not require you to post it on your Web site. (c) You may request that we treat specific information as...
12 CFR 611.1216 - Public availability of documents related to the termination.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the termination. (a) We may post on our Web site, or require you to post on your Web site: (1) Results... related transactions. (b) We will not post confidential information on our Web site and will not require you to post it on your Web site. (c) You may request that we treat specific information as...
XML Content Finally Arrives on the Web!
ERIC Educational Resources Information Center
Funke, Susan
1998-01-01
Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find documents such as PDF, Word, PowerPoint, Excel, or other non-HTML pages is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
Health search engine with e-document analysis for reliable search results.
Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine
2006-01-01
After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions, and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access for navigating through multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, better explanation of its original functionalities, and adaptability to different audiences. Thanks to the evaluation outcomes, WRAPIN is now in production on the HON web site (http://www.healthonnet.org), free of charge. Intended for citizens, it is a good alternative to general-purpose search engines when the user is looking for trustworthy health and medical information or wants to automatically check the doubtful content of a Web page.
Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.
2011-01-01
The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.
WikiHyperGlossary (WHG): an information literacy technology for chemistry documents.
Bauer, Michael A; Berleant, Daniel; Cornell, Andrew P; Belford, Robert E
2015-01-01
The WikiHyperGlossary is an information literacy technology that was created to enhance reading comprehension of documents by connecting them to socially generated multimedia definitions as well as semantically relevant data. The WikiHyperGlossary enhances reading comprehension by using the lexicon of a discipline to generate dynamic links in a document to external resources that can provide implicit information the document did not explicitly provide. Currently, the most common way to acquire additional information when reading a document is to access a search engine and browse the web. This may lead to skimming of multiple documents, with the novice never actually returning to the original document of interest. The WikiHyperGlossary automatically brings information to users within the document they are currently reading, enhancing the potential for deeper document understanding. The WikiHyperGlossary allows users to submit a web URL or text to be processed against a chosen lexicon, returning the document with tagged terms. Selecting a tagged term brings up the WikiHyperGlossary Portlet containing a definition and, depending on the type of word, tabs to additional information and resources. Current types of content include multimedia-enhanced definitions, ChemSpider query results, 3D molecular structures, and 2D editable structures connected to ChemSpider queries. Existing glossaries can be bulk uploaded, locked for editing, and associated with multiple socially generated definitions. The WikiHyperGlossary leverages both social and semantic web technologies to bring relevant information to a document. This not only aids reading comprehension, but also increases users' ability to obtain additional information within the document. We have demonstrated a molecular-editor-enabled knowledge framework that can result in a semantic web inductive reasoning process, and the integration of the WikiHyperGlossary into other software technologies, like the Jikitou Biomedical Question and Answer system. Although this work was developed in the chemical sciences and took advantage of open science resources and initiatives, the technology is extensible to other knowledge domains. Through the DeepLit (Deeper Literacy: Connecting Documents to Data and Discourse) startup, we seek to extend WikiHyperGlossary technologies to other knowledge domains and integrate them into other knowledge acquisition workflows.
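The core mechanism, processing text against a lexicon and returning it with tagged terms, can be sketched in a few lines. The lexicon entries, URLs, and markup below are invented for illustration; the real system produces richer portlet-enabled tags and handles multi-word terms, stemming, and document parsing.

```python
# Minimal sketch of lexicon-driven term tagging in the spirit of the
# WikiHyperGlossary (hypothetical markup and glossary URLs).
import html
import re

LEXICON = {  # term -> definition URL (illustrative entries only)
    "benzene": "https://glossary.example.org/benzene",
    "titration": "https://glossary.example.org/titration",
}

def tag_terms(text, lexicon):
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, lexicon)) + r")\b",
                         re.IGNORECASE)
    def wrap(match):
        url = lexicon[match.group(1).lower()]
        return '<a class="whg-term" href="%s">%s</a>' % (
            url, html.escape(match.group(0)))
    return pattern.sub(wrap, text)

print(tag_terms("Benzene is analyzed before titration.", LEXICON))
```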
39 CFR 3001.12 - Service of documents.
Code of Federal Regulations, 2010 CFR
2010-07-01
... or presiding officer has determined is unable to receive service through the Commission's Web site... presiding officer has determined is unable to receive service through the Commission Web site shall be by... service list for each current proceeding will be available on the Commission's Web site http://www.prc.gov...
32 CFR 701.119 - Privacy and the web.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 5 2013-07-01 2013-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...
32 CFR 701.119 - Privacy and the web.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 5 2011-07-01 2011-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...
32 CFR 701.119 - Privacy and the web.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 5 2012-07-01 2012-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...
32 CFR 701.119 - Privacy and the web.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 5 2014-07-01 2014-07-01 false Privacy and the web. 701.119 Section 701.119 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY UNITED STATES NAVY REGULATIONS... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.119 Privacy and the web. DON...
Avoiding Pornography Landmines while Traveling the Information Superhighway.
ERIC Educational Resources Information Center
Lehmann, Kay
2002-01-01
Discusses how to avoid pornographic sites when using the Internet in classrooms. Highlights include re-setting the Internet home page; putting appropriate links in a Word document; creating a Web page with appropriate links; downloading the content of a Web site; educating the students; and re-checking all Web addresses. (LRW)
ICCE/ICCAI 2000 Full & Short Papers (Web-Based Learning).
ERIC Educational Resources Information Center
2000
This document contains full and short papers on World Wide Web-based learning from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction). Topics covered include: design and development of CAL (Computer Assisted Learning) systems; design and development of WBI (Web-Based…
Social Networking on the Semantic Web
ERIC Educational Resources Information Center
Finin, Tim; Ding, Li; Zhou, Lina; Joshi, Anupam
2005-01-01
Purpose: Aims to investigate the way that the semantic web is being used to represent and process social network information. Design/methodology/approach: The Swoogle semantic web search engine was used to construct several large data sets of Resource Description Framework (RDF) documents with social network information that were encoded using the…
What Can Pictures Tell Us About Web Pages? Improving Document Search Using Images.
Rodriguez-Vaamonde, Sergio; Torresani, Lorenzo; Fitzgibbon, Andrew W
2015-06-01
Traditional Web search engines do not use the images in the HTML pages to find relevant documents for a given query. Instead, they typically operate by computing a measure of agreement between the keywords provided by the user and only the text portion of each page. In this paper we study whether the content of the pictures appearing in a Web page can be used to enrich the semantic description of an HTML document and consequently boost the performance of a keyword-based search engine. We present a Web-scalable system that exploits a pure text-based search engine to find an initial set of candidate documents for a given query. Then, the candidate set is reranked using visual information extracted from the images contained in the pages. The resulting system retains the computational efficiency of traditional text-based search engines with only a small additional storage cost needed to encode the visual information. We test our approach on one of the TREC Million Query Track benchmarks where we show that the exploitation of visual content yields improvement in accuracies for two distinct text-based search engines, including the system with the best reported performance on this benchmark. We further validate our approach by collecting document relevance judgements on our search results using Amazon Mechanical Turk. The results of this experiment confirm the improvement in accuracy produced by our image-based reranker over a pure text-based system.
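The two-stage ranking described here is easy to sketch. The blending weight and the scores below are stand-ins for the paper's learned text and visual models; only the candidate-then-rerank pattern is illustrated.

```python
# Sketch of two-stage retrieval: a text engine proposes candidates, then
# image-derived scores rerank them (toy scores, not the paper's features).
def rerank(candidates, text_scores, visual_scores, alpha=0.7):
    """Blend text and visual relevance; alpha weights the text engine."""
    combined = {d: alpha * text_scores[d] + (1 - alpha) * visual_scores[d]
                for d in candidates}
    return sorted(candidates, key=lambda d: combined[d], reverse=True)

docs = ["pageA", "pageB", "pageC"]
text = {"pageA": 0.9, "pageB": 0.8, "pageC": 0.4}
visual = {"pageA": 0.2, "pageB": 0.9, "pageC": 0.8}
print(rerank(docs, text, visual))  # pageB overtakes pageA after reranking
```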
Spieler, Bernadette; Burgsteiner, Harald; Messer-Misak, Karin; Gödl-Purrer, Barbara; Salchinger, Beate
2015-01-01
Findings in physiotherapy have standardized approaches in treatment, but there is also a significant margin of differences in how to implement these standards. Clinical decisions require experience and continuous learning processes to consolidate personal values and opinions and studies suggest that lecturers can influence students positively. Recently, the study course of Physiotherapy at the University of Applied Science in Graz has offered a paper based finding document. This document supported decisions through the adaption of the clinical reasoning process. The document was the starting point for our learning application called "EasyAssess", a Java based web-application for a digital findings documentation. A central point of our work was to ensure efficiency, effectiveness and usability of the web-application through usability tests utilized by both students and lecturers. Results show that our application fulfills the previously defined requirements and can be efficiently used in daily routine largely because of its simple user interface and its modest design. Due to the close cooperation with the study course Physiotherapy, the application has incorporated the various needs of the target audiences and confirmed the usefulness of our application.
Biotool2Web: creating simple Web interfaces for bioinformatics applications.
Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg
2006-01-01
Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at http://www.uni-muenster.de/Bioinformatics/services/biotool2web/. Contact: Georg Fuellen (fuellen@alum.mit.edu).
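The idea of generating a web interface from an XML parameter description can be sketched briefly. The XML layout below is invented for illustration (and the sketch uses Python rather than the tool's Perl); the actual Biotool2Web schema and generated CGI files differ.

```python
# Sketch of the Biotool2Web idea: read an XML description of a tool's
# parameters and emit an HTML form for it (hypothetical XML layout).
import xml.etree.ElementTree as ET

SPEC = """
<tool name="gc_content" action="/cgi-bin/gc_content.cgi">
  <param name="sequence" label="DNA sequence" type="textarea"/>
  <param name="window" label="Window size" type="text" default="100"/>
</tool>
"""

def xml_to_form(spec):
    tool = ET.fromstring(spec)
    rows = []
    for p in tool.findall("param"):
        if p.get("type") == "textarea":
            field = '<textarea name="%s"></textarea>' % p.get("name")
        else:
            field = '<input name="%s" value="%s"/>' % (
                p.get("name"), p.get("default", ""))
        rows.append("<label>%s</label> %s<br/>" % (p.get("label"), field))
    return '<form method="post" action="%s">\n%s\n<input type="submit"/>\n</form>' % (
        tool.get("action"), "\n".join(rows))

print(xml_to_form(SPEC))
```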
BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.
Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron
2009-06-01
BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing using Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license, along with additional documentation and a tutorial, from http://bioinf.nuigalway.ie.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-08-21
NREL's Developer Network, developer.nrel.gov, provides data that users can access and feed into their own analyses and mobile and web applications. Developers retrieve the data through a Web services API (application programming interface). The Developer Network handles the overhead of serving web services, such as key management, authentication, analytics, reporting, documentation standards, and throttling, in a common architecture, while allowing web services and APIs to be maintained and managed independently.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
...] Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal Year 2014... and Drug Administration (FDA or the Agency) is announcing the Web site location where the Agency will... documents, FDA has committed to updating its Web site in a timely manner to reflect the Agency's review of...
A Query Integrator and Manager for the Query Web
Brinkley, James F.; Detwiler, Landon T.
2012-01-01
We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
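The chaining condition, that any saved query returning XML can feed another query, is simple to sketch. The QI installation URLs and element names below are hypothetical; only the pattern of piping one query's XML results into the next is illustrated.

```python
# Sketch of Query Web chaining: the output of one saved query (XML) becomes
# the input of another (hypothetical endpoints, not QI's actual API).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def run_query(url, **params):
    with urllib.request.urlopen(url + "?" + urllib.parse.urlencode(params)) as r:
        return ET.parse(r).getroot()

# Query 1: an ontology web service returning term identifiers as XML.
terms = run_query("https://qi.example.org/queries/brain_terms", region="parietal")

# Query 2: chains off query 1 -- its input is query 1's XML result.
ids = ",".join(t.text for t in terms.iter("termId"))
images = run_query("https://qi.example.org/queries/images_for_terms", terms=ids)
print(ET.tostring(images, encoding="unicode"))
```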
Web Application Software for Ground Operations Planning Database (GOPDb) Management
NASA Technical Reports Server (NTRS)
Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey
2013-01-01
A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.
Borkowski, A; Lee, D H; Sydnor, D L; Johnson, R J; Rabinovitch, A; Moore, G W
2001-01-01
The Pathology and Laboratory Medicine Service of the Veterans Affairs Maryland Health Care System is inspected biannually by the College of American Pathologists (CAP). As of the year 2000, all documentation in the Anatomic Pathology Section is available to all staff through the VA Intranet. Signed, supporting paper documents are on file in the office of the department chair. For the year 2000 CAP inspection, inspectors conducted their document review by use of these Web-based documents, in which each CAP question had a hyperlink to the corresponding section of the procedure manual. Thus inspectors were able to locate the documents relevant to each question quickly and efficiently. The procedure manuals consist of 87 procedures for surgical pathology, 52 procedures for cytopathology, and 25 procedures for autopsy pathology. Each CAP question requiring documentation had from one to three hyperlinks to the corresponding section of the procedure manual. Intranet documentation allows for easier sharing among decentralized institutions and for centralized updates of the laboratory documentation. These documents can be upgraded to allow for multimedia presentations, including text search for key words, hyperlinks to other documents, and images, audio, and video. Use of Web-based documents can improve the efficiency of the inspection process.
ADASS Web Database XML Project
NASA Astrophysics Data System (ADS)
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
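The relational-to-XML mapping mentioned at the end is easy to sketch. The table layout and element names below are illustrative only (with sqlite3 standing in for MySQL); the actual ADASS schema differs.

```python
# Sketch of mapping relational conference data to XML.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (year INT, author TEXT, title TEXT)")
conn.execute("INSERT INTO papers VALUES (2000, 'Barg, M. I.', "
             "'ADASS Web Database XML Project')")

root = ET.Element("adass")
for year, author, title in conn.execute("SELECT year, author, title FROM papers"):
    paper = ET.SubElement(root, "paper", year=str(year))
    ET.SubElement(paper, "author").text = author
    ET.SubElement(paper, "title").text = title

print(ET.tostring(root, encoding="unicode"))
```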
deepTools: a flexible platform for exploring deep-sequencing data.
Ramírez, Fidel; Dündar, Friederike; Diehl, Sarah; Grüning, Björn A; Manke, Thomas
2014-07-01
We present a Galaxy-based web server for processing and visualizing deeply sequenced data. The web server's core functionality consists of a suite of newly developed tools, called deepTools, that enable users with little bioinformatic background to explore the results of their sequencing experiments in a standardized setting. Users can upload pre-processed files with continuous data in standard formats and generate heatmaps and summary plots in a straightforward, yet highly customizable manner. In addition, we offer several tools for the analysis of files containing aligned reads and enable efficient and reproducible generation of normalized coverage files. As a modular and open-source platform, deepTools can easily be expanded and customized to future demands and developments. The deepTools web server is freely available at http://deeptools.ie-freiburg.mpg.de and is accompanied by extensive documentation and tutorials aimed at conveying the principles of deep-sequencing data analysis. The web server can be used without registration. deepTools can be installed locally either stand-alone or as part of Galaxy. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
GeneXplorer: an interactive web application for microarray data visualization and analysis.
Rees, Christian A; Demeter, Janos; Matese, John C; Botstein, David; Sherlock, Gavin
2004-10-01
When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to figures within the paper, can be browsed. We set out to create a CGI application containing many of the features of some of the existing standalone software for the visualization of clustered microarray data. We present GeneXplorer, a web application for interactive microarray data visualization and analysis in a web environment. GeneXplorer allows users to browse a microarray dataset in an intuitive fashion. It provides simple access to microarray data over the Internet and uses only HTML and JavaScript to display graphic and annotation information. It provides radar and zoom views of the data, allows display of the nearest neighbors to a gene expression vector based on their Pearson correlations and provides the ability to search gene annotation fields. The software is released under the permissive MIT Open Source license, and the complete documentation and the entire source code are freely available for download from CPAN http://search.cpan.org/dist/Microarray-GeneXplorer/.
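The nearest-neighbour display is driven by Pearson correlation between expression vectors, which the self-contained sketch below computes on toy data; the real system precomputes correlations over the clustered dataset rather than scoring on the fly.

```python
# Sketch of GeneXplorer-style neighbour lookup: rank genes by Pearson
# correlation of their expression vectors (pure Python, toy data).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def nearest_neighbors(target, profiles, k=2):
    """profiles: {gene: expression vector}; returns the k most correlated."""
    scores = [(pearson(profiles[target], v), g)
              for g, v in profiles.items() if g != target]
    return sorted(scores, reverse=True)[:k]

profiles = {"geneA": [1.0, 2.0, 3.0], "geneB": [1.1, 2.1, 2.9],
            "geneC": [3.0, 1.0, 0.5]}
print(nearest_neighbors("geneA", profiles))  # geneB correlates most strongly
```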
HTML5 PivotViewer: high-throughput visualization and querying of image data on the web
Taylor, Stephen; Noble, Roger
2014-01-01
Motivation: Visualization and analysis of large numbers of biological images has generated a bottleneck in research. We present HTML5 PivotViewer, a novel, open source, platform-independent viewer making use of the latest web technologies that allows seamless access to images and associated metadata for each image. This provides a powerful method to allow end users to mine their data. Availability and implementation: Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. Contact: stephen.taylor@imm.ox.ac.uk and roger@coritsu.com PMID:24849578
NASA Astrophysics Data System (ADS)
Rahman, Fuad; Tarnikova, Yuliya; Hartono, Rachmat; Alam, Hassan
2006-01-01
This paper presents a novel automatic web publishing solution, PageView (R). PageView (R) is a complete working solution for document processing and management. The principal aim of this tool is to allow workgroups to share, access, and publish documents on-line on a regular basis. Consider, for example, a person working on some documents: the user will, in some fashion, organize the work either in a local directory or on a shared network drive. Now extend that concept to a workgroup. Within a workgroup, several users work together on documents and save them in a directory structure somewhere on a document repository. The next stage of this reasoning is a workgroup that wants to publish its documents routinely on-line. The members may be using different editing tools, different software, and different graphics tools, so the resulting documents may be in PDF, Microsoft Office (R), HTML, or WordPerfect format, to name a few. In general, this process requires the documents to be converted to HTML, after which a web designer must work on the collection to make it available on-line. PageView (R) takes care of this whole process automatically, making the document workflow clean and easy to follow. The PageView (R) Server publishes documents, complete with the directory structure, for online use. The documents are automatically converted to HTML and PDF so that users can view the content without downloading the original files or having to download browser plug-ins. Once published, other users can access the documents as if they were accessing them from their local folders. The paper describes the complete working system and discusses possible applications within document management research.
Web-based document image processing
NASA Astrophysics Data System (ADS)
Walker, Frank L.; Thoma, George R.
1999-12-01
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R&D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R&D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
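As a rough illustration of the kind of server-side conversion DocMorph offers, the sketch below turns a multi-page scanned TIFF into a PDF using the Pillow library; DocMorph's own conversion pipeline is not public and surely differs, and the file names here are placeholders.

```python
# Sketch: convert a multi-page scanned TIFF to PDF with Pillow.
from PIL import Image, ImageSequence

def tiff_to_pdf(tiff_path, pdf_path):
    with Image.open(tiff_path) as im:
        pages = [page.convert("RGB") for page in ImageSequence.Iterator(im)]
    # Pillow writes a multi-page PDF when given append_images + save_all.
    pages[0].save(pdf_path, save_all=True, append_images=pages[1:])

tiff_to_pdf("scanned_article.tif", "scanned_article.pdf")  # placeholder names
```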
The Next Linear Collider Program
Next Linear Collider program documentation has been posted to the new SLAC ILC web site, http://www-project.slac.stanford.edu/ilc/. The NLC web site will remain accessible as an archive of important work done on the many NLC systems.
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
Google Wave: Collaboration Reworked
ERIC Educational Resources Information Center
Rethlefsen, Melissa L.
2010-01-01
Over the past several years, Internet users have become accustomed to Web 2.0 and cloud computing-style applications. It's commonplace and even intuitive to drag and drop gadgets on personalized start pages, to comment on a Facebook post without reloading the page, and to compose and save documents through a web browser. The web paradigm has…
77 FR 36583 - NRC Form 5, Occupational Dose Record for a Monitoring Period
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-19
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2012... following methods: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search for Docket ID... begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search...
BioCatalogue: a universal catalogue of web services for the life sciences
Bhagat, Jiten; Tanoh, Franck; Nzuobontane, Eric; Laurent, Thomas; Orlowski, Jerzy; Roos, Marco; Wolstencroft, Katy; Aleksejevs, Sergejs; Stevens, Robert; Pettifer, Steve; Lopez, Rodrigo; Goble, Carole A.
2010-01-01
The use of Web Services to enable programmatic access to on-line bioinformatics is becoming increasingly important in the Life Sciences. However, their number, distribution and the variable quality of their documentation can make their discovery and subsequent use difficult. A Web Services registry with information on available services will help to bring together service providers and their users. The BioCatalogue (http://www.biocatalogue.org/) provides a common interface for registering, browsing and annotating Web Services to the Life Science community. Services in the BioCatalogue can be described and searched in multiple ways based upon their technical types, bioinformatics categories, user tags, service providers or data inputs and outputs. They are also subject to constant monitoring, allowing the identification of service problems and changes and the filtering-out of unavailable or unreliable resources. The system is accessible via a human-readable ‘Web 2.0’-style interface and a programmatic Web Service interface. The BioCatalogue follows a community approach in which all services can be registered, browsed and incrementally documented with annotations by any member of the scientific community. PMID:20484378
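Programmatic discovery through the registry's Web Service interface can be sketched as follows. The endpoint path, query parameter, and element names below are assumptions for illustration, not documented BioCatalogue API calls; consult the registry's own API documentation before relying on them.

```python
# Sketch of service discovery against a BioCatalogue-style REST interface
# (assumed endpoint and response layout -- inspect the real API to adapt).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def find_services(query):
    url = ("http://www.biocatalogue.org/services.xml?"
           + urllib.parse.urlencode({"q": query}))
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()
    # Assumed element names for each registered service entry.
    return [(s.findtext("name"), s.findtext("description"))
            for s in root.iter("service")]

for name, desc in find_services("blast"):
    print(name, "-", (desc or "")[:60])
```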
A new Information Architecture, Website and Services for the CMS Experiment
NASA Astrophysics Data System (ADS)
Taylor, Lucas; Rusack, Eleanor; Zemleris, Vidmantas
2012-12-01
The age and size of the CMS collaboration at the LHC means it now has many hundreds of inhomogeneous web sites and services, and hundreds of thousands of documents. We describe a major initiative to create a single coherent CMS internal and public web site. This uses the Drupal web Content Management System (now supported by CERN/IT) on top of a standard LAMP stack (Linux, Apache, MySQL, and php/perl). The new navigation, content and search services are coherently integrated with numerous existing CERN services (CDS, EDMS, Indico, phonebook, Twiki) as well as many CMS internal Web services. We describe the information architecture; the system design, implementation and monitoring; the document and content database; security aspects; and our deployment strategy, which ensured continual smooth operation of all systems at all times.
Crossroads 2000 proceedings [table of contents hyperlinked to documents
DOT National Transportation Integrated Search
1998-08-19
This document's table of contents hyperlinks to the 76 papers presented at the Crossroads 2000 Conference. The documents are housed at the web site for Iowa State University Center for Transportation Research and Education. A selection of 14 individu...
Supporting online learning with games
NASA Astrophysics Data System (ADS)
Yao, JingTao; Kim, DongWon; Herbert, Joseph P.
2007-04-01
This paper presents a study of a Web-based learning support system enhanced with two major subsystems: a Web-based learning game and a learning-oriented Web search. The Internet and the Web may be considered a first resource for students seeking information and help. However, much of the information available online is unrelated to the course contents or, in the worst case, wrong. The search subsystem aims to provide students with precise, relevant, and adaptable documents about particular courses or classes, so that students do not have to spend time verifying that documents relate to the class. The learning game subsystem stimulates students to study and enables them to review their studies and perform self-evaluation through a Web-based learning game, such as a treasure hunt. Through this challenging and entertaining learning and evaluation process, it is hoped that students will eventually understand and master the course concepts easily. The goal of developing such a system is to provide students with an efficient and effective learning environment.
GISMO: A MATLAB toolbox for seismic research, monitoring, & education
NASA Astrophysics Data System (ADS)
Thompson, G.; Reyes, C. G.; Kempler, L. A.
2017-12-01
GISMO is an open-source MATLAB toolbox which provides an object-oriented framework to build workflows and applications that read, process, visualize, and write seismic waveform, catalog, and instrument response data. GISMO can retrieve data from a variety of sources (e.g., FDSN web services, Earthworm/Winston servers) and data formats (SAC, Seisan, etc.). It can handle waveform data that crosses file boundaries. All this alleviates one of the most time-consuming tasks for scientists developing their own codes. GISMO simplifies seismic data analysis by providing a common interface for your data, regardless of its source. Several common plots are built into GISMO, such as record section plots, spectrograms, depth-time sections, event counts per unit time, energy release per unit time, etc. Other visualizations include map views and cross-sections of hypocentral data. Several common processing methods are also included, such as an extensive set of tools for correlation analysis. Support is being added to interface GISMO with ObsPy. GISMO encourages community development of an integrated set of codes and accompanying documentation, eliminating the need for seismologists to "reinvent the wheel". Sharing code also enhances the consistency and repeatability of results. GISMO is hosted on GitHub, with documentation both within the source code and in the project wiki. GISMO has been used at the University of South Florida and the University of Alaska Fairbanks in graduate-level courses including Seismic Data Analysis, Time Series Analysis, and Computational Seismology. GISMO has also been tailored to interface with the common seismic monitoring software and data formats used by volcano observatories in the US and elsewhere. As an example, toolbox training was delivered to researchers at INETER (Nicaragua). Applications built on GISMO include IceWeb (e.g., web-based spectrograms), which has been used by the Alaska Volcano Observatory since 1998 and became the prototype for the USGS Pensive system.
Guardia, Gabriela D A; Ferreira Pires, Luís; da Silva, Eduardo G; de Farias, Cléver R G
2017-02-01
Gene expression studies often require the combined use of a number of analysis tools. However, manual integration of analysis tools can be cumbersome and error prone. To support a higher level of automation in the integration process, efforts have been made in the biomedical domain towards the development of semantic web services and supporting composition environments. Yet, most environments consider only the execution of simple service behaviours and require users to focus on technical details of the composition process. We propose a novel approach to the semantic composition of gene expression analysis services that addresses the shortcomings of the existing solutions. Our approach includes an architecture designed to support the service composition process for gene expression analysis, and a flexible strategy for the (semi-)automatic composition of semantic web services. Finally, we implement a supporting platform called SemanticSCo to realize the proposed composition approach and demonstrate its functionality by successfully reproducing a microarray study documented in the literature. The SemanticSCo platform provides support for the composition of RESTful web services semantically annotated using SAWSDL. Our platform also supports the definition of constraints/conditions regarding the order in which service operations should be invoked, thus enabling the definition of complex service behaviours. Our proposed solution for semantic web service composition takes into account the requirements of different stakeholders and addresses all phases of the service composition process. It also provides support for the definition of analysis workflows at a high level of abstraction, thus enabling users to focus on biological research issues rather than on the technical details of the composition process. The SemanticSCo source code is available at https://github.com/usplssb/SemanticSCo. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
David, Peter; Hansen, Nichole; Nolan, James J.; Alcocer, Pedro
2015-05-01
The growth in text data available online is accompanied by a growth in the diversity of available documents. Corpora with extreme heterogeneity in file format, document organization, page layout, text style, and content are common. The absence of meaningful metadata describing the structure of online and open-source data leads to text extraction results that contain no information about document structure and are cluttered with page headers and footers, web navigation controls, advertisements, and other items that are typically considered noise. We describe an approach to document structure and metadata recovery that uses visual analysis of documents to infer the communicative intent of the author. Our algorithm identifies the components of documents, such as titles, headings, and body content, based on their appearance. Because it operates on an image of a document, our technique can be applied to any type of document, including scanned images. Our approach to document structure recovery considers a finer-grained set of component types than prior approaches. In this initial work, we show that a machine learning approach to document structure recovery, using a feature set based on the geometry and appearance of images of documents, achieves a 60% greater F1-score than a baseline random classifier.
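To make the learning setup concrete, here is a toy version of the classification step: page blocks are described by geometric and appearance features and labeled by component type. The feature set, labels, and training rows below are invented stand-ins for the paper's richer features and data.

```python
# Sketch of block classification from geometry/appearance features
# (illustrative features and labels, not the paper's actual feature set).
from sklearn.ensemble import RandomForestClassifier

# Features per block: [y_position, height, width, mean_font_size, is_bold]
X = [
    [0.05, 0.04, 0.8, 24.0, 1],   # large bold block near top  -> title
    [0.20, 0.02, 0.5, 14.0, 1],   # medium bold block          -> heading
    [0.30, 0.30, 0.9, 10.0, 0],   # tall plain block           -> body
    [0.70, 0.25, 0.9, 10.0, 0],
]
y = ["title", "heading", "body", "body"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.22, 0.02, 0.4, 13.0, 1]]))  # likely "heading"
```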
The CloudBoard Research Platform: an interactive whiteboard for corporate users
NASA Astrophysics Data System (ADS)
Barrus, John; Schwartz, Edward L.
2013-03-01
Over one million interactive whiteboards (IWBs) are sold annually worldwide, predominantly for classroom use with few sales for corporate use. Unmet needs for IWB corporate use were investigated and the CloudBoard Research Platform (CBRP) was developed to investigate and test technology for meeting these needs. The CBRP supports audio conferencing with shared remote drawing activity, casual capture of whiteboard activity for long-term storage and retrieval, use of standard formats such as PDF for easy import of documents via the web and email and easy export of documents. Company RFID badges and key fobs provide secure access to documents at the board and automatic logout occurs after a period of inactivity. Users manage their documents with a web browser. Analytics and remote device management is provided for administrators. The IWB hardware consists of off-the-shelf components (a Hitachi UST Projector, SMART Technologies, Inc. IWB hardware, Mac Mini, Polycom speakerphone, etc.) and a custom occupancy sensor. The three back-end servers provide the web interface, document storage, stroke and audio streaming. Ease of use, security, and robustness sufficient for internal adoption was achieved. Five of the 10 boards installed at various Ricoh sites have been in daily or weekly use for the past year and total system downtime was less than an hour in 2012. Since CBRP was installed, 65 registered users, 9 of whom use the system regularly, have created over 2600 documents.
PC-based web authoring: How to learn as little unix as possible while getting on the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennari, L.T.; Breaux, M.; Minton, S.
1996-09-01
This document is a general guide for creating Web pages, using commonly available word processing and file transfer applications. It is not a full guide to HTML, nor does it provide an introduction to the many WYSIWYG HTML editors available. The viability of the authoring method it describes will not be affected by changes in the HTML specification or the rapid release-and-obsolescence cycles of commercial WYSIWYG HTML editors. This document provides a gentle introduction to HTML for the beginner, and as the user gains confidence and experience, encourages greater familiarity with HTML through continued exposure to and hands-on usage of HTML code.
Mac-based Web authoring: How to learn as little Unix as possible while getting on the Web.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gennari, L.T.
1996-06-01
This document is a general guide for creating Web pages, using commonly available word processing and file transfer applications. It is not a full guide to HTML, nor does it provide an introduction to the many WYSIWYG HTML editors available. The viability of the authoring method it describes will not be affected by changes in the HTML specification or the rapid release-and-obsolescence cycles of commercial WYSIWYG HTML editors. This document provides a gentle introduction to HTML for the beginner, and as the user gains confidence and experience, encourages greater familiarity with HTML through continued exposure to and hands-on usage of HTML code.
Semantic Document Model to Enhance Data and Knowledge Interoperability
NASA Astrophysics Data System (ADS)
Nešić, Saša
To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents as uniquely identified and semantically annotated composite resources, that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.
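A tiny RDF sketch can make the machine-processable instance concrete. The namespace and property names below are placeholders, not the actual SDM ontology, and the rdflib library stands in for whatever RDF store the system uses.

```python
# Sketch of a machine-processable semantic-document instance in RDF:
# a document resource, one content unit, and a semantic annotation.
from rdflib import Graph, Literal, Namespace, URIRef

SDM = Namespace("http://example.org/sdm#")  # placeholder vocabulary
g = Graph()
doc = URIRef("http://example.org/docs/report-42")
cu = URIRef("http://example.org/docs/report-42#section-1")

g.add((doc, SDM.hasContentUnit, cu))
g.add((cu, SDM.title, Literal("Methods")))
g.add((cu, SDM.annotatedWith, URIRef("http://example.org/onto#Experiment")))

print(g.serialize(format="turtle"))
```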
Embedding the shapes of regions of interest into a Clinical Document Architecture document.
Minh, Nguyen Hai; Yi, Byoung-Kee; Kim, Il Kon; Song, Joon Hyun; Binh, Pham Viet
2015-03-01
Sharing a medical image visually annotated by a region of interest with a remotely located specialist for consultation is a good practice. It may, however, require a special-purpose (and most likely expensive) system to send and view them, which is an unfeasible solution in developing countries such as Vietnam. In this study, we design and implement interoperable methods based on the HL7 Clinical Document Architecture and the eXtensible Markup Language Stylesheet Language for Transformation standards to seamlessly exchange and visually present the shapes of regions of interest using web browsers. We also propose a new integration architecture for a Clinical Document Architecture generator that enables embedding of regions of interest and simultaneous auto-generation of corresponding style sheets. Using the Clinical Document Architecture document and style sheet, a sender can transmit clinical documents and medical images together with coordinate values of regions of interest to recipients. Recipients can easily view the documents and display embedded regions of interest by rendering them in their web browser of choice. © The Author(s) 2014.
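A toy version of the render-in-the-browser idea follows: region-of-interest coordinates travel inside the XML document, and an XSL stylesheet turns them into SVG drawn over the image. The element names and stylesheet are invented for illustration and are far simpler than real HL7 CDA observation markup.

```python
# Sketch: transform ROI coordinates embedded in XML into browser-renderable
# SVG via XSLT (lxml; invented element names, not the HL7 CDA schema).
from lxml import etree

DOC = etree.XML("""
<document>
  <image src="chest_xray.png"/>
  <roi shape="polyline" points="10,20 40,20 40,60 10,60"/>
</document>""")

XSLT = etree.XSLT(etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/document">
    <html><body>
      <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
        <image href="{image/@src}" width="100" height="100"/>
        <polygon points="{roi/@points}" fill="none" stroke="red"/>
      </svg>
    </body></html>
  </xsl:template>
</xsl:stylesheet>"""))

print(str(XSLT(DOC)))  # HTML a recipient's browser can display directly
```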
ERIC Educational Resources Information Center
Stadtler, Marc; Bromme, Rainer
2007-01-01
Drawing on the theory of documents representation (Perfetti et al., Toward a theory of documents representation. In: H. v. Oostendorp & S. R. Goldman (Eds.), "The construction of mental representations during reading." Mahwah, NJ: Erlbaum, 1999), we argue that successfully dealing with multiple documents on the World Wide Web requires readers to…
ERIC Educational Resources Information Center
Wood, Eileen; Anderson, Alissa; Piquette-Tomei, Noella; Savage, Robert; Mueller, Julie
2011-01-01
Support requests were documented for 10 teachers (4 kindergarten, 4 grade one, and 2 grade one/two teachers) who received just-in-time instructional support over a 2 1/2 month period while implementing a novel reading software program as part of their literacy instruction. In-class observations were made of each instructional session. Analysis of…
Leng, Zikuan; He, Xijing; Li, Haopeng; Wang, Dong; Cao, Kai
2013-05-15
Olfactory ensheathing cell (OEC) transplantation is a promising new approach for the treatment of spinal cord injury (SCI), and an increasing number of scientific publications are devoted to this treatment strategy. This bibliometric analysis was conducted to assess global research trends in OEC transplantation for SCI. All of the data in this study originate from the Web of Science maintained by the Institute for Scientific Information, USA, which includes SCI-EXPANDED, SSCI, A&HCI, CPCI-S, CPCI-SSH, BKCI-S, BKCI-SSH, CCR-EXPANDED and IC. The Web of Science was searched using the keywords "olfactory ensheathing cells" or "OECs" or "olfactory ensheathing glia" or "OEG" or "olfactory ensheathing glial cells" or "OEGs" and "spinal cord injury" or "SCI" or "spinal injury" or "spinal transection" for literature published from January 1898 to May 2012. Original articles, reviews, proceedings papers and meeting abstracts, book chapters, and editorial materials on OEC transplantation for SCI were included; unpublished literature and literature requiring manual information retrieval were excluded. All selected literature addressing OEC transplantation for SCI was evaluated in the following aspects: publication year, document type, language, author, institution, times cited, Web of Science category, core source title, countries/territories, and funding agency. In the Web of Science, the earliest literature record was in April 1995. Four hundred and fourteen publications addressing OEC transplantation for SCI were added to the data library in the past 18 years, with an annually increasing trend. Of these records, 405 publications were in English. Two hundred and fifty-nine articles ranked first in the distribution of document types, followed by 141 reviews. Thirty articles and 20 reviews, each cited more than 55 times by the date we downloaded the publication data, can be regarded as the most classic references. The journal Experimental Neurology published the most literature (32 records), followed by Glia. The United States had the most literature, followed by China. In addition, Yale University was the most productive institution in the world, while The Second Military Medical University contributed the most in China. The journal Experimental Neurology published the most OEC transplantation literature in the United States, while Neural Regeneration Research published the most in China. This analysis provides insight into the current state of and trends in OEC transplantation for SCI research. Furthermore, we anticipate that this analysis will help encourage international cooperation and teamwork on OEC transplantation for SCI to facilitate the development of more effective treatments.
47 CFR 73.8000 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Engineering and Technology (OET) Web site: http://www.fcc.gov/oet/info/documents/bulletins/. (1) OET Bulletin...., Suite 1200, Washington, DC 20006, or at the ATSC Web site: http://www.atsc.org/standards.html. (1) ATSC... Standards Institute (ANSI), 25 West 43rd Street, 4th Floor, New York, NY 10036 or at the ANSI Web site: http...
Ontology-Based Approaches to Improve RDF Triple Store
ERIC Educational Resources Information Center
Albahli, Saleh M.
2016-01-01
The World Wide Web enables easy, instant access to a huge quantity of information. Over the last few decades, a number of improvements have been achieved that helped the web reach its current state. However, the current Internet links documents together without understanding them, and thus makes the content of the web only human-readable rather…
Methodology for Localized and Accessible Image Formation and Elucidation
ERIC Educational Resources Information Center
Patil, Sandeep R.; Katiyar, Manish
2009-01-01
Accessibility is one of the key checkpoints in all software products, applications, and Web sites. Accessibility of digital images has always been a major challenge for the industry. Images form an integral part of certain types of documents and most Web 2.0-compliant Web sites. Individuals challenged with blindness and many dyslexics only make…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-29
... NRC-2008-0252. You may submit comments by any of the following methods: Federal Rulemaking Web site... publicly available, by any of the following methods: Federal Rulemaking Web site: Go to http://www... ``Begin Web-based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room...
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip
2011-01-01
Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
Cat swarm optimization based evolutionary framework for multi document summarization
NASA Astrophysics Data System (ADS)
Rautray, Rasmita; Balabantaray, Rakesh Chandra
2017-07-01
Today, the World Wide Web has brought us an enormous quantity of online information. As a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from vast collections of documents. Based on the number of documents considered for summarization, the task is categorized as single-document or multi-document summarization. Multi-document summarization is the more challenging of the two, as researchers must derive an accurate summary from multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO-based model is also compared with two other nature-inspired summarizers: a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. On the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity, and a readability metric, to validate the non-redundancy, cohesiveness, and readability of the summaries. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.
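The fitness function such swarm-based extractive summarizers optimize is not spelled out in this abstract; as a hedged illustration, the Python sketch below scores a candidate summary by rewarding coverage of the document-set centroid and penalizing inter-sentence redundancy. The function name, the toy sentences, and the 0.5 redundancy weight are assumptions for illustration, not the authors' formulation.

```python
# Illustrative sketch (not the authors' CSO code): the kind of objective an
# extractive multi-document summarizer optimizes -- coverage of the document
# set minus redundancy among the selected sentences.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summary_fitness(sentences, selected, redundancy_weight=0.5):
    """Score a candidate summary given a binary sentence-selection mask."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(tfidf.mean(axis=0))          # document-set centroid
    idx = [i for i, flag in enumerate(selected) if flag]
    if not idx:
        return 0.0
    chosen = tfidf[idx]
    coverage = float(cosine_similarity(chosen, centroid).mean())
    if len(idx) > 1:
        sims = cosine_similarity(chosen)               # pairwise similarities
        n = len(idx)
        redundancy = (sims.sum() - n) / (n * (n - 1))  # off-diagonal mean
    else:
        redundancy = 0.0
    return coverage - redundancy_weight * redundancy

sents = ["Rivers rose after heavy rain.", "Officials opened shelters.",
         "Heavy rain caused rivers to rise.", "The flood displaced thousands."]
print(summary_fitness(sents, [1, 0, 1, 0]))  # score two candidate selections
print(summary_fitness(sents, [1, 1, 0, 1]))
```

A swarm algorithm such as CSO would then search over the binary selection masks to maximize a score of this kind, subject to a summary-length constraint.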
Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang
2007-01-01
The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct work flow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed for use in supporting a "communication loop" that is based on the concept of a classic "wet-read" system. The proposed system represents an attempt to address many of the problems seen in current communication work flows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in the native Digital Imaging and Communications in Medicine (DICOM) format with a Java applet, which allows accurate presentation along with use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install necessary client software on demand. Application logic consists of a set of PHP (hypertext preprocessor) modules that are accessible with an application programming interface. The system may be adapted to any clinician-specialist communication loop, and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007
WebGIVI: a web-based gene enrichment analysis and visualization tool.
Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J
2017-05-04
A major challenge of high throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes and informative terms (iTerms) that are obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool ( http://raven.anr.udel.edu/webgivi/ ) to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data-Driven Documents (D3) JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and corresponding iTerm list. This list can be submitted to visualize the gene:iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI also supports uploading and visualization of any two-column tab-separated data. WebGIVI provides an interactive and integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate a high-resolution image that is important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI . The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php .
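WebGIVI itself is a JavaScript application, but its two-column gene:iTerm input is easy to prototype against in Python. The sketch below, with invented gene/iTerm pairs, builds the bipartite graph with networkx and lists the best-connected iTerms — the hubs a Concept Map view would highlight.

```python
# Minimal Python analogue of WebGIVI's two-column input: a bipartite
# gene/iTerm graph. The pairs below are invented for illustration.
import networkx as nx

pairs = [("IL6", "inflammation"), ("IL6", "cytokine"),
         ("TNF", "inflammation"), ("TLR4", "innate immunity")]

g = nx.Graph()
for gene, iterm in pairs:
    g.add_node(gene, kind="gene")
    g.add_node(iterm, kind="iterm")
    g.add_edge(gene, iterm)

# iTerms shared by the most genes
hubs = sorted((n for n, d in g.nodes(data=True) if d["kind"] == "iterm"),
              key=g.degree, reverse=True)
print(hubs)  # ['inflammation', 'cytokine', 'innate immunity']
```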
Dooley, Jennifer Allyson; Jones, Sandra C; Iverson, Don
2014-01-01
Web 2.0 experts working in social marketing participated in qualitative in-depth interviews. The research aimed to document the current state of Web 2.0 practice. Perceived strengths (such as the viral nature of Web 2.0) and weaknesses (such as the time-consuming effort required to learn new Web 2.0 platforms) existed when using Web 2.0 platforms for campaigns. Lessons learned were identified--namely, suggestions for engaging in specific types of content creation strategies (such as plain language and transparent communication practices). Findings present originality and value to practitioners working in social marketing who want to effectively use Web 2.0.
Olivas-Ávila, José A; Musi-Lechuga, Bertha
2010-11-01
The present work is a descriptive study, by means of document analysis, of the most productive professors of psychology in Spain as measured through journal articles indexed in the Web of Science. The sample comprised the one hundred most productive professors in each of the six academic areas of Spanish psychology. A total of 85,492 records were analyzed, of which 8,770 correspond to the 610 professors analyzed. The main results are that, at the top of the productivity ranking, six professors belong to the Psychobiology area and only four belong to other areas. With respect to the average number of articles per professor across the six areas of psychology, the proportion was found to range between 25 and 6. The journal Psicothema accounts for the most records among the professors in the sample, with 1,461, representing 17% of the total. Finally, we discuss the results and their implications for the evaluation of professors.
Health on the Net Foundation: assessing the quality of health web pages all over the world.
Boyer, Célia; Gaudinat, Arnaud; Baujard, Vincent; Geissbühler, Antoine
2007-01-01
The Internet provides a great amount of information and has become one of the most widely used communication media [1]. However, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly acute in the medical area, which has a direct impact on the well-being of citizens. In this paper, we assume that the quality of web pages can be controlled, even when a huge amount of documents has to be reviewed, but that this must be supported by both specific automatic tools and human expertise. In this context, we present various initiatives of the Health on the Net Foundation to inform citizens about the reliability of medical content on the web.
Navy Controls for Invoice, Receipt, Acceptance, and Property Transfer System Need Improvement
2016-02-25
iRAPT as a web-based system to electronically invoice, receipt, and accept services and products from its contractors and vendors. The iRAPT system...electronically shares documents between DoD and its contractors and vendors to eliminate redundant data entry, increase data accuracy, and reduce...The iRAPT system allows contractors to submit and track invoices and receipt and acceptance documents over the web and allows government personnel to
MSAViewer: interactive JavaScript visualization of multiple sequence alignments.
Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E; Rost, Burkhard; Goldberg, Tatyana
2016-11-15
The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is 'web ready': written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh. © The Author 2016. Published by Oxford University Press.
New Interfaces to Web Documents and Services
NASA Technical Reports Server (NTRS)
Carlisle, W. H.
1996-01-01
This paper reports on investigations into how to extend the capabilities of the Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1996 Summer Faculty Fellowship program, and involved research into and prototype development of software components that provide documents and services for the World Wide Web (WWW). The WWW has become a de facto standard for sharing resources over the internet, primarily because web browsers are freely available for the most common hardware platforms and their operating systems. As a consequence of the popularity of the internet, tools and techniques associated with web browsers are changing rapidly. New capabilities are offered by companies that support web browsers in order to achieve or remain a dominant participant in internet services. Because a goal of the VRC is to build an environment for NASA centers, universities, and industrial partners to share information associated with Advanced Concepts Office activities, the VRC tracks new techniques and services associated with the web in order to determine their usefulness for distributed and collaborative engineering research activities. Most recently, Java has emerged as a new tool for providing internet services. Because the major web browser providers have decided to include Java in their software, investigations into Java were conducted this summer.
This January 2004 document contains 14 diagrams illustrating the different compliance options available for those facilities that fall under the Paper and Web Coating Maximum Achievable Control Technology (MACT).
LCS Content Document Application
NASA Technical Reports Server (NTRS)
Hochstadt, Jake
2011-01-01
My project at KSC during my spring 2011 internship was to develop a Ruby on Rails application to manage Content Documents. A Content Document is a collection of documents and information that describes what software is installed on a Launch Control System computer. It's important for us to make sure the tools we use every day are secure, up-to-date, and properly licensed. Previously, keeping track of this information was done with Excel and Word files passed between different personnel. The goal of the new application is to be able to manage and access the Content Documents through a single database-backed web application. Our LCS team will benefit greatly from this app. Admins will be able to log in securely to keep track of and update the software installed on each computer in a timely manner. We also included exportability, such as attaching additional documents that can be downloaded from the web application. The finished application will ease the process of managing Content Documents while streamlining the procedure. Ruby on Rails is a very powerful web framework, and I am grateful to have the opportunity to build this application.
Capitalizing on Web 2.0 in the Social Studies Context
ERIC Educational Resources Information Center
Holcomb, Lori B.; Beal, Candy M.
2010-01-01
This paper focuses primarily on the integration of Web 2.0 technologies into social studies education. It documents how various Web 2.0 tools can be utilized in the social studies context to support and enhance teaching and learning. For the purposes of focusing on one specific topic, global connections at the middle school level will be the…
Automated MeSH indexing of the World-Wide Web.
Fowler, J.; Kouramajian, V.; Maram, S.; Devadhar, V.
1995-01-01
To facilitate networked discovery and information retrieval in the biomedical domain, we have designed a system for automatic assignment of Medical Subject Headings to documents retrieved from the World-Wide Web. Our prototype implementations show significant promise. We describe our methods and discuss the further development of a completely automated indexing tool called the "Web-MeSH Medibot." PMID:8563421
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-08
... filing system, EFS-Web, and selecting the document description of ``Certification and Request for Missing... Filing System Web (EFS-Web), 74 FR 55200 (Oct. 27, 2009), 1348 Off. Gaz. Pat. Office 394 (Nov. 24, 2009... parts notice, including increased use of the eighteen-month publication system, more time for applicants...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-24
... rule, the participant must file the document using the NRC's online, Web-based submission form. In... form, including the installation of the Web browser plug-in, is available on the NRC's public Web site... 61010, and near Braidwood at the Fossil Ridge (Braidwood) Public Library, 386 W. Kennedy Road, Braidwood...
ERIC Educational Resources Information Center
Ellero, Nadine P.
2002-01-01
Describes the use of the World Wide Web as a name authority resource and tool for special collections' analytic-level cataloging, based on experiences at The Claude Moore Health Sciences Library. Highlights include primary documents and metadata; authority control and the Web as authority source information; and future possibilities. (Author/LRW)
Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam
2017-11-01
Clinical decision support tools for risk prediction are readily available, but typically require workflow interruptions and manual data entry, so they are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569,533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
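The CCD extraction step can be approximated in a few lines of Python; the hedged sketch below uses lxml to pull coded lab observations out of a Consolidated CDA document. The XPath is simplified relative to real CCD section/entry templates, and the file name is a placeholder.

```python
# Hedged sketch of CCD-based data extraction: find observations carrying a
# given LOINC code and yield their values. Real CCDs nest observations
# inside templated sections; this flattens that structure for brevity.
from lxml import etree

NS = {"hl7": "urn:hl7-org:v3"}  # standard CDA namespace

def lab_values(ccd_path, loinc_code):
    """Yield (value, unit) for each observation coded with loinc_code."""
    tree = etree.parse(ccd_path)
    for obs in tree.iterfind(".//hl7:observation", NS):
        code = obs.find("hl7:code", NS)
        if code is not None and code.get("code") == loinc_code:
            val = obs.find("hl7:value", NS)
            if val is not None:
                yield val.get("value"), val.get("unit")

# e.g. serum creatinine (LOINC 2160-0), one input to a kidney failure risk score
for value, unit in lab_values("patient_ccd.xml", "2160-0"):  # hypothetical file
    print(value, unit)
```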
Code of Federal Regulations, 2011 CFR
2011-07-01
... shall be in the manner specified by the Commission's Web site (http://www.OSHRC.gov). (2) A document...: (i) If Social Security numbers must be included in a document, only the last four digits of that...
Pérez-Pérez, Martín; Glez-Peña, Daniel; Fdez-Riverola, Florentino; Lourenço, Anália
2015-02-01
Document annotation is a key task in the development of Text Mining methods and applications. High quality annotated corpora are invaluable, but their preparation requires a considerable amount of resources and time. Although existing annotation tools offer good user interaction interfaces to domain experts, project management and quality control abilities are still limited. Therefore, the current work introduces Marky, a new Web-based document annotation tool equipped to manage multi-user and iterative projects, and to evaluate annotation quality throughout the project life cycle. At its core, Marky is a Web application based on the open source CakePHP framework. The user interface relies on HTML5 and CSS3 technologies. The Rangy library assists in browser-independent implementation of common DOM range and selection tasks, and Ajax and jQuery technologies are used to enhance user-system interaction. Marky grants solid management of inter- and intra-annotator work. Most notably, its annotation tracking system supports systematic and on-demand agreement analysis and annotation amendment. Each annotator may work over documents as usual, but all the annotations made are saved by the tracking system and may be further compared. So, the project administrator is able to evaluate annotation consistency among annotators and across rounds of annotation, while annotators are able to reject or amend subsets of annotations made in previous rounds. As a side effect, the tracking system minimises resource and time consumption. Marky is a novel environment for managing multi-user and iterative document annotation projects. Compared to other tools, Marky offers a similar visually intuitive annotation experience while providing unique means to minimise annotation effort and enforce annotation quality, and therefore corpus consistency. Marky is freely available for non-commercial use at http://sing.ei.uvigo.es/marky. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
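As an illustration of the agreement analysis Marky's tracking system enables (Marky itself is PHP; this is a generic Python sketch with invented labels), Cohen's kappa over two annotators' per-token labels:

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

a = ["GENE", "O", "GENE", "O", "O", "GENE"]
b = ["GENE", "GENE", "GENE", "O", "O", "O"]
print(round(cohens_kappa(a, b), 3))  # -> 0.333
```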
ERIC Educational Resources Information Center
Levy, David M.; Huttenlocher, Dan; Moll, Angela; Smith, MacKenzie; Hodge, Gail M.; Chandler, Adam; Foley, Dan; Hafez, Alaaeldin M.; Redalen, Aaron; Miller, Naomi
2000-01-01
Includes six articles focusing on the purpose of digital public libraries; encoding electronic documents through compression techniques; a distributed finding aid server; digital archiving practices in the framework of information life cycle management; converting metadata into MARC format and Dublin Core formats; and evaluating Web sites through…
Topic Models for Link Prediction in Document Networks
ERIC Educational Resources Information Center
Kataria, Saurabh
2012-01-01
Recent explosive growth of interconnected document collections such as citation networks, network of web pages, content generated by crowd-sourcing in collaborative environments, etc., has posed several challenging problems for data mining and machine learning community. One central problem in the domain of document networks is that of "link…
Paper and Other Web Coating: National Emission Standards for Hazardous Air Pollutants (NESHAP)
Find information on the NESHAP for paper and other web coatings. Read the rule summary, history and supporting documents including fact sheets, responses to public comments, related rules, and compliance and applicability information for this regulation.
An experiment with content distribution methods in touchscreen mobile devices.
Garcia-Lopez, Eva; Garcia-Cabot, Antonio; de-Marcos, Luis
2015-09-01
This paper compares the usability of three different content distribution methods (scrolling, paging and internal links) in touchscreen mobile devices as means to display web documents. Usability is operationalized in terms of effectiveness, efficiency and user satisfaction. These dimensions are then measured in an experiment (N = 23) in which users are required to find words in regular-length web documents. Results suggest that scrolling is statistically better in terms of efficiency and user satisfaction. It is also found to be more effective but results were not significant. Our findings are also compared with existing literature to propose the following guideline: "try to use vertical scrolling in web pages for mobile devices instead of paging or internal links, except when the content is too large, then paging is recommended". With an ever increasing number of touchscreen web-enabled mobile devices, this new guideline can be relevant for content developers targeting the mobile web as well as institutions trying to improve the usability of their content for mobile platforms. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Substance use disorders in Arab countries: research activity and bibliometric analysis
2014-01-01
Background Substance use disorders, which include substance abuse and substance dependence, are present in all regions of the world, including Middle Eastern Arab countries. Bibliometric analysis is an increasingly used tool for research assessment. The main objective of this study was to assess research productivity in the field of substance use disorders in Arab countries using bibliometric indicators. Methodology Original or review research articles authored or co-authored by investigators from Arab countries about substance use disorders during the period 1900–2013 were retrieved using the ISI Web of Science database. Research activity was assessed by analyzing the annual research productivity, the contribution of each Arab country, journal names, citations, and types of abused substances. Results Four hundred and thirteen documents on substance use disorders were retrieved. Annual research productivity was low but showed a significant increase in the last few years. In terms of quantity, the Kingdom of Saudi Arabia (83 documents) ranked first in research about substance use disorders, while Lebanon (17.4 documents per million) ranked first in number of documents published per million inhabitants. Retrieved documents appeared in a range of journal titles and categories, mostly in the journal Drug and Alcohol Dependence. Authors from the USA appeared in 117 documents published by investigators from Arab countries. Citation analysis of retrieved documents showed that the average number of citations per document was 10.76 and the h-index was 35. The majority of retrieved documents were in the tobacco and smoking field (175 documents), while alcohol consumption and abuse research was the least represented, with 69 documents. Conclusion The results obtained suggest that research in this field was largely neglected in the past; however, recent research interest is evident. Research output on tobacco and smoking was relatively high compared to other substances of abuse such as illicit drugs and medicinal agents. Governmental funding for academics and mental health graduate programs to conduct research in the field of substance use disorders is highly recommended. PMID:25148888
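The h-index reported above can be recomputed directly from per-document citation counts; a minimal sketch (the citation list is invented):

```python
# h-index: the largest h such that at least h documents are cited >= h times.
def h_index(citations):
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 1]))  # -> 4
```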
Emergency Response Capability Baseline Needs Assessment - Requirements Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharry, John A.
This document was prepared by John A. Sharry, LLNL Fire Marshal and LLNL Division Leader for Fire Protection and reviewed by LLNL Emergency Management Department Head James Colson. The document follows and expands upon the format and contents of the DOE Model Fire Protection Baseline Capabilities Assessment document contained on the DOE Fire Protection Web Site, but only addresses emergency response.
SUSHI: an exquisite recipe for fully documented, reproducible and reusable NGS data analysis.
Hatakeyama, Masaomi; Opitz, Lennart; Russo, Giancarlo; Qi, Weihong; Schlapbach, Ralph; Rehrauer, Hubert
2016-06-02
Next generation sequencing (NGS) produces massive datasets consisting of billions of reads and up to thousands of samples. Subsequent bioinformatic analysis is typically done with the help of open source tools, where each application performs a single step towards the final result. This situation leaves bioinformaticians with the tasks of combining the tools, managing the data files and meta-information, documenting the analysis, and ensuring reproducibility. We present SUSHI, an agile data analysis framework that relieves bioinformaticians of the administrative challenges of their data analysis. SUSHI lets users build reproducible data analysis workflows from individual applications and manages the input data, the parameters, meta-information with user-driven semantics, and the job scripts. As distinguishing features, SUSHI provides an expert command line interface as well as a convenient web interface to run bioinformatics tools. SUSHI datasets are self-contained and self-documented on the file system. This makes them fully reproducible and ready to be shared. With the associated meta-information being formatted as plain text tables, the datasets can be readily further analyzed and interpreted outside SUSHI. SUSHI provides an exquisite recipe for analysing NGS data. By following the SUSHI recipe, SUSHI makes data analysis straightforward and takes care of documentation and administration tasks. Thus, the user can fully dedicate their time to the analysis itself. SUSHI is suitable for use by bioinformaticians as well as life science researchers. It is targeted at, but by no means constrained to, NGS data analysis. Our SUSHI instance is in productive use and has served as the data analysis interface for more than 1000 data analysis projects. SUSHI source code as well as a demo server are freely available.
Code of Federal Regulations, 2010 CFR
2010-07-01
...), available on OFAC's Web site. New names of persons determined to be the Government of Iran and changes to...'s Web site. Appendix A to Part 560 will be republished annually. This document and additional information concerning OFAC are available from OFAC's Web site (http://www.treas.gov/ofac). Certain general...
Code of Federal Regulations, 2011 CFR
2011-07-01
... through the following page on OFAC's Web site: http://www.treasury.gov/sdn. Additional information.... This document and additional information concerning OFAC are available from OFAC's Web site: http://www... via facsimile through a 24-hour fax-on-demand service, tel.: 202/622-0077. Please consult OFAC's Web...
Free Web-based personal health records: an analysis of functionality.
Fernández-Alemán, José Luis; Seva-Llor, Carlos Luis; Toval, Ambrosio; Ouhbi, Sofia; Fernández-Luque, Luis
2013-12-01
This paper analyzes and assesses the functionality of free Web-based PHRs as regards health information, user actions and connection with other tools. A systematic literature review in Medline, ACM Digital Library, IEEE Digital Library and ScienceDirect was used to select 19 free Web-based PHRs from the 47 PHRs identified. The results show that none of the PHRs selected met 100% of the 28 functions presented in this paper. Two free Web-based PHRs target a particular public. Around 90% of the PHRs identified allow users throughout the world to create their own profiles without any geographical restrictions. Only half of the PHRs selected provide physicians with user actions. Few PHRs can connect with other tools. There was considerable variability in the types of data included in free Web-based PHRs. Functionality may have implications for PHR use and adoption, particularly as regards patients with chronic illnesses or disabilities. Support for standard medical document formats and protocols is required to enable data to be exchanged with other stakeholders in the health care domain. The results of our study may assist users in selecting the PHR that best fits their needs, since no significant connection exists between the number of functions of the PHRs identified and their popularity.
A suite of R packages for web-enabled modeling and analysis of surface waters
NASA Astrophysics Data System (ADS)
Read, J. S.; Winslow, L. A.; Nüst, D.; De Cicco, L.; Walker, J. I.
2014-12-01
Researchers often create redundant methods for downloading, manipulating, and analyzing data from online resources. Moreover, the reproducibility of science can be hampered by complicated and voluminous data, lack of time for documentation and long-term maintenance of software, and fear of exposing programming skills. The combination of these factors can encourage unshared one-off programmatic solutions instead of openly provided reusable methods. Federal and academic researchers in the water resources and informatics domains have collaborated to address these issues. The result of this collaboration is a suite of modular R packages that can be used independently or as elements in reproducible analytical workflows. These documented and freely available R packages were designed to fill basic needs for the effective use of water data: the retrieval of time-series and spatial data from web resources (dataRetrieval, geoknife), performing quality assurance and quality control checks of these data with robust statistical methods (sensorQC), the creation of useful data derivatives (including physically- and biologically-relevant indices; GDopp, LakeMetabolizer), and the execution and evaluation of models (glmtools, rLakeAnalyzer). Here, we share details and recommendations for the collaborative coding process, and highlight the benefits of an open-source tool development pattern with a popular programming language in the water resources discipline (such as R). We provide examples of reproducible science driven by large volumes of web-available data using these tools, explore benefits of accessing packages as standardized web processing services (WPS) and present a working platform that allows domain experts to publish scientific algorithms in a service-oriented architecture (WPS4R). We assert that in the era of open data, tools that leverage these data should also be freely shared, transparent, and developed in an open innovation environment.
SU-F-P-10: A Web-Based Radiation Safety Relational Database Module for Regulatory Compliance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosen, C; Ramsay, B; Konerth, S
Purpose: Maintaining compliance with Radioactive Materials Licenses is inherently a time-consuming task requiring focus and attention to detail. Staff tasked with these responsibilities, such as the Radiation Safety Officer and associated personnel, must retain disparate records for eventual placement into one or more annual reports. Entering results and records in a relational database using a web browser as the interface, and storing that data in a cloud-based storage site, removes procedural barriers. The data becomes more adaptable for mining and sharing. Methods: Web-based code was written utilizing the web framework Django, written in Python. Additionally, the application utilizes JavaScript for front-end interaction, SQL, HTML and CSS. Quality assurance code testing is performed in a sequential style, and new code is only added after successful testing of the previous goals. Separate sections of the module include data entry and analysis for audits, surveys, quality management, and continuous quality improvement. Data elements can be adapted for quarterly and annual reporting, and for immediate notification based on user-determined alarm settings. Results: Current advances are focusing on user interface issues and on determining the simplest manner by which to teach the user to build query forms. One solution has been to prepare library documents that a user can select or edit in place of creating a new document. Forms are being developed based upon Nuclear Regulatory Commission federal code, and will be expanded to include state regulations. Conclusion: Establishing a secure website to act as the portal for data entry, storage and manipulation can lead to added efficiencies for a Radiation Safety Program. Access to multiple databases can lead to mining for big data programs, and to determining safety issues before they occur. Web programming challenges, a category that includes mathematical handling, are steadily being overcome.
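The abstract names Django but not the schema; as a hedged sketch, a minimal model for survey records with a user-determined alarm threshold of the kind described (all field names and the default threshold are assumptions):

```python
# Hypothetical Django model for a radiation-safety survey record.
from django.db import models

class SurveyRecord(models.Model):
    performed_on = models.DateField()
    area = models.CharField(max_length=120)            # lab, storage room, etc.
    reading_usv_per_h = models.FloatField()            # microsieverts per hour
    alarm_threshold_usv_per_h = models.FloatField(default=2.0)
    surveyor = models.CharField(max_length=80)
    notes = models.TextField(blank=True)

    @property
    def over_threshold(self):
        """Flag records that should trigger the user-determined alarm."""
        return self.reading_usv_per_h > self.alarm_threshold_usv_per_h
```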
49 CFR 40.45 - What form is used to document a DOT urine collection?
Code of Federal Regulations, 2011 CFR
2011-10-01
... view this form on the Department's web site (http://www.dot.gov/ost/dapc) or the HHS web site (http... employee (other than a social security number (SSN) or other employee identification (ID) number) to a...
49 CFR 40.45 - What form is used to document a DOT urine collection?
Code of Federal Regulations, 2010 CFR
2010-10-01
... view this form on the Department's web site (http://www.dot.gov/ost/dapc) or the HHS web site (http... employee (other than a social security number (SSN) or other employee identification (ID) number) to a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa
2016-05-01
The Contingency Contractor Optimization Tool - Prototype (CCOT-P) requires several third-party software packages. These are documented below for each of the CCOT-P elements: client, web server, database server, solver, web application and polling application.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-04
... Transition Order is also available on the Internet at the Commission's Electronic Filing System Web Page at... you may contact BCPI at its Web site: http://www.BCPIWEB.com . When ordering documents from BCPI...
This May 2003 document contains questions and answers on the Paper and Web Coating National Emission Standards for Hazardous Air Pollutants (NESHAP) regulation. The questions cover topics such as compliance, applicability, and initial notification.
Federal Register 2010, 2011, 2012, 2013, 2014
2005-11-16
... Reference System (TRS) [see http://www.epa.gov/trs ] in order to better support future semantic Web needs... creation of glossaries for Web pages and documents, a common vocabulary for search engines, and in the...
Web-Education Systems in Europe. ZIFF Papiere.
ERIC Educational Resources Information Center
Paulsen, Morten; Keegan, Desmond; Dias, Ana; Dias, Paulo; Pimenta, Pedro; Fritsch, Helmut; Follmer, Holger; Micincova, Maria; Olsen, Gro-Anett
This document contains the following papers on Web-based education systems in Europe: (1) "European Experiences with Learning Management Systems" (Morten Flate Paulsen and Desmond Keegan); (2) "Online Education Systems: Definition of Terms" (Morten Flate Paulsen); (3) "Learning Management Systems (LMS) Used in Southern…
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays, selective access to information on the Web is provided by search engines, but when the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents together with a retrieval method in an integrated approach. Our proposed system consists of three components: a crawler, a database, and a user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an application server as a simulation of the Web. As a spatial search engine, our system provides search capability across GML documents, and thus an important step toward improving the efficiency of search engines has been taken.
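A hedged sketch of the crawler's parsing step: collect the element-tag vocabulary for indexing and pull coordinate strings from GML geometry elements. The GML 2/3 namespace URI is standard; lxml and the file name stand in for whatever parser and inputs the authors used.

```python
# Extract indexable tags and coordinate text from a GML document.
from lxml import etree

GML_NS = "http://www.opengis.net/gml"  # note: GML 3.2 uses .../gml/3.2
COORD_TAGS = {"pos", "posList", "coordinates"}

def extract_gml_features(path):
    tree = etree.parse(path)
    tags, coords = set(), []
    for elem in tree.iter():
        if not isinstance(elem.tag, str):      # skip comments and PIs
            continue
        qname = etree.QName(elem)
        tags.add(qname.localname)              # tag vocabulary to index
        if qname.namespace == GML_NS and qname.localname in COORD_TAGS \
                and elem.text:
            coords.append(elem.text.strip())   # raw coordinate string
    return tags, coords

tags, coords = extract_gml_features("features.gml")  # hypothetical file
print(sorted(tags), coords[:3])
```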
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-22
... NUCLEAR REGULATORY COMMISSION [Docket No. 70-3098; NRC-2011-0081] Shaw AREVA MOX Services, Mixed... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for documents... publicly available documents related to this notice using the following methods: NRC's Public Document Room...
NASA Technical Reports Server (NTRS)
Muhsin, Mansour; Walters, Ian
2004-01-01
The Document Concurrence System is a combination of software modules for routing users' expressions of concurrence with documents. This system enables determination of the current status of concurrences and eliminates the need for the prior practice of manually delivering paper documents to all persons whose approvals were required. This system runs on a server, and participants gain access via personal computers equipped with Web-browser and electronic-mail software. A user can begin a concurrence routing process by logging onto an administration module, naming the approvers and stating the sequence for routing among them, and attaching documents. The server then sends a message to the first person on the list. Upon concurrence by the first person, the system sends a message to the second person, and so forth. A person on the list indicates approval, places the documents on hold, or indicates disapproval, via a Web-based module. When the last person on the list has concurred, a message is sent to the initiator, who can then finalize the process through the administration module. A background process running on the server identifies concurrence processes that are overdue and sends reminders to the appropriate persons.
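The routing logic described here is easy to model; the Python sketch below is an illustrative reconstruction of the sequential approve/hold/disapprove flow, not the system's actual code, and the addresses are invented.

```python
# Sequential concurrence routing: approve advances the route; hold or
# disapprove stops it; completing the list would notify the initiator.
class ConcurrenceRoute:
    def __init__(self, initiator, approvers):
        self.initiator = initiator
        self.approvers = list(approvers)
        self.position = 0
        self.status = "in_progress"

    @property
    def current_approver(self):
        if self.status != "in_progress":
            return None
        return self.approvers[self.position]

    def respond(self, decision):  # "approve", "hold", or "disapprove"
        if decision == "approve":
            self.position += 1
            if self.position == len(self.approvers):
                self.status = "complete"   # send message to the initiator here
        else:
            self.status = decision         # route pauses (hold) or ends

route = ConcurrenceRoute("initiator@example.gov",
                         ["a@example.gov", "b@example.gov"])
route.respond("approve")
route.respond("approve")
print(route.status)  # -> complete
```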
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second. To achieve this goal, there is a requirement for efficient categorization of web page contents. Manual categorization of billions of web pages with high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic, and a high level of accuracy cannot be achieved using them. To achieve these goals, this paper proposes automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keywords developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of keyword IDs, and stemming of keywords and tag text is performed to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method, combining feature extraction and statistical analysis with a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy on different categories of web pages.
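A generic rendering of the classification stage in Python: TF-IDF features stand in for the paper's DOM-derived, keyword-weighted features, and scikit-learn's LinearSVC for the SVM kernel described. The toy pages and labels are invented.

```python
# Toy web-page categorization pipeline: TF-IDF features -> linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

pages = ["cheap flights and hotel deals for your next trip",
         "protein folding simulation results and methods",
         "book airline tickets and compare fares",
         "peer-reviewed study of gene expression data"]
labels = ["travel", "science", "travel", "science"]

clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
clf.fit(pages, labels)
print(clf.predict(["discount airline ticket search"]))  # -> ['travel']
```

In the paper's scheme, the vectorizer would be replaced by the DOM-based feature extraction tool and the domain keyword weights it describes.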
75 FR 4449 - Requested Administrative Waiver of the Coastwise Trade Laws
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-27
... electronic version of this document and all documents entered into this docket is available on the World Wide Web at http://www.regulations.gov . FOR FURTHER INFORMATION CONTACT: Joann Spittle, U.S. Department of...
2016 eCDRweb User Guide–Primary Support
This document is the user guide for the Primary Support user of the Office of Pollution Prevention and Toxics’ (OPPT) 2016 e-CDRweb tool.
Using Sentence-Level Classifiers for Cross-Domain Sentiment Analysis
2014-09-01
33 CFR 148.207 - How and where may I view docketed documents?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Docket Management System Web site at http://www.dot.dms.gov. The projects are also listed by name and the assigned docket number at the G-PSO-5 Web site: http://www.uscg.mil/hq/g-m/mso/mso5.htm. ...
78 FR 69710 - Luminant Generation Company, LLC
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-20
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2008... . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web-based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-19
... be charged for copying. World Wide Web. The EPA Web site for this rulemaking is at: http://www.epa... period will end on February 1, 2012, rather than January 20, 2012. How can I get copies of this document...
Graduate and Inservice Education. [SITE 2002 Section].
ERIC Educational Resources Information Center
Crawford, Caroline M., Ed.
This document contains the papers on graduate and inservice education from the SITE (Society for Information Technology & Teacher Education) 2002 conference. Topics covered include: Geographic Information Systems in teacher education; re-certification and accreditation; construction of a Web site by graduate teacher education students; Web-based…
Setti, E; Musumeci, R
2001-06-01
The World Wide Web is an exciting service that allows one to publish electronic documents made of text and images on the Internet. Client software called a web browser can access these documents, and display and print them. The most popular browsers are currently Microsoft Internet Explorer (Microsoft, Redmond, WA) and Netscape Communicator (Netscape Communications, Mountain View, CA). These browsers can display text in hypertext markup language (HTML) format and images in Joint Photographic Experts Group (JPEG) and Graphics Interchange Format (GIF). Currently, neither browser can display radiologic images in the native Digital Imaging and Communications in Medicine (DICOM) format. With the aim of publishing radiologic images on the Internet, we wrote a dedicated Java applet. Our software can display radiologic and histologic images in DICOM, JPEG, and GIF formats, and provides a number of functions such as windowing and a magnification lens. The applet is compatible with several web browsers, even older versions. The software is free and available from the author.
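The windowing function mentioned above maps raw pixel intensities to a display range given a window center and width; a Python/numpy equivalent of what such a Java applet does (the sample values are Hounsfield units chosen for illustration):

```python
# Window/level transform: clip to [center - width/2, center + width/2],
# then rescale to 8-bit display values.
import numpy as np

def apply_window(pixels, center, width):
    lo, hi = center - width / 2.0, center + width / 2.0
    scaled = (np.clip(pixels, lo, hi) - lo) / (hi - lo)
    return (scaled * 255).astype(np.uint8)

ct = np.array([[-1000.0, 0.0], [40.0, 400.0]])   # air, water, tissue, bone-ish
print(apply_window(ct, center=40, width=400))    # soft-tissue window
```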
WebDMS: A Web-Based Data Management System for Environmental Data
NASA Astrophysics Data System (ADS)
Ekstrand, A. L.; Haderman, M.; Chan, A.; Dye, T.; White, J. E.; Parajon, G.
2015-12-01
DMS is an environmental Data Management System to manage, quality-control (QC), summarize, document chain-of-custody, and disseminate data from networks ranging in size from a few sites to thousands of sites, instruments, and sensors. The server-client desktop version of DMS is used by local and regional air quality agencies (including the Bay Area Air Quality Management District, the South Coast Air Quality Management District, and the California Air Resources Board), the EPA's AirNow Program, and the EPA's AirNow-International (AirNow-I) program, which offers countries the ability to run an AirNow-like system. As AirNow's core data processing engine, DMS ingests, QCs, and stores real-time data from over 30,000 active sensors at over 5,280 air quality and meteorological sites from over 130 air quality agencies across the United States. As part of the AirNow-I program, several instances of DMS are deployed in China, Mexico, and Taiwan. The U.S. Department of State's StateAir Program also uses DMS for five regions in China and plans to expand to other countries in the future. Recent development has begun to migrate DMS from an onsite desktop application to WebDMS, a web-based application designed to take advantage of cloud hosting and computing services to increase scalability and lower costs. WebDMS will continue to provide easy-to-use data analysis tools, such as time-series graphs, scatterplots, and wind- or pollution-rose diagrams, as well as allowing data to be exported to external systems such as the EPA's Air Quality System (AQS). WebDMS will also provide new GIS analysis features and a suite of web services through a RESTful web API. These changes will better meet air agency needs and allow for broader national and international use (for example, by the AirNow-I partners). We will talk about the challenges and advantages of migrating DMS to the web, modernizing the DMS user interface, and making it more cost-effective to enhance and maintain over time.
What Are the Usage Conditions of Web 2.0 Tools Faculty of Education Students?
ERIC Educational Resources Information Center
Agir, Ahmet
2014-01-01
As a result of advances in technology and the subsequent emergence of Internet use in every part of life, the web, which provides access to documents such as pictures, audio, animations, and text on the Internet, came into widespread use. At first, the web consisted only of visual and text pages that did not enable user interaction. However, it is seen that not…
Cloud Computing Trace Characterization and Synthetic Workload Generation
2013-03-01
measurements [44]. Olio is primarily for learning Web 2.0 technologies, evaluating the three implementations (PHP, Java EE, and RubyOnRails (ROR...Olio is well documented, but assumes prerequisite knowledge of the setup and operation of Apache web servers and MySQL databases. Olio...Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18]. Perhaps
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-07
... use of a Web-based mapping tool, such as MapQuest, as part of documenting that the hospital meets the... only through the Internet on the CMS Web site at http://www.cms.hhs.gov/AcuteInpatientPPS/01_overview...)'' hospitals with claims in the March 2012 update of the FY 2011 MedPAR file, is also available on the CMS Web...
Rassinoux, A-M
2011-01-01
To summarize excellent current research in the field of knowledge representation and management (KRM). A synopsis of the articles selected for the IMIA Yearbook 2011 is provided, and an attempt is made to highlight current trends in the field. Over the last decade, with the extension of the text-based web towards a semantically structured web, NLP techniques have experienced renewed interest for knowledge extraction. This trend is corroborated by the five papers selected for the KRM section of the Yearbook 2011. They all depict outstanding studies that exploit NLP technologies whenever possible in order to accurately extract meaningful information from various biomedical textual sources. Bringing semantic structure to the meaningful content of textual web pages affords the user cooperative sharing and intelligent finding of electronic data. As exemplified by the best paper selection, more and more advanced biomedical applications aim at exploiting the meaningful richness of free-text documents in order to generate semantic metadata and, recently, to learn and populate domain ontologies. The latter are becoming a key piece, as they allow portraying the semantics of Semantic Web content. Maintaining their consistency with the documents and semantic annotations that refer to them is a crucial challenge of the Semantic Web for the coming years.
Information extraction for enhanced access to disease outbreak reports.
Grishman, Ralph; Huttunen, Silja; Yangarber, Roman
2002-08-01
Document search is generally based on individual terms in the document. However, for collections within limited domains it is possible to provide more powerful access tools. This paper describes a system designed for collections of reports of infectious disease outbreaks. The system, Proteus-BIO, automatically creates a table of outbreaks, with each table entry linked to the document describing that outbreak; this makes it possible to use database operations such as selection and sorting to find relevant documents. Proteus-BIO consists of a Web crawler which gathers relevant documents; an information extraction engine which converts the individual outbreak events to a tabular database; and a database browser which provides access to the events and, through them, to the documents. The information extraction engine uses sets of patterns and word classes to extract the information about each event. Preparing these patterns and word classes has been a time-consuming manual operation in the past, but automated discovery tools now make this task significantly easier. A small study comparing the effectiveness of the tabular index with conventional Web search tools demonstrated that users can find substantially more documents in a given time period with Proteus-BIO.
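As a toy rendering of the pattern-based extraction step (the real system uses much richer pattern sets and word classes), a single regular expression that turns an outbreak sentence into a table row:

```python
# One extraction pattern: "<count> cases of <disease> reported in <location>".
import re

PATTERN = re.compile(
    r"(?P<count>\d+)\s+cases?\s+of\s+(?P<disease>[A-Za-z ]+?)\s+"
    r"(?:were\s+)?reported\s+in\s+(?P<location>[A-Z][A-Za-z ]+)")

def extract_outbreaks(text):
    """Return one dict per matched outbreak event -- a database row."""
    return [m.groupdict() for m in PATTERN.finditer(text)]

print(extract_outbreaks("12 cases of cholera were reported in Dhaka."))
# [{'count': '12', 'disease': 'cholera', 'location': 'Dhaka'}]
```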
BioServices: a common Python package to access biological Web Services programmatically.
Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio
2013-12-15
Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the usage of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
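A usage sketch in the spirit of the package's documentation: the UniProt wrapper and its search method do exist in BioServices, though query syntax and argument names have varied across releases, so treat the details as assumptions.

```python
# Query UniProt for human ZAP70 entries through the BioServices wrapper.
from bioservices import UniProt

u = UniProt(verbose=False)
# frmt and query syntax may differ between BioServices releases
results = u.search("zap70 AND organism:9606", frmt="tab")
print(results.splitlines()[:3])   # header row plus first hits
```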
Is nursing ready for WebQuests?
Lahaie, Ulysses David
2008-12-01
Based on an inquiry-oriented framework, WebQuests facilitate the construction of effective learning activities. Developed by Bernie Dodge and Tom March in 1995 at San Diego State University, WebQuests have gained worldwide popularity among educators in the kindergarten through grade 12 educational sector. However, their application at the college and university levels is not well documented. WebQuests enhance and promote higher-order thinking skills, are consistent with Bloom's Taxonomy, and reflect a learner-centered instructional methodology (constructivism). They are based on solid theoretical foundations and promote critical thinking, inquiry, and problem solving. There is a role for WebQuests in nursing education. A WebQuest example is described in this article.
2016 eCDRweb User Guide–Primary Authorized Official
This document is the user guide for the Primary Authorized Official (AO) user of the Office of Pollution Prevention and Toxics’ (OPPT) 2016 e-CDRweb tool.
32 CFR 21.330 - How are the DoDGARs published and maintained?
Code of Federal Regulations, 2013 CFR
2013-07-01
..., and sections, to parallel the CFR publication. Cross references within the DoD document are stated as... document on the World Wide Web at http://www.dtic.mil/whs/directives. (c) A standing working group...
32 CFR 21.330 - How are the DoDGARs published and maintained?
Code of Federal Regulations, 2012 CFR
2012-07-01
..., and sections, to parallel the CFR publication. Cross references within the DoD document are stated as... document on the World Wide Web at http://www.dtic.mil/whs/directives. (c) A standing working group...
32 CFR 21.330 - How are the DoDGARs published and maintained?
Code of Federal Regulations, 2014 CFR
2014-07-01
..., and sections, to parallel the CFR publication. Cross references within the DoD document are stated as... document on the World Wide Web at http://www.dtic.mil/whs/directives. (c) A standing working group...
7 CFR 3430.55 - Technical reporting.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the Current Research Information System (CRIS). (b) Initial Documentation in the CRIS Database... identification of equipment purchased with any Federal funds under the award and any subsequent use of such equipment. (e) CRIS Web Site Via Internet. The CRIS database is available to the public on the worldwide web...
49 CFR 571.5 - Matter incorporated by reference.
Code of Federal Regulations, 2011 CFR
2011-10-01
...), Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402 Illuminating Engineering... Services, Hyattsville, MD 20782. Phone: 1-800-232-4636; Web: http://www.cdc.gov/nchs National Highway..., Warrendale, Pennsylvania 15096. Phone: 1-724-776-4841; Web: http://www.sae.org Society of Automotive...
The Quebec National Library on the Web.
ERIC Educational Resources Information Center
Kieran, Shirley; Sauve, Diane
1997-01-01
Provides an overview of the Quebec National Library (Bibliotheque Nationale du Quebec, or BNQ) Web site. Highlights include issues related to content, design, and technology; IRIS, the BNQ online public access catalog; development of the multimedia catalog; software; digitization of documents; links to bibliographic records; and future…
Assessing Greek Public Hospitals' Websites.
Tsirintani, Maria; Binioris, Spyros
2015-01-01
Following a previous (2011) survey, this study assesses the web pages of Greek public hospitals according to specific criteria, which are included in the same web page evaluation model. Our purpose is to demonstrate the evolution of hospitals' web pages and to document e-health application trends. Using descriptive methods, we found that public hospitals have made significant steps towards establishing and improving their web presence, but considerable work remains before the benefits of new technologies in the e-health ecosystem can be fully realized.
NASA Astrophysics Data System (ADS)
Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Wei, Y.
2010-12-01
Terrestrial ecology data sets are produced from diverse data sources such as model output, field data collection, laboratory analysis and remote sensing observation. These data sets can be created, distributed, and consumed in diverse ways as well. However, this diversity can hinder the usability of the data, and limit data users’ abilities to validate and reuse data for science and application purposes. Geospatial web services, such as those described in this paper, are an important means of reducing this burden. Terrestrial ecology researchers generally create the data sets in diverse file formats, with file and data structures tailored to the specific needs of their project, possibly as tabular data, geospatial images, or documentation in a report. Data centers may reformat the data to an archive-stable format and distribute the data sets through one or more protocols, such as FTP, email, and WWW. Because of the diverse data preparation, delivery, and usage patterns, users have to invest time and resources to bring the data into the format and structure most useful for their analysis. This time-consuming data preparation process shifts valuable resources from data analysis to data assembly. To address these issues, the ORNL DAAC, a NASA-sponsored terrestrial ecology data center, has utilized geospatial Web service technology, such as the Open Geospatial Consortium (OGC) Web Map Service (WMS) and OGC Web Coverage Service (WCS) standards, to increase the usability and availability of terrestrial ecology data sets. Data sets are standardized into non-proprietary file formats and distributed through OGC Web Service standards. OGC Web services allow the ORNL DAAC to store data sets in a single format and distribute them in multiple ways and formats. Registering the OGC Web services through search catalogues and other spatial data tools publicizes the data sets and makes them more discoverable across the Internet. The ORNL DAAC has also created a Web-based graphical user interface called Spatial Data Access Tool (SDAT) that utilizes OGC Web service standards and allows data distribution and consumption for users not familiar with OGC standards. SDAT also allows users to visualize a data set prior to download; Google Earth visualizations of the data set are also provided through SDAT. The use of OGC Web service standards at the ORNL DAAC has enabled an increase in data consumption. In one case, a data set saw an approximately 10-fold increase in downloads through OGC Web services in comparison with the conventional FTP and WWW methods of access. The increase in downloads suggests that users are not only finding the data sets they need but are also able to consume them readily in the format they need.
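For readers unfamiliar with consuming OGC services, here is a minimal Python sketch using the OWSLib library; the service URL and layer name are placeholders, not the ORNL DAAC's actual endpoints.

```python
# Minimal WMS consumption sketch with OWSLib; URL and layer are placeholders.
from owslib.wms import WebMapService

wms = WebMapService("https://example.org/wms", version="1.1.1")
print(list(wms.contents))  # layers the service advertises

# Request a global PNG rendering of one layer.
img = wms.getmap(
    layers=["example_layer"],
    srs="EPSG:4326",
    bbox=(-180, -90, 180, 90),
    size=(600, 300),
    format="image/png",
    transparent=True,
)
with open("map.png", "wb") as f:
    f.write(img.read())
```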
4 CFR 201.3 - Publicly available documents and electronic reading room.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 4 Accounts 1 2011-01-01 2011-01-01 false Publicly available documents and electronic reading room. 201.3 Section 201.3 Accounts RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD PUBLIC INFORMATION AND REQUESTS § 201.3 Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web sit...
4 CFR 201.3 - Publicly available documents and electronic reading room.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 4 Accounts 1 2012-01-01 2012-01-01 false Publicly available documents and electronic reading room. 201.3 Section 201.3 Accounts RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD PUBLIC INFORMATION AND REQUESTS § 201.3 Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web sit...
4 CFR 201.3 - Publicly available documents and electronic reading room.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 4 Accounts 1 2013-01-01 2013-01-01 false Publicly available documents and electronic reading room. 201.3 Section 201.3 Accounts RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD PUBLIC INFORMATION AND REQUESTS § 201.3 Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web sit...
4 CFR 201.3 - Publicly available documents and electronic reading room.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 4 Accounts 1 2014-01-01 2013-01-01 true Publicly available documents and electronic reading room. 201.3 Section 201.3 Accounts RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD PUBLIC INFORMATION AND REQUESTS § 201.3 Publicly available documents and electronic reading room. (a) Many Board records are available electronically at the Board's Web site...
Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T
2015-01-01
Intelligent Patent Analysis Tool (IPAT) is an online data retrieval tool that operates on a text mining algorithm to extract specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patents and simultaneously studies various technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization will act as an excellent technology assessment tool in competitive intelligence and due diligence for forecasting future R&D.
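IPAT itself is written in C#; purely to illustrate the described workflow (pattern-based field extraction from patent text into an Excel sheet), here is a Python sketch with invented field patterns and input.

```python
# Illustrative sketch of the IPAT-style workflow (not the actual C# tool):
# regex field extraction from patent text rows written to an Excel sheet.
import re
from openpyxl import Workbook

# Hypothetical input records; real patent pages would need scraping/parsing.
records = [
    "Publication Number: US1234567B2 Assignee: Acme Corp Filing Date: 2013-04-02",
]

pattern = re.compile(
    r"Publication Number:\s*(\S+).*?Assignee:\s*(.+?)\s*Filing Date:\s*([\d-]+)"
)

wb = Workbook()
ws = wb.active
ws.append(["Publication Number", "Assignee", "Filing Date"])  # header row
for text in records:
    m = pattern.search(text)
    if m:
        ws.append(list(m.groups()))
wb.save("patent_landscape.xlsx")
```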
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piscotty, M A; Nazario, O L
2007-06-20
The objective of this project is the delivery of an application that will provide a unified, web-based system for collecting, verifying and analyzing the achievements of Laboratory employees. The application will enable individual Directorates to manage and report achievement record data for their employees using an LLNL standard web browser. In addition, cross-directorate data reporting and analysis will be available for such organizations as LSTO and programmatic directorates. This system is intended to store reference data and metadata for employee achievements. Abstracts and entire publications will not be stored in this system. Directorates are expected to use this system at all levels of management in preparing for Annual Self-Assessments, peer reviews, LDRD reviews, work force reviews, performance appraisals, and requests from sponsors. This document represents the primary deliverable for the Requirements Definition stage of system development. As part of a successful Requirements Definition, this document provides the development staff, the project sponsor, and the user community with a clear understanding of the product's operational, data, and other requirements. With this understanding, the development staff will take the opportunity to refine estimates regarding the cost, schedule, and deliverables reflected in it.
Exploring Remote Sensing Products Online with Giovanni for Studying Urbanization
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina; Kempler, Steve
2012-01-01
Recently, a large set of MODIS land products at multiple spatial resolutions has been integrated into the online system, Giovanni, to support studies of land cover and land use changes focused on the Northern Eurasia and Monsoon Asia regions. Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center (GES-DISC) that provides a simple and intuitive way to visualize, analyze, and access Earth science remotely sensed and modeled data. The customized Giovanni Web portals (Giovanni-NEESPI and Giovanni-MAIRS) integrate land, atmospheric, cryospheric, and social products, enabling researchers to do quick exploration and basic analyses of land surface changes and their relationships to climate at global and regional scales. This presentation documents MODIS land surface products in the Giovanni system. As examples, images and statistical analysis results on land surface and local climate changes associated with urbanization over the Yangtze River Delta region, China, using data in Giovanni are shown.
Webizing mobile augmented reality content
NASA Astrophysics Data System (ADS)
Ahn, Sangchul; Ko, Heedong; Yoo, Byounghyun
2014-01-01
This paper presents a content structure for building mobile augmented reality (AR) applications in HTML5 that achieves a clean separation of the mobile AR content from the application logic, so that it scales as on the Web. We propose that the content structure contain the physical world as well as virtual assets for mobile AR applications as document object model (DOM) elements, and that their behaviour and user interactions be controlled through DOM events, by representing objects and places with a uniform resource identifier. Our content structure enables mobile AR applications to be seamlessly developed as normal HTML documents under the current Web eco-system.
Phylowood: interactive web-based animations of biogeographic and phylogeographic histories.
Landis, Michael J; Bedford, Trevor
2014-01-01
Phylowood is a web service that uses JavaScript to generate in-browser animations of biogeographic and phylogeographic histories from annotated phylogenetic input. The animations are interactive, allowing the user to adjust spatial and temporal resolution, and highlight phylogenetic lineages of interest. All documentation and source code for Phylowood is freely available at https://github.com/mlandis/phylowood, and a live web application is available at https://mlandis.github.io/phylowood.
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. The Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes. Results We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence. Conclusion We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
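The paper's proto-ontologies are OWL models of its own design; as a loose illustration of the general approach of storing an extracted relation together with a link to its textual evidence, here is a toy RDF sketch using rdflib, with an invented namespace and terms.

```python
# Toy sketch of storing one extracted biological relation with evidence,
# in the spirit of the proto-ontologies described above. The namespace and
# terms are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/hypothesis#")
g = Graph()
g.bind("ex", EX)

relation = EX.rel1
g.add((relation, RDF.type, EX.PutativeRelation))
g.add((relation, EX.subject, EX.NFKB))
g.add((relation, EX.predicate, Literal("regulates")))
g.add((relation, EX.object, EX.p21))
g.add((relation, EX.evidence, Literal("sentence extracted from a PubMed abstract ...")))

print(g.serialize(format="turtle"))
```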
Web Standard: PDF - When to Use, Document Metadata, PDF Sections
PDF files provide some benefits when used appropriately. PDF files should not be used for short documents (fewer than 5 pages) unless retaining the format for printing is important. PDFs should have internal file metadata and meet Section 508 standards.
2016 e-CDRweb User Guide – Secondary Authorized Official
This document is the user guide for the Secondary Authorized Official (AO) user of the Office of Pollution Prevention and Toxics’ (OPPT) 2016 e-CDRweb tool.
77 FR 42197 - Small Business Size Standards: Construction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
... ``exception'') under NAICS 237990, Other Heavy and Civil Engineering Construction, from $20 million to $30... available on its Web site at www.sba.gov/size for public review and comments. The ``Size Standards... developing, reviewing, and modifying size standards when necessary. SBA published the document on its Web...
Interactive Information Organization: Techniques and Evaluation
2001-05-01
information search and access. Locating interesting information on the World Wide Web is the main task of on-line search engines. Such engines accept a...likelihood of being relevant to the user’s request. The majority of today’s Web search engines follow this scenario. The ordering of documents in the
77 FR 26321 - Virginia Electric and Power Company
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-03
... NUCLEAR REGULATORY COMMISSION [Docket Nos. 50-338 and 50-339; NRC-2012-0051; License Nos. NPF-4...: Federal Rulemaking Web Site: Go to http://www.regulations.gov and search for Docket ID NRC-2012-0051... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems...
77 FR 31917 - Energy Conservation Program: Energy Conservation Standards for Residential Dishwashers
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-30
... the docket Web page can be found at: http://www.regulations.gov/#!docketDetail ;D=EERE-2011-BT-STD-0060. The regulations.gov Web page contains instructions on how to access all documents, including...: (202) 586-7796. Email: [email protected] . SUPPLEMENTARY INFORMATION: Table of Contents I...
32 CFR 701.102 - Online resources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 5 2011-07-01 2011-07-01 false Online resources. 701.102 Section 701.102... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.102 Online resources. (a) Navy PA online Web site (http://www.privacy.navy.mil). This Web site supplements this subpart and subpart G. It...
32 CFR 701.102 - Online resources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 32 National Defense 5 2014-07-01 2014-07-01 false Online resources. 701.102 Section 701.102... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.102 Online resources. (a) Navy PA online Web site (http://www.privacy.navy.mil). This Web site supplements this subpart and subpart G. It...
32 CFR 701.102 - Online resources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 32 National Defense 5 2012-07-01 2012-07-01 false Online resources. 701.102 Section 701.102... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.102 Online resources. (a) Navy PA online Web site (http://www.privacy.navy.mil). This Web site supplements this subpart and subpart G. It...
32 CFR 701.102 - Online resources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 32 National Defense 5 2013-07-01 2013-07-01 false Online resources. 701.102 Section 701.102... THE NAVY DOCUMENTS AFFECTING THE PUBLIC DON Privacy Program § 701.102 Online resources. (a) Navy PA online Web site (http://www.privacy.navy.mil). This Web site supplements this subpart and subpart G. It...
This April 2004 document is a table that details the various requirements of the Paper and Other Web Coating NESHAP, broken down by category. This table covers applicability, recordkeeping, emission limits, work practice standards, and other requirements
World Wide Web Page Design: A Structured Approach.
ERIC Educational Resources Information Center
Gregory, Gwen; Brown, M. Marlo
1997-01-01
Describes how to develop a World Wide Web site based on structured programming concepts. Highlights include flowcharting, first page design, evaluation, page titles, documenting source code, text, graphics, and browsers. Includes a template for HTML writers, tips for using graphics, a sample homepage, guidelines for authoring structured HTML, and…
Viewing Files — EDRN Public Portal
In addition to standard HTML Web pages, our web site contains other file formats. You may need additional software or browser plug-ins to view some of the information available on our site. This document lists each format, along with links to the corresponding freely available plug-ins or viewers.
Wikis and Collaborative Inquiry
ERIC Educational Resources Information Center
Lamb, Annette; Johnson, Larry
2009-01-01
Wikis are simply Web sites that provide easy-to-use tools for creating, editing, and sharing digital documents, images, and media files. Multiple participants can enter, submit, manage, and update a single Web workspace creating a community of authors and editors. Wiki projects help young people shift from being "consumers" of the Internet to…
Academic Research Integration System
ERIC Educational Resources Information Center
Surugiu, Iula; Velicano, Manole
2008-01-01
This paper presents results of the research activity conducted so far on enhanced web services and system integration. The objective of the paper is to define the software architecture for a coherent framework and methodology for enhancing existing web services into an integrated system. This document presents the research work that has…
76 FR 43960 - NARA Records Reproduction Fees
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
... transferred to NARA and maintain its fee schedule on NARA's Web site http://www.archives.gov . The proposed... document is faint or too dark, it requires additional time to obtain a readable image. In TABLE 1 below... our Web site ( http://www.archives.gov ) annually when announcing that records reproduction fees will...
Biomedical Online Learning: The Route to Success
ERIC Educational Resources Information Center
Harvey, Patricia J.; Cookson, Barry; Meerabeau, Elizabeth; Muggleston, Diana
2003-01-01
The potential of the World Wide Web for rapid global communication is driving the creation of specifically tailored courses for employees, yet few practitioners have the necessary experience in on-line teaching methods, or in preparing documents for the web. Experience gained in developing six online training modules for the biotechnology and…
WorldWide Web: Hypertext from CERN.
ERIC Educational Resources Information Center
Nickerson, Gord
1992-01-01
Discussion of software tools for accessing information on the Internet focuses on the WorldWideWeb (WWW) system, which was developed at the European Particle Physics Laboratory (CERN) in Switzerland to build a worldwide network of hypertext links using available networking technology. Its potential for use with multimedia documents is also…
Deep pelagic food web structure as revealed by in situ feeding observations.
Choy, C Anela; Haddock, Steven H D; Robison, Bruce H
2017-12-06
Food web linkages, or the feeding relationships between species inhabiting a shared ecosystem, are an ecological lens through which ecosystem structure and function can be assessed, and thus are fundamental to informing sustainable resource management. Empirical feeding datasets have traditionally been painstakingly generated from stomach content analysis, direct observations, and biochemical trophic markers (stable isotopes, fatty acids, molecular tools). Each approach carries inherent biases and limitations, as well as advantages. Here, using 27 years (1991-2016) of in situ feeding observations collected by remotely operated vehicles (ROVs), we quantitatively characterize the deep pelagic food web of central California within the California Current, complementing existing studies of diet and trophic interactions with a unique perspective. Seven hundred and forty-three independent feeding events were observed with ROVs from near-surface waters down to depths approaching 4000 m, involving an assemblage of 84 different predators and 82 different prey types, for a total of 242 unique feeding relationships. The greatest diversity of prey was consumed by narcomedusae, followed by physonect siphonophores, ctenophores and cephalopods. We highlight key interactions within the poorly understood 'jelly web', showing the importance of medusae, ctenophores and siphonophores as key predators, whose ecological significance is comparable to that of large fish and squid species within the central California deep pelagic food web. Gelatinous predators are often thought to comprise relatively inefficient trophic pathways within marine communities, but we build upon previous findings to document their substantial and integral roles in deep pelagic food webs. © 2017 The Authors.
Interactive metagenomic visualization in a Web browser.
Ondov, Brian D; Bergman, Nicholas H; Phillippy, Adam M
2011-09-30
A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license from http://krona.sourceforge.net.
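Krona ships command-line importers; assuming a local Krona installation with its documented text importer (ktImportText), a minimal workflow might look like the following, with invented abundance rows.

```python
# Sketch: write a simple tab-delimited Krona input (magnitude followed by
# lineage levels) and invoke ktImportText to produce a standalone interactive
# HTML chart. Assumes Krona is installed and on the PATH; rows are invented.
import subprocess

rows = [
    (120, ["Bacteria", "Proteobacteria", "Escherichia"]),
    (45,  ["Bacteria", "Firmicutes", "Bacillus"]),
    (10,  ["Archaea", "Euryarchaeota"]),
]

with open("abundances.txt", "w") as f:
    for count, lineage in rows:
        f.write("\t".join([str(count), *lineage]) + "\n")

subprocess.run(["ktImportText", "abundances.txt", "-o", "krona_chart.html"], check=True)
```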
cPath: open source software for collecting, storing, and querying biological pathways.
Cerami, Ethan G; Bader, Gary D; Gross, Benjamin E; Sander, Chris
2006-11-13
Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling.
NASA Technical Reports Server (NTRS)
Scudder, J. D.; Hall, Van Allen
1998-01-01
The science activities are: 1) Hydra is still operating successfully on orbit. 2) A large amount of analysis and discovery has occurred with the Hydra ground data processing this past year. 3) Full interdetector calibration has been implemented and documented. This intercalibration was necessitated by the incorrect installation of bias resistors in the pre-acceleration stage to the electron channeltrons. This had the effect of making the counting efficiency for electrons energy dependent as well as channeltron specific. The nature of the error had no impact on the ion detection efficiency since the ions have a different bias arrangement. This intercalibration is so effective that the electron and ion moment densities are routinely produced with a level of agreement better than 20%. 4) The data processing routinely removes glint in the sensors and produces public energy time spectrograms on the web overnight. 5) Routine but more computationally intensive processing codes are operational that determine, for electrons and ions, the density, flow vector, pressure tensor, and heat flux by numerical integration. These codes use the magnetic field to sustain the quality of their output. To gain access to this high-quality magnetic field within our data stream, we have monitored Russell's web page for zero levels and timing files (since his data acquisition is not telemetry synchronous) and have a local reconstruction of B for our use. We have also detected a routine anomaly in the magnetometer data stream that we have documented to Chris Russell and developed an editing algorithm to intercept these "hits" and remove them from the geophysical analysis.
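As a rough illustration of the moment computation by numerical integration mentioned above (not the Hydra production code; the grid, distribution, and normalization are synthetic), the zeroth and first moments of a gridded velocity distribution can be approximated like this:

```python
# Toy numerical moment integration over a gridded velocity distribution
# f(vx, vy, vz): density is the zeroth moment, bulk flow the first.
import numpy as np

v = np.linspace(-2.0e6, 2.0e6, 61)                  # velocity grid per axis (m/s)
dv = v[1] - v[0]
VX, VY, VZ = np.meshgrid(v, v, v, indexing="ij")
f = np.exp(-(VX**2 + VY**2 + VZ**2) / (2 * (4.0e5) ** 2))  # toy Maxwellian

def moment(q):
    """Riemann-sum approximation of the integral of q over velocity space."""
    return q.sum() * dv**3

n = moment(f)              # zeroth moment: (unnormalized) number density
ux = moment(VX * f) / n    # first moment: x bulk flow (~0 for this symmetric f)
print(n, ux)
```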
Electronic Derivative Classifier/Reviewing Official
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Joshua C; McDuffie, Gregory P; Light, Ken L
2017-02-17
The electronic Derivative Classifier/Reviewing Official (eDC/RO) is a web-based document management and routing system that reduces security risks and increases workflow efficiencies. The system automates the upload, notification, review request, and document status tracking of documents for classification review on a secure server. It supports a variety of document formats (e.g., pdf, doc, docx, xls, xlsx, xlsm, ppt, pptx, vsd, vsdx and txt), and allows for the dynamic placement of classification markings such as the classification level, category and caveats on the document, in addition to a document footer and digital signature.
Documenting the use of computers in Swedish Health Care up to 1980.
Peterson, H E; Lundin, P
2011-01-01
This paper describes a documentation project to create, collect and preserve previously unavailable sources on informatics in Sweden (including health care as one of 16 subgroups), and to make them available on the Web. Time was critical, as the personal documentation and artifacts of early pioneers could be irretrievably lost. The criteria for participation were that a person had developed a system in a clinical environment which was used by others prior to 1980. Participants were interviewed and asked for early documentation such as notes, minutes from meetings, drawings, test results and early models - together with related artifacts. The approach included traditional oral history interviews, collection of autobiographies and new self-structuring and time-saving methods, such as witness seminars and an Internet-based repository of their recollections (the Writers' Web). The combination of methods yielded new information on system errors and on challenges in reaching the goals, due partly to inadequacies of the early technology and partly to insufficient understanding of the complexity of the many problems which needed to be solved before a useful electronic patient record could be realized. A very important result was the development of a method to collect information in an easier, faster and much less expensive way than the traditional scientific method, while still reaching results that are qualitatively and quantitatively adequate for documenting the early period of computer-based health care technology. The witness seminars and the Writers' Web yielded especially large amounts of hitherto-unknown information. With all material in one database available to everyone on the Web, it is accessed very frequently - especially by students, researchers, journalists and teachers. Study of the materials explains and clarifies the reasons behind the delays and difficulties that have been encountered in developing electronic patient records, as described in an article [3] published in the IMIA Yearbook 2006.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-17
... this document and all documents entered into this docket is available on the World Wide Web at http... individual submitting the comment (or signing the comment, if submitted on behalf of an association, business...
de Belvis, A G; Biasco, A; Pelone, F; Romaniello, A; De Micco, F; Volpe, M; Ricciardi, W
2009-01-01
The objective of our research is to report on the diffusion of Clinical Governance, as introduced with the National Health Plan 2006-2008, by analysing the planning instruments set up by each Region (Regional Health Plans and Emergency Plans in regions with budget deficits), the organizational frameworks (Atti Aziendali, firm acts), and the surveys on performance and quality of healthcare among the Italian Local Health Units (Health Surveys). Our research was carried out in September-December 2007 and consisted of collecting all retrievable documents available on the web and in the online public access catalog (OPAC SBN) of the National Library Service. Furthermore, each document was classified and analysed according to Chambers' definition of Clinical Governance. Descriptive statistics and an inferential analysis applying the chi-squared test were performed to test the correlation between the diffusion of the classified documents and the geographical partition of each LHU. Our results show a scarce diffusion of firm acts (43%) and Health Surveys (24.9% of the total). References to Clinical Governance instruments and methods within each document were even scarcer, in both the organizational and performance surveys and the regional health planning frameworks.
Suplatov, Dmitry; Kirilin, Eugeny; Arbatsky, Mikhail; Takhaveev, Vakil; Švedas, Vytas
2014-01-01
The new web-server pocketZebra implements the power of bioinformatics and geometry-based structural approaches to identify and rank subfamily-specific binding sites in proteins by functional significance, and to select particular positions in the structure that determine selective accommodation of ligands. A new scoring function has been developed to annotate binding sites by the presence of the subfamily-specific positions in diverse protein families. The pocketZebra web-server has multiple input modes to meet the needs of users with different levels of experience in bioinformatics. The server provides on-site visualization of the results as well as an off-line version of the output in annotated text format and as PyMol sessions ready for structural analysis. pocketZebra can be used to study structure–function relationships and regulation in large protein superfamilies, classify functionally important binding sites and annotate proteins with unknown function. The server can be used to engineer ligand-binding sites and allosteric regulation of enzymes, or implemented in a drug discovery process to search for potential molecular targets and novel selective inhibitors/effectors. The server, documentation and examples are freely available at http://biokinet.belozersky.msu.ru/pocketzebra and there are no login requirements. PMID:24852248
ERIC Educational Resources Information Center
Herrera-Viedma, Enrique; Peis, Eduardo
2003-01-01
Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)
Generalized Intelligent Framework for Tutoring (GIFT) Cloud/Virtual Open Campus Quick-Start Guide
2016-03-01
This document serves as the quick-start guide for GIFT Cloud, the web-based...to users with a GIFT Account at no cost. GIFT Cloud is a new implementation of GIFT. This web-based application allows learners, authors, and...Requirements for GIFT Cloud: GIFT Cloud is accessed via a web browser. Officially, GIFT Cloud has been tested to work on
Web Application Design Using Server-Side JavaScript
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-02-01
This document describes the application design philosophy for the Comprehensive Nuclear Test Ban Treaty Research & Development Web Site. This design incorporates object-oriented techniques to produce a flexible and maintainable system of applications that support the web site. These techniques will be discussed at length along with the issues they address. The overall structure of the applications and their relationships with one another will also be described. The current problems and future design changes will be discussed as well.
Migration of the ATLAS Metadata Interface (AMI) to Web 2.0 and cloud
NASA Astrophysics Data System (ADS)
Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.
2015-12-01
The ATLAS Metadata Interface (AMI), a mature application with more than 10 years of existence, is currently being adapted to some recently available technologies. The web interfaces, which previously manipulated XML documents using XSL transformations, are being migrated to Asynchronous JavaScript (AJAX). Web development is considerably simplified by the introduction of a framework based on jQuery and Twitter Bootstrap. Finally, the AMI services are being migrated to an OpenStack cloud infrastructure.
NASA Astrophysics Data System (ADS)
Allebach, J. P.; Ortiz Segovia, Maria; Atkins, C. Brian; O'Brien-Strain, Eamonn; Damera-Venkata, Niranjan; Bhatti, Nina; Liu, Jerry; Lin, Qian
2010-02-01
Businesses have traditionally relied on different types of media to communicate with existing and potential customers. With the emergence of the Web, the relation between the use of print and electronic media has continually evolved. In this paper, we investigate one possible scenario that combines the use of the Web and print. Specifically, we consider the scenario where a small- or medium-sized business (SMB) has an existing web site from which they wish to pull content to create a print piece. Our assumption is that the web site was developed by a professional designer, working in conjunction with the business owner or marketing team, and that it contains a rich assembly of content that is presented in an aesthetically pleasing manner. Our goal is to understand the process that a designer would follow to create an effective and aesthetically pleasing print piece. We are particularly interested to understand the choices made by the designer with respect to placement and size of the text and graphic elements on the page. Toward this end, we conducted an experiment in which professional designers worked with SMBs to create print pieces from their respective web pages. In this paper, we report our findings from this experiment, and examine the underlying conclusions regarding the resulting document aesthetics in the context of the existing design, engineering, and computer science literatures that address this topic.
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain-specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that, at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access, the tool was redesigned by exploring component technologies, enabling the integration of stand-alone domain-specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real-time wavelet-based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real-time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
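To illustrate the kind of wavelet-based denoising described, here is a sketch using PyWavelets rather than the authors' implementation; the wavelet, decomposition level, and threshold are illustrative choices, and the input image is synthetic.

```python
# Sketch of wavelet-based denoising: decompose, soft-threshold the detail
# coefficients, and reconstruct. Not the paper's actual implementation.
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db2", level=3, thresh=10.0):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Keep the approximation; soft-threshold only the detail coefficients.
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

mammogram = np.random.rand(256, 256) * 255  # stand-in for a digitized film
clean = wavelet_denoise(mammogram)
```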
NASA Astrophysics Data System (ADS)
Pierce, S. A.; Gentle, J.
2015-12-01
The multi-criteria decision support system (MCSDSS) is a newly completed application for touch-enabled group decision support that uses D3 data visualization tools, a geojson conversion utility that we developed, and Paralelex to create an interactive tool. The MCSDSS is a prototype system intended to demonstrate the potential capabilities of a single page application (SPA) running atop a web and cloud based architecture utilizing open source technologies. The application is implemented on current web standards while supporting human interface design that targets both traditional mouse/keyboard interactions and modern touch/gesture enabled interactions. The technology stack for MCSDSS was selected with the goal of creating a robust and dynamic modular codebase that can be adjusted to fit many use cases and scale to support usage loads that range from simple data display to complex scientific simulation-based modelling and analytics. The application integrates current frameworks for highly performant agile development with unit testing, statistical analysis, data visualization, mapping technologies, geographic data manipulation, and cloud infrastructure, while retaining support for traditional HTML5/CSS3 web standards. The software lifecycle for MCSDSS has followed best practices for developing, sharing, and documenting the codebase and application. Code is documented and shared via an online repository with the option for programmers to see, contribute to, or fork the codebase. Example data files and tutorial documentation have been shared with clear descriptions and data object identifiers. The metadata about the application has also been incorporated into an OntoSoft entry to ensure that MCSDSS is searchable and clearly described. MCSDSS is a flexible platform that allows for data fusion and inclusion of large datasets in an interactive front-end application capable of connecting with other science-based applications and advanced computing resources. In addition, MCSDSS offers functionality that enables communication with non-technical users for policy, education, or engagement with groups around scientific topics with societal relevance.
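The geojson conversion step can be sketched in a few lines of Python; the input records below are invented, but the GeoJSON FeatureCollection structure is standard and is exactly what D3 or a mapping library can consume.

```python
# Sketch of a geojson conversion utility: wrap tabular point records as a
# standard GeoJSON FeatureCollection. Input records are invented.
import json

records = [
    {"name": "Site A", "lon": -97.74, "lat": 30.27, "score": 0.82},
    {"name": "Site B", "lon": -97.70, "lat": 30.30, "score": 0.41},
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
        "properties": {"name": r["name"], "score": r["score"]},
    }
    for r in records
]

with open("sites.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)
```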
Yang, Xiaofeng
2012-08-05
To identify global research trends in stem cell transplantation for treating Duchenne muscular dystrophy, we performed a bibliometric analysis of studies published from 2002 to 2011 and retrieved from Web of Science. Inclusion criteria: (a) peer-reviewed published articles on stem cell transplantation for treating Duchenne muscular dystrophy indexed in Web of Science; (b) original research articles, reviews, meeting abstracts, proceedings papers, book chapters, editorial material, and news items; and (c) publication between 2002 and 2011. Exclusion criteria: (a) articles that required manual searching or telephone access; (b) documents that were not published in the public domain; and (c) corrected papers. Outcome measures: (1) annual publication output; (2) distribution according to subject areas; (3) distribution according to journals; (4) distribution according to country; (5) distribution according to institution; (6) distribution according to institution in China; (7) distribution according to institutions that cooperated with Chinese institutions; (8) top-cited articles from 2002 to 2006; (9) top-cited articles from 2007 to 2011. A total of 318 publications on stem cell transplantation for treating Duchenne muscular dystrophy were retrieved from Web of Science from 2002 to 2011, of which almost half derived from American authors and institutes. The number of publications has gradually increased over the past 10 years. Most papers appeared in journals with a focus on gene and molecular research, such as Molecular Therapy, Neuromuscular Disorders, and PLoS One. The 10 most-cited papers from 2002 to 2006 were mostly about different kinds of stem cell transplantation for muscle regeneration, while the 10 most-cited papers from 2007 to 2011 were mostly about new techniques of stem cell transplantation for treating Duchenne muscular dystrophy. The publications on stem cell transplantation for treating Duchenne muscular dystrophy were relatively few, and more research is needed to confirm that stem cell therapy is a reliable treatment for Duchenne muscular dystrophy.
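One of the simpler tallies in such an analysis, annual publication output, can be sketched with pandas; the file name is a placeholder, and "PY" is assumed here to be the Web of Science field tag for publication year in the export.

```python
# Sketch of one bibliometric tally: annual publication output from a
# Web of Science export. File name and column tag are assumptions.
import pandas as pd

df = pd.read_csv("wos_export.csv")
annual_output = df["PY"].value_counts().sort_index()
print(annual_output)
```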
Software Tools Streamline Project Management
NASA Technical Reports Server (NTRS)
2009-01-01
Three innovative software inventions from Ames Research Center (NETMARK, Program Management Tool, and Query-Based Document Management) are finding their way into NASA missions as well as industry applications. The first, NETMARK, is a program that enables integrated searching of data stored in a variety of databases and documents, meaning that users no longer have to look in several places for related information. NETMARK allows users to search and query information across all of these sources in one step. This cross-cutting capability in information analysis has exponentially reduced the amount of time needed to mine data from days or weeks to mere seconds. NETMARK has been used widely throughout NASA, enabling this automatic integration of information across many documents and databases. NASA projects that use NETMARK include the internal reporting system and project performance dashboard; Erasmus, NASA's enterprise management tool, which enhances organizational collaboration and information sharing through document routing and review; the Integrated Financial Management Program; International Space Station Knowledge Management; Mishap and Anomaly Information Reporting System; and management of the Mars Exploration Rovers. Approximately $1 billion worth of NASA's projects are currently managed using Program Management Tool (PMT), which is based on NETMARK. PMT is a comprehensive, Web-enabled application tool used to assist program and project managers within NASA enterprises in monitoring, disseminating, and tracking the progress of program and project milestones and other relevant resources. The PMT consists of an integrated knowledge repository built upon advanced enterprise-wide database integration techniques and the latest Web-enabled technologies. The current system is in a pilot operational mode allowing users to automatically manage, track, define, update, and view customizable milestone objectives and goals. The third software invention, Query-Based Document Management (QBDM), is a tool that enables content or context searches, either simple or hierarchical, across a variety of databases. The system enables users to specify notification subscriptions where they associate "contexts of interest" and "events of interest" to one or more documents or collection(s) of documents. Based on these subscriptions, users receive notification when the events of interest occur within the contexts of interest for associated document or collection(s) of documents. Users can also associate at least one notification time as part of the notification subscription, with at least one option for the time period of notifications.
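The QBDM subscription model described above can be sketched in a few lines; this is a toy illustration of the concept, not NASA's implementation, and all names and fields are hypothetical.

```python
# Toy sketch of QBDM-style notification subscriptions: a user registers a
# context and an event of interest for a document collection and is notified
# when a matching event occurs. Names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Subscription:
    user: str
    collection: str
    context: str   # e.g., a section or topic within the documents
    event: str     # e.g., "modified", "new_revision"

subs = [Subscription("ana", "mission-reports", "thermal analysis", "modified")]

def notify(collection, context, event):
    for s in subs:
        if (s.collection, s.context, s.event) == (collection, context, event):
            print(f"notify {s.user}: {event} in '{context}' of {collection}")

notify("mission-reports", "thermal analysis", "modified")
```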
This February 2003 document contains a diagram of dates and events for compliance with the NESHAP for Paper and Other Web Coating. Also on this page is an April 2004 flow chart to determine if the NESHAP applies to your facility.
77 FR 33786 - NRC Enforcement Policy Revision
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2011... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems... either 2.3.2.a. or b. must be met for the disposition of a violation as an NCV.'' The following new...
Documenting historical data and accessing it on the World Wide Web
Malchus B. Baker; Daniel P. Huebner; Peter F. Ffolliott
2000-01-01
New computer technologies facilitate the storage, retrieval, and summarization of watershed-based data sets on the World Wide Web. These data sets are used by researchers when testing and validating predictive models, managers when planning and implementing watershed management practices, educators when learning about hydrologic processes, and decisionmakers when...
The Full Monty: Locating Resources, Creating, and Presenting a Web Enhanced History Course.
ERIC Educational Resources Information Center
Bazillion, Richard J.; Braun, Connie L.
2001-01-01
Discusses how to develop a history course using the World Wide Web; course development software; full text digitized articles, electronic books, primary documents, images, and audio files; and computer equipment such as LCD projectors and interactive whiteboards. Addresses the importance of support for faculty using technology in teaching. (PAL)
Does Interface Matter? A Study of Web Authoring and Editing by Inexperienced Web Writers
ERIC Educational Resources Information Center
Dick, Rodney F.
2006-01-01
This study explores the complicated nature of the interface as a mediational tool for inexperienced writers as they composed hypertext documents. Because technology can become so quickly and inextricably connected to people's everyday lives, it is essential to explore the effects of these technologies before they become invisible. Because…
2003-07-01
Technical Report: Web-Based Interactive Electronic Technical Manual (IETM) Common User Interface Style Guide, Version 2.0 – July 2003, by L. John Junod ... The principal authors of this document were: John Junod – NSWC, Carderock Division; Phil Deuell – AMSEC LLC; Kathleen Moore
The New Frontier: Conquering the World Wide Web by Mule.
ERIC Educational Resources Information Center
Gresham, Morgan
1999-01-01
Examines effects of teaching hypertext markup language on students' perceptions of class goals in a networked composition classroom. Suggests that sending documents via file transfer protocol by command line and viewing the Web with a textual browser shifted emphasis from writing to coding. Argues that helping students identify a balance between…
World Wide Web Server Standards and Guidelines.
ERIC Educational Resources Information Center
Stubbs, Keith M.
This document defines the specific standards and general guidelines which the U.S. Department of Education (ED) will use to make information available on the World Wide Web (WWW). The purpose of providing such guidance is to ensure high quality and consistent content, organization, and presentation of information on ED WWW servers, in order to…
78 FR 7818 - Duane Arnold Energy Center; Application for Amendment to Facility Operating License
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2013... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems... INFORMATION CONTACT: Karl D. Feintuch, Project Manager, Office of Nuclear Reactor Regulation, U.S. Nuclear...
77 FR 67837 - Callaway Plant, Unit 1; Application for Amendment to Facility Operating License
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-14
... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2012... search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems... INFORMATION CONTACT: Carl F. Lyon, Project Manager, Office of Nuclear Reactor Regulation, U.S. Nuclear...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-17
... publicly available, such as information that is exempt from public disclosure. A link to the docket Web.... The regulations.gov Web page will contain simple instructions on how to access all documents...: (202) 287-6307. Email: [email protected] . SUPPLEMENTARY INFORMATION: Table of Contents I. Summary...
Some Thoughts on Free Textbooks
ERIC Educational Resources Information Center
Stewart, Robert
2009-01-01
The author publishes and freely distributes three online textbooks. "Introduction to Physical Oceanography" is available as a typeset book in Portable Document Format (PDF) or as web pages. "Our Ocean Planet: Oceanography in the 21st Century" and "Environmental Science in the 21st Century" are both available as web pages. All three books, which…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-22
... Order Imposing Procedures for Access to Sensitive Unclassified Non-Safeguards Information for Contention... related to the license renewal application using any of the following methods: Federal Rulemaking Web site..., select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... Amendment to Facility Operating License, Proposed No Significant Hazards Consideration Determination, and Opportunity for a Hearing and Order Imposing Procedures for Document Access to Sensitive Unclassified Non... on the NRC Web site and on the Federal rulemaking Web site, http://www.regulations.gov . Because your...
Engineering a Multi-Purpose Test Collection for Web Retrieval Experiments.
ERIC Educational Resources Information Center
Bailey, Peter; Craswell, Nick; Hawking, David
2003-01-01
Describes a test collection that was developed as a multi-purpose testbed for experiments on the Web in distributed information retrieval, hyperlink algorithms, and conventional ad hoc retrieval. Discusses inter-server connectivity, integrity of server holdings, inclusion of documents related to a wide spread of likely queries, and distribution of…
The Status of African Studies Digitized Content: Three Metadata Schemes.
ERIC Educational Resources Information Center
Kuntz, Patricia S.
The proliferation of Web pages and digitized material mounted on Internet servers has become unmanageable. Librarians and users are concerned that documents and information are being lost in cyberspace as a result of few bibliographic controls and common standards. Librarians in cooperation with software creators and Web page designers are…
Now That We've Found the "Hidden Web," What Can We Do with It?
ERIC Educational Resources Information Center
Cole, Timothy W.; Kaczmarek, Joanne; Marty, Paul F.; Prom, Christopher J.; Sandore, Beth; Shreeves, Sarah
The Open Archives Initiative (OAI) Protocol for Metadata Harvesting (PMH) is designed to facilitate discovery of the "hidden web" of scholarly information, such as that contained in databases, finding aids, and XML documents. OAI-PMH supports standardized exchange of metadata describing items in disparate collections, such as those…
MetaSpider: Meta-Searching and Categorization on the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Fan, Haiyan; Chau, Michael; Zeng, Daniel
2001-01-01
Discusses the difficulty of locating relevant information on the Web and studies two approaches to addressing the low precision and poor presentation of search results: meta-search and document categorization. Introduces MetaSpider, a meta-search engine, and presents results of a user evaluation study that compared three search engines.…
Finding Information on the World Wide Web: The Retrieval Effectiveness of Search Engines.
ERIC Educational Resources Information Center
Pathak, Praveen; Gordon, Michael
1999-01-01
Describes a study that examined the effectiveness of eight search engines for the World Wide Web. Calculated traditional information-retrieval measures of recall and precision at varying numbers of retrieved documents to use as the bases for statistical comparisons of retrieval effectiveness. Also examined the overlap between search engines.…
UFOs, NGOs, or IGOs: Using International Documents for General Reference.
ERIC Educational Resources Information Center
Shreve, Catherine
1997-01-01
Discusses accessing and using documents from international (intergovernmental) organizations. Profiles the United Nations, the European Union and other Intergovernmental Organizations (IGOs). Discusses the librarian as "Web detective," notes questions to focus on, and presents examples to demonstrate navigation of IGO sites. Lists basic…
Relevance of Web Documents: Ghosts Consensus Method.
ERIC Educational Resources Information Center
Gorbunov, Andrey L.
2002-01-01
Discusses how to improve the quality of Internet search systems and introduces the Ghosts Consensus Method, which is free from the drawbacks of digital democracy algorithms and is based on linear programming tasks. Highlights include vector space models; determining relevant documents; and enriching query terms. (LRW)
Linking to EPA Publications in the National Service Center for Environmental Publications (NSCEP)
Linking to a document at NSCEP rather than uploading your own copy meets EPA standards and best practices for web content. If you follow this procedure, you can link directly to the PDF document without NSCEP's viewing pane or navigation.
10 CFR 2.1303 - Availability of documents.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING PROCEEDINGS AND ISSUANCE OF ORDERS Procedures for Hearings on License Transfer Applications § 2.1303 Availability of documents. Unless exempt... for a license transfer requiring Commission approval will be placed at the NRC Web site, http://www...
10 CFR 2.1303 - Availability of documents.
Code of Federal Regulations, 2010 CFR
2010-01-01
... NUCLEAR REGULATORY COMMISSION RULES OF PRACTICE FOR DOMESTIC LICENSING PROCEEDINGS AND ISSUANCE OF ORDERS Procedures for Hearings on License Transfer Applications § 2.1303 Availability of documents. Unless exempt... for a license transfer requiring Commission approval will be placed at the NRC Web site, http://www...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, John E. M.; Brennan, James S.; Brubaker, Erik
A wide range of NSC (Neutron Scatter Camera) activities were conducted under this lifecycle plan. This document outlines the highlights of those activities, broadly characterized as system improvements, laboratory measurements, and deployments, and presents sample results in these areas. Additional information can be found in the documents that reside in WebPMIS.
Hu, Xiangen; Graesser, Arthur C
2004-05-01
The Human Use Regulatory Affairs Advisor (HURAA) is a Web-based facility that provides help and training on the ethical use of human subjects in research, based on documents and regulations in United States federal agencies. HURAA has a number of standard features of conventional Web facilities and computer-based training, such as hypertext, multimedia, help modules, glossaries, archives, links to other sites, and page-turning didactic instruction. HURAA also has these intelligent features: (1) an animated conversational agent that serves as a navigational guide for the Web facility, (2) lessons with case-based and explanation-based reasoning, (3) document retrieval through natural language queries, and (4) a context-sensitive Frequently Asked Questions segment, called Point & Query. This article describes the functional learning components of HURAA, specifies its computational architecture, and summarizes empirical tests of the facility on learners.
Semantic enrichment of medical forms - semi-automated coding of ODM-elements via web services.
Breil, Bernhard; Watermann, Andreas; Haas, Peter; Dziuballe, Philipp; Dugas, Martin
2012-01-01
Semantic interoperability is an unsolved problem which occurs while working with medical forms from different information systems or institutions. Standards like ODM or CDA assure structural homogenization, but in order to compare elements from different data models it is necessary to use semantic concepts and codes at the item level of those structures. We developed and implemented a web-based tool which enables a domain expert to perform semi-automated coding of ODM files. For each item, it is possible to query web services that return unique concept codes without leaving the context of the document. Although fully automated coding proved infeasible, we implemented a dialog-based method for efficient coding of all data elements in the context of the whole document. The proportion of codable items was comparable to results from previous studies.
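To illustrate the general pattern of item-level coding via a terminology web service, here is a minimal Python sketch. The endpoint URL, parameter names, and response fields are hypothetical stand-ins, not the actual service used by the authors.

```python
import requests

# Hypothetical terminology-lookup endpoint; the real service,
# parameters, and response format used by the authors may differ.
LOOKUP_URL = "https://example.org/terminology/lookup"

def suggest_codes(item_label: str, max_hits: int = 5) -> list:
    """Query a terminology web service for concept codes matching a
    single ODM item label, returning candidate (code, name) pairs
    for a domain expert to confirm in a dialog."""
    resp = requests.get(LOOKUP_URL,
                        params={"term": item_label, "limit": max_hits},
                        timeout=10)
    resp.raise_for_status()
    return [{"code": hit["code"], "name": hit["name"]}
            for hit in resp.json()["results"]]

# Example: propose codes for one form item; an expert picks the match.
for candidate in suggest_codes("systolic blood pressure"):
    print(candidate["code"], candidate["name"])
```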
Documentation of Heritage Structures Through Geo-Crowdsourcing and Web-Mapping
NASA Astrophysics Data System (ADS)
Dhonju, H. K.; Xiao, W.; Shakya, B.; Mills, J. P.; Sarhosis, V.
2017-09-01
Heritage documentation has become increasingly urgent due to both natural impacts and human influences. The documentation of countless heritage sites around the globe is a massive project that requires significant amounts of financial and labour resources. With the concepts of volunteered geographic information (VGI) and citizen science, heritage data such as digital photographs can be collected through online crowd participation. Whilst photographs are not strictly geographic data, they can be geo-tagged by the participants. They can also be automatically geo-referenced into a global coordinate system if collected via mobile phones which are now ubiquitous. With the assistance of web-mapping, an online geo-crowdsourcing platform has been developed to collect and display heritage structure photographs. Details of platform development are presented in this paper. The prototype is demonstrated with several heritage examples. Potential applications and advancements are discussed.
E-Portfolio Web-based for Students’ Internship Program Activities
NASA Astrophysics Data System (ADS)
Juhana, A.; Abdullah, A. G.; Somantri, M.; Aryadi, S.; Zakaria, D.; Amelia, N.; Arasid, W.
2018-02-01
The internship program is an important part of the vocational education process to improve the quality of competent graduates. A complete work documentation process on an electronic portfolio (e-Portfolio) platform facilitates students in reporting the results of their work to both university and industry supervisors. The purpose of this research is to create a more easily accessed e-Portfolio appropriate to students' and supervisors' needs for documenting work and monitoring progress. The method used in this research is fundamental research. This research focuses on the implementation of internship e-Portfolio features by demonstrating them to students who have completed an internship program. The result of this research is a web-based e-Portfolio that facilitates students in documenting the results of their work and aids supervisors in monitoring during the internship.
deepTools2: a next generation web server for deep-sequencing data analysis.
Ramírez, Fidel; Ryan, Devon P; Grüning, Björn; Bhardwaj, Vivek; Kilpert, Fabian; Richter, Andreas S; Heyne, Steffen; Dündar, Friederike; Manke, Thomas
2016-07-08
We present an update to our Galaxy-based web server for processing and visualizing deeply sequenced data. Its core tool set, deepTools, allows users to perform complete bioinformatic workflows ranging from quality controls and normalizations of aligned reads to integrative analyses, including clustering and visualization approaches. Since we first described our deepTools Galaxy server in 2014, we have implemented new solutions for many requests from the community and our users. Here, we introduce significant enhancements and new tools to further improve data visualization and interpretation. deepTools continues to be open to all users and freely available as a web service at deeptools.ie-freiburg.mpg.de. The new deepTools2 suite can be easily deployed within any Galaxy framework via the toolshed repository, and we also provide source code for command-line usage under Linux and Mac OS X. A public and documented API for access to deepTools functionality is also available. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
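As a usage illustration, the sketch below calls one core deepTools command, bamCoverage, from Python. The file names are placeholders, and flag availability should be checked against the installed deepTools version.

```python
import subprocess

# Convert an indexed BAM file into a bigWig coverage track with
# deepTools' bamCoverage; input/output names here are placeholders.
subprocess.run(
    [
        "bamCoverage",
        "-b", "sample.bam",      # input alignments (BAM, indexed)
        "-o", "sample.bw",       # output coverage track (bigWig)
        "--binSize", "50",       # bin width in base pairs
    ],
    check=True,
)
```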
2007-12-01
Boyle, “Important issues in hypertext documentation usability,” in Proceedings of the 9th Annual International Conference on Systems Documentation (Chicago, Illinois, 1991), SIGDOC, ACM, New York, NY; “...Tufte’s principles of information design to creating effective Web sites,” in Proceedings of the 15th Annual International Conference on Computer Documentation...
NASA Astrophysics Data System (ADS)
Rahgozar, M. Armon; Hastings, Tom; McCue, Daniel L.
1997-04-01
The Internet is rapidly changing the traditional means of creation, distribution, and retrieval of information. Today, information publishers leverage the capabilities provided by Internet technologies to rapidly communicate information to a much wider audience in unique, customized ways. As a result, the volume of published content has been increasing astronomically. This, in addition to the ease of distribution afforded by the Internet, has resulted in more and more documents being printed. This paper introduces several axes along which Internet printing may be examined and addresses some of the technological challenges that lie ahead. Some of these axes include: (1) submission--the use of Internet protocols for selecting printers and submitting documents for print; (2) administration--the management and monitoring of printing engines and other print resources via Web pages; and (3) formats--printing document formats whose spectrum now includes HTML documents with simple text, layout-enhanced documents with Style Sheets, and documents that contain audio, graphics, and other active objects, as well as the existing desktop and PDL formats. The format axis of Internet printing becomes even more exciting when one considers that Web documents are inherently compound, and traversal into their various pieces may uncover various formats. The paper also examines some imaging-specific issues that are paramount to Internet printing. These include formats and structures for representing raster documents and images, compression, font rendering, and color spaces.
Berquist, Rachel M.; Gledhill, Kristen M.; Peterson, Matthew W.; Doan, Allyson H.; Baxter, Gregory T.; Yopak, Kara E.; Kang, Ning; Walker, H. J.; Hastings, Philip A.; Frank, Lawrence R.
2012-01-01
Museum fish collections possess a wealth of anatomical and morphological data that are essential for documenting and understanding biodiversity. Obtaining access to specimens for research, however, is not always practical and frequently conflicts with the need to maintain the physical integrity of specimens and the collection as a whole. Non-invasive three-dimensional (3D) digital imaging therefore serves a critical role in facilitating the digitization of these specimens for anatomical and morphological analysis as well as facilitating an efficient method for online storage and sharing of this imaging data. Here we describe the development of the Digital Fish Library (DFL, http://www.digitalfishlibrary.org), an online digital archive of high-resolution, high-contrast, magnetic resonance imaging (MRI) scans of the soft tissue anatomy of an array of fishes preserved in the Marine Vertebrate Collection of Scripps Institution of Oceanography. We have imaged and uploaded MRI data for over 300 marine and freshwater species, developed a data archival and retrieval system with a web-based image analysis and visualization tool, and integrated these into the public DFL website to disseminate data and associated metadata freely over the web. We show that MRI is a rapid and powerful method for accurately depicting the in-situ soft-tissue anatomy of preserved fishes in sufficient detail for large-scale comparative digital morphology. However these 3D volumetric data require a sophisticated computational and archival infrastructure in order to be broadly accessible to researchers and educators. PMID:22493695
Alzforum and SWAN: the present and future of scientific web communities.
Clark, Tim; Kinoshita, June
2007-05-01
Scientists drove the early development of the World Wide Web, primarily as a means for rapid communication, document sharing and data access. They have been far slower to adopt the web as a medium for building research communities. Yet, web-based communities hold great potential for accelerating the pace of scientific research. In this article, we will describe the 10-year experience of the Alzheimer Research Forum ('Alzforum'), a unique example of a thriving scientific web community, and explain the features that contributed to its success. We will then outline the SWAN (Semantic Web Applications in Neuromedicine) project, in which Alzforum curators are collaborating with informatics researchers to develop novel approaches that will enable communities to share richly contextualized information about scientific data, claims and hypotheses.
Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values
NASA Astrophysics Data System (ADS)
Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru
Extracting attribute-values related to entities from web texts is an important step in numerous web-related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but, if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data: often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which attribute-values are represented by nodes, and the edge that connects two nodes represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
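The following is a minimal sketch of the graph step under simplifying assumptions: candidate attribute-values become weighted nodes and edges in networkx, and a maximum spanning tree is extracted. Note that the paper additionally constrains the tree to touch exactly one value per attribute type, which this sketch does not enforce.

```python
import networkx as nx

# Candidate attribute-values for one entity; weights encode the
# relatedness between two candidate values (illustrative numbers).
related = [
    ("org:Acme Corp", "title:engineer", 0.8),
    ("org:Acme Corp", "title:novelist", 0.1),
    ("org:Beta Inc",  "title:engineer", 0.3),
    ("org:Acme Corp", "location:Tokyo", 0.7),
]

G = nx.Graph()
G.add_weighted_edges_from(related)

# Maximum spanning tree keeps the most strongly related values.
# The paper further requires exactly one value per attribute type;
# enforcing that constraint is omitted in this simplified sketch.
mst = nx.maximum_spanning_tree(G)
print(sorted(mst.edges(data="weight")))
```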
Bjerkan, Jorunn; Vatne, Solfrid; Hollingen, Anne
2014-01-01
Background and objective: The Individual Care Plan (ICP) was introduced in Norway to meet new statutory requirements for user participation in health care planning, incorporating multidisciplinary and cross-sector collaboration. A web-based solution (electronic ICP [e-ICP]) was used to support the planning and documentation. The aim of this study was to investigate how web-based collaboration challenged user and professional roles. Methods: Data were obtained from 15 semistructured interviews with users and eight with care professionals, and from two focus-group interviews with eight care professionals in total. The data were analyzed using systematic text condensation in a stepwise analysis model. Results: Users and care professionals took either a proactive or a reluctant role in e-ICP collaboration. Where both user and care professionals were proactive, the pairing helped to ensure that the planning worked well; so did pairings of proactive care professionals and reluctant users. Proactive users paired with reluctant care professionals also made care planning work, thanks to the availability of information and the users’ own capacity or willingness to conduct the planning. Where both parties were reluctant, no planning activities occurred. Conclusion: Use of the e-ICP challenged the user–professional relationship. In some cases, a power transition took place in the care process, which led to patient empowerment. This knowledge might be used to develop a new understanding of how role function can be challenged when users and care professionals have equal access to health care documentation and planning tools. PMID:25525367
Exchanging the Context between OGC Geospatial Web clients and GIS applications using Atom
NASA Astrophysics Data System (ADS)
Maso, Joan; Díaz, Paula; Riverola, Anna; Pons, Xavier
2013-04-01
Currently, the discovery and sharing of geospatial information over the web still presents difficulties. News distribution through website content was simplified by the use of the Really Simple Syndication (RSS) and Atom syndication formats. This communication presents an extension of Atom to redistribute references to geospatial information in a distributed Spatial Data Infrastructure environment. A geospatial client can save the status of an application that involves several OGC services of different kinds, as well as direct data links, and share this status with other users who need the same information but use client products from different vendors, in an interoperable way. The extensibility of the Atom format was essential to define a format that could be used in RSS-enabled web browsers, mass-market map viewers, and emerging geospatially enabled integrated clients that support Open Geospatial Consortium (OGC) services. Since OWS Context has been designed as an Atom extension, it is possible to view the document in common places where Atom documents are valid. Internet web browsers are able to present the document as a list of items with title, abstract, time, description, and downloading features. OWS Context uses GeoRSS so that the document can be interpreted by both Google Maps and Bing Maps as items whose extent is represented on a dynamic map. Another way to exploit an OWS Context document is to develop an XSLT transformation of the Atom feed into an HTML5 document that shows the exact status of the client view window that saved the context document. To accomplish this, we use the width and height of the client window, and the extent of the view in world (geographic) coordinates, in order to calculate the scale of the map. Then, we can mix elements in world coordinates (such as CF-NetCDF files or GML) with elements in pixel coordinates (such as WMS maps, WMTS tiles, and direct SVG content). A smarter map browser application called MiraMon Map Browser is able to write a context document and read it again to recover the context of the previous view, or to load a context generated by another application. The possibility of storing direct links to files in OWS Context is particularly interesting for GIS desktop solutions. This communication also presents the development made in the MiraMon desktop GIS solution to include OWS Context. MiraMon software is able to deal with local files, web services, and database connections. As in any other GIS solution, the MiraMon team designed its own file format (MiraMon Map, MMM) for storing and sharing the status of a GIS session. The new OWS Context format is now adopted as an interoperable substitute for the MMM. The extensibility of the format makes it possible to map concepts in the MMM to current OWS Context elements (such as titles, data links, extent, etc.) and to generate new elements that include all extra metadata not currently covered by OWS Context. These developments were done in the ninth edition of the OpenGIS Web Services Interoperability Experiment (OWS-9) and are demonstrated in this communication.
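For a sense of the encoding, here is a minimal Python sketch that emits an Atom entry carrying a GeoRSS extent. It illustrates the Atom/GeoRSS mechanics only and is not the full OWS Context schema, whose element names and namespaces should be taken from the OGC specification; the WMS URL is an invented example.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
GEORSS = "http://www.georss.org/georss"
ET.register_namespace("", ATOM)
ET.register_namespace("georss", GEORSS)

# A single Atom entry with a title, a typed link to a WMS endpoint
# (illustrative URL), and a GeoRSS bounding box for the map extent.
entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Saved map view"
link = ET.SubElement(entry, f"{{{ATOM}}}link")
link.set("rel", "alternate")
link.set("type", "application/xml")
link.set("href", "https://example.org/wms?SERVICE=WMS&REQUEST=GetCapabilities")
# GeoRSS box: min-lat min-lon max-lat max-lon
ET.SubElement(entry, f"{{{GEORSS}}}box").text = "40.0 0.5 42.5 3.3"

print(ET.tostring(entry, encoding="unicode"))
```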
78 FR 29159 - Electric Power Research Institute; Seismic Evaluation Guidance
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-17
..., conducted field investigations, and used more recent methods than were previously available. In performing... available, by searching on http://www.regulations.gov under Docket ID NRC-2013-0038. Federal Rulemaking Web... Agencywide Documents Access and Management System (ADAMS): You may access publicly-available documents online...
Storing and Viewing Electronic Documents.
ERIC Educational Resources Information Center
Falk, Howard
1999-01-01
Discusses the conversion of fragile library materials to computer storage and retrieval to extend the life of the items and to improve accessibility through the World Wide Web. Highlights include entering the images, including scanning; optical character recognition; full text and manual indexing; and available document- and image-management…
EVALUATION OF THE POTENTIAL CARCINOGENICITY OF ELECTROMAGNETIC FIELDS (EXTERNAL REVIEW DRAFT)
The U.S. Environmental Protection Agency (EPA or Agency) is posting on this web site a draft document related to the potential adverse human health effects resulting from exposure to electromagnetic fields (EMF). This document was never finalized after EPA activities were discon...
Scientists as Communicators: Inclusion of a Science/Education Liaison on Research Expeditions
NASA Astrophysics Data System (ADS)
Sautter, L. R.
2004-12-01
Communication of research and scientific results to an audience outside of one's field poses a challenge to many scientists. Many research scientists have a natural ability to address the challenge, while others may choose to seek assistance. Research cruise PIs may wish to consider including a Science/Education Liaison (SEL) on future grants. The SEL is a marine scientist whose job before, during, and after the cruise is to work with the shipboard scientists to document the science conducted. The SEL's role is three-fold: (1) to communicate shipboard science activities near-real-time to the public via the web; (2) to develop a variety of web-based resources based on the scientific operations; and (3) to assist educators with the integration of these resources into classroom curricula. The first role involves at-sea writing and relaying from ship to shore (via email) a series of Daily Logs. NOAA Ocean Exploration (OE) has mastered the use of web-posted Daily Logs for their major expeditions (see their OceanExplorer website), introducing millions of users to deep sea exploration. Project Oceanica uses the OE daily log model to document research expeditions. In addition to writing daily logs and participating on OE expeditions, Oceanica's SEL also documents the cruise's scientific operations and preliminary findings using video and photos, so that web-based resources (photo galleries, video galleries, and PhotoDocumentaries) can be developed during and following the cruise, and posted on the expedition's home page within the Oceanica web site (see URL). We have created templates for constructing these science resources which allow the shipboard scientists to assist with web resource development. Bringing users to the site is achieved through email communications to a growing list of educators, scientists, and students, and through collaboration with the COSEE network. With a large research expedition-based inventory of web resources now available, Oceanica is training teachers and college faculty on the use and incorporation of these resources into middle school, high school, and introductory college classrooms. Support for a SEL on shipboard expeditions serves to catalyze the dissemination of the scientific operations to a broad audience of users.
Chee, Wonshik; Kim, Sangmi; Chu, Tsung-Lan; Ji, Xiaopeng; Zhang, Jingwen; Chee, Eunice; Im, Eun-Ok
2016-01-01
Background: With advances in computer technologies, Web-based interventions are widely accepted and welcomed by health care providers and researchers. Although the benefits of Web-based interventions on physical activity promotion have been documented, the programs have rarely targeted Asian Americans, including Asian American midlife women. Subsequently, culturally competent Web-based physical activity programs for Asian Americans may be necessary. Objective: The purpose of our study was to explore practical issues in developing and implementing a culturally competent Web-based physical activity promotion program for 2 groups of Asian American women, Chinese American and Korean American midlife women, and to provide implications for future research. Methods: While conducting the study, the research team members wrote individual memos on issues and their inferences on plausible reasons for the issues. The team had group discussions each week and kept minutes of the discussions. Then, the memos and minutes were analyzed using a content analysis method. Results: We identified practical issues in 4 major idea categories: (1) bilingual translators' language orientations, (2) cultural sensitivity requirements, (3) low response rate, interest, and retention, and (4) issues in implementation logistics. Conclusions: Based on the issues, we make several suggestions for the use of bilingual translators, motivational strategies, and implementation logistics. PMID:27872035
NASA Astrophysics Data System (ADS)
Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.
2017-12-01
Web-based apps, web services, and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, and the transparency and reproducibility of modeling workflows and results. However, challenges still exist in the real application of these capabilities and in the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality that supports reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, models, and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser-based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat, or modify the modeling work. This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system comprising the HydroShare app, HydroShare, and HydroDS web services is open source and contributes to the capability for web-based modeling research.
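The pattern described, a browser app delegating processing to web services and archiving inputs and outputs in a sharing system, can be sketched as follows. The endpoint paths, parameter names, and return fields shown are hypothetical illustrations, not the documented HydroDS or HydroShare APIs.

```python
import requests

# Hypothetical endpoints standing in for the data-preparation and
# model-execution services; the real HydroDS API may differ.
PREP_URL = "https://example.org/hydrods/prepare-inputs"
RUN_URL = "https://example.org/hydrods/run-snowmelt"

def run_snowmelt(bbox, start, end):
    """Prepare inputs for a watershed bounding box, run the model,
    and return URLs of the result files to store in a sharing system."""
    inputs = requests.post(PREP_URL, json={
        "bbox": bbox, "start": start, "end": end}, timeout=600).json()
    result = requests.post(RUN_URL, json=inputs, timeout=3600).json()
    return result["output_files"]

# The returned file URLs (plus this script itself) would then be
# uploaded to a sharing-system resource to document the workflow.
files = run_snowmelt([-111.8, 40.5, -111.5, 40.8],
                     "2016-10-01", "2017-06-30")
print(files)
```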
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Mazzocut, Mauro; Truccolo, Ivana; Antonini, Marialuisa; Rinaldi, Fabio; Omero, Paolo; Ferrarin, Emanuela; De Paoli, Paolo; Tasso, Carlo
2016-06-16
The use of complementary and alternative medicine (CAM) among cancer patients is widespread and mostly self-administered. Today, one of the most relevant topics is the nondisclosure of CAM use to doctors. This general lack of communication exposes patients to dangerous behaviors and to less reliable information channels, such as the Web. The Italian context differs little from this trend. Today, we are able to mine and systematically analyze the unstructured information available on the Web to gain insight into people's opinions, beliefs, and rumors concerning health topics. Our aim was to analyze Italian Web conversations about CAM, identifying the most relevant Web sources, therapies, and diseases, and to measure the related sentiment. Data were collected using the Web Intelligence tool ifMONITOR. The workflow consisted of 6 phases: (1) eligibility criteria definition for the ifMONITOR search profile; (2) creation of a CAM terminology database; (3) generic Web search and automatic filtering; the results were manually revised to refine the search profile and stored in the ifMONITOR database; (4) automatic classification using the CAM database terms; (5) selection of the final sample and manual sentiment analysis using a 1-5 score range; (6) manual indexing of the Web sources and CAM therapy types retrieved. Descriptive univariate statistics were computed for each item: absolute frequency, percentage, central tendency (mean sentiment score [MSS]), and variability (standard deviation, σ). Overall, 212 Web sources, 423 Web documents, and 868 opinions were retrieved. The overall sentiment measured tends toward a good score (3.6 of 5). Quite a high polarization in the opinions of the conversation participants emerged from the standard deviation analysis (σ≥1). In total, 126 of 212 (59.4%) Web sources retrieved were not health related. Facebook (89; 21%) and Yahoo Answers (41; 9.7%) were the most relevant. In total, 94 CAM therapies were retrieved. Most belong to the "biologically based therapies or nutrition" category: 339 of 868 opinions (39.1%), showing an MSS of 3.9 (σ=0.83). Within nutrition, "diets" collected 154 opinions (18.4%) with an MSS of 3.8 (σ=0.87); "food as CAM" overall collected 112 opinions (12.8%) with an MSS of 4 (σ=0.68). Excluding diets and food, the most discussed CAM therapy was the controversial Italian "Di Bella multitherapy," with 102 opinions (11.8%) and an MSS of 3.4 (σ=1.21). Breast cancer was the most mentioned disease: 81 opinions of 868. Conversations about CAM and cancer are ubiquitous. There is great concern about the biologically based therapies, which are perceived as harmless and useful, underrating the risks related to dangerous interactions or malnutrition. Our results can help doctors become aware of the implications of these beliefs for clinical practice. Web conversation exploitation could be a strategy to gain insight into people's perspectives on other controversial topics.
Lightweight Advertising and Scalable Discovery of Services, Datasets, and Events Using Feedcasts
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Movva, S.
2010-12-01
Broadcast feeds (Atom or RSS) are a mechanism for advertising the existence of new data objects on the web, with metadata and links to further information. Users then subscribe to the feed to receive updates. This concept has already been used to advertise new granules of science data as they are produced (datacasting), with browse images and metadata, and to advertise bundles of web services (service casting). Structured metadata is introduced into the XML feed format by embedding new XML tags (in defined namespaces), using typed links, and reusing built-in Atom feed elements. This “infocasting” concept can be extended to include many other science artifacts, including data collections, workflow documents, topical geophysical events (hurricanes, forest fires, etc.), natural hazard warnings, and short articles describing a new science result. The common theme is that each infocast contains machine-readable, structured metadata describing the object and enabling further manipulation. For example, service casts contain typed links pointing to the service interface description (e.g., WSDL for SOAP services), the service endpoint, and human-readable documentation. Our Infocasting project has three main goals: (1) define and evangelize micro-formats (metadata standards) so that providers can easily advertise their web services, datasets, and topical geophysical events by adding structured information to broadcast feeds; (2) develop authoring tools so that anyone can easily author such service advertisements, data casts, and event descriptions; and (3) provide a one-stop, Google-like search box in the browser that allows discovery of service, data, and event casts visible on the web, and of services and data registered in the GEOSS repository and other NASA repositories (GCMD & ECHO). To demonstrate the event casting idea, a series of micro-articles, with accompanying event casts containing links to relevant datasets, web services, and science analysis workflows, will be authored for several kinds of geophysical events, such as hurricanes, smoke plume events, and tsunamis. The talk will describe our progress so far, and some of the issues with leveraging existing metadata standards to define lightweight micro-formats.
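As an illustration of consuming such a cast, the sketch below uses the widely available feedparser library to read an Atom feed and pick out typed links. The feed URL and the media type matched are assumed examples of a convention, not values defined by the project.

```python
import feedparser

# Hypothetical service-cast feed URL; any Atom/RSS URL works here.
FEED_URL = "https://example.org/casts/services.atom"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each Atom link carries rel/type/href attributes; a cast can use
    # typed links to point at, e.g., a service description document.
    for link in entry.get("links", []):
        if link.get("type") == "application/wsdl+xml":  # assumed convention
            print(entry.title, "->", link["href"])
```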
Li, Runhui
2012-06-05
To identify global research trends in stem cell transplantation for treating Parkinson's disease, we performed a bibliometric analysis of data retrieved from the Web of Science for 2002 to 2011. Inclusion criteria: (a) peer-reviewed articles on stem cell transplantation for treating Parkinson's disease that were published and indexed in the Web of Science; (b) type of articles: original research articles, reviews, meeting abstracts, proceedings papers, book chapters, editorial material, and news items; (c) year of publication: 2002-2011. Exclusion criteria: (a) articles that required manual searching or telephone access; (b) documents not published in the public domain; (c) corrected papers, which were excluded from the total number of articles. Outcomes examined: (1) type of literature; (2) annual publication output; (3) distribution according to journals; (4) distribution according to subject areas; (5) distribution according to country; (6) distribution according to institution; (7) comparison of countries that published the most papers on stem cell transplantation from different cell sources for treating Parkinson's disease; (8) comparison of institutions that published the most such papers in the Web of Science from 2002 to 2011; (9) comparison of studies on stem cell transplantation from different cell sources for treating Parkinson's disease. In total, 1,062 studies on stem cell transplantation for treating Parkinson's disease appeared in the Web of Science from 2002 to 2011, almost one third of which were from American authors and institutes. The number of such studies has gradually increased over the past 10 years. Papers on stem cell transplantation for treating Parkinson's disease appeared in journals such as Stem Cells and Experimental Neurology. Although the United States published more articles addressing neural stem cell and embryonic stem cell transplantation for treating Parkinson's disease, China ranked first for articles published on bone marrow mesenchymal stem cell transplantation. From our analysis of the literature and research trends, we found that stem cell transplantation for treating Parkinson's disease may offer further benefits in regenerative medicine.
Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin
2017-01-21
RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step in understanding and interpreting their functional relationship. The majority of functional RNAs show conserved secondary structures rather than sequence conservation, so algorithms relying on sequence-based features alone usually have limited prediction performance. Hence, integrating RNA structure features is critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free. Alignment-free RNA comparison algorithms usually have lower time complexity than alignment-based ones. We propose an alignment-free RNA comparison algorithm in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), captures an RNA sequence and its corresponding secondary structure features. A multi-scale similarity score of two given RNAs is then computed based on the wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The inputs to the web server are RNA primary sequences, while corresponding secondary structures are optional. Given primary sequences alone, the web server can compute the secondary structures using a free-energy minimization algorithm, namely the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server based on an alignment-free method to deliver a suite of RNA analysis functions, including visualization, mutation analysis, and multiple RNA structure comparison. Comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All the relevant results are shown in an intuitive graphical manner and can be freely downloaded from the server. RNA-TVcurve, along with test examples and detailed documents, is available at: http://ml.jlu.edu.cn/tvcurve/ .
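Here is a minimal sketch of the multi-scale comparison idea, assuming two equal-length numeric curves standing in for the representations of two RNAs. The wavelet family, decomposition level, and per-scale distance are illustrative choices made for the sketch, not the paper's exact formulation.

```python
import numpy as np
import pywt

def multiscale_distance(x: np.ndarray, y: np.ndarray, level: int = 3):
    """Compare two equal-length numeric curves scale by scale using
    a discrete wavelet decomposition; smaller means more similar."""
    cx = pywt.wavedec(x, "db2", level=level)
    cy = pywt.wavedec(y, "db2", level=level)
    # One Euclidean distance per scale (approximation + details).
    return [float(np.linalg.norm(a - b)) for a, b in zip(cx, cy)]

# Toy curves standing in for numerical representations of two RNAs.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=256))
y = x + rng.normal(scale=0.1, size=256)   # a slightly perturbed copy
print(multiscale_distance(x, y))
```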
The Impact on Education of the World Wide Web.
ERIC Educational Resources Information Center
Hobbs, D. J.; Taylor, R. J.
This paper describes a project which created a set of World Wide Web (WWW) pages documenting the state of the art in educational multimedia design; a prototype WWW-based multimedia teaching tool--a podiatry test using HTML forms, 24-bit color images and MPEG video--was also designed, developed, and evaluated. The project was conducted between…
Using Web 2.0 Technology to Enhance, Scaffold and Assess Problem-Based Learning
ERIC Educational Resources Information Center
Hack, Catherine
2013-01-01
Web 2.0 technologies, such as social networks, wikis, blogs, and virtual worlds provide a platform for collaborative working, facilitating sharing of resources and joint document production. They can act as a stimulus to promote active learning and provide an engaging and interactive environment for students, and as such align with the philosophy…
ERIC Educational Resources Information Center
Otamendi, Francisco Javier; Doncel, Luis Miguel
2013-01-01
Experimental teaching in general, and simulation in particular, have primarily been used in lecture rooms but in the future must also be adapted to e-learning. The integration of web simulators into virtual learning environments, coupled with specific supporting video documentation and the use of videoconference tools, results in robust…
The Impact of Subject Indexes on Semantic Indeterminacy in Enterprise Document Retrieval
ERIC Educational Resources Information Center
Schymik, Gregory
2012-01-01
Ample evidence exists to support the conclusion that enterprise search is failing its users. This failure is costing corporate America billions of dollars every year. Most enterprise search engines are built using web search engines as their foundations. These search engines are optimized for web use and are inadequate when used inside the…
Spiders and Worms and Crawlers, Oh My: Searching on the World Wide Web.
ERIC Educational Resources Information Center
Eagan, Ann; Bender, Laura
Searching on the World Wide Web can be confusing. A myriad of search engines exist, often with little or no documentation, and many of these search engines work differently from the standard search engines people are accustomed to using. Intended for librarians, this paper defines search engines, directories, spiders, and robots, and covers basics…
Second Language Acquisition: Implications of Web 2.0 and Beyond
ERIC Educational Resources Information Center
Chang, Ching-Wen; Pearman, Cathy; Farha, Nicholas
2012-01-01
Language laboratories, developed in the 1970s under the influence of the Audiolingual Method, were superseded several decades later by computer-assisted language learning (CALL) work stations (Gündüz, 2005). The World Wide Web was developed shortly thereafter. From this introduction and the well-documented and staggering growth of the Internet and…
ERIC Educational Resources Information Center
Stoet, Gijsbert
2017-01-01
This article reviews PsyToolkit, a free web-based service designed for setting up, running, and analyzing online questionnaires and reaction-time (RT) experiments. It comes with extensive documentation, videos, lessons, and libraries of free-to-use psychological scales and RT experiments. It provides an elaborate interactive environment to use (or…
Identifying Experts and Authoritative Documents in Social Bookmarking Systems
ERIC Educational Resources Information Center
Grady, Jonathan P.
2013-01-01
Social bookmarking systems allow people to create pointers to Web resources in a shared, Web-based environment. These services allow users to add free-text labels, or "tags", to their bookmarks as a way to organize resources for later recall. Ease-of-use, low cognitive barriers, and a lack of controlled vocabulary have allowed social…
ERIC Educational Resources Information Center
Perez, Stella
This document describes LeagueTLC: Transformational Learning Connections (http://www.league.org/leaguetlc/index.htm), a Web site created by the League for Innovation in the Community College with funding from the Fund for the Improvement of Post Secondary Education (FIPSE). This Web site serves as a resource for community colleges by disseminating…
Collaborating across Time Zones: How 2.0 Technology Can Bring Your Global Team Together
ERIC Educational Resources Information Center
Hastings, Robin
2008-01-01
The Web 2.0 tools and services that are making socializing, networking, and communicating in general so easy are also making group projects seriously simple. With the judicious use of a few of the popular tools that use Web 2.0 technologies and philosophies, one can collaboratively create documents, spreadsheets, presentations, websites, project…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... Facility Operating License, Proposed No Significant Hazards Consideration Determination, and Opportunity... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID... related to this document by any of the following methods: Federal Rulemaking Web Site: Go to http://www...
ERIC Educational Resources Information Center
Herder, P. M.; Subrahmanian, E.; Talukdar, S.; Turk, A. L.; Westerberg, A. W.
2002-01-01
Explains the distance education approach applied to the 'Engineering Design Problem Formulation' course taught simultaneously at the Delft University of Technology (the Netherlands) and at Carnegie Mellon University (CMU, Pittsburgh, USA). Uses videotaped lessons, videoconferencing, email, and the web-accessible document management system LIRE in the…
QUT Para at TREC 2012 Web Track: Word Associations for Retrieving Web Documents
2012-11-01
zero for the QUTParaTQEg1 system (and the best performance across all participants was non-zero), included: Topic 157: the beatles rock band; Topic 162: dnr; Topic 163: arkansas; Topic 167: barbados; Topic 170: scooters; Topic 179: black history; Topic 188: internet phone service
Toward a Web Based Environment for Evaluation and Design of Pedagogical Hypermedia
ERIC Educational Resources Information Center
Trigano, Philippe C.; Pacurar-Giacomini, Ecaterina
2004-01-01
We are working on a method called CEPIAH. We propose a web-based system used to help teachers design multimedia documents and evaluate their prototypes. Our current research objective is to create a methodology to sustain educational hypermedia design and evaluation. A module is used to evaluate multimedia software applied in…
World Wide Web Indexes and Hierarchical Lists: Finding Tools for the Internet.
ERIC Educational Resources Information Center
Munson, Kurt I.
1996-01-01
In World Wide Web indexing: (1) the creation process is automated; (2) the indexes are merely descriptive, not analytical of document content; (3) results may be sorted differently depending on the search engine; and (4) indexes link directly to the resources. This article compares the indexing methods and querying options of the search engines…
Visual Links in the World-Wide Web: The Uses and Limitations of Image Maps.
ERIC Educational Resources Information Center
Cochenour, John J.; And Others
As information delivery systems on the Internet increasingly evolve into World Wide Web browsers, understanding key graphical elements of the browser interface is critical to the design of effective information display and access tools. Image maps are one such element, and this document describes a pilot study that collected, reviewed, and…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... FEDERAL COMMUNICATIONS COMMISSION [DA 11-1930] Mandatory Electronic Filing for Cable Special... Web site http://www.BCPIWEB.com using document number DA 11-1930 for the CSR and CSC Electronic Filing... Commission's Web site: http://hraunfoss.fcc.gov/edocs_public/attachmatch/DA-11-1930A1.doc ; http://hraunfoss...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-11
... deep saline geologic formations for permanent geologic storage. DATES: DOE invites the public to...; or by fax (304) 285-4403. The Draft EIS is available on DOE's NEPA Web page at: http://nepa.energy.gov/DOE_NEPA_documents.htm ; and on the National Energy Technology Laboratory's Web page at: http...
Secure Web-Site Access with Tickets and Message-Dependent Digests
Donato, David I.
2008-01-01
Although there are various methods for restricting access to documents stored on a World Wide Web (WWW) site (a Web site), none of the widely used methods is completely suitable for restricting access to Web applications hosted on an otherwise publicly accessible Web site. A new technique, however, provides a mix of features well suited for restricting Web-site or Web-application access to authorized users, including the following: secure user authentication, tamper-resistant sessions, simple access to user state variables by server-side applications, and clean session terminations. This technique, called message-dependent digests with tickets, or MDDT, maintains secure user sessions by passing single-use nonces (tickets) and message-dependent digests of user credentials back and forth between client and server. Appendix 2 provides a working implementation of MDDT with PHP server-side code and JavaScript client-side code.
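To convey the flavor of the scheme, here is a minimal Python sketch of one exchange, assuming a server-held secret key and a per-request single-use ticket. It paraphrases the general idea only, collapsing the credential handling into a single shared secret, and is not the PHP/JavaScript implementation from the paper's Appendix 2.

```python
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)   # secret held by the server
issued_tickets = set()                 # single-use nonces ("tickets")

def issue_ticket() -> str:
    ticket = secrets.token_hex(16)
    issued_tickets.add(ticket)
    return ticket

def digest(ticket: str, message: bytes) -> str:
    # The digest depends on both the ticket and the message body,
    # so a captured digest cannot be replayed with another request.
    return hmac.new(SERVER_KEY, ticket.encode() + message,
                    hashlib.sha256).hexdigest()

def verify(ticket: str, message: bytes, client_digest: str) -> bool:
    if ticket not in issued_tickets:
        return False                   # unknown or already-used ticket
    issued_tickets.discard(ticket)     # enforce single use
    return hmac.compare_digest(digest(ticket, message), client_digest)

t = issue_ticket()
m = b"action=view&doc=42"
assert verify(t, m, digest(t, m))      # first use succeeds
assert not verify(t, m, digest(t, m))  # replay is rejected
```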
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento.
This document offers additional guidelines for school facilities in California in the areas of safety and security, lighting, and cleanliness. It also offers a description of technology resources available on the World Wide Web. On the topic of safety and security, the document offers guidelines in the areas of entrances, doors, and controlled…
50 CFR 600.150 - Disposition of records.
Code of Federal Regulations, 2011 CFR
2011-10-01
... active duty status belong to the Federal Government. When employees leave the Council, they may not take... generally available to the public on its Internet site. Documents for posting must include: fishery... of interest to the public. For documents too large to maintain on the Web site, not available...
50 CFR 600.150 - Disposition of records.
Code of Federal Regulations, 2014 CFR
2014-10-01
... active duty status belong to the Federal Government. When employees leave the Council, they may not take... generally available to the public on its Internet site. Documents for posting must include: fishery... of interest to the public. For documents too large to maintain on the Web site, not available...
10 CFR 52.3 - Written communications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... either by mail addressed: ATTN: Document Control Desk, U.S. Nuclear Regulatory Commission, Washington, DC... submissions can be obtained by visiting the NRC's Web site at http://www.nrc.gov/site-help/e-submittals.html...) through (b)(7) of this section: to the NRC's Document Control Desk (if on paper, the signed original...
10 CFR 63.4 - Communications and records.
Code of Federal Regulations, 2011 CFR
2011-01-01
... follows: (1) By mail addressed: ATTN: Document Control Desk; Director, Office of Nuclear Material Safety... the NRC's offices at 11555 Rockville Pike, Rockville, Maryland; ATTN: Document Control Desk: Director... obtained by visiting the NRC's Web site at http://www.nrc.gov/site-help/e-submittals.html; by e-mail to...
10 CFR 63.4 - Communications and records.
Code of Federal Regulations, 2010 CFR
2010-01-01
... follows: (1) By mail addressed: ATTN: Document Control Desk; Director, Office of Nuclear Material Safety... the NRC's offices at 11555 Rockville Pike, Rockville, Maryland; ATTN: Document Control Desk: Director... obtained by visiting the NRC's Web site at http://www.nrc.gov/site-help/e-submittals.html; by e-mail to...
48 CFR 252.232-7006 - Wide Area WorkFlow Payment Instructions.
Code of Federal Regulations, 2013 CFR
2013-10-01
...— (1) Have a designated electronic business point of contact in the System for Award Management at... submission. Document submissions may be via Web entry, Electronic Data Interchange, or File Transfer Protocol... that uniquely identifies a unit, activity, or organization. Document type means the type of payment...
48 CFR 252.232-7006 - Wide Area WorkFlow Payment Instructions.
Code of Federal Regulations, 2014 CFR
2014-10-01
...— (1) Have a designated electronic business point of contact in the System for Award Management at... submission. Document submissions may be via Web entry, Electronic Data Interchange, or File Transfer Protocol... that uniquely identifies a unit, activity, or organization. Document type means the type of payment...
78 FR 64970 - New Deadlines for Public Comment on Draft Environmental Documents
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-30
... interested parties to contact Service personnel and Web sites for information about these draft documents. As... Comprehensive Conservation Plan and Environmental Impact Statement; Two Ponds National Wildlife Refuge, Arvada, CO (fdsys/pkg/FR-2013-08-07/pdf/2013-19052.pdf); Comprehensive Conservation Plan and...
Bioinformatics data distribution and integration via Web Services and XML.
Li, Xiao; Zhang, Yizheng
2003-11-01
It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
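As a toy illustration of XML-based data exchange, the sketch below serializes and re-parses a minimal sequence record with Python's standard library. The record structure is invented for illustration and does not follow any particular bioinformatics XML schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal, made-up XML record for one sequence entry.
record = ET.Element("sequenceRecord", id="demo-001")
ET.SubElement(record, "organism").text = "Escherichia coli"
ET.SubElement(record, "sequence").text = "ATGGCGTTAGC"

xml_text = ET.tostring(record, encoding="unicode")

# A receiving system parses the same document back into fields.
parsed = ET.fromstring(xml_text)
print(parsed.get("id"), parsed.findtext("organism"),
      parsed.findtext("sequence"))
```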
Talkoot Portals: Discover, Tag, Share, and Reuse Collaborative Science Workflows
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Ramachandran, R.; Lynnes, C.
2009-05-01
A small but growing number of scientists are beginning to harness Web 2.0 technologies, such as wikis, blogs, and social tagging, as a transformative way of doing science. These technologies provide researchers easy mechanisms to critique, suggest and share ideas, data and algorithms. At the same time, large suites of algorithms for science analysis are being made available as remotely-invokable Web Services, which can be chained together to create analysis workflows. This provides the research community an unprecedented opportunity to collaborate by sharing their workflows with one another, reproducing and analyzing research results, and leveraging colleagues' expertise to expedite the process of scientific discovery. However, wikis and similar technologies are limited to text, static images and hyperlinks, providing little support for collaborative data analysis. A team of information technology and Earth science researchers from multiple institutions have come together to improve community collaboration in science analysis by developing a customizable "software appliance" to build collaborative portals for Earth Science services and analysis workflows. The critical requirement is that researchers (not just information technologists) be able to build collaborative sites around service workflows within a few hours. We envision online communities coming together, much like Finnish "talkoot" (a barn raising), to build a shared research space. Talkoot extends a freely available, open source content management framework with a series of modules specific to Earth Science for registering, creating, managing, discovering, tagging and sharing Earth Science web services and workflows for science data processing, analysis and visualization. Users will be able to author a "science story" in shareable web notebooks, including plots or animations, backed up by an executable workflow that directly reproduces the science analysis. New services and workflows of interest will be discoverable using tag search, and advertised using "service casts" and "interest casts" (Atom feeds). Multiple science workflow systems will be plugged into the system, with initial support for UAH's Mining Workflow Composer and the open-source Active BPEL engine, and JPL's SciFlo engine and the VizFlow visual programming interface. With the ability to share and execute analysis workflows, Talkoot portals can be used to do collaborative science in addition to communicate ideas and results. It will be useful for different science domains, mission teams, research projects and organizations. Thus, it will help to solve the "sociological" problem of bringing together disparate groups of researchers, and the technical problem of advertising, discovering, developing, documenting, and maintaining inter-agency science workflows. The presentation will discuss the goals of and barriers to Science 2.0, the social web technologies employed in the Talkoot software appliance (e.g. CMS, social tagging, personal presence, advertising by feeds, etc.), illustrate the resulting collaborative capabilities, and show early prototypes of the web interfaces (e.g. embedded workflows).
Kessel, K A; Habermehl, D; Bohn, C; Jäger, A; Floca, R O; Zhang, L; Bougatf, N; Bendl, R; Debus, J; Combs, S E
2012-12-01
Especially in the field of radiation oncology, efficiently handling a large variety of voluminous datasets from various information systems in different documentation styles is crucial for patient care and research. To date, conducting retrospective clinical analyses is rather difficult and time consuming. Using the example of patients with pancreatic cancer treated with radio-chemotherapy, we performed a therapy evaluation with an analysis system connected to a documentation system. A total of 783 patients were documented in a professional, database-backed documentation system. Information about radiation therapy, diagnostic images and dose distributions was imported into the web-based system. For 36 patients with disease progression after neoadjuvant chemoradiation, we designed and established an analysis workflow. After automatic registration of the radiation plans with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes, the DVH (dose-volume histogram) statistic is calculated, followed by determination of the dose applied to the region of recurrence. All results are saved in the database and included in statistical calculations. The main goal of using an automatic analysis tool is to reduce the time and effort of conducting clinical analyses, especially with large patient groups. We have shown a first approach that reuses existing tools; however, manual interaction is still necessary. Further steps need to be taken to enhance automation. Already, it has become apparent that the benefits of digital data management and analysis lie in the central storage of data and the reusability of results. We therefore intend to adapt the analysis system to other tumor types in radiation oncology.
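For readers unfamiliar with the DVH statistic mentioned above, the following sketch computes a cumulative dose-volume histogram from per-voxel doses. It is an illustrative assumption of the computation, not the authors' implementation, and presumes the doses have already been extracted from the registered plan for the segmented recurrence volume.

```python
# Illustrative cumulative DVH: fraction of a volume receiving at least
# each dose level, from synthetic per-voxel doses in Gy.
import numpy as np

def cumulative_dvh(voxel_doses, bin_width=0.5):
    """Return dose bins and the fraction of volume receiving >= each dose."""
    doses = np.asarray(voxel_doses, dtype=float)
    bins = np.arange(0.0, doses.max() + bin_width, bin_width)
    volume_fraction = np.array([(doses >= d).mean() for d in bins])
    return bins, volume_fraction

doses = np.random.default_rng(0).normal(50.0, 5.0, 10_000)  # synthetic voxels
bins, vf = cumulative_dvh(doses)
print(f"V50 (fraction receiving >= 50 Gy): {vf[bins == 50.0][0]:.2f}")
```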
F-OWL: An Inference Engine for Semantic Web
NASA Technical Reports Server (NTRS)
Zou, Youyong; Finin, Tim; Chen, Harry
2004-01-01
Understanding and using the data and knowledge encoded in Semantic Web documents requires an inference engine. F-OWL is an inference engine for the Semantic Web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description-logic-based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use Semantic Web concepts and technology.
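As a toy illustration of one kind of ontology entailment an engine must compute, the sketch below forward-chains subclass transitivity to a fixed point; this is a naive loop for exposition, not F-OWL's F-logic machinery.

```python
# Toy entailment: transitive closure of subClassOf assertions via a
# naive fixed-point loop (exposition only, not F-OWL's algorithm).
def entail_subclass(triples):
    """Forward-chain subClassOf transitivity to a fixed point."""
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

asserted = {("Dog", "Mammal"), ("Mammal", "Animal")}
print(entail_subclass(asserted))  # ("Dog", "Animal") is entailed
```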
Strong regularities in world wide web surfing
Huberman; Pirolli; Pitkow; Lukose
1998-04-03
One of the most common modes of accessing information in the World Wide Web is surfing from one document to another along hyperlinks. Several large empirical studies have revealed common patterns of surfing behavior. A model that assumes that users make a sequence of decisions to proceed to another page, continuing as long as the value of the current page exceeds some threshold, yields the probability distribution for the number of pages that a user visits within a given Web site. This model was verified by comparing its predictions with detailed measurements of surfing patterns. The model also explains the observed Zipf-like distributions in page hits observed at Web sites.
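The distribution referred to is the paper's "law of surfing": the number of pages L a user visits within a site follows a two-parameter inverse Gaussian distribution with mean mu and scale lambda,

```latex
P(L) = \sqrt{\frac{\lambda}{2\pi L^{3}}}\,
       \exp\!\left(-\frac{\lambda\,(L-\mu)^{2}}{2\mu^{2}L}\right)
```

whose heavy tail is what connects the click-by-click stopping model to the Zipf-like page-hit counts noted above.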
Scientific production of Sports Science in Iran: A Scientometric Analysis.
Yaminfirooz, Mousa; Siamian, Hasan; Jahani, Mohammad Ali; Yaminifirouz, Masoud
2014-06-01
Physical education and sports science is one of the branches of the humanities. The purpose of this study was to determine the quantitative and qualitative growth of the scientific production of Iranian researchers in the Web of Science. The study was a scientometric survey; the statistical population comprised 233 documents indexed in ISI from 1993 to 2012. Results showed that the 233 documents Iranian researchers published in this database during the study period were cited 1106 times (4.76 citations per document on average), and the H-index was 17. Iran's scientific production in the sports science realm peaked in 2010 with 57 indexed documents and was lowest in 2000. Considering the number of citations and the obtained H-index, the quality of Iranian articles is fairly acceptable, but relative to prestigious universities and the large number of professors and university students in this field, the quantity of published articles is very low.
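For reference, the H-index reported above is the largest h such that h documents have at least h citations each; a minimal sketch:

```python
# Minimal h-index computation: largest h such that h documents have
# at least h citations each.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

# Toy data: a corpus whose h-index is 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```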
Interactive metagenomic visualization in a Web browser
2011-01-01
Background A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Results Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Conclusions Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license from http://krona.sourceforge.net. PMID:21961884
cPath: open source software for collecting, storing, and querying biological pathways
Cerami, Ethan G; Bader, Gary D; Gross, Benjamin E; Sander, Chris
2006-01-01
Background Biological pathways, including metabolic pathways, protein interaction networks, signal transduction pathways, and gene regulatory networks, are currently represented in over 220 diverse databases. These data are crucial for the study of specific biological processes, including human diseases. Standard exchange formats for pathway information, such as BioPAX, CellML, SBML and PSI-MI, enable convenient collection of this data for biological research, but mechanisms for common storage and communication are required. Results We have developed cPath, an open source database and web application for collecting, storing, and querying biological pathway data. cPath makes it easy to aggregate custom pathway data sets available in standard exchange formats from multiple databases, present pathway data to biologists via a customizable web interface, and export pathway data via a web service to third-party software, such as Cytoscape, for visualization and analysis. cPath is software only, and does not include new pathway information. Key features include: a built-in identifier mapping service for linking identical interactors and linking to external resources; built-in support for PSI-MI and BioPAX standard pathway exchange formats; a web service interface for searching and retrieving pathway data sets; and thorough documentation. The cPath software is freely available under the LGPL open source license for academic and commercial use. Conclusion cPath is a robust, scalable, modular, professional-grade software platform for collecting, storing, and querying biological pathways. It can serve as the core data handling component in information systems for pathway visualization, analysis and modeling. PMID:17101041
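As a hedged illustration of how a client might query a cPath-style web service, the snippet below builds a request URL; the host is a placeholder and the command and parameter names are assumptions for illustration, to be checked against the cPath documentation rather than taken as the actual API.

```python
# Hedged sketch of a query URL for a cPath-style web service; the host,
# command and parameter names are assumptions, not the verified cPath API.
from urllib.parse import urlencode

BASE = "http://example.org/cpath/webservice.do"   # hypothetical deployment
params = {
    "version": "1.0",       # assumed API version parameter
    "cmd": "get_pathways",  # assumed command name
    "q": "BRCA2",           # gene of interest
    "format": "biopax",     # assumed exchange-format identifier
}
print(f"{BASE}?{urlencode(params)}")
# An HTTP GET on this URL would return matching pathway data (e.g., BioPAX
# XML) that third-party software such as Cytoscape could then consume.
```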
Open access to high-level data and analysis tools in the CMS experiment at the LHC
Calderon, A.; Colling, D.; Huffman, A.; ...
2015-12-23
The CMS experiment, in recognition of its commitment to data preservation and open access as well as to education and outreach, has made its first public release of high-level data under the CC0 waiver: up to half of the proton-proton collision data (by volume) at 7 TeV from 2010, in the CMS Analysis Object Data format. CMS has prepared, in collaboration with CERN and the other LHC experiments, an open-data web portal based on Invenio. The portal provides access to CMS public data as well as to analysis tools and documentation for the public. The tools include an event display and a histogram application that run in the browser. In addition, a virtual machine containing a CMS software environment, along with XRootD access to the data, is available; within the virtual machine the public can analyse CMS data, and example code is provided. We describe the accompanying tools and documentation and discuss the first experiences of data use.
Desktop document delivery using portable document format (PDF) files and the Web.
Shipman, J P; Gembala, W L; Reeder, J M; Zick, B A; Rainwater, M J
1998-01-01
Desktop access to electronic full-text literature was rated one of the most desirable services in a client survey conducted by the University of Washington Libraries. The University of Washington Health Sciences Libraries (UW HSL) conducted a ten-month pilot test from August 1996 to May 1997 to determine the feasibility of delivering electronic journal articles via the Internet to remote faculty. Articles were scanned into Adobe Acrobat Portable Document Format (PDF) files and delivered to individuals using Multipurpose Internet Mail Extensions (MIME) standard e-mail attachments and the Web. Participants retrieved scanned articles and used the Adobe Acrobat Reader software to view and print files. The pilot test required a special programming effort to automate the client notification and file deletion processes. Test participants were satisfied with the pilot test despite some technical difficulties. Desktop delivery is now offered as a routine delivery method from the UW HSL. PMID:9681165
Automating testbed documentation and database access using World Wide Web (WWW) tools
NASA Technical Reports Server (NTRS)
Ames, Charles; Auernheimer, Brent; Lee, Young H.
1994-01-01
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
International Laser Ranging Service (ILRS) 2001 Annual Report
NASA Technical Reports Server (NTRS)
Pearlman, Michael (Editor); Torrence, Mark (Editor); Noll, Carey (Editor)
2002-01-01
This 2001 Annual Report of the International Laser Ranging Service (ILRS) comprises individual contributions from ILRS components within the international geodetic community. It documents the work, changes, and progress of the ILRS components for the year 2001. This document is also available on the ILRS Web site at http://ilrs.gsfc.nasa.gov/reports/ilrs_reports/ilrsar_2001.html.
VisualUrText: A Text Analytics Tool for Unstructured Textual Data
NASA Astrophysics Data System (ADS)
Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.
2018-05-01
The amount of unstructured text growing on the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future data growth will be in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, i.e., non-trivial knowledge, in massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and visualizing the cleaned text in multiple forms such as a Document-Term Matrix (DTM), frequency graph, network analysis graph, word cloud and dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends in document analyses.
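To illustrate the Document-Term Matrix output mentioned above, the editorial sketch below builds a DTM and term frequencies with scikit-learn; this is not VisualUrText's implementation, just the standard construction.

```python
# Editorial sketch: build a Document-Term Matrix and rank term frequencies
# with scikit-learn (illustration only, not the tool's own code).
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "text mining discovers patterns in unstructured text",
    "unstructured text repositories grow with web applications",
]
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)   # sparse matrix: documents x terms
freqs = dtm.sum(axis=0).A1             # total frequency per term

for term, f in sorted(zip(vectorizer.get_feature_names_out(), freqs),
                      key=lambda t: -t[1])[:5]:
    print(term, f)                     # e.g. "text 3", "unstructured 2", ...
```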
The centrality of meta-programming in the ES-DOC eco-system
NASA Astrophysics Data System (ADS)
Greenslade, Mark
2017-04-01
The Earth System Documentation (ES-DOC) project is an international effort aiming to deliver a robust earth system model inter-comparison project documentation infrastructure. Such infrastructure both simplifies & standardizes the process of documenting (in detail) projects, experiments, models, forcings & simulations. In support of CMIP6, ES-DOC has upgraded its eco-system of tools, web-services & web-sites. The upgrade consolidates the existing infrastructure (built for CMIP5) and extends it with the introduction of new capabilities. The strategic focus of the upgrade is improving the documentation experience and broadening the range of scientific use-cases that the archived documentation may help deliver. Whether they are highlighting dataset errors, exploring experimental protocols, comparing forcings across ensemble runs, understanding MIP objectives, reviewing citations, exploring component properties of configured models, or visualising inter-model relationships, scientists involved in CMIP6 will find the ES-DOC infrastructure helpful. This presentation underlines the centrality of meta-programming within the ES-DOC eco-system. We will demonstrate how agility is greatly enhanced by taking a meta-programming approach to representing data models and controlled vocabularies. Such an approach nicely decouples representations from encodings. Meta-models will be presented along with the associated tooling chain that forward engineers artefacts as diverse as class hierarchies, IPython notebooks, mindmaps, configuration files, OWL & SKOS documents, and spreadsheets.
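A minimal sketch of the meta-programming idea, assuming an invented two-entry meta-model (ES-DOC's real meta-models are far richer): a declarative description is forward-engineered into a class hierarchy at runtime, and the same description could just as well drive generation of notebooks or OWL documents.

```python
# Sketch: forward-engineer a class hierarchy from a declarative meta-model.
# The meta-model content is invented for illustration.
META_MODEL = {
    "Simulation": {"base": None, "properties": ["name", "start_date"]},
    "Ensemble":   {"base": "Simulation", "properties": ["members"]},
}

def build_classes(meta_model):
    classes = {}
    for name, spec in meta_model.items():
        base = (classes[spec["base"]],) if spec["base"] else (object,)
        classes[name] = type(name, base, {"PROPERTIES": spec["properties"]})
    return classes

classes = build_classes(META_MODEL)
print(issubclass(classes["Ensemble"], classes["Simulation"]))  # True
```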
Indexing the medical open access literature for textual and content-based visual retrieval.
Eggel, Ivan; Müller, Henning
2010-01-01
Over the past few years an increasing number of scientific journals have been created in an open access format. Particularly in the medical field, the number of openly accessible journals is enormous, making a wide body of knowledge available for analysis and retrieval. Part of the trend towards open access publication can be linked to funding bodies such as the NIH (National Institutes of Health) and the Swiss National Science Foundation (SNF) requiring funded projects to make all articles of funded research publicly available. This article describes an approach to making part of the knowledge of open access journals available for retrieval, covering not only the textual information but also the images contained in the articles. For this goal, all articles of 24 journals related to medical informatics and medical imaging were crawled from the web pages of BioMed Central. Text and images of the PDF (Portable Document Format) files were indexed separately, and a web-based retrieval interface allows searching via keyword queries or by visual-similarity queries. The starting point for a visual-similarity query can be an image uploaded from the local hard disk or any image found via the textual search. Search for similar documents is also possible.
An Infrastructure for Indexing and Organizing Best Practices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Staples, Mark; Gorton, Ian
Industry best practices are widely held but not necessarily empirically verified software engineering beliefs. Best practices can be documented in distributed web-based public repositories as pattern catalogues or practice libraries. There is a need to systematically index and organize these practices to enable their better practical use and scientific evaluation. In this paper, we propose a semi-automatic approach to index and organize best practices. A central repository acts as an information overlay on top of other pre-existing resources to facilitate organization, navigation, annotation and meta-analysis while maintaining synchronization with those resources. An initial population of the central repository is automated using Yahoo! contextual search services. The collected data is organized using semantic web technologies so that the data can be more easily shared and used for innovative analyses. A prototype has demonstrated the capability of the approach.
Semantic Analysis of Email Using Domain Ontologies and WordNet
NASA Technical Reports Server (NTRS)
Berrios, Daniel C.; Keller, Richard M.
2005-01-01
The problem of capturing and accessing knowledge in paper form has been supplanted by the problem of providing structure to vast amounts of electronic information. Systems that can automatically construct semantic links for natural-language documents such as email messages will be a crucial element of semantic email tools. We have designed an information extraction process that can leverage the knowledge already contained in an existing semantic web, recognizing references in email to existing nodes in a network of ontology instances by using linguistic knowledge and knowledge of the structure of the semantic web. We developed a heuristic score that uses several forms of evidence to detect references in email to existing nodes in the SemanticOrganizer repository's network. While these scores cannot directly support automated probabilistic inference, they can be used to rank nodes by relevance and link those deemed most relevant to email messages.
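The scoring function itself is not given in the abstract; the following toy sketch shows one way several forms of evidence could be combined into a single reference score. The weights and evidence types are invented, not the authors' heuristic.

```python
# Toy sketch (invented weights and evidence types): combine several forms
# of evidence into one score ranking ontology nodes as candidate referents
# for a mention found in an email message.
def reference_score(mention, node, context_terms, weights=(0.6, 0.3, 0.1)):
    """Combine exact-label match, alias match, and neighborhood overlap."""
    exact = float(mention.lower() == node["label"].lower())
    alias = float(mention.lower() in {a.lower() for a in node["aliases"]})
    overlap = len(node["neighbors"] & context_terms) / max(len(node["neighbors"]), 1)
    w_exact, w_alias, w_overlap = weights
    return w_exact * exact + w_alias * alias + w_overlap * overlap

node = {"label": "Mars Rover", "aliases": {"MER"}, "neighbors": {"telemetry", "camera"}}
print(reference_score("MER", node, context_terms={"telemetry"}))  # ~0.35
```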
Lin, Jimmy
2008-01-01
Background Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. Results We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. Conclusion The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain. PMID:18538027
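As an editorial illustration of the reranking recipe described (graph scores combined with retrieval-engine scores), here is a minimal power-iteration PageRank over a toy similarity network; the graph and combination weights are invented, not the paper's experimental settings.

```python
# Minimal power-iteration PageRank over a toy content-similarity network,
# linearly combined with retrieval scores; all numbers are illustrative.
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration; assumes every node has at least one outgoing edge."""
    n = adj.shape[0]
    transition = adj / adj.sum(axis=1, keepdims=True)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (transition.T @ rank)
    return rank

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 1, 0]], dtype=float)   # citation-similarity links
retrieval = np.array([2.1, 1.7, 0.4])      # scores from a text engine
combined = 0.9 * retrieval + 0.1 * len(adj) * pagerank(adj)
print(np.argsort(-combined))               # reranked document order
```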
Staccini, Pascal; Joubert, Michel; Quaranta, Jean-François; Fieschi, Marius
2003-01-01
Today, the economic and regulatory environment is pressuring hospitals and healthcare professionals to account for their results and methods of care delivery. The evaluation of the quality and safety of care, the traceability of the acts performed, and the evaluation of practices are some of the reasons underpinning current interest in clinical and hospital information systems. The structured collection of users' needs and system requirements is fundamental when installing such systems. This stage takes time, is generally poorly understood by caregivers, and is of limited analytic efficacy. We used a modelling technique designed for manufacturing processes (SADT: Structured Analysis and Design Technique). We enhanced the method's initial activity model and programmed a web-based tool in an object-oriented environment. This tool makes it possible to extract the data dictionary from the description of a given process and to locate documents (procedures, recommendations, instructions). Aimed at structuring needs and storing information provided by the teams directly involved in the workings of an institution (or at least part of it), the process-mapping approach has an important contribution to make to the analysis of clinical information systems.
Data display and analysis with μView
NASA Astrophysics Data System (ADS)
Tucakov, Ivan; Cosman, Jacob; Brewer, Jess H.
2006-03-01
The μView utility is a new Java applet version of the old db program, extended to include direct access to MUD data files, from which it can construct a variety of spectrum types, including complex and RRF-transformed spectra. By using graphics features built into all modern Web browsers, it provides full graphical display capabilities consistently across all platforms. It has the full command-line functionality of db as well as a more intuitive graphical user interface and extensive documentation, and can read and write db, csv and XML format files.
Cytoscape.js: a graph theory library for visualisation and analysis.
Franz, Max; Lopes, Christian T; Huck, Gerardo; Dong, Yue; Sumer, Onur; Bader, Gary D
2016-01-15
Cytoscape.js is an open-source JavaScript-based graph library. Its most common use case is as a visualization software component, so it can be used to render interactive graphs in a web browser. It can also be used in a headless manner, which is useful for graph operations on a server, such as in Node.js. Documentation, downloads and source code are available at http://js.cytoscape.org. Contact: gary.bader@utoronto.ca.
Ad-Hoc Queries over Document Collections - A Case Study
NASA Astrophysics Data System (ADS)
Löser, Alexander; Lutter, Steffen; Düssel, Patrick; Markl, Volker
We discuss the novel problem of supporting analytical business-intelligence queries over web-based textual content, e.g., BI-style reports based on hundreds of thousands of documents from an ad-hoc web search result. Neither conventional search engines nor conventional Business Intelligence and ETL tools address this problem, which lies at the intersection of their capabilities. "Google Squared" and our system GOOLAP.info are examples of such systems. They execute information extraction methods over one or several document collections at query time and integrate extracted records into a common view or tabular structure. Frequent extraction and object-resolution failures cause incomplete records that cannot be joined into a record answering the query. Our focus is the identification of join-reordering heuristics that maximize the number of complete records answering a structured query. With respect to given costs for document extraction we propose two novel join operations: the multi-way CJ operator joins records from multiple relationships extracted from a single document, and the two-way join operator DJ ensures data density by removing incomplete records from results. In a preliminary case study we observe that our join-reordering heuristics positively impact result size and record density while lowering execution costs.
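A hedged sketch of the DJ (density-ensuring) join as we read the abstract: join extracted records and keep only complete rows. The field names and data below are invented for illustration.

```python
# Sketch of a density-ensuring two-way join over extracted records:
# inner-join on a key, then drop rows with missing fields. Field names
# and data are invented; this is our reading of DJ, not the paper's code.
def density_join(left, right, key):
    """Inner-join two lists of dicts on `key`, keeping only complete rows."""
    index = {r[key]: r for r in right if r.get(key) is not None}
    joined = []
    for l in left:
        r = index.get(l.get(key))
        if r is None:
            continue
        row = {**l, **r}
        if all(v is not None for v in row.values()):  # enforce density
            joined.append(row)
    return joined

companies = [{"company": "Acme", "ceo": "A. Smith"},
             {"company": "Globex", "ceo": None}]      # incomplete extraction
deals = [{"company": "Acme", "acquired": "Initech"}]
print(density_join(companies, deals, key="company"))  # only the Acme row
```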
Computational Tools and Facilities for the Next-Generation Analysis and Design Environment
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)
1997-01-01
This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including, real-time simulations, immersive systems, collaborative engineering environment, Web-based tools and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.
Deep pelagic food web structure as revealed by in situ feeding observations
Haddock, Steven H. D.; Robison, Bruce H.
2017-01-01
Food web linkages, or the feeding relationships between species inhabiting a shared ecosystem, are an ecological lens through which ecosystem structure and function can be assessed, and thus are fundamental to informing sustainable resource management. Empirical feeding datasets have traditionally been painstakingly generated from stomach content analysis, direct observations and from biochemical trophic markers (stable isotopes, fatty acids, molecular tools). Each approach carries inherent biases and limitations, as well as advantages. Here, using 27 years (1991–2016) of in situ feeding observations collected by remotely operated vehicles (ROVs), we quantitatively characterize the deep pelagic food web of central California within the California Current, complementing existing studies of diet and trophic interactions with a unique perspective. Seven hundred and forty-three independent feeding events were observed with ROVs from near-surface waters down to depths approaching 4000 m, involving an assemblage of 84 different predators and 82 different prey types, for a total of 242 unique feeding relationships. The greatest diversity of prey was consumed by narcomedusae, followed by physonect siphonophores, ctenophores and cephalopods. We highlight key interactions within the poorly understood ‘jelly web’, showing the importance of medusae, ctenophores and siphonophores as key predators, whose ecological significance is comparable to large fish and squid species within the central California deep pelagic food web. Gelatinous predators are often thought to comprise relatively inefficient trophic pathways within marine communities, but we build upon previous findings to document their substantial and integral roles in deep pelagic food webs. PMID:29212727
NASA Technical Reports Server (NTRS)
Muniz, R.; Hochstadt, J.; Boelke, J.; Dalton, A.
2011-01-01
The Content Documents are created and managed by the System Software group within the Launch Control System (LCS) project. The System Software product group is led by the NASA Engineering Control and Data Systems branch (NEC3) at Kennedy Space Center. The team is working on creating Operating System Images (OSI) for different platforms (i.e., AIX, Linux, Solaris and Windows). Before an OSI can be created, the team must create a Content Document, which provides the information for a workstation or server, with the list of all the software to be installed on it and the set to which the hardware belongs; this can be, for example, the LDS, the ADS or the FR-l. The objective of this project is to create a user-interface Web application that can manage the information in the Content Documents, with all the correct validations and filters for administrative purposes. For this project we used Ruby on Rails, one of the most effective tools for agile development of Web applications. This tool helps pragmatic programmers develop Web applications with the Rails framework and the Ruby programming language. It is remarkable how much a student can learn: OOP features with the Ruby language, managing the user interface with HTML and CSS, creating associations and queries with gems, managing databases and running a server with MySQL, running shell commands from the command prompt, and building Web frameworks with Rails. All of this in a real-world project, and in just fifteen weeks!
The Advancement of Public Awareness, Concerning TRU Waste Characterization, Using a Virtual Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, T. B.; Burns, T. P.; Estill, W. G.
2002-02-28
Building public trust and confidence through openness is a goal of the DOE Carlsbad Field Office for the Waste Isolation Pilot Plant (WIPP). The objective of the virtual document described in this paper is to give the public an overview of the waste characterization steps, an understanding of how waste characterization instrumentation works, and the type and amount of data generated from a batch of drums. The document is intended to be published on a web page and/or distributed at public meetings on CDs. Users may gain as much information as they desire regarding the transuranic (TRU) waste characterization program, starting at the highest level requirements (drivers) and progressing to more and more detail regarding how the requirements are met. Included are links to: drivers (which include laws, permits and DOE Orders); various characterization steps required for transportation and disposal under WIPP's Hazardous Waste Facility Permit; physical/chemical basis for each characterization method; types of data produced; and quality assurance process that accompanies each measurement. Examples of each type of characterization method in use across the DOE complex are included. The original skeleton of the document was constructed in a PowerPoint presentation and included descriptions of each section of the waste characterization program. This original document had a brief overview of Acceptable Knowledge, Non-Destructive Examination, Non-Destructive Assay, Small Quantity sites, and the National Certification Team. A student intern was assigned the project of converting the document to a virtual format and to discuss each subject in depth. The resulting product is a fully functional virtual document that works in a web browser and functions like a web page. All documents that were referenced, linked to, or associated, are included on the virtual document's CD. WIPP has been engaged in a variety of Hazardous Waste Facility Permit modification activities. During the public meetings, discussion centered on proposed changes to the characterization program. The philosophy behind the virtual document is to show the characterization process as a whole, rather than as isolated parts. In addition to public meetings, other uses for the information might be as a training tool for new employees at the WIPP facility to show them where their activities fit into the overall scheme, as well as an employee review to help prepare for waste certification audits.
A highly scalable information system as extendable framework solution for medical R&D projects.
Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin
2009-01-01
Research projects in preventive medicine need flexible information management that allows free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data from multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for convenient, location-independent capture of time-based physiological parameters, and the ability to observe these measurements directly within the system, enable new scenarios. The web-based information system presented in this paper is configurable through user interfaces. It covers medical process descriptions, operative process data visualization, user-friendly process data processing, modern online interfaces (databases, web services, XML), and convenient support for extended data analysis with third-party applications.
ERIC Educational Resources Information Center
Lessne, Deborah; Yanez, Christina
2016-01-01
This document reports data from the 2015 School Crime Supplement (SCS) to the National Crime Victimization Survey (NCVS). The Web Tables show the extent to which students with different personal characteristics report being bullied. Estimates include responses by student characteristics: student sex, race/ethnicity, grade, and household income.…
27 CFR 73.31 - May I submit forms electronically to TTB?
Code of Federal Regulations, 2010 CFR
2010-04-01
... requirement in this chapter, only if: (a) We have published a notice in the Federal Register and on our Web... Register and on our Web site as stated above; (c) You submit the electronic form to an electronic document receiving system that we have designated for the receipt of that specific form; and (d) The electronic form...
ERIC Educational Resources Information Center
Hammond, Carol, Ed.
This document contains three papers presented at the 1995 Arizona Library Association conference. Papers include: (1) "ERLs and URLs: ASU Libraries Database Delivery Through Web Technology" (Dennis Brunning & Philip Konomos), which illustrates how and why the libraries at Arizona State University developed a world wide web server and…
ERIC Educational Resources Information Center
Lee, John K.; Calandra, Brendan
2004-01-01
Two versions of a Web site on the United States Constitution were used by students in separate high school history classes to solve problems that emerged from four constitutional scenarios. One site contained embedded conceptual scaffolding devices in the form of textual annotations; the other did not. The results of our study demonstrated the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-20
... 20426. The EA also may be viewed on the Commission's Internet Web site at ( www.ferc.gov ) using the ``eLibrary'' link. Enter the docket number excluding the last three digits in the docket number field to access the document. Additional information about the project is available from the Commission's Web site...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-07
... electronically from the ADAMS Public Library component on the NRC Web site, http://www.nrc.gov . Persons who do... Agencywide Documents Access and Management System (ADAMS) at accession numbers ML131300009 and ML131300160... Web site at http://www.nrc.gov/public-involve/public-meetings/ . FOR FURTHER INFORMATION CONTACT...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-03
...'' field when using either the Web-based search (advanced search) engine or the ADAMS FIND tool in Citrix... should enter ``05200011'' in the ``Docket Number'' field in the web-based search (advanced search) engine... ML100740441. To search for documents in ADAMS using Vogtle Units 3 and 4 COL application docket numbers, 52...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
... capacity of 450 kilowatts; (4) an existing 10- foot-wide, 8-foot-deep intake canal; (5) new trash racks... Commission's Web site under the ``eFiling'' link. If unable to be filed electronically, documents may be... information on how to submit these types of filings please go to the Commission's Web site located at http...
U.S. Public Libraries and the Use of Web Technologies, 2012. A Closer Look
ERIC Educational Resources Information Center
Wanucha, Meghan; Hofschire, Linda
2013-01-01
In 2008, researchers at the Library Research Service (LRS) undertook the "U.S. Public Libraries and the Use of Web Technologies" study, with the intent to document the use of various Internet technologies on the websites of public libraries throughout the nation (Lietzau, 2009). The results of that study set a baseline for the adoption…
Use of World Wide Web Server and Browser Software To Support a First-Year Medical Physiology Course.
ERIC Educational Resources Information Center
Davis, Michael J.; And Others
1997-01-01
Describes the use of a World Wide Web server to support a team-taught physiology course for first-year medical students. The students' evaluations indicate that computer use in class made lecture material more interesting, while the online documents helped reinforce lecture materials and textbooks. Lists factors which contribute to the…
Suplatov, Dmitry; Kirilin, Eugeny; Arbatsky, Mikhail; Takhaveev, Vakil; Svedas, Vytas
2014-07-01
The new web-server pocketZebra implements the power of bioinformatics and geometry-based structural approaches to identify and rank subfamily-specific binding sites in proteins by functional significance, and select particular positions in the structure that determine selective accommodation of ligands. A new scoring function has been developed to annotate binding sites by the presence of the subfamily-specific positions in diverse protein families. pocketZebra web-server has multiple input modes to meet the needs of users with different experience in bioinformatics. The server provides on-site visualization of the results as well as off-line version of the output in annotated text format and as PyMol sessions ready for structural analysis. pocketZebra can be used to study structure-function relationship and regulation in large protein superfamilies, classify functionally important binding sites and annotate proteins with unknown function. The server can be used to engineer ligand-binding sites and allosteric regulation of enzymes, or implemented in a drug discovery process to search for potential molecular targets and novel selective inhibitors/effectors. The server, documentation and examples are freely available at http://biokinet.belozersky.msu.ru/pocketzebra and there are no login requirements. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kasabwala, Khushabu; Agarwal, Nitin; Hansberry, David R; Baredes, Soly; Eloy, Jean Anderson
2012-09-01
Americans are increasingly turning to the Internet as a source of health care information. These online resources should be written at a level readily understood by the average American. This study evaluates the readability of online patient education information available from the American Academy of Otolaryngology--Head and Neck Surgery Foundation (AAO-HNSF) professional Web site using 7 different assessment tools that analyze the materials for reading ease and the grade level of their target audience. Analysis of Internet-based patient education material from the AAO-HNSF Web site. Online patient education material from the AAO-HNSF was downloaded in January 2012 and assessed for level of readability using the Flesch Reading Ease, Flesch-Kincaid Grade Level, SMOG grading, Coleman-Liau Index, Gunning-Fog Index, Raygor Readability Estimate graph, and Fry Readability graph. The text from each subsection was pasted as plain text into a Microsoft Word document, and each subsection was subjected to readability analysis using the software package Readability Studio Professional Edition, Version 2012.1. All health care education material assessed is written between an 11th-grade and a graduate reading level and is considered "difficult to read" by the assessment scales. Online patient education materials on the AAO-HNSF Web site are written above the recommended 6th-grade level and may need to be revised to make them more easily understood by a broader audience.
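For reference, two of the cited measures have standard published definitions (stated here from the readability literature, not from the article itself):

```latex
\text{Flesch Reading Ease} = 206.835
  - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right)
  - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)
```

```latex
\text{Flesch--Kincaid Grade Level} =
  0.39\left(\frac{\text{total words}}{\text{total sentences}}\right)
  + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
```

Higher Reading Ease scores indicate easier text, while the Grade Level maps directly onto U.S. school grades, which is why a 6th-grade target is commonly recommended for patient materials.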
Monitoring the Deterioration of Stone at Mindener MUSEUM'S Lapidarium
NASA Astrophysics Data System (ADS)
Pomaska, G.
2013-07-01
Mindener Museum's Lapidarium incorporates a collection of stonework, such as reliefs, sculptures and inscriptions from different epochs, as witnesses of the city's history. These gems must be protected against environmental influences and deterioration. In advance of the conservation measures, a 3D reconstruction and detailed documentation has to be produced. The hardware and software framework to be established must match the museum's infrastructure. Two major questions will be answered. Are low-cost scanning devices such as depth cameras and digital off-the-shelf cameras suitable for the data acquisition? And does the functionality of open-source software and freeware cover the demands of investigation and analysis in this application? The working chain described in this contribution covers the structure-from-motion method and reconstruction with RGB-D cameras. Mesh processing such as cleaning, smoothing, Poisson surface reconstruction and texturing is accomplished with MeshLab. Data acquisition and modelling are followed by structural analysis; the focus therefore also lies on the latest software developments related to 3D printing technologies. Repairing and finishing of meshes is a task for MeshMixer. Netfabb, as a tool for positioning, dimensioning and slicing, enables virtual handling of the items. On the Sketchfab web site one can publish and share 3D objects, with integration into web pages supported by WebGL. Finally, if a prototype is needed, the mesh can be uploaded to an online 3D printing service.
In Brief: Web site for human spaceflight review committee
NASA Astrophysics Data System (ADS)
Showstack, Randy
2009-06-01
As part of an independent review of human spaceflight plans and programs, NASA has established a Web site for the Review of U.S. Human Space Flight Plans Committee (http://hsf.nasa.gov). The Web site provides the committee's charter, relevant documents, information about meetings and members, and ways to contact the committee. “The human spaceflight program belongs to everyone. Our committee would hope to benefit from the views of all who would care to contact us,” noted committee chairman Norman Augustine, retired chair and CEO of Lockheed Martin Corporation.
An Auto-management Thesis Program WebMIS Based on Workflow
NASA Astrophysics Data System (ADS)
Chang, Li; Jie, Shi; Weibo, Zhong
An auto-management WebMIS (Web-based management information system) based on workflow for bachelor thesis programs is presented in this paper. A module for workflow dispatching is designed and implemented using MySQL and J2EE according to the working principle of a workflow engine. The module can automatically dispatch the workflow according to the system date, the login information, and the work status of the user. The WebMIS moves management from manual work to computer-based work, which not only standardizes the thesis program but also keeps the data and documents clean and consistent.
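A minimal sketch of such a dispatch rule, assuming invented state and role names: the next workflow state is chosen from the system date, the user's role, and the current status, which is the decision the abstract attributes to the dispatching module.

```python
# Sketch of workflow dispatch from (status, role, date); the states, roles
# and transition table are invented for illustration.
from datetime import date

TRANSITIONS = {
    ("proposal_submitted", "advisor"): "proposal_review",
    ("proposal_review", "student"): "thesis_writing",
    ("thesis_writing", "advisor"): "defense_scheduling",
}

def dispatch(status, role, today, deadline):
    if today > deadline:
        return "overdue_notification"   # date rule takes precedence
    return TRANSITIONS.get((status, role), status)  # no rule: stay put

print(dispatch("proposal_submitted", "advisor",
               today=date(2024, 3, 1), deadline=date(2024, 6, 1)))
```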
48 CFR 252.232-7006 - Wide Area WorkFlow Payment Instructions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... submission. Document submissions may be via Web entry, Electronic Data Interchange, or File Transfer Protocol... acceptance locations or “Not applicable.”) (3) Document routing. The Contractor shall use the information in the Routing Data Table below only to fill in applicable fields in WAWF when creating payment requests...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... System Web site at https://edis.usitc.gov . Failure to comply with the requirements of this chapter and... Electronic Document Information System (EDIS) already accepts electronic filing of certain documents, and..., regardless of whether the electronic docketing system is operational. The ITC TLA makes a similar comment...
David J. Gross and the Strong Force
... allows physicists to predict experimental results to within one part in 100 million. Information about the new Nobelists is available in electronic documents and on the Web. Documents include "Ultraviolet Behavior of Non-Abelian Gauge Theories" (Gross) and a video interview with Gross.
10 CFR 50.4 - Written communications.
Code of Federal Regulations, 2011 CFR
2011-01-01
...: Document Control Desk, U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001; by hand delivery to... NRC's Web site at http://www.nrc.gov/site-help/e-submittals.html; by e-mail to [email protected] otherwise specified in paragraphs (b)(2) through (b)(7) of this section: to the NRC's Document Control Desk...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... notices is to give interested persons an opportunity to participate in the rule making prior to... public comment period. SUMMARY: This document announces a reopening of the comment period for interested....gov web page contains simple instructions on how to access all documents, including public comments...
14 CFR 11.45 - Where and when do I file my comments?
Code of Federal Regulations, 2013 CFR
2013-01-01
... do I file my comments? (a) Send your comments to the location specified in the rulemaking document on which you are commenting. If you are asked to send your comments to the Federal Document Management... you do not follow the electronic filing instructions at the Federal Docket Management System Web site...