Sample records for generation web part

  1. Usability Testing for e-Resource Discovery: How Students Find and Choose e-Resources Using Library Web Sites

    ERIC Educational Resources Information Center

    Fry, Amy; Rich, Linda

    2011-01-01

    In early 2010, library staff at Bowling Green State University (BGSU) in Ohio designed and conducted a usability study of key parts of the library web site, focusing on the web pages generated by the library's electronic resources management system (ERM) that list and describe the library's databases. The goal was to discover how users find and…

  2. ER2OWL: Generating OWL Ontology from ER Diagram

    NASA Astrophysics Data System (ADS)

    Fahad, Muhammad

    Ontology is a fundamental part of the Semantic Web. The goal of the W3C is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented using structured analysis and structured design (SASD), especially simple or Extended ER Diagrams (ERD). Such systems need upgrading to become part of the semantic web. In this paper, we present concrete-level transformation rules from ERD to OWL-DL ontology. These rules facilitate an easy and understandable transformation from ERD to OWL. The set of transformation rules is tested on a structured analysis and design example. The framework provides OWL ontologies as a foundation for the semantic web and helps software engineers upgrade the structured analysis and design artifact, the ERD, into components of the semantic web. Moreover, our transformation tool, ER2OWL, reduces the cost and time of building OWL ontologies by reusing existing entity-relationship models.
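
    The flavor of such ERD-to-OWL rules can be suggested with a small sketch. This is a hypothetical, simplified rendition in Python (entity → owl:Class, attribute → owl:DatatypeProperty, relationship → owl:ObjectProperty), not the ER2OWL tool itself; the entity and property names are invented.

```python
# Toy ERD-to-OWL mapping (illustrative, not ER2OWL): each entity becomes an
# owl:Class, each attribute a DatatypeProperty with that entity as domain,
# and each relationship an ObjectProperty between the two entity classes.

def erd_to_owl(entities, relationships):
    """entities: {name: [attributes]}; relationships: [(name, source, target)]."""
    lines = []
    for entity, attrs in entities.items():
        lines.append(f":{entity} a owl:Class .")
        for attr in attrs:
            lines.append(f":{attr} a owl:DatatypeProperty ; rdfs:domain :{entity} .")
    for rel, src, dst in relationships:
        lines.append(f":{rel} a owl:ObjectProperty ; "
                     f"rdfs:domain :{src} ; rdfs:range :{dst} .")
    return "\n".join(lines)

owl = erd_to_owl(
    entities={"Student": ["studentName"], "Course": ["courseTitle"]},
    relationships=[("enrolledIn", "Student", "Course")],
)
print(owl)
```

    Real transformations must also handle keys, cardinalities, and weak entities, which is where rule sets like ER2OWL's earn their keep.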

  3. Web site development: applying aesthetics to promote breast health education and awareness.

    PubMed

    Thomas, Barbara; Goldsmith, Susan B; Forrest, Anne; Marshall, Renée

    2002-01-01

    This article describes the process of establishing a Web site as part of a collaborative project using visual art to promote breast health education. The need for a more "user-friendly," comprehensive breast health Web site that is aesthetically rewarding was identified after an analysis of current Web sites available through the World Wide Web. Two predetermined sets of criteria, accountability and aesthetics, were used to analyze these sites and to generate ideas for creating a breast health education Web site using visual art. Results of the analyses are included, as well as the factors to consider incorporating into a Web site. The process specified is thorough and can be applied to establish a Web site that is aesthetically rewarding and informative for a variety of educational purposes.

  4. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
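
    The automatic SPARQL generation step can be sketched as a simple query builder over column-to-ontology mappings. This is not BioSemantic's actual code; the predicates (ex:geneName, ex:chromosome) and variable names are made-up examples.

```python
# Hedged sketch of SPARQL SELECT generation from an annotated relational
# schema: each mapped column contributes a triple pattern, and optional
# equality filters are appended. Predicates below are hypothetical.

def build_sparql(mappings, filters=None):
    """mappings: {sparql_var: ontology_predicate}; filters: {var: value}."""
    vars_ = " ".join(f"?{v}" for v in mappings)
    patterns = [f"?record {pred} ?{var} ." for var, pred in mappings.items()]
    for var, value in (filters or {}).items():
        patterns.append(f'FILTER(?{var} = "{value}")')
    body = "\n  ".join(patterns)
    return f"SELECT {vars_}\nWHERE {{\n  {body}\n}}"

query = build_sparql(
    {"gene": "ex:geneName", "chrom": "ex:chromosome"},
    filters={"chrom": "chr1"},
)
print(query)
```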

  5. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  6. Social network extraction based on Web: 3. the integrated superficial method

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, O. S.; Noah, S. A.

    2018-03-01

    The Web as a source of information has become part of social behavior information. Even though it relies only on the limited information disclosed by search engines, in the form of hit counts, snippets, and URL addresses of web pages, the integrated extraction method produces a social network that is not only trustworthy but enriched. Unintegrated extraction methods may produce social networks without explanation, with poor supplemental information, or laden with surmise, and consequently unrepresentative social structures. The integrated superficial method generates not only the core social network but also an expanded network, reaching the full scope of relation clues, with a number of edges computationally close to n(n - 1)/2 for n social actors.
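
    The n(n - 1)/2 bound on candidate ties can be made concrete with a short sketch; the actual scoring from hit counts and snippets is out of scope here, so this only enumerates the unordered actor pairs.

```python
from itertools import combinations

# Sketch of the superficial extraction search space: candidate ties between
# n actors are the n(n - 1)/2 unordered pairs. Scoring each pair from
# search-engine evidence (hit counts, snippets, URLs) is stubbed out.

def candidate_edges(actors):
    return list(combinations(actors, 2))

actors = ["Alice", "Bob", "Carol", "Dave"]
edges = candidate_edges(actors)
print(len(edges))  # n(n - 1)/2 = 4 * 3 / 2 = 6
```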

  7. Is there a "net generation" in veterinary medicine? A comparative study on the use of the Internet and Web 2.0 by students and the veterinary profession.

    PubMed

    Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R; Ehlers, Jan P

    2013-01-01

    Informal and formal lifelong learning is essential at university and in the workplace. Apart from classical learning techniques, Web 2.0 tools can be used. It is controversial whether a so-called net generation exists among people under 30. To test the hypothesis that a net generation exists among students and young veterinarians, an online survey of students and veterinarians was conducted in the German-speaking countries, advertised via online media and traditional print media. 1780 people took part in the survey. Students and veterinarians have different usage patterns regarding social networks (91.9% vs. 69%) and IM (55.9% vs. 24.5%). All tools were predominantly used passively and in private, and to a lesser extent professionally and for studying. The use of Web 2.0 tools is worthwhile; however, teaching information and media skills, preparing codes of conduct for the internet, and verifying user-generated content are essential.

  8. Ontology Research and Development. Part 1-A Review of Ontology Generation.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Discusses the role of ontology in knowledge representation, including enabling content-based access, interoperability, communications, and new levels of service on the Semantic Web; reviews current ontology generation studies and projects as well as problems facing such research; and discusses ontology mapping, information extraction, natural…

  9. But It All Happened Online--Where Do We Stand as a School?: Student Rights in Noninstructional Matters

    ERIC Educational Resources Information Center

    Gomez, Juan; McNamara, Patrick; Brooks, Jeffrey S.

    2006-01-01

    This case was developed as part of a school law course in an educational leadership preparation program and focuses on a school's legal right to discipline a student for creating and posting two offensive Web sites: One was generated during instructional time, and the other was completed off-site during noninstructional time. The second Web site…

  10. FROG: Time Series Analysis for the Web Service Era

    NASA Astrophysics Data System (ADS)

    Allan, A.

    2005-12-01

    The FROG application is part of the next-generation Starlink{http://www.starlink.ac.uk} software work (Draper et al. 2005) and is released under the GNU Public License{http://www.gnu.org/copyleft/gpl.html} (GPL). Written in Java, it has been designed for the Web and Grid Service era as an extensible, pluggable tool for time series analysis and display. With an integrated SOAP server, the package's functionality is exposed to users for use in their own code and for remote use over the Grid, as part of the Virtual Observatory (VO).

  11. BI-NATIONAL LOWER FOOD WEB ASSESSMENT: 2005 BENTHOS RESULTS

    EPA Science Inventory

    Findings have been generated as part of a bi-national coordinated partnership for lakewide sampling to support needs expressed by the Great Lakes Fisheries Committee, the Lake Superior Technical Committee, and the Lake Superior LaMP.

  12. Deconstructing Digital Natives: Young People, Technology, and the New Literacies

    ERIC Educational Resources Information Center

    Thomas, Michael, Ed.

    2011-01-01

    There have been many attempts to define the generation of students who emerged with the Web and new digital technologies in the early 1990s. The term "digital native" refers to the generation born after 1980, which has grown up in a world where digital technologies and the internet are a normal part of everyday life. Young people…

  13. Automatic Earth observation data service based on reusable geo-processing workflow

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min

    2008-12-01

    A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and a BPEL execution engine is adopted. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, and sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.

  14. Keynote Talk: Mining the Web 2.0 for Improved Image Search

    NASA Astrophysics Data System (ADS)

    Baeza-Yates, Ricardo

    There are several semantic sources that can be found in the Web that are either explicit, e.g. Wikipedia, or implicit, e.g. derived from Web usage data. Most of them are related to user generated content (UGC), or what is today called the Web 2.0. In this talk we show how to use these sources of evidence in Flickr, such as tags, visual annotations or clicks, which represent the wisdom of crowds behind UGC, to improve image search. These results are the work of the multimedia retrieval team at Yahoo! Research Barcelona and are already being used in Yahoo! image search. This work is part of a larger effort to produce a virtuous data feedback circuit based on the right combination of many different technologies to leverage the Web itself.

  15. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  16. Is there a “net generation” in veterinary medicine? A comparative study on the use of the Internet and Web 2.0 by students and the veterinary profession

    PubMed Central

    Tenhaven, Christoph; Tipold, Andrea; Fischer, Martin R.; Ehlers, Jan P.

    2013-01-01

    Introduction: Informal and formal lifelong learning is essential at university and in the workplace. Apart from classical learning techniques, Web 2.0 tools can be used. It is controversial whether a so-called net generation exists among people under 30. Aims: To test the hypothesis that a net generation exists among students and young veterinarians. Methods: An online survey of students and veterinarians was conducted in the German-speaking countries, advertised via online media and traditional print media. Results: 1780 people took part in the survey. Students and veterinarians have different usage patterns regarding social networks (91.9% vs. 69%) and IM (55.9% vs. 24.5%). All tools were predominantly used passively and in private, and to a lesser extent professionally and for studying. Outlook: The use of Web 2.0 tools is worthwhile; however, teaching information and media skills, preparing codes of conduct for the internet, and verifying user-generated content are essential. PMID:23467682

  17. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects the components of a workflow into a single script with very little effort on the part of the user. Tasks are most often described as a series of instructions, and the granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (Ruby, Perl) describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the SOAP 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards the automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF-funded VLab consortium [1] to facilitate and automate time-consuming tasks in the science of understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web services. In particular, we will detail the generation of a charge density visualization service applicable to output from the quantum calculations of the VLab computation workflows, plus another service for mantle convection visualization. We also discuss WATT-LIVE [2], a web-based interface that allows users to interact with WATT. With WATT-LIVE, users submit Tcl code and retrieve its C++ translation, along with the various files and scripts necessary to install the tailor-made web service locally, or launch the service for a limited session on our test server. This work is supported by NSF through ITR grant NSF-0426867. [1] Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu, September 2007. [2] WATT-LIVE website, http://vlab2.scs.fsu.edu/watt-live, September 2007.
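
    The kind of Tcl-to-C++ rewrite such a compiler performs can be suggested with a toy pattern-based translator. WATT itself is written in OCaml and generates full gSOAP stubs; this Python sketch only handles two invented patterns (VTK-style object creation and a simple method call) for illustration.

```python
import re

# Toy source-to-source rewrite in the spirit of WATT (purely illustrative):
# "vtkSphereSource sphere"  -> C++ object creation via the VTK New() idiom
# "sphere SetRadius 5"      -> C++ pointer method call

def tcl_to_cpp(line):
    m = re.fullmatch(r"(vtk\w+)\s+(\w+)", line.strip())
    if m:  # object creation
        cls, var = m.groups()
        return f"{cls} *{var} = {cls}::New();"
    m = re.fullmatch(r"(\w+)\s+(\w+)\s*(.*)", line.strip())
    if m:  # method call, possibly with arguments
        var, method, args = m.groups()
        return f"{var}->{method}({args.strip()});"
    return line

print(tcl_to_cpp("vtkSphereSource sphere"))
print(tcl_to_cpp("sphere SetRadius 5"))
```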

  18. Beyond Web 2.0 … and Beyond the Semantic Web

    NASA Astrophysics Data System (ADS)

    Bénel, Aurélien; Zhou, Chao; Cahier, Jean-Pierre

    Tim O'Reilly, the famous technology book publisher, changed the life of many of us when he coined the name "Web 2.0" (O'Reilly 2005). Our research topics suddenly became subjects for open discussion in various cultural formats such as radio and TV, while at the same time they became part of an inappropriate marketing discourse according to several scientific reviewers. Indeed, Tim O'Reilly's initial thoughts were about economic consequences, since it was about the resurrection of the Web after the bursting of the dot-com bubble. Some opponents of the concept do not think the term should be used at all, since it is underpinned by no technological revolution. In contrast, we think that there was a paradigm shift when several sites based on user-generated content became some of the most visited Web sites, and massive adoption of that kind is worthy of researchers' attention.

  19. The Future of Web Maps in Next Generation Textbooks

    NASA Astrophysics Data System (ADS)

    DiBiase, D.; Prasad, S.

    2014-12-01

    The reformation of the "Object Formerly Known as Textbook" (a term coined by the Chronicle of Higher Education) toward a digital future is underway. Emerging nextgen texts look less like electronic books ("ebooks") and more like online courseware. In addition to text and illustrations, nextgen textbooks for STEM subjects are likely to combine quizzes, grade management tools, support for social learning, and interactive media including web maps. Web maps are interactive, multi-scale, online maps that enable teachers and learners to explore, interrogate, and mash up the wide variety of map layers available in the cloud. This presentation will show how web maps coupled with interactive quizzes enable students' purposeful explorations and interpretations of spatial patterns related to humankind's interactions with the earth. Attendees will also learn about Esri's offer to donate ArcGIS Online web mapping subscriptions to every U.S. school as part of President Obama's ConnectED initiative.

  20. A novel artificial intelligence method for weekly dietary menu planning.

    PubMed

    Gaál, B; Vassányi, I; Kozmann, G

    2005-01-01

    Menu planning is an important part of personalized lifestyle counseling. This paper describes the results of an automated menu generator (MenuGene) of the web-based lifestyle counseling system Cordelia, which provides personalized advice to prevent cardiovascular diseases. The menu generator uses genetic algorithms to prepare weekly menus for web users. The objectives are derived from personal medical data collected via forms in Cordelia, combined with general nutritional guidelines. The weekly menu is modeled as a multilevel structure. Results show that the genetic algorithm-based method succeeds in planning dietary menus that satisfy strict numerical constraints at every nutritional level (meal, daily, weekly). The rule-based assessment proved capable of manipulating the mean occurrence of the nutritional components, thus providing a method for adjusting the variety and harmony of the menu plans. By splitting the problem into well-determined sub-problems, weekly menu plans that satisfy nutritional constraints and have well-assorted components can be generated with the same method used for daily and meal plan generation.
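
    The genetic-algorithm approach can be sketched in miniature: a "menu" is seven daily meal choices, and fitness penalizes deviation from a daily calorie target. The meals, calorie values, and operators below are invented for illustration and are far simpler than MenuGene's multilevel model.

```python
import random

# Toy GA for constraint-driven menu planning (illustrative only).
# Fitness is the negated total deviation from a daily calorie target,
# so higher (closer to zero) is better.

MEALS = {"pasta": 700, "salad": 300, "stew": 550, "fish": 450}
NAMES = list(MEALS)
TARGET = 500  # kcal per day, hypothetical constraint

def fitness(menu):
    return -sum(abs(MEALS[m] - TARGET) for m in menu)

def mutate(menu, rate=0.2):
    return [random.choice(NAMES) if random.random() < rate else m for m in menu]

def evolve(pop_size=30, days=7, generations=50):
    random.seed(0)  # deterministic for the example
    pop = [[random.choice(NAMES) for _ in range(days)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    A real planner would add crossover, multilevel constraints (meal, daily, weekly), and rule-based variety scoring, as the abstract describes.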

  1. Prototype Implementation of Web and Desktop Applications for ALMA Science Verification Data and the Lessons Learned

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Kawasaki, W.; Shirasaki, Y.; Komiya, Y.; Kosugi, G.; Ohishi, M.; Mizumoto, Y.

    2013-10-01

    ALMA is estimated to generate TB-scale data in even a single observation; astronomers need to identify which part of the data they are really interested in. We have been developing new GUI software for this purpose utilizing the VO interface: the ALMA Web Quick Look System (ALMAWebQL) and the ALMA Desktop Application (Vissage). The former is written in JavaScript and HTML5 generated from Java code by the Google Web Toolkit, and the latter is in pure Java. An essential point of our approach is how to reduce network traffic: we prepare, in advance, “compressed” FITS files binned by factors of 2 x 2 x 1 (horizontal, vertical, and spectral directions, respectively), 2 x 2 x 2, 4 x 4 x 2, and so on. These files are hidden from users, and WebQL automatically chooses the proper one for each user operation. Through this work, we find that network traffic in our system is still a bottleneck for TB-scale data distribution. Hence we have to develop alternative data containers for much faster data processing. In this paper, we introduce our data analysis systems and describe what we learned through the development.
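
    The binning behind such "compressed" FITS files can be sketched as block-averaging a 3-D cube, assuming NumPy is available. This shows only the core reduction idea, not the WebQL pipeline or FITS I/O.

```python
import numpy as np

# Block-average a 3-D data cube by factors (fx, fy, fz) along
# (horizontal, vertical, spectral) axes: trim to a multiple of each
# factor, reshape into blocks, then average within each block.

def bin_cube(cube, fx=2, fy=2, fz=2):
    x, y, z = cube.shape
    trimmed = cube[: x - x % fx, : y - y % fy, : z - z % fz]
    return (trimmed
            .reshape(x // fx, fx, y // fy, fy, z // fz, fz)
            .mean(axis=(1, 3, 5)))

cube = np.arange(64, dtype=float).reshape(4, 4, 4)
small = bin_cube(cube)
print(small.shape)  # (2, 2, 2): an 8x reduction in volume
```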

  2. New tool to assemble repetitive regions using next-generation sequencing data

    NASA Astrophysics Data System (ADS)

    Kuśmirek, Wiktor; Nowak, Robert M.; Neumann, Łukasz

    2017-08-01

    Next-generation sequencing techniques produce a large amount of sequencing data. Some parts of the genome are composed of repetitive DNA sequences, which are very problematic for existing genome assemblers. We propose a modification of a DNA assembly algorithm that uses the relative frequency of reads to properly reconstruct repetitive sequences. The new approach was implemented and tested; as a demonstration of the capability of our software, we present some results for model organisms. The implementation uses a three-layer software architecture, in which the presentation layer, data processing layer, and data storage layer are kept separate. Source code, as well as a demo application with a web interface and the additional data, are available at the project web page: http://dnaasm.sourceforge.net.
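
    The relative-frequency idea can be suggested with a toy k-mer counter that flags k-mers occurring well above the median count as likely repeats. This is illustrative only, not the dnaasm algorithm; the threshold factor of 1.5 is arbitrary.

```python
from collections import Counter

# Count all k-mers across reads; k-mers whose count is well above the
# median likely come from repetitive regions and need copy-number-aware
# handling during assembly. (Toy sketch, not the dnaasm algorithm.)

def repeat_kmers(reads, k=4, factor=1.5):
    counts = Counter(r[i : i + k] for r in reads for i in range(len(r) - k + 1))
    freqs = sorted(counts.values())
    median = freqs[len(freqs) // 2]
    return {kmer for kmer, c in counts.items() if c >= factor * median}

reads = ["ACGTACGTACGT", "ACGTACGT", "TTGCAAT"]
flagged = repeat_kmers(reads)
print(flagged)  # the ACGT repeat unit stands out
```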

  3. Web based tools for data manipulation, visualisation and validation with interactive georeferenced graphs

    NASA Astrophysics Data System (ADS)

    Ivankovic, D.; Dadic, V.

    2009-04-01

    Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All these parameters require visualization, validation and manipulation from a research vessel or scientific institution, and also public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod PL/SQL). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. A graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and graphs are georeferenced: a click on some part of a graph automatically initiates a zoom or marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definition from the code for data manipulation. Adding a new parameter to the system requires only a data definition and description, without programming an interface for this kind of data.

  4. All about Food Chains. Animal Life for Children. [Videotape].

    ERIC Educational Resources Information Center

    2000

    Whether animals are herbivores, carnivores, or omnivores, each one is part of an eternal food chain that carries on from one generation to the next. In this videotape, students learn more about terms like "predator,""pre-consumer" and "producer," as well as the cycles of food chains and food webs and how they support…

  5. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial when selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss caused by reducing individual scores to an overall similarity score. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
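
    The skyline step can be sketched as a Pareto filter over QoS vectors, where lower is better in every dimension (e.g. latency, cost). The service names and numbers below are invented for illustration.

```python
# Skyline (Pareto) filtering over candidate services' QoS vectors.
# A service is kept only if no other service dominates it, i.e. is at
# least as good in every dimension and strictly better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(services):
    return {
        name: qos
        for name, qos in services.items()
        if not any(dominates(other, qos)
                   for o, other in services.items() if o != name)
    }

services = {
    "s1": (120, 0.05),  # (latency ms, cost) -- hypothetical values
    "s2": (200, 0.02),
    "s3": (250, 0.06),
}
print(skyline(services))  # s3 is dominated by s1 and drops out
```

    Unlike a weighted sum of scores, the skyline keeps every non-dominated trade-off (here both the fast-but-pricey s1 and the cheap-but-slow s2), which is exactly the information-loss point the abstract makes.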

  6. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial when selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss caused by reducing individual scores to an overall similarity score. We evaluate our approach experimentally using both real and synthetically generated datasets.

  7. Does ontogenetic change in orb web asymmetry reflect biogenetic law?

    NASA Astrophysics Data System (ADS)

    Nakata, Kensuke

    2010-11-01

    Most orb web spiders face downward on the web hub, and their webs are vertically asymmetrical, that is, the lower part of the web is larger than the upper part and the ratio of the lower part to the whole web area increases as the spider grows. This phenomenon may reflect biogenetic law such that young animals exhibit a general ancestral trait whereas adults exhibit specific and derived traits. An alternative explanation is that vertical asymmetry may arise from the difference in time required by spiders to move up or down the web to capture prey. The present study tested these two hypotheses for Eriophora sagana. Subadults of this species build their webs with reverse asymmetry in that the upper part of the web area is larger than the lower part. In both subadults and adults, the upper proportion decreased with spider weight, and adult spiders built more symmetric webs. These results support the capture time difference hypothesis.

  8. deepTools: a flexible platform for exploring deep-sequencing data.

    PubMed

    Ramírez, Fidel; Dündar, Friederike; Diehl, Sarah; Grüning, Björn A; Manke, Thomas

    2014-07-01

    We present a Galaxy-based web server for processing and visualizing deeply sequenced data. The web server's core functionality consists of a suite of newly developed tools, called deepTools, that enable users with little bioinformatics background to explore the results of their sequencing experiments in a standardized setting. Users can upload pre-processed files with continuous data in standard formats and generate heatmaps and summary plots in a straightforward, yet highly customizable manner. In addition, we offer several tools for the analysis of files containing aligned reads and enable efficient and reproducible generation of normalized coverage files. As a modular and open-source platform, deepTools can easily be expanded and customized to future demands and developments. The deepTools web server is freely available at http://deeptools.ie-freiburg.mpg.de and is accompanied by extensive documentation and tutorials aimed at conveying the principles of deep-sequencing data analysis. The web server can be used without registration. deepTools can be installed locally, either stand-alone or as part of Galaxy. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data.

    PubMed

    Ikegami, Takashi; Mototake, Yoh-Ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-12-28

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).

  10. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data

    NASA Astrophysics Data System (ADS)

    Ikegami, Takashi; Mototake, Yoh-ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-11-01

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  11. Using Peer Feedback to Improve Students' Scientific Inquiry

    NASA Astrophysics Data System (ADS)

    Tasker, Tammy Q.; Herrenkohl, Leslie Rupert

    2016-02-01

    This article examines a 7th grade teacher's pedagogical practices to support her students to provide peer feedback to one another using technology during scientific inquiry. This research is part of a larger study in which teachers in California and Washington and their classes engaged in inquiry projects using a Web-based system called Web of Inquiry. Videotapes of classroom lessons and artifacts such as student work were collected as part of the corpus of data. In the case examined, Ms. E supports her students to collectively define "meaningful feedback," thereby improving the quality of feedback that was provided in the future. This is especially timely, given the attention in the Next Generation Science Standards to cross-cutting concepts and practices that require students to discuss and debate ideas with each other in order to improve their understanding and their written inquiry reports (NGSS, 2013).

  12. Teaching Web 2.0 technologies using Web 2.0 technologies.

    PubMed

    Rethlefsen, Melissa L; Piorun, Mary; Prince, J Dale

    2009-10-01

    The research evaluated participant satisfaction with the content and format of the "Web 2.0 101: Introduction to Second Generation Web Tools" course and measured the impact of the course on participants' self-evaluated knowledge of Web 2.0 tools. The "Web 2.0 101" online course was based loosely on the Learning 2.0 model. Content was provided through a course blog and covered a wide range of Web 2.0 tools. All Medical Library Association members were invited to participate. Participants were asked to complete a post-course survey. Respondents who completed the entire course or who completed part of the course self-evaluated their knowledge of nine social software tools and concepts prior to and after the course using a Likert scale. Additional qualitative information about course strengths and weaknesses was also gathered. Respondents' self-ratings showed a significant change in perceived knowledge for each tool, using a matched pair Wilcoxon signed rank analysis (P<0.0001 for each tool/concept). Overall satisfaction with the course appeared high. Hands-on exercises were the most frequently identified strength of the course; the length and time-consuming nature of the course were considered weaknesses by some. Learning 2.0-style courses, though demanding time and self-motivation from participants, can increase knowledge of Web 2.0 tools.
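The study's matched-pair Wilcoxon signed-rank analysis compares each respondent's pre- and post-course self-ratings. A stdlib-only sketch of that statistic (normal approximation); the Likert ratings below are invented for illustration, not the study's data:

```python
# Wilcoxon signed-rank test for paired samples, normal approximation.
import math

def wilcoxon_signed_rank(pre, post):
    """Return (W, two-sided p) via the normal approximation."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero diffs
    n = len(diffs)
    # Rank absolute differences, averaging ranks of ties.
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied rank positions (1-based)
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, p

pre  = [1, 2, 2, 1, 3, 2, 1, 2, 2]   # hypothetical "before" Likert ratings
post = [3, 4, 3, 3, 4, 4, 3, 3, 4]   # hypothetical "after" ratings
w, p = wilcoxon_signed_rank(pre, post)
print(f"W = {w}, p = {p:.4f}")
```

When every respondent improves, as here, W is 0 and the p-value is small, mirroring the P<0.0001 pattern the study reports across tools.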

  13. Accounting Data to Web Interface Using PERL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargeaves, C

    2001-08-13

    This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage Systems (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses the run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Scripts that query the database folder to build a list of available databases generate two additional web pages. A master script that is run monthly as part of a cron job, after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible, data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server and records run information in a log file.
The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site: "HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time." The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.
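The pipeline described above (regex-verify which report lines are storage data, store them keyed by run date, then summarize) can be sketched as follows. The original scripts are PERL; this sketch uses Python, and the line format (user, storage class, bytes) is assumed for illustration, as the real HPSS report layout differs:

```python
# Sketch: filter storage-data lines out of an accounting report with a
# regular expression, then collapse the detailed records into a summary.
import re
from collections import defaultdict

STORAGE_LINE = re.compile(r"^(?P<user>\w+)\s+(?P<cls>\w+)\s+(?P<bytes>\d+)$")

def parse_report(lines):
    """Return only the lines that look like storage data, parsed."""
    records = []
    for line in lines:
        m = STORAGE_LINE.match(line.strip())
        if m:  # non-matching lines are headers, totals, etc.
            records.append((m["user"], m["cls"], int(m["bytes"])))
    return records

def summarize(records):
    """Collapse detailed records into total bytes per user."""
    totals = defaultdict(int)
    for user, _cls, nbytes in records:
        totals[user] += nbytes
    return dict(totals)

report = [
    "HPSS Accounting Report 2001-08-13",   # header: filtered out
    "alice  tape  1048576",
    "alice  disk  2048",
    "bob    tape  4096",
]
print(summarize(parse_report(report)))
```

The summarized dictionary plays the role of the summarized database file from which the group.html page would be generated.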

  14. Semantic Annotations and Querying of Web Data Sources

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, actually holding a significant portion of the useful information throughout the Web, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web"), and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  15. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    PubMed

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images have important significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on series images. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then describes the design and implementation of a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. Tests proved that the proposed system could support three-dimensional post-processing services of medical images for multiple clients at the same moment, which meets the demand of accessing three-dimensional post-processing operations on volume data on the web.
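For context, a WADO-URI request retrieves a DICOM object over HTTP with a fixed set of query parameters (`requestType`, `studyUID`, `seriesUID`, `objectUID`, optionally `contentType`). A minimal sketch of building such a request; the host and UIDs are placeholders:

```python
# Build a WADO-URI retrieval URL (DICOM PS3.18 URI-based service).
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid,
             content_type="application/dicom"):
    params = {
        "requestType": "WADO",       # always "WADO" for WADO-URI requests
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": content_type, # e.g. image/jpeg for rendered output
    }
    return f"{base}?{urlencode(params)}"

url = wado_uri("http://pacs.example.org/wado",
               "1.2.840.1", "1.2.840.1.1", "1.2.840.1.1.1")
print(url)
```

The paper's contribution is layering three-dimensional post-processing operations, which WADO itself does not define, on top of this kind of object access.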

  16. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems. PMID:12437788

  17. WebWise 2.0: The Power of Community. WebWise Conference on Libraries and Museums in the Digital World Proceedings (9th, Miami Beach, Florida, March 5-7, 2008)

    ERIC Educational Resources Information Center

    Green, David

    2009-01-01

    Since it was coined by Tim O'Reilly in formulating the first Web 2.0 Conference in 2004, the term "Web 2.0" has definitely caught on as a designation of a second generation of Web design and experience that emphasizes a high degree of interaction with, and among, users. Rather than simply consulting and reading Web pages, the Web 2.0 generation is…

  18. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  19. Business Analytics Programs Offered by AACSB-Accredited U.S. Colleges of Business: A Web Mining Study

    ERIC Educational Resources Information Center

    Zhao, Jensen; Zhao, Sherry Y.

    2016-01-01

    E-business, e-education, e-government, social media, and mobile services generate and capture trillions of bytes of data every second about customers, suppliers, employees, and other types of data. The growing quantity of big data is an important part of every sector in the global economy. However, there is a significant shortage of business data…

  20. Web Tools: The Second Generation

    ERIC Educational Resources Information Center

    Pascopella, Angela

    2008-01-01

    Web 2.0 tools and technologies, or second generation tools, help districts to save time and money, and eliminate the need to transfer or move files back and forth across computers. Many Web 2.0 tools help students think critically and solve problems, which falls under the 21st-century skills. The second-generation tools are growing in popularity…

  1. TrawlerWeb: an online de novo motif discovery tool for next-generation sequencing datasets.

    PubMed

    Dang, Louis T; Tondl, Markus; Chiu, Man Ho H; Revote, Jerico; Paten, Benedict; Tano, Vincent; Tokolyi, Alex; Besse, Florence; Quaife-Ryan, Greg; Cumming, Helen; Drvodelic, Mark J; Eichenlaub, Michael P; Hallab, Jeannette C; Stolper, Julian S; Rossello, Fernando J; Bogoyevitch, Marie A; Jans, David A; Nim, Hieu T; Porrello, Enzo R; Hudson, James E; Ramialison, Mirana

    2018-04-05

    A strong focus of the post-genomic era is mining of the non-coding regulatory genome in order to unravel the function of regulatory elements that coordinate gene expression (Nat 489:57-74, 2012; Nat 507:462-70, 2014; Nat 507:455-61, 2014; Nat 518:317-30, 2015). Whole-genome approaches based on next-generation sequencing (NGS) have provided insight into the genomic location of regulatory elements throughout different cell types, organs and organisms. These technologies are now widespread and commonly used in laboratories from various fields of research. This highlights the need for fast and user-friendly software tools dedicated to extracting cis-regulatory information contained in these regulatory regions; for instance transcription factor binding site (TFBS) composition. Ideally, such tools should not require prior programming knowledge to ensure they are accessible for all users. We present TrawlerWeb, a web-based version of the Trawler_standalone tool (Nat Methods 4:563-5, 2007; Nat Protoc 5:323-34, 2010), to allow for the identification of enriched motifs in DNA sequences obtained from next-generation sequencing experiments in order to predict their TFBS composition. TrawlerWeb is designed for online queries with standard options common to web-based motif discovery tools. In addition, TrawlerWeb provides three unique new features: 1) TrawlerWeb allows the input of BED files directly generated from NGS experiments, 2) it automatically generates an input-matched biologically relevant background, and 3) it displays resulting conservation scores for each instance of the motif found in the input sequences, which assists the researcher in prioritising the motifs to validate experimentally. Finally, to date, this web-based version of Trawler_standalone remains the fastest online de novo motif discovery tool compared to other popular web-based software, while generating predictions with high accuracy. 
TrawlerWeb provides users with a fast, simple and easy-to-use web interface for de novo motif discovery. This will assist in rapidly analysing NGS datasets that are now being routinely generated. TrawlerWeb is freely available and accessible at: http://trawler.erc.monash.edu.au .
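TrawlerWeb's first unique feature above is direct input of BED files from NGS experiments. BED is a simple tab-separated format whose first three columns are chromosome, start and end (0-based, half-open intervals). A minimal reader for such input might look like:

```python
# Parse the first three columns of a BED file into (chrom, start, end).

def read_bed(lines):
    """Yield (chrom, start, end) from the first three BED columns."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("#", "track", "browser")):
            continue  # skip comments and header lines
        chrom, start, end = line.split("\t")[:3]
        yield chrom, int(start), int(end)

bed = [
    "track name=peaks",
    "chr1\t100\t250",
    "chr2\t5000\t5400",
]
regions = list(read_bed(bed))
print(regions)  # [('chr1', 100, 250), ('chr2', 5000, 5400)]
total_bp = sum(end - start for _, start, end in regions)
print(total_bp)  # 150 + 400 = 550
```

A motif-discovery tool would then extract the genomic sequence under each interval before searching for enriched motifs.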

  2. SynBioSS designer: a web-based tool for the automated generation of kinetic models for synthetic biological constructs

    PubMed Central

    Weeding, Emma; Houle, Jason

    2010-01-01

    Modeling tools can play an important role in synthetic biology the same way modeling helps in other engineering disciplines: simulations can quickly probe mechanisms and provide a clear picture of how different components influence the behavior of the whole. We present a brief review of available tools and present SynBioSS Designer. The Synthetic Biology Software Suite (SynBioSS) is used for the generation, storing, retrieval and quantitative simulation of synthetic biological networks. SynBioSS consists of three distinct components: the Desktop Simulator, the Wiki, and the Designer. SynBioSS Designer takes as input molecular parts involved in gene expression and regulation (e.g. promoters, transcription factors, ribosome binding sites, etc.), and automatically generates complete networks of reactions that represent transcription, translation, regulation, induction and degradation of those parts. Effectively, Designer uses DNA sequences as input and generates networks of biomolecular reactions as output. In this paper we describe how Designer uses universal principles of molecular biology to generate models of any arbitrary synthetic biological system. These models are useful as they explain biological phenotypic complexity in mechanistic terms. In turn, such mechanistic explanations can assist in designing synthetic biological systems. We also discuss, giving practical guidance to users, how Designer interfaces with the Registry of Standard Biological Parts, the de facto compendium of parts used in synthetic biology applications. PMID:20639523
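The core idea above, turning a list of molecular parts into a network of reactions, can be illustrated with a toy generator. This is not SynBioSS Designer's actual algorithm or output format; it only shows the part-to-reaction mapping in miniature:

```python
# Toy sketch: one expression cassette (promoter, RBS, coding sequence)
# expands into transcription, translation and degradation reactions.

def reactions_for(promoter, rbs, cds):
    """Generate a minimal reaction set for one expression cassette."""
    mrna, prot = f"mRNA_{cds}", f"protein_{cds}"
    return [
        f"{promoter} -> {promoter} + {mrna}",          # transcription
        f"{mrna} + {rbs} -> {mrna} + {rbs} + {prot}",  # translation
        f"{mrna} -> 0",                                 # mRNA degradation
        f"{prot} -> 0",                                 # protein degradation
    ]

net = reactions_for("pLac", "RBS1", "lacZ")
for r in net:
    print(r)
```

Scaling this mapping up with universal rules for regulation and induction is what lets Designer produce a complete kinetic model from a DNA sequence.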

  3. Development of STEP-NC Adaptor for Advanced Web Manufacturing System

    NASA Astrophysics Data System (ADS)

    Ajay Konapala, Mr.; Koona, Ramji, Dr.

    2017-08-01

    Information systems play a key role in the modern era of Information Technology. Rapid developments in IT and global competition call for many changes in the basic CAD/CAM/CAPP/CNC manufacturing chain of operations. ‘STEP-NC’, an enhancement to STEP for operating CNC machines, creates new opportunities for collaborative, concurrent and adaptive work across the manufacturing chain of operations. Schemas and data models defined by ISO 14649, in liaison with the ISO 10303 standards, make the STEP-NC file rich with feature-based information rather than the mere point-to-point information of the G/M-code format. However, one needs a suitable information system to understand and modify these files. Various STEP-NC information systems are reviewed to assess the suitability of STEP-NC for web manufacturing. The present work also deals with the development of an adaptor that imports a STEP-NC file, organizes its information, allows modifications to entity values and finally generates a new STEP-NC file to export. The system is designed and developed to work on the web, both to gain the additional benefits of the web and to become part of a proposed ‘Web based STEP-NC manufacturing platform’, which is under development and described as future scope.

  4. Weaving Silos--A Leadership Challenge: A Cross-Functional Team Approach to Supporting Web-Based Student Services

    ERIC Educational Resources Information Center

    Kleemann, Gary L.

    2005-01-01

    The author reviews the evolution of Web services--from information sharing to transactional to relationship building--and the progression from first-generation to fourth-generation Web sites. (Contains 3 figures.)

  5. Automatic generation of Web mining environments

    NASA Astrophysics Data System (ADS)

    Cibelli, Maurizio; Costagliola, Gennaro

    1999-02-01

    The main problem related to the retrieval of information from the world wide web is the enormous number of unstructured documents and resources, i.e., the difficulty of locating and tracking appropriate sources. This paper presents a web mining environment (WME), which is capable of finding, extracting and structuring information related to a particular domain from web documents, using general purpose indices. The WME architecture includes a web engine filter (WEF), to sort and reduce the answer set returned by a web engine, a data source pre-processor (DSP), which processes html layout cues in order to collect and qualify page segments, and a heuristic-based information extraction system (HIES), to finally retrieve the required data. Furthermore, we present a web mining environment generator, WMEG, that allows naive users to generate a WME specific to a given domain by providing a set of specifications.

  6. Method and apparatus for measuring web material wound on a reel

    NASA Technical Reports Server (NTRS)

    Muller, R. M. (Inventor)

    1977-01-01

    The method and apparatus for measuring the number of layers of a web material of known thickness wound on a storage or take-up reel is presented. The method and apparatus are based on the principle that, at a relatively large radius, the loci of layers of a thin web wound on the reel approximate a family of concentric circles having radii respectively successively increasing by a length equal to the web thickness. Tachometer pulses are generated in response to linear movement of the web and reset pulses are generated in response to rotation of the reel. A digital circuit, responsive to the tachometer and reset pulses, generates data indicative of the layer number of any layer of the web and of position of the web within the layer without requiring numerical interpolation.
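The measurement principle above is simple arithmetic: successive layers approximate concentric circles whose radii grow by the web thickness t, so the length of web in layer k is about 2π(r0 + k·t), and summing circumferences locates any wound length in its layer. A sketch with illustrative values (the patent's digital circuit counts pulses rather than lengths, but the geometry is the same):

```python
# Locate a wound web length within its layer by subtracting successive
# layer circumferences 2*pi*(r0 + k*t). Values are illustrative.
import math

def layer_of(length, r0, t):
    """Return (layer index, position within that layer) for a wound length."""
    k = 0
    while True:
        circumference = 2 * math.pi * (r0 + k * t)
        if length < circumference:
            return k, length
        length -= circumference
        k += 1

r0, t = 0.05, 0.0001                  # hub radius 5 cm, web thickness 0.1 mm
layer, offset = layer_of(2.0, r0, t)  # 2 m of web wound on
print(layer, round(offset, 4))
```

Because each layer's circumference is known exactly from r0 and t, the layer number and in-layer position follow without numerical interpolation, which is the advantage the abstract notes.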

  7. Semi-automated ontology generation within OBO-Edit.

    PubMed

    Wächter, Thomas; Schroeder, Michael

    2010-06-15

    Ontologies and taxonomies have proven highly beneficial for biocuration. The Open Biomedical Ontology (OBO) Foundry alone lists over 90 ontologies mainly built with OBO-Edit. Creating and maintaining such ontologies is a labour-intensive, difficult, manual process. Automating parts of it is of great importance for the further development of ontologies and for biocuration. We have developed the Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG), a system which supports the creation and extension of OBO ontologies by semi-automatically generating terms, definitions and parent-child relations from text in PubMed, the web and PDF repositories. DOG4DAG is seamlessly integrated into OBO-Edit. It generates terms by identifying statistically significant noun phrases in text. For definitions and parent-child relations it employs pattern-based web searches. We systematically evaluate each generation step using manually validated benchmarks. The term generation leads to high-quality terms also found in manually created ontologies. Up to 78% of definitions are valid and up to 54% of child-ancestor relations can be retrieved. There is no other validated system that achieves comparable results. By combining the prediction of high-quality terms, definitions and parent-child relations with the ontology editor OBO-Edit we contribute a thoroughly validated tool for all OBO ontology engineers. DOG4DAG is available within OBO-Edit 2.1 at http://www.oboedit.org. Supplementary data are available at Bioinformatics online.

  8. Lossy compression for Animated Web Visualisation

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

    This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high definition videos. This enabled us to achieve high rates of compression, while being compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
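The undesirable behaviour of mean squared error is easy to demonstrate: a sharp feature shifted by one grid cell can score worse than the same feature smeared out entirely. A small illustration with invented fields:

```python
# Mean squared error between two equally-shaped 2D grids, showing why it
# can be a poor similarity measure for weather fields.

def mse(a, b):
    """Mean squared error between two equally-shaped 2D grids."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

field     = [[0, 0, 1, 0], [0, 0, 1, 0]]    # a sharp "front"
shifted   = [[0, 1, 0, 0], [0, 1, 0, 0]]    # same front, one cell left
flattened = [[0.25, 0.25, 0.25, 0.25]] * 2  # smeared-out field

print(mse(field, shifted))    # 0.5: the shift is punished heavily
print(mse(field, flattened))  # 0.1875: the blur scores "closer"
```

By this metric the blurred field is "more similar" to the truth than a nearly perfect but displaced forecast, which is exactly the kind of result that motivates alternative similarity measures.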

  9. Visualization of seismic tomography on Google Earth: Improvement of KML generator and its web application to accept the data file in European standard format

    NASA Astrophysics Data System (ADS)

    Yamagishi, Y.; Yanaka, H.; Tsuboi, S.

    2009-12-01

    We have developed a conversion tool for the data of seismic tomography into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of the model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data having longitude, latitude, and seismic velocity anomaly, with each data file containing the data for one depth. Metadata, such as bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format to represent a tomographic model. Recently, the European seismology research project NEIRES (Network of Research Infrastructures for European Seismology) has advocated that the data of seismic tomography should be standardized. It proposes a new format based on JSON (JavaScript Object Notation), one of the standard data-interchange formats, for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, so the elements are directly accessible by a script. In addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as an FDSN standard format for seismic tomographic models. This format might be accepted not only by European seismologists but also as a world standard. We therefore improve our KML generator for seismic tomography to also accept data files in JSON format.
We also improve the web application of the generator so that the JSON formatted data file can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena to compare various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for geoscience browser.
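The conversion the generator performs can be sketched end to end: a tomographic model in a JSON layout of the kind described (metadata plus grid-point values; the exact field names below are assumed, not the NEIRES/FDSN schema) becomes KML placemarks that Google Earth can display:

```python
# Sketch: convert an assumed JSON tomography layout into minimal KML.
import json

def tomo_json_to_kml(doc):
    model = json.loads(doc)
    placemarks = []
    for lon, lat, anomaly in model["grid"]:
        placemarks.append(
            f"<Placemark><name>dVs={anomaly}%</name>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            f"</Placemark>"
        )
    body = "".join(placemarks)
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"<name>{model['metadata']['reference']}</name>"
            f"{body}</Document></kml>")

doc = json.dumps({
    "metadata": {"reference": "Example model", "depth_km": 100},
    "grid": [[135.0, 35.0, -1.2], [140.0, 36.0, 0.8]],
})
kml = tomo_json_to_kml(doc)
print(kml[:60])
```

A real converter would additionally colour-code the anomaly values and build the vertical cross sections described above, but the JSON-in, KML-out shape of the pipeline is the same.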

  10. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited, and data collection is time-consuming and labor-intensive, severely influencing the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated with 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.
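The "statistical learning" step above builds a workflow model from many observed operation sequences. A minimal stand-in for such a model is a first-order Markov chain: count phase-to-phase transitions across mined videos and normalize. The phase names below are illustrative, not from the cited surgery dataset, and the real method is more sophisticated:

```python
# Estimate phase-transition probabilities from observed phase sequences.
from collections import defaultdict

def learn_workflow(sequences):
    """Estimate transition probabilities between surgical phases."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

videos = [
    ["prep", "dissect", "clip", "extract"],
    ["prep", "dissect", "extract"],
    ["prep", "dissect", "clip", "extract"],
]
model = learn_workflow(videos)
print(model["dissect"])  # "clip" follows "dissect" in 2 of 3 videos
```

The more videos are mined, the better such transition estimates become, which is precisely why scaling the training data via web mining matters.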

  11. Services for Emodnet-Chemistry Data Products

    NASA Astrophysics Data System (ADS)

    Santinelli, Giorgio; Hendriksen, Gerrit; Barth, Alexander

    2016-04-01

    In the framework of the Emodnet Chemistry lot, data products from regional leaders were made available in order to transform the information into a database. This has been done by using functions and scripts that read so-called enriched ODV files and insert the data directly into a cloud relational geodatabase. The main table is the table of observations, which contains the main data and metadata associated with the enriched ODV files. A particular implementation of data loading is used in order to improve on-the-fly computational speed. Data from the Baltic Sea, North Sea, Mediterranean, Black Sea and part of the Atlantic region have been entered into the geodatabase, and are consequently instantly available from the OceanBrowser Emodnet portal. Furthermore, Deltares has developed an application that provides additional visualisation services for the aggregated and validated data collections. The visualisations are produced by making use of part of the OpenEarthTool stack (http://www.openearth.eu), by the integration of Web Feature Services and by the implementation of Web Processing Services. The goal is the generation of server-side plots of timeseries, profiles, timeprofiles and maps of selected parameters from data sets of selected stations. Regional data collections are retrieved using the Emodnet Chemistry cloud relational geo-database. The spatial resolution in time and the intensity of data availability for selected parameters are shown using Web Service requests via the OceanBrowser Emodnet Web portal. OceanBrowser also shows station reference codes, which are used to establish a link to additional metadata, further data shopping and download.

  12. Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.

    PubMed

    Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan

    2018-05-02

    The purpose of this study was to utilize sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". Sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance. Tukey test evaluated differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories (P < 0.001). Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical," "reference," and "patient generated" had slightly positive mean sentiments. Tukey test showed differences between all category pairs except the "medical" versus "reference" comparison, with the largest mean difference (-0.13) seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall mean neutral sentiment, and Web sites categorized as "medical," "reference," and "patient generated" have significantly higher sentiment scores than related Web sites in the "legal" and "news/media" categories.
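
    As a toy illustration of the study's statistical comparison, the one-way ANOVA F statistic over per-category sentiment scores can be computed directly with stdlib Python. The scores below are invented; the study analyzed 8029 real Web sites.

```python
# Mean sentiment per category plus a hand-rolled one-way ANOVA F statistic.
# Toy data only; not the study's real sentiment scores.

def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

legal   = [-0.20, -0.10, -0.15, -0.05]   # invented per-site sentiment scores
medical = [ 0.05,  0.10,  0.08,  0.12]

means = {name: sum(g) / len(g) for name, g in [("legal", legal), ("medical", medical)]}
F = one_way_anova_F([legal, medical])
print(means, round(F, 1))
```
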

  13. An exploratory study of live vs. web-based delivery of a phlebotomy program.

    PubMed

    Fydryszewski, Nadine A; Scanlan, Craig; Guiles, H Jesse; Tucker, Ann

    2010-01-01

    Changes in the student population and increased Web-based education offerings provided the impetus to assess pedagogy, cognitive outcomes and perceptions of course quality. This study explored cognitive outcomes and students' perceptions of course quality, related to the Seven Principles for Good Practice in Undergraduate Education, between live classroom delivery and Web-based delivery of a phlebotomy program. The design was quasi-experimental; students self-selected to enroll in the live or Web-based program. For cognitive outcomes, no significant difference was found between the groups. Student perception of course quality differed only for Principle One (student-instructor contact). Students in the live classroom rated Principle One higher for the Part I course compared to the Web-based group. For the Part II course, there was no significant difference in perception of course quality related to any of the Seven Principles. The more constructivist pedagogy in the Part II course did not improve cognitive outcomes; however, it may have contributed to knowledge retention. The live group rated Principle One in the Part II course evaluation relatively the same as they did for the Part I course evaluation. However, the Web-based group rated Principle One considerably higher for the Part II course than for the Part I course. Future studies with a larger sample could explore improved course quality assessment instruments.

  14. Web buckling behavior under in-plane compression and shear loads for web reinforced composite sandwich core

    NASA Astrophysics Data System (ADS)

    Toubia, Elias Anis

    Sandwich construction is one of the most functional forms of composite structures developed by the composite industry. Due to the increasing demand of web-reinforced core for composite sandwich construction, a research study is needed to investigate the web plate instability under shear, compression, and combined loading. If the web, which is an integral part of the three dimensional web core sandwich structure, happens to be slender with respect to one or two of its spatial dimensions, then buckling phenomena become an issue in that it must be quantified as part of a comprehensive strength model for a fiber reinforced core. In order to understand the thresholds of thickness, web weight, foam type, and whether buckling will occur before material yielding, a thorough investigation needs to be conducted, and buckling design equations need to be developed. Often in conducting a parametric study, a special purpose analysis is preferred over a general purpose analysis code, such as a finite element code, due to the cost and effort usually involved in generating a large number of results. A suitable methodology based on an energy method is presented to solve the stability of symmetrical and specially orthotropic laminated plates on an elastic foundation. Design buckling equations were developed for the web modeled as a laminated plate resting on elastic foundations. The proposed equations allow for parametric studies without limitation regarding foam stiffness, geometric dimensions, or mechanical properties. General behavioral trends of orthotropic and symmetrical anisotropic plates show pronounced contribution of the elastic foundation and fiber orientations on the buckling resistance of the plate. The effects of flexural anisotropy on the buckling behavior of long rectangular plates when subjected to pure shear loading are well represented in the model. The reliability of the buckling equations as a design tool is confirmed by comparison with experimental results. 
Compared to predicted values, the experimental plate shear test results differ by 15 to 35 percent, depending on the boundary conditions considered. The compression testing yielded conservative results and, as such, can provide a valuable tool for the designer.

  15. Automatic Semantic Generation and Arabic Translation of Mathematical Expressions on the Web

    ERIC Educational Resources Information Center

    Doush, Iyad Abu; Al-Bdarneh, Sondos

    2013-01-01

    Automatic processing of mathematical information on the web imposes some difficulties. This paper presents a novel technique for automatic generation of mathematical equations semantic and Arabic translation on the web. The proposed system facilitates unambiguous representation of mathematical equations by correlating equations to their known…

  16. Effect of Temporal Relationships in Associative Rule Mining for Web Log Data

    PubMed Central

    Mohd Khairudin, Nazli; Mustapha, Aida

    2014-01-01

    The advent of web-based applications and services has created diverse and voluminous web log data stored in web servers, proxy servers, client machines, or organizational databases. This paper investigates the effect of a temporal attribute in relational rule mining for web log data. We incorporated the characteristics of time in the rule mining process and analysed the effect of various temporal parameters. The rules generated from temporal relational rule mining are then compared against the rules generated from classical rule mining approaches such as the Apriori and FP-Growth algorithms. The results showed that by incorporating the temporal attribute, the number of rules generated is smaller but comparable in terms of quality. PMID:24587757
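
    The core contrast in the abstract (mining rules from a time-restricted transaction set versus the full log) can be sketched with a minimal pair-counting Apriori step. The log records, time window and support threshold are invented for illustration.

```python
# Minimal Apriori-style step (frequent item pairs only) run twice: once on
# the full web log and once on a time window, showing how the temporal
# attribute narrows the rule set. Toy sessions, not real log data.

from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Return item pairs whose support meets min_support."""
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] = counts.get(pair, 0) + 1
    n = len(transactions)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Web-log sessions tagged with an hour-of-day attribute (invented).
log = [
    (9,  ["home", "search"]),
    (10, ["home", "search"]),
    (21, ["home", "product"]),
    (22, ["home", "product"]),
]

all_sessions = [items for _, items in log]          # classical mining
morning = [items for hour, items in log if 9 <= hour < 12]  # temporal mining

print(frequent_pairs(all_sessions, 0.5))  # two pairs pass overall
print(frequent_pairs(morning, 0.5))       # the window yields fewer, sharper rules
```
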

  17. Facilitating Student-Generated Content Using Web 2.0 Technologies

    ERIC Educational Resources Information Center

    Lee, Eunbae

    2011-01-01

    Web 2.0 technologies have created a trend of user-generated content by supporting media production, collaboration, communication, and dissemination. User-generated content is translated into student-generated content (SGC) in education. SGC engages learners in an authentic project that fosters students' autonomy, creativity, and real-world…

  18. 31 CFR Appendix A to Part 560 - Persons Determined to be the Government of Iran, as defined in § 560.304 of This Part

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...), available on OFAC's Web site. New names of persons determined to be the Government of Iran and changes to...'s Web site. Appendix A to Part 560 will be republished annually. This document and additional information concerning OFAC are available from OFAC's Web site (http://www.treas.gov/ofac). Certain general...

  19. Upside-down spiders build upside-down orb webs: web asymmetry, spider orientation and running speed in Cyclosa.

    PubMed

    Nakata, Kensuke; Zschokke, Samuel

    2010-10-07

    Almost all spiders building vertical orb webs face downwards when sitting on the hubs of their webs, and their webs exhibit an up-down size asymmetry, with the lower part of the capture area being larger than the upper. However, spiders of the genus Cyclosa, which all build vertical orb webs, exhibit inter- and intraspecific variation in orientation. In particular, Cyclosa ginnaga and C. argenteoalba always face upwards, and C. octotuberculata always face downwards, whereas some C. confusa face upwards and others face downwards or even sideways. These spiders provide a unique opportunity to examine why most spiders face downwards and have asymmetrical webs. We found that upward-facing spiders had upside-down webs with larger upper parts, downward-facing spiders had normal webs with larger lower parts and sideways-facing spiders had more symmetrical webs. Downward-facing C. confusa spiders were larger than upward- and sideways-facing individuals. We also found that during prey attacks, downward-facing spiders ran significantly faster downwards than upwards, which was not the case in upward-facing spiders. These results suggest that the spider's orientation at the hub and web asymmetry enhance its foraging efficiency by minimizing the time to reach prey trapped in the web.

  20. Upside-down spiders build upside-down orb webs: web asymmetry, spider orientation and running speed in Cyclosa

    PubMed Central

    Nakata, Kensuke; Zschokke, Samuel

    2010-01-01

    Almost all spiders building vertical orb webs face downwards when sitting on the hubs of their webs, and their webs exhibit an up–down size asymmetry, with the lower part of the capture area being larger than the upper. However, spiders of the genus Cyclosa, which all build vertical orb webs, exhibit inter- and intraspecific variation in orientation. In particular, Cyclosa ginnaga and C. argenteoalba always face upwards, and C. octotuberculata always face downwards, whereas some C. confusa face upwards and others face downwards or even sideways. These spiders provide a unique opportunity to examine why most spiders face downwards and have asymmetrical webs. We found that upward-facing spiders had upside-down webs with larger upper parts, downward-facing spiders had normal webs with larger lower parts and sideways-facing spiders had more symmetrical webs. Downward-facing C. confusa spiders were larger than upward- and sideways-facing individuals. We also found that during prey attacks, downward-facing spiders ran significantly faster downwards than upwards, which was not the case in upward-facing spiders. These results suggest that the spider's orientation at the hub and web asymmetry enhance its foraging efficiency by minimizing the time to reach prey trapped in the web. PMID:20462900

  1. A Virtual Learning Environment for Part-Time MASW Students: An Evaluation of the WebCT

    ERIC Educational Resources Information Center

    Chan, Charles C.; Tsui, Ming-sum; Chan, Mandy Y. C.; Hong, Joe H.

    2008-01-01

    This study aims to evaluate the perceptions of a cohort of social workers studying for a part-time master's program in social work of using the popular Web-based learning platform--World Wide Web Course Tools (WebCT)--as a complementary method of teaching and learning. It was noted that the social work profession began incorporating computer technology…

  2. ClinData Express – A Metadata Driven Clinical Research Data Management System for Secondary Use of Clinical Data

    PubMed Central

    Li, Zuofeng; Wen, Jingran; Zhang, Xiaoyan; Wu, Chunxiao; Li, Zuogao; Liu, Lei

    2012-01-01

    To ease the secondary use of clinical data in clinical research, we introduce a metadata-driven web-based clinical data management system named ClinData Express. ClinData Express is made up of two parts: 1) m-designer, standalone software for metadata definition; and 2) a web-based data warehouse system for data management. With ClinData Express, all the researchers need to do is define the metadata and data model in the m-designer. The web interface for data collection and the specific database for data storage are then automatically generated. The standards used in the system and the data export module ensure data reuse. The system has been tested on seven disease data collections in Chinese and one form from dbGaP. The system's flexibility gives it great potential for use in clinical research. The system is available at http://code.google.com/p/clindataexpress. PMID:23304327
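
    The metadata-driven generation described above can be sketched as follows: a single field-metadata definition drives both the storage schema and the data-entry form. The field names and type mapping are invented for illustration, not the actual m-designer format.

```python
# Sketch of metadata-driven generation: one field definition produces both
# a SQL DDL statement and HTML form inputs. All names here are invented.

fields = [
    {"name": "patient_id", "type": "text",    "required": True},
    {"name": "age",        "type": "integer", "required": True},
    {"name": "diagnosis",  "type": "text",    "required": False},
]

def to_create_table(table, fields):
    """Derive a SQL DDL statement from the field metadata."""
    sql_types = {"text": "TEXT", "integer": "INTEGER"}
    cols = ", ".join(
        f'{f["name"]} {sql_types[f["type"]]}' + (" NOT NULL" if f["required"] else "")
        for f in fields)
    return f"CREATE TABLE {table} ({cols});"

def to_html_form(fields):
    """Derive HTML inputs for data collection from the same metadata."""
    inputs = []
    for f in fields:
        itype = "number" if f["type"] == "integer" else "text"
        req = " required" if f["required"] else ""
        inputs.append(f'<input name="{f["name"]}" type="{itype}"{req}>')
    return "\n".join(inputs)

print(to_create_table("cases", fields))
print(to_html_form(fields))
```
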

  3. Non-Relative Value Unit-Generating Activities Represent One-Fifth of Academic Neuroradiologist Productivity.

    PubMed

    Wintermark, M; Zeineh, M; Zaharchuk, G; Srivastava, A; Fischbein, N

    2016-07-01

    A neuroradiologist's activity includes many tasks beyond interpreting relative value unit-generating imaging studies. Our aim was to test a simple method to record and quantify the non-relative value unit-generating clinical activity represented by consults and clinical conferences, including tumor boards. Four full-time neuroradiologists, each working an average of 50% clinical and 50% academic activity, systematically recorded all the non-relative value unit-generating consults and conferences in which they were involved during 3 months by using a simple Web-based application accessible from smartphones, tablets, or computers. The number and type of imaging studies they interpreted during the same period and the associated relative value units were extracted from our billing system. During 3 months, the 4 neuroradiologists working an average of 50% clinical activity interpreted 4241 relative value unit-generating imaging studies, representing 8152 work relative value units. During the same period, they recorded 792 non-relative value unit-generating study reviews as part of consults and conferences (not including reading room consults), representing 19% of the interpreted relative value unit-generating imaging studies. We propose a simple Web-based smartphone app to record and quantify non-relative value unit-generating activities, including consults, clinical conferences, and tumor boards. The quantification of non-relative value unit-generating activities is paramount in this time of a paradigm shift from volume to value. It also represents an important tool for determining staffing levels, which cannot be performed on the basis of relative value units only, considering the importance of time spent by radiologists on non-relative value unit-generating activities. It may also influence payment models from medical centers to radiology departments or practices. © 2016 by American Journal of Neuroradiology.

  4. A reusable anatomically segmented digital mannequin for public health communication.

    PubMed

    Fujieda, Kaori; Okubo, Kosaku

    2016-01-01

    The ongoing development of world wide web technologies has facilitated a change in health communication, which has now become bi-directional and encompasses people with diverse backgrounds. To enable an even greater role for medical illustrations, a data set, BodyParts3D, has been generated that anyone can use to create and exchange customised three-dimensional (3D) anatomical images. BP3D comprises more than 3000 3D object files created by segmenting a digital mannequin in accordance with anatomical naming conventions. This paper describes the methodologies and features used to generate an anatomically correct male mannequin.

  5. Consolidating drug data on a global scale using Linked Data.

    PubMed

    Jovanovik, Milos; Trajanov, Dimitar

    2017-01-21

    Drug product data is available on the Web in a distributed fashion. The reasons lie within the regulatory domains, which exist on a national level. As a consequence, the drug data available on the Web are independently curated by national institutions from each country, leaving the data in varying languages, with a varying structure, granularity level and format, in different locations on the Web. Therefore, one of the main challenges in the realm of drug data is the consolidation and integration of large amounts of heterogeneous data into a comprehensive dataspace, for the purpose of developing data-driven applications. In recent years, the adoption of the Linked Data principles has enabled data publishers to provide structured data on the Web and contextually interlink them with other public datasets, effectively de-siloing them. Defining methodological guidelines and specialized tools for generating Linked Data in the drug domain, applicable on a global scale, is a crucial step to achieving the necessary levels of data consolidation and alignment needed for the development of a global dataset of drug product data. This dataset would then enable a myriad of new usage scenarios, which can, for instance, provide insight into the global availability of different drug categories in different parts of the world. We developed a methodology and a set of tools which support the process of generating Linked Data in the drug domain. Using them, we generated the LinkedDrugs dataset by seamlessly transforming, consolidating and publishing high-quality, 5-star Linked Drug Data from twenty-three countries, containing over 248,000 drug products, over 99,000,000 RDF triples and over 278,000 links to generic drugs from the LOD Cloud. Using the linked nature of the dataset, we demonstrate its ability to support advanced usage scenarios in the drug domain.
The process of generating the LinkedDrugs dataset demonstrates the applicability of the methodological guidelines and the supporting tools in transforming drug product data from various, independent and distributed sources, into a comprehensive Linked Drug Data dataset. The presented user-centric and analytical usage scenarios over the dataset show the advantages of having a de-siloed, consolidated and comprehensive dataspace of drug data available via the existing infrastructure of the Web.
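
    The transformation step described above (tabular drug records into interlinked RDF) can be sketched as a small N-Triples serializer. The URIs, vocabulary and record fields below are illustrative assumptions, not the LinkedDrugs schema.

```python
# Toy sketch of "generating Linked Data": serialize drug records as RDF
# triples in N-Triples syntax, including an owl:sameAs link to a generic
# drug, as in cross-dataset linking. All URIs and fields are invented.

def to_ntriples(drugs, base="http://example.org/drug/"):
    """Serialize drug records as N-Triples lines."""
    lines = []
    for d in drugs:
        subject = f"<{base}{d['id']}>"
        lines.append(f'{subject} <http://schema.org/name> "{d["name"]}" .')
        lines.append(f'{subject} <http://schema.org/manufacturer> "{d["maker"]}" .')
        if d.get("generic"):  # link into another dataset, LOD-style
            lines.append(f'{subject} <http://www.w3.org/2002/07/owl#sameAs> <{d["generic"]}> .')
    return "\n".join(lines)

drugs = [
    {"id": "mk-0001", "name": "Paracetamol 500 mg", "maker": "Alkaloid",
     "generic": "http://example.org/generic/paracetamol"},
]
print(to_ntriples(drugs))
```
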

  6. Changing Instructional Practices through Technology Training, Part 2 of 2.

    ERIC Educational Resources Information Center

    Seamon, Mary

    2001-01-01

    This second of a two-part article introducing the steps in a school district's teacher professional development model discusses steps three through six: Web page or project; Internet Discovery (with its five phases-question, search, interpretation, composition, sharing); Cyberinquiry; and WebQuests. Three examples are included: Web Page…

  7. IRIS Earthquake Browser with Integration to the GEON IDV for 3-D Visualization of Hypocenters.

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.

    2007-12-01

    We present a new generation of web-based earthquake query tool: the IRIS Earthquake Browser (IEB). The IEB combines the DMC's large set of earthquake catalogs (provided by the USGS/NEIC, ISC and the ANF) with the popular Google Maps web interface. With the IEB you can quickly and easily find earthquakes in any region of the globe. Using Google's detailed satellite images, earthquakes can easily be co-located with natural geographic features such as volcanoes, as well as man-made features such as commercial mines. A set of controls allows earthquakes to be filtered by time, magnitude, and depth range, as well as catalog name, contributor name and magnitude type. Displayed events can be easily exported in NetCDF format into the GEON Integrated Data Viewer (IDV), where hypocenters may be visualized in three dimensions. Looking "under the hood", the IEB is based on AJAX technology and utilizes REST-style web services hosted at the IRIS DMC. The IEB is part of a broader effort at the DMC aimed at making our data holdings available via web services. The IEB is useful both educationally and as a research tool.
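
    The IEB's filter controls (time, magnitude, depth, catalog) amount to a predicate over hypocenter records. A sketch with invented event records and field names:

```python
# Toy filter over hypocenter records, mimicking the IEB's controls.
# Records, field names and catalogs here are invented examples.

def filter_events(events, min_mag=None, max_depth_km=None, catalog=None):
    """Return events passing the optional magnitude/depth/catalog filters."""
    out = []
    for e in events:
        if min_mag is not None and e["mag"] < min_mag:
            continue
        if max_depth_km is not None and e["depth_km"] > max_depth_km:
            continue
        if catalog is not None and e["catalog"] != catalog:
            continue
        out.append(e)
    return out

events = [
    {"mag": 6.1, "depth_km": 10,  "catalog": "NEIC"},
    {"mag": 4.2, "depth_km": 35,  "catalog": "ISC"},
    {"mag": 7.0, "depth_km": 600, "catalog": "NEIC"},
]
print(len(filter_events(events, min_mag=5.0, max_depth_km=100)))
```
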

  8. Formats and Network Protocols for Browser Access to 2D Raster Data

    NASA Astrophysics Data System (ADS)

    Plesea, L.

    2015-12-01

    Tiled web maps in browsers are a major success story, forming the foundation of many current web applications. Enabling tiled data access is the next logical step, and is likely to meet with similar success. Many ad hoc approaches have already started to appear, and something similar is explored within the Open Geospatial Consortium. One of the main obstacles in making browser data access a reality is the lack of a well-known data format. This obstacle also represents an opportunity to analyze the requirements and possible candidates, applying lessons learned from web tiled image services and protocols. Similar to its image counterpart, a web tile raster data format needs to have good intrinsic compression and be able to handle high byte count data types, including floating point. An overview of a possible solution to the format problem, a 2D raster data compression algorithm called Limited Error Raster Compression (LERC), will be presented. In addition to the format, best practices for high request rate HTTP services also need to be followed. In particular, content delivery network (CDN) caching suitability needs to be part of any design, not an afterthought. Last but not least, HTML5 browsers will certainly be part of any solution since they provide improved access to binary data, as well as more powerful ways to view and interact with the data in the browser. In a simple but relevant application, digital elevation model (DEM) raster data is served as LERC-compressed data tiles, which are used to generate terrain in an HTML5 scene viewer.
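
    The key property of a limited-error compression scheme such as LERC (each reconstructed value stays within a stated error bound of the original) can be illustrated with simple scalar quantization. This is a toy sketch of the principle, not the LERC format itself.

```python
# Limited-error quantization: store floats as integer multiples of
# 2 * max_error, so every reconstructed value is within max_error of the
# original. A simplified illustration of the LERC idea; the real format
# adds blocking, bit packing and more.

def quantize(values, max_error):
    """Encode floats as (offset, step, integer list)."""
    lo = min(values)
    step = 2 * max_error
    return lo, step, [round((v - lo) / step) for v in values]

def dequantize(lo, step, ints):
    """Reconstruct approximate floats from the encoded form."""
    return [lo + i * step for i in ints]

dem = [101.37, 102.91, 101.02, 150.60]  # toy elevation tile, metres
lo, step, ints = quantize(dem, max_error=0.5)
restored = dequantize(lo, step, ints)
print(all(abs(a - b) <= 0.5 for a, b in zip(dem, restored)))
```
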

  9. Next generation of weather generators on web service framework

    NASA Astrophysics Data System (ADS)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.

    2016-12-01

    A weather generator is a statistical model that synthesizes possible future realizations of long-term historical weather. It stochastically generates several tens to hundreds of realizations based on statistical analysis. Realizations are essential inputs to crop models for simulating crop growth and yield. Moreover, they can contribute to analyzing the uncertainty that weather brings to crop development stages, and to decision support systems for, e.g., water and fertilizer management. Performing crop modeling requires multidisciplinary skills, which limits the usage of a weather generator to the research group that developed it and raises a barrier for newcomers. To improve the procedures for running weather generators and to standardize the acquisition of realizations, we implemented a framework for providing weather generators as web services, which supports service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS web service. Hierarchical data preparation processes required for the weather generator are also implemented as web services and seamlessly wired. Analysts and applications can invoke services over a network easily. The services facilitate the development of agricultural applications and also reduce the workload of analysts on iterative data preparation and handling of legacy weather generator programs. This architectural design and implementation can be a prototype for constructing further services on top of an interoperable sensor network system. This framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.
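
    A minimal example of the kind of stochastic model such a service wraps: a two-state (wet/dry) first-order Markov chain for rain occurrence. The transition probabilities are illustrative, not fitted to any real station record.

```python
# Toy stochastic weather generator: first-order Markov chain for wet/dry
# days, producing several realizations as a crop model would consume.
# Probabilities and seeds are invented for illustration.

import random

def generate_rain_occurrence(days, p_wet_after_dry, p_wet_after_wet, seed=0):
    """Return one realization of wet (1) / dry (0) days."""
    rng = random.Random(seed)
    state, series = 0, []
    for _ in range(days):
        p = p_wet_after_wet if state else p_wet_after_dry
        state = 1 if rng.random() < p else 0
        series.append(state)
    return series

# Several realizations, as a weather generator would hand to a crop model.
realizations = [generate_rain_occurrence(365, 0.2, 0.6, seed=s) for s in range(3)]
print([sum(r) for r in realizations])  # wet-day count of each realization
```
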

  10. Client-Side Event Processing for Personalized Web Advertisement

    NASA Astrophysics Data System (ADS)

    Stühmer, Roland; Anicic, Darko; Sen, Sinan; Ma, Jun; Schmidt, Kay-Uwe; Stojanovic, Nenad

    The market for Web advertisement is continuously growing and, correspondingly, the number of approaches that can be used for realizing Web advertisement is increasing. However, current approaches fail to generate highly personalized ads for the Web user currently visiting particular Web content. They mainly try to develop a profile based on the content of that Web page or on a long-term user profile, without taking into account the user's current preferences. We argue that by discovering a user's interest from his current Web behavior we can support the process of ad generation, especially the relevance of an ad for the user. In this paper we present the conceptual architecture and implementation of such an approach. The approach is based on the extraction of simple events from the user's interaction with a Web page and their combination in order to discover the user's interests. We use semantic technologies in order to build such an interpretation out of many simple events. We present results from preliminary evaluation studies. The main contribution of the paper is a very efficient, semantic-based client-side architecture for generating and combining Web events. The architecture ensures the agility of the whole advertisement system by processing complex events on the client. In general, this work contributes to the realization of new, event-driven applications for the (Semantic) Web.
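
    The paper's idea of combining simple interaction events into a derived interest signal can be sketched as a stream fold: count topic-tagged events per session and emit an "interest" event once a threshold is crossed. Event shapes, tags and the threshold are invented for illustration.

```python
# Toy complex-event-processing step: simple events (clicks, hovers,
# scrolls) are combined into a derived per-topic "interest" signal.
# All event fields and the threshold are invented.

def detect_interest(events, threshold=3):
    """Combine simple events into a derived 'interest' event per topic."""
    counts, interests = {}, []
    for event in events:
        topic = event["topic"]
        counts[topic] = counts.get(topic, 0) + 1
        if counts[topic] == threshold:  # complex event: sustained interest
            interests.append(topic)
    return interests

stream = [
    {"type": "hover",  "topic": "cameras"},
    {"type": "click",  "topic": "cameras"},
    {"type": "click",  "topic": "laptops"},
    {"type": "scroll", "topic": "cameras"},
]
print(detect_interest(stream))
```
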

  11. Generating Mosaics of Astronomical Images

    NASA Technical Reports Server (NTRS)

    Bergou, Attila; Berriman, Bruce; Good, John; Jacob, Joseph; Katz, Daniel; Laity, Anastasia; Prince, Thomas; Williams, Roy

    2005-01-01

    "Montage" is the name of a service of the National Virtual Observatory (NVO), and of software being developed to implement the service via the World Wide Web. Montage generates science-grade custom mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. "Science-grade" in this context signifies that terrestrial and instrumental features are removed from images in a way that can be described quantitatively. "Custom" refers to user-specified parameters of projection, coordinates, size, rotation, and spatial sampling. The greatest value of Montage is expected to lie in its ability to analyze images at multiple wavelengths, delivering them on a common projection, coordinate system, and spatial sampling, and thereby enabling further analysis as though they were part of a single, multi-wavelength image. Montage will be deployed as a computation-intensive service through existing astronomy portals and other Web sites. It will be integrated into the emerging NVO architecture and will be executed on the TeraGrid. The Montage software will also be portable and publicly available.

  12. Web Browser Trends and Technologies.

    ERIC Educational Resources Information Center

    Goodwin-Jones, Bob

    2000-01-01

    Discusses Web browsers and how their capabilities have been expanded, support for Web browsing on different devices (cell phones, palmtop computers, TV sets), and browser support for the next-generation Web authoring language, XML ("extensible markup language"). (Author/VWL)

  13. Design, implementation and practice of JBEI-ICE: an open source biological part registry platform and tools.

    PubMed

    Ham, Timothy S; Dmytriv, Zinovii; Plahar, Hector; Chen, Joanna; Hillson, Nathan J; Keasling, Jay D

    2012-10-01

    The Joint BioEnergy Institute Inventory of Composable Elements (JBEI-ICEs) is an open source registry platform for managing information about biological parts. It is capable of recording information about 'legacy' parts, such as plasmids, microbial host strains and Arabidopsis seeds, as well as DNA parts in various assembly standards. ICE is built on the idea of a web of registries and thus provides strong support for distributed interconnected use. The information deposited in an ICE installation instance is accessible both via a web browser and through the web application programming interfaces, which allows automated access to parts via third-party programs. JBEI-ICE includes several useful web browser-based graphical applications for sequence annotation, manipulation and analysis that are also open source. As with open source software, users are encouraged to install, use and customize JBEI-ICE and its components for their particular purposes. As a web application programming interface, ICE provides well-developed parts storage functionality for other synthetic biology software projects. A public instance is available at public-registry.jbei.org, where users can try out features, upload parts or simply use it for their projects. The ICE software suite is available via Google Code, a hosting site for community-driven open source projects.

  14. Automatic Generation of Data Types for Classification of Deep Web Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective metadata-based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  15. Learning about the Human Genome. Part 2: Resources for Science Educators. ERIC Digest.

    ERIC Educational Resources Information Center

    Haury, David L.

    This ERIC Digest identifies how the human genome project fits into the "National Science Education Standards" and lists Human Genome Project Web sites found on the World Wide Web. It is a resource companion to "Learning about the Human Genome. Part 1: Challenge to Science Educators" (Haury 2001). The Web resources and…

  16. Fabrication and Characterization of Three Dimensional Photonic Crystals Generated by Multibeam Interference Lithography

    DTIC Science & Technology

    2009-01-01


  17. Dynamic Web Pages: Performance Impact on Web Servers.

    ERIC Educational Resources Information Center

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
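
    The multivariate linear regression used above to predict server performance can be illustrated with a minimal sketch: fitting response time against predictors by ordinary least squares. The predictors (request rate, response size), the synthetic data, and the coefficients below are invented for illustration and are not the paper's actual model.

```python
import numpy as np

# Hypothetical predictors: request rate (req/s) and response size (KB);
# the target is server response time (ms). All values are synthetic.
X = np.array([
    [10.0,  5.0], [20.0,  5.0], [30.0, 10.0], [40.0, 10.0],
    [50.0, 20.0], [60.0, 20.0], [70.0, 40.0], [80.0, 40.0],
])
y = 2.0 + 0.5 * X[:, 0] + 0.1 * X[:, 1]  # noiseless linear relation

# Fit y = b0 + b1*rate + b2*size by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef

print("coefficients:", coef)  # recovers [2.0, 0.5, 0.1]
```

    With real measurements one would add noise and report goodness of fit; `lstsq` is used here because it handles rank-deficient designs gracefully via `rcond`.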

  18. The Hebrewer: A Web-Based Inflection Generator

    ERIC Educational Resources Information Center

    Foster, James Q.; Harrell, Lane Foster; Raizen, Esther

    2004-01-01

    This paper reports on the grammatical and programmatical production aspects of the "Hebrewer," a cross-platform web-based reference work in the form of a Hebrew inflection generator. The Hebrewer, a Java applet/servlet combination, is currently capable of generating 2,500 nouns in full declension and 500 verbs in full conjugation,…

  19. Preparing for the Real Web Generation

    ERIC Educational Resources Information Center

    Pence, Harry E.

    2007-01-01

    Some have called the current generation of college students the Web generation. They are wrong! The pace of technology change continues to quicken. The effects of globalization and social networking have not yet had their full impact. At best, the present students represent a transitional group. The tools we use define us, and the media revolution…

  20. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    DTIC Science & Technology

    2015-02-01

    Program (CDIP); and Part 4 for the Great Lakes Observing System/Coastal Forecasting System (GLOS/GLCFS). Using step-by-step instructions, this Part 5... Demirbilek, Z., L. Lin, and D. Wilson. 2014a. WaveNet: A web-based metocean data access, processing, and analysis tool; part 3 - CDIP database

  1. Internet Usage by Low-Literacy Adults Seeking Health Information: An Observational Analysis

    PubMed Central

    Birru, Mehret S; Monaco, Valerie M; Charles, Lonelyss; Drew, Hadiya; Njie, Valerie; Bierria, Timothy; Detlefsen, Ellen

    2004-01-01

    Background Adults with low literacy may encounter informational obstacles on the Internet when searching for health information, in part because most health Web sites require at least a high-school reading proficiency for optimal access. Objective The purpose of this study was to 1) determine how low-literacy adults independently access and evaluate health information on the Internet, and 2) identify challenges and areas of proficiency in the Internet-searching skills of low-literacy adults. Methods Subjects (n=8) were enrolled in a reading assistance program at Bidwell Training Center in Pittsburgh, PA, and read at a 3rd to 8th grade level. Subjects conducted self-directed Internet searches for designated health topics while utilizing a think-aloud protocol. Subjects' keystrokes and comments were recorded using Camtasia Studio screen-capture software. The search terms used to find health information, the amount of time spent on each Web site, the number of Web sites accessed, the reading level of Web sites accessed, and the responses of subjects to questionnaires were assessed. Results Subjects collectively answered 8 out of 24 questions correctly. Seven out of 8 subjects selected "sponsored sites" (paid Web advertisements) over search engine-generated links when answering health questions. On average, subjects accessed health Web sites written at or above a 10th grade reading level. Standard methodologies used for measuring health literacy and for prompting subjects to verbalize responses to Web-site form and content had limited utility in this population. Conclusion This study demonstrates that Web health information requires a reading level that prohibits optimal access by some low-literacy adults. These results highlight the low-literacy adult population as a potential audience for Web health information, and indicate some areas of difficulty that these individuals face when using the Internet and health Web sites to find information on specific health topics. PMID:15471751

  2. 75 FR 76401 - Pilot Program for Extended Time Period To Reply to a Notice To File Missing Parts of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-08

    ... filing system, EFS-Web, and selecting the document description of ``Certification and Request for Missing... Filing System Web (EFS-Web), 74 FR 55200 (Oct. 27, 2009), 1348 Off. Gaz. Pat. Office 394 (Nov. 24, 2009... parts notice, including increased use of the eighteen-month publication system, more time for applicants...

  3. Food-web complexity emerging from ecological dynamics on adaptive networks.

    PubMed

    Garcia-Domingo, Josep L; Saldaña, Joan

    2007-08-21

    Food webs are complex networks describing trophic interactions in ecological communities. Since Robert May's seminal work on randomly structured food webs, the complexity-stability debate has been a central issue in ecology: does network complexity increase or decrease food-web persistence? A multi-species predator-prey model incorporating adaptive predation shows that the action of ecological dynamics on the topology of a food web (whose initial configuration is generated either by the cascade model or by the niche model) renders, when a significant fraction of adaptive predators is present, hyperbolic complexity-persistence relationships similar to those observed in empirical food webs. It is also shown that the apparent positive relation between complexity and persistence in food webs generated under the cascade model, which has been pointed out in previous papers, disappears when the final connectance is used instead of the initial one to explain species persistence.
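
    The cascade model mentioned above can be sketched in a few lines: species are ranked, each species may feed only on lower-ranked ones, and the link probability is chosen so that the expected connectance (links divided by species squared) matches a target. This is a minimal illustration of the classic model with arbitrary parameter values, not the paper's simulation code.

```python
import random

def cascade_food_web(S, C, seed=0):
    """Cascade model: species are ranked 0..S-1 and each species j may
    feed on any lower-ranked species i with probability p = 2*C*S/(S-1),
    so the expected connectance L/S**2 equals C."""
    rng = random.Random(seed)
    p = 2 * C * S / (S - 1)
    return {(i, j) for j in range(S) for i in range(j)
            if rng.random() < p}

S, C = 50, 0.15  # arbitrary example values
links = cascade_food_web(S, C)
connectance = len(links) / S**2
print(f"{len(links)} links, connectance = {connectance:.3f}")
```

    The realized connectance fluctuates around the target C; studies like the one above then run population dynamics on such webs and measure how many species persist.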

  4. Trophic connections in Lake Superior Part II: the nearshore fish community

    USGS Publications Warehouse

    Gamble, A.E.; Hrabik, T.R.; Yule, D.L.; Stockwell, J.D.

    2011-01-01

    We use detailed diet analyses of the predominant planktivorous, benthivorous and piscivorous fish species from Lake Superior to create a nearshore (bathymetric depths Mysis diluviana and Diporeia spp). Although the piscivorous fishes like lean lake trout (Salvelinus namaycush) fed to a lesser extent on Diporeia and Mysis, they were still strongly connected to these macroinvertebrates, which were consumed by their primary prey species (sculpin spp., rainbow smelt Osmerus mordax, and coregonines). The addition of Bythotrephes to summer/fall cisco and lake whitefish diets, and the decrease in rainbow smelt in lean lake trout diets (replaced by coregonines) were the largest observed differences relative to historic Lake Superior diet studies. Although the offshore food web of Lake Superior was simpler than nearshore in terms of number of fish species present, the two areas had remarkably similar food web structures, and both fish communities were primarily supported by Mysis and Diporeia. We conclude that declines in Mysis or Diporeia populations would have a significant impact on energy flow in Lake Superior. The food web information we generated can be used to better identify management strategies for Lake Superior.

  5. Addressing and Presenting Quality of Satellite Data via Web-Based Services

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory; Lynnes, C.; Ahmad, S.; Fox, P.; Zednik, S.; West, P.

    2011-01-01

    With the recent attention to climate change and the proliferation of remote-sensing data, climate models and various environmental monitoring and protection applications have begun to rely increasingly on satellite measurements. Research users seek good-quality satellite data, with uncertainties and biases provided for each data point. However, different communities address remote-sensing quality issues inconsistently. We describe our attempt to systematically characterize, capture, and provision quality and uncertainty information as it applies to the NASA MODIS Aerosol Optical Depth data product. In particular, we note the semantic differences in quality/bias/uncertainty at the pixel, granule, product, and record levels, and we outline the various factors contributing to the uncertainty or error budget. Web-based science analysis and processing tools allow users to access, analyze, and generate visualizations of data while freeing them from directly managing complex data processing operations. These tools add value by streamlining the data analysis process, but they usually shield users from details of the data processing steps, algorithm assumptions, caveats, etc. Correct interpretation of the final analysis requires users to understand how the data have been generated and processed and what potential biases, anomalies, or errors may have been introduced. By providing services that leverage data lineage provenance and domain expertise, expert systems can be built to aid the user in understanding data sources, processing, and the suitability for use of products generated by the tools. We describe our experiences developing a semantic, provenance-aware, expert-knowledge advisory system applied to the NASA Giovanni web-based Earth science data analysis tool as part of the ESTO AIST-funded Multi-sensor Data Synergy Advisor project.

  6. Exposing the structure of an Arctic food web.

    PubMed

    Wirta, Helena K; Vesterinen, Eero J; Hambäck, Peter A; Weingartner, Elisabeth; Rasmussen, Claus; Reneerkens, Jeroen; Schmidt, Niels M; Gilg, Olivier; Roslin, Tomas

    2015-09-01

    How food webs are structured has major implications for their stability and dynamics. While poorly studied to date, arctic food webs are commonly assumed to be simple in structure, with few links per species. If this is the case, then different parts of the web may be weakly connected to each other, with populations and species united by only a low number of links. We provide the first highly resolved description of trophic link structure for a large part of a high-arctic food web. For this purpose, we apply a combination of recent techniques to describing the links between three predator guilds (insectivorous birds, spiders, and lepidopteran parasitoids) and their two dominant prey orders (Diptera and Lepidoptera). The resultant web shows a dense link structure and no compartmentalization or modularity across the three predator guilds. Thus, both individual predators and predator guilds tap heavily into the prey community of each other, offering versatile scope for indirect interactions across different parts of the web. The current description of a first but single arctic web may serve as a benchmark toward which to gauge future webs resolved by similar techniques. Targeting an unusual breadth of predator guilds, and relying on techniques with a high resolution, it suggests that species in this web are closely connected. Thus, our findings call for similar explorations of link structure across multiple guilds in both arctic and other webs. From an applied perspective, our description of an arctic web suggests new avenues for understanding how arctic food webs are built and function and of how they respond to current climate change. It suggests that to comprehend the community-level consequences of rapid arctic warming, we should turn from analyses of populations, population pairs, and isolated predator-prey interactions to considering the full set of interacting species.

  7. 31 CFR Appendix A to Part 560 - Persons Determined To Be the Government of Iran, as Defined in § 560.304 of This Part

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... through the following page on OFAC's Web site: http://www.treasury.gov/sdn. Additional information.... This document and additional information concerning OFAC are available from OFAC's Web site: http://www... via facsimile through a 24-hour fax-on-demand service, tel.: 202/622-0077. Please consult OFAC's Web...

  8. Participatory GIS in design of the Wroclaw University of Science and Technology campus web map and spatial analysis of campus area quality

    NASA Astrophysics Data System (ADS)

    Blachowski, Jan; Łuczak, Jakub; Zagrodnik, Paulina

    2018-01-01

    Public participation geographic information systems (GIS) and participatory mapping data collection methods enhance capacity for generating, managing, and communicating spatial information in fields ranging from local planning to environmental management. In this study these methods were used in two ways: first, to gather information through a web-based survey on the additional functionality of the campus web map expected by its potential users, i.e. students, staff and visitors; second, to collect geographically referenced information on campus areas that are liked and disliked in a geo-survey carried out with the ArcGIS Online GeoForm application. The results of the first survey were used to map facilities such as bicycle infrastructure, building entrances, wheelchair-accessible infrastructure and benches. The results of the second were used to analyse the most and least attractive parts of the campus with heat and hot spot analyses in GIS. In addition, the answers were studied with regard to the visual and functional aspects of the campus area raised in the survey. The thematic layers developed from field mapping and geoprocessing of the geo-survey data were included in the campus web map project. The paper describes the applied methodology of data collection, processing, analysis, interpretation and geovisualisation.

  9. Public Outreach at RAL: Engaging the Next Generation of Scientists and Engineers

    NASA Astrophysics Data System (ADS)

    Corbett, G.; Ryall, G.; Palmer, S.; Collier, I. P.; Adams, J.; Appleyard, R.

    2015-12-01

    The Rutherford Appleton Laboratory (RAL) is part of the UK's Science and Technology Facilities Council (STFC). As part of the Royal Charter that established the STFC, the organisation is required to generate public awareness and encourage public engagement and dialogue in relation to the science undertaken. The staff at RAL firmly support this activity as it is important to encourage the next generation of students to consider studying Science, Technology, Engineering, and Mathematics (STEM) subjects, providing the UK with a highly skilled work-force in the future. To this end, the STFC undertakes a variety of outreach activities. This paper will describe the outreach activities undertaken by RAL, particularly focussing on those of the Scientific Computing Department (SCD). These activities include: an Arduino based activity day for 12-14 year-olds to celebrate Ada Lovelace day; running a centre as part of the Young Rewired State - encouraging 11-18 year-olds to create web applications with open data; sponsoring a team in the Engineering Education Scheme - supporting a small team of 16-17 year-olds to solve a real world engineering problem; as well as the more traditional tours of facilities. These activities could serve as an example for other sites involved in scientific computing around the globe.

  10. New Generation Sensor Web Enablement

    PubMed Central

    Bröring, Arne; Echterhoff, Johannes; Jirka, Simon; Simonis, Ingo; Everding, Thomas; Stasch, Christoph; Liang, Steve; Lemmens, Rob

    2011-01-01

    Many sensor networks have been deployed to monitor Earth’s environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement. PMID:22163760

  11. Interval-level measurement with visual analogue scales in Internet-based research: VAS Generator.

    PubMed

    Reips, Ulf-Dietrich; Funke, Frederik

    2008-08-01

    The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.

  12. BEAT: A Web-Based Boolean Expression Fault-Based Test Case Generation Tool

    ERIC Educational Resources Information Center

    Chen, T. Y.; Grant, D. D.; Lau, M. F.; Ng, S. P.; Vasa, V. R.

    2006-01-01

    BEAT is a Web-based system that generates fault-based test cases from Boolean expressions. It is based on the integration of our several fault-based test case selection strategies. The generated test cases are considered to be fault-based, because they are aiming at the detection of particular faults. For example, when the Boolean expression is in…

  13. 31 CFR 537.310 - Licenses; general and specific.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... part or are made available on OFAC's Web site: www.treasury.gov/ofac. (c) The term specific license... part or made available on OFAC's Web site. Note to § 537.310: See § 501.801 of this chapter on...

  14. The Geospatial Web and Local Geographical Education

    ERIC Educational Resources Information Center

    Harris, Trevor M.; Rouse, L. Jesse; Bergeron, Susan J.

    2010-01-01

    Recent innovations in the Geospatial Web represent a paradigm shift in Web mapping by enabling educators to explore geography in the classroom by dynamically using a rapidly growing suite of impressive online geospatial tools. Coupled with access to spatial data repositories and User-Generated Content, the Geospatial Web provides a powerful…

  15. Web-Mediated Knowledge Synthesis for Educators

    ERIC Educational Resources Information Center

    DeSchryver, Michael

    2015-01-01

    Ubiquitous and instant access to information on the Web is challenging what constitutes 21st century literacies. This article explores the notion of Web-mediated knowledge synthesis, an approach to integrating Web-based learning that may result in generative synthesis of ideas. This article describes the skills and strategies that may support…

  16. Brainstorming Design for Health: Helping Patients Utilize Patient-Generated Information on the Web

    PubMed Central

    Huh, Jina; Hartzler, Andrea; Munson, Sean; Anderson, Nick; Edwards, Kelly; Gore, John L.; McDonald, David; O’Leary, Jim; Parker, Andrea; Streat, Derek; Yetisgen-Yildiz, Meliha; Pratt, Wanda; Ackerman, Mark S.

    2013-01-01

    Researchers and practitioners show increasing interest in utilizing patient-generated information on the Web. Although the HCI and CSCW communities have provided many exciting opportunities for exploring new ideas and building a broad agenda in health, few venues offer a platform for interdisciplinary and collaborative brainstorming about design challenges and opportunities in this space. The goal of this workshop is to provide participants with opportunities to interact with stakeholders from diverse backgrounds and practices (researchers, practitioners, designers, programmers, and ethnographers) and together generate tangible design outcomes that utilize patient-generated information on the Web. Through small multidisciplinary group work, we will provide participants with new collaboration opportunities, understanding of the state of the art, inspiration for future work, and ideally avenues for continuing to develop research and design ideas generated at the workshop. PMID:24499843

  17. Content-Based Personalization Services Integrating Folksonomies

    NASA Astrophysics Data System (ADS)

    Musto, Cataldo; Narducci, Fedelucio; Lops, Pasquale; de Gemmis, Marco; Semeraro, Giovanni

    Basic content-based personalization consists in matching up the attributes of a user profile, in which preferences and interests are stored, with the attributes of a content object. The Web 2.0 (r)evolution has changed the game for personalization: from the 'elitist' Web 1.0, written by few and read by many, to Web content generated by everyone (user-generated content, UGC), as the role of people has evolved from passive consumers of information to active contributors.

  18. Web-4D-QSAR: A web-based application to generate 4D-QSAR descriptors.

    PubMed

    Ataide Martins, João Paulo; Rougeth de Oliveira, Marco Antônio; Oliveira de Queiroz, Mário Sérgio

    2018-06-05

    A web-based application is developed to generate 4D-QSAR descriptors using the LQTA-QSAR methodology, based on molecular dynamics (MD) trajectories and topology information retrieved from the GROMACS package. The LQTAGrid module calculates the intermolecular interaction energies at each grid point, considering probes and all aligned conformations resulting from MD simulations. These interaction energies are the independent variables or descriptors employed in a QSAR analysis. A friendly front end web interface, built using the Django framework and Python programming language, integrates all steps of the LQTA-QSAR methodology in a way that is transparent to the user, and in the backend, GROMACS and LQTAGrid are executed to generate 4D-QSAR descriptors to be used later in the process of QSAR model building. © 2018 Wiley Periodicals, Inc.
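
    The grid-point interaction energies that serve as descriptors can be illustrated with a hedged sketch: for each grid point, sum a Coulomb term and a Lennard-Jones term over the atoms of an aligned conformation. The probe parameters, force-field constants, and toy molecule below are placeholders for illustration only, not the actual LQTAGrid parameterization (which derives energies from GROMACS topologies).

```python
import numpy as np

def probe_energies(coords, charges, grid, probe_charge=1.0,
                   eps=0.1, sigma=3.0):
    """For each grid point, sum a Coulomb term and a Lennard-Jones term
    over all atoms. The constants (332 Coulomb prefactor in kcal/mol
    convention, eps, sigma, unit probe charge) are illustrative
    placeholders, not LQTAGrid force-field parameters."""
    energies = []
    for g in grid:
        d = np.linalg.norm(coords - g, axis=1)  # atom-to-grid distances
        coulomb = 332.0 * probe_charge * charges / d
        lj = 4.0 * eps * ((sigma / d) ** 12 - (sigma / d) ** 6)
        energies.append(float(np.sum(coulomb + lj)))
    return energies

# Toy two-atom "molecule" and two grid points (coordinates in angstroms).
coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
charges = np.array([-0.4, 0.4])
grid = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
energies = probe_energies(coords, charges, grid)
print(energies)
```

    In the real methodology one such energy vector is computed per grid point and probe across all MD conformations, and those columns become the descriptor matrix for QSAR model building.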

  19. WebLogo: A Sequence Logo Generator

    PubMed Central

    Crooks, Gavin E.; Hon, Gary; Chandonia, John-Marc; Brenner, Steven E.

    2004-01-01

    WebLogo generates sequence logos, graphical representations of the patterns within a multiple sequence alignment. Sequence logos provide a richer and more precise description of sequence similarity than consensus sequences and can rapidly reveal significant features of the alignment otherwise difficult to perceive. Each logo consists of stacks of letters, one stack for each position in the sequence. The overall height of each stack indicates the sequence conservation at that position (measured in bits), whereas the height of symbols within the stack reflects the relative frequency of the corresponding amino or nucleic acid at that position. WebLogo has been enhanced recently with additional features and options, to provide a convenient and highly configurable sequence logo generator. A command line interface and the complete, open WebLogo source code are available for local installation and customization. PMID:15173120
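
    The bit heights described above are straightforward to sketch: the stack height of a column is its information content, R = log2(N) - H bits, and each letter's height is its frequency times R. WebLogo additionally applies a small-sample correction, which this minimal sketch omits.

```python
import math
from collections import Counter

def logo_heights(column, alphabet_size=4):
    """Letter heights for one logo column: the total stack height is the
    information content R = log2(N) - H (in bits), and each letter's
    height is its frequency times R. WebLogo's small-sample correction
    is omitted for clarity."""
    counts = Counter(column)
    n = len(column)
    freqs = {c: k / n for c, k in counts.items()}
    entropy = -sum(f * math.log2(f) for f in freqs.values())
    info = math.log2(alphabet_size) - entropy
    return {c: f * info for c, f in freqs.items()}

# A perfectly conserved DNA position carries the full 2 bits:
print(logo_heights("AAAA"))  # {'A': 2.0}
# An evenly split position carries 1 bit, shared between the letters:
print(logo_heights("AACC"))  # {'A': 0.5, 'C': 0.5}
```

    For protein logos the same formula applies with alphabet_size=20, giving a maximum stack height of log2(20) ≈ 4.32 bits.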

  20. Automated ocean color product validation for the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Davis, Curtiss O.; Tufillaro, Nicholas; Jones, Burt; Arnone, Robert

    2012-06-01

    Automated match-ups allow us to maintain and improve the products of current satellite ocean color sensors (MODIS, MERIS) and new sensors (VIIRS). As part of VIIRS mission preparation, we have created a web-based automated match-up tool that provides searchable fields for date, site, and products, and creates match-ups between satellite (MODIS, MERIS, VIIRS) and in-situ measurements (HyperPRO and SeaPRISM). The back end of the system is a MySQL database, and the front end is a PHP web portal with pull-down menus for the searchable fields. Based on the selections, graphics are generated showing match-ups and statistics, and ASCII files of the match-up data are created for download. Examples are shown for matching the satellite data with data from the Platform Eureka SeaPRISM off L.A. Harbor in the Southern California Bight.
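
    The core of a match-up can be sketched as a nearest-in-time pairing within a tolerance window. The 3-hour window and the data values below are invented for illustration; they are not the tool's actual match-up criteria, which also involve spatial co-location and quality screening.

```python
from datetime import datetime, timedelta

def match_ups(satellite, in_situ, max_dt=timedelta(hours=3)):
    """Pair each satellite observation with the closest-in-time in-situ
    measurement, keeping only pairs within max_dt. Records are
    (timestamp, value) tuples; the window is an assumed criterion."""
    pairs = []
    for s_time, s_val in satellite:
        nearest_time, nearest_val = min(
            in_situ, key=lambda m: abs(m[0] - s_time))
        if abs(nearest_time - s_time) <= max_dt:
            pairs.append((s_time, s_val, nearest_val))
    return pairs

sat = [(datetime(2012, 6, 1, 12, 0), 0.42)]
situ = [(datetime(2012, 6, 1, 10, 30), 0.40),  # 1.5 h away: matches
        (datetime(2012, 6, 1, 18, 0), 0.55)]   # 6 h away: too far
pairs = match_ups(sat, situ)
print(pairs)
```

    In a database-backed system like the one described, the same window constraint would typically be expressed as a SQL join condition rather than an in-memory scan.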

  1. phiGENOME: an integrative navigation throughout bacteriophage genomes.

    PubMed

    Stano, Matej; Klucar, Lubos

    2011-11-01

    phiGENOME is a web-based genome browser generating dynamic and interactive graphical representations of phage genomes stored in phiSITE, a database of gene regulation in bacteriophages. phiGENOME is an integral part of the phiSITE web portal (http://www.phisite.org/phigenome) and was optimised for visualisation of phage genomes with an emphasis on gene regulatory elements. phiGENOME consists of three components: (i) a genome map viewer built using Adobe Flash technology, providing dynamic and interactive graphical display of phage genomes; (ii) a sequence browser based on precisely formatted HTML tags, providing detailed exploration of genome features at the sequence level; and (iii) a regulation illustrator, based on Scalable Vector Graphics (SVG) and designed for graphical representation of gene regulation. With 542 complete genome sequences accompanied by rich annotations and references, phiGENOME is a unique information resource in the field of phage genomics. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. How the new web generations are changing library and information services.

    PubMed

    Miranda, Giovanna F; Gualtieri, Francesca; Coccia, Paolo

    2010-04-01

    The new Web generations are influencing the minds and changing the habits of software developers and end users. Users, librarians, and information services professionals can interact more efficiently, creating additional information and content and generating knowledge. This new scenario is also changing the behavior of information providers and users in health sciences libraries. This article reviews the new Web environments and tools that give librarians opportunities to tailor their services better, and gives some examples of the advantages and disadvantages for them and their users. Librarians need to adapt to the new mindset of users, linking new technologies, information, and people.

  3. Embedding the Form Generator in a Content Management System

    NASA Astrophysics Data System (ADS)

    Delgado, A.; Wicenec, A.; Delmotte, N.; Tejero, A.

    2008-08-01

    Given the tremendous amount of data generated by ESO's telescopes and the rapid evolution of the World Wide Web, the ESO archive web interface needs to offer more flexible services and advanced functionality to a growing community of users all over the world. To this end, a query form generator is being developed inside a Content Management System. We present a progress report here.

  4. Writing DNA with GenoCAD.

    PubMed

    Czar, Michael J; Cai, Yizhi; Peccoud, Jean

    2009-07-01

    Chemical synthesis of custom DNA made to order calls for software streamlining the design of synthetic DNA sequences. GenoCAD (www.genocad.org) is a free web-based application to design protein expression vectors, artificial gene networks and other genetic constructs composed of multiple functional blocks called genetic parts. By capturing design strategies in grammatical models of DNA sequences, GenoCAD guides the user through the design process. By successively clicking on icons representing structural features or actual genetic parts, complex constructs composed of dozens of functional blocks can be designed in a matter of minutes. GenoCAD automatically derives the construct sequence from its comprehensive libraries of genetic parts. Upon completion of the design process, users can download the sequence for synthesis or further analysis. Users who elect to create a personal account on the system can customize their workspace by creating their own parts libraries, adding new parts to the libraries, or reusing designs to quickly generate sets of related constructs.
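
    GenoCAD's grammar-guided design can be illustrated with a toy sketch: a fixed production (promoter, RBS, CDS, terminator) constrains which part categories may appear and in what order, and the construct sequence is derived by concatenation. The grammar, part names, and sequences below are invented for illustration; GenoCAD's actual grammars and part libraries are far richer.

```python
# Toy grammar: an expression cassette is promoter -> RBS -> CDS -> terminator.
GRAMMAR = ["promoter", "rbs", "cds", "terminator"]

# Invented part library; names and sequences are placeholders.
PARTS = {
    "promoter":   {"pJ23100": "TTGACGGCTAGCTCAGTCCTAGG"},
    "rbs":        {"B0034": "AAAGAGGAGAAA"},
    "cds":        {"gfp": "ATGCGTAAAGGAGAAGAA"},
    "terminator": {"B0015": "CCAGGCATCAAATAAAACG"},
}

def build_construct(selection):
    """selection maps each grammar category to a chosen part name; the
    construct sequence is the grammar-ordered concatenation of parts."""
    for category in GRAMMAR:
        if category not in selection:
            raise ValueError(f"grammar requires a {category}")
    return "".join(PARTS[c][selection[c]] for c in GRAMMAR)

seq = build_construct({"promoter": "pJ23100", "rbs": "B0034",
                       "cds": "gfp", "terminator": "B0015"})
print(len(seq), seq[:12])
```

    The value of the grammatical approach is that invalid designs (a CDS before its RBS, a missing terminator) are rejected structurally before any sequence is generated.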

  5. Biomedical information retrieval across languages.

    PubMed

    Daumke, Philipp; Markó, Kornél; Poprat, Michael; Schulz, Stefan; Klar, Rüdiger

    2007-06-01

    This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
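
    The subword mediation idea can be sketched as a greedy longest-match segmentation of query terms against a subword lexicon. The toy lexicon below is invented and vastly smaller than the multilingual lexicon described in the paper, and real segmenters handle ambiguity more carefully than a pure greedy scan.

```python
def subwords(word, lexicon):
    """Greedy longest-match segmentation of a word into lexicon
    subwords; returns None when the word cannot be fully covered."""
    word = word.lower()
    segments, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in lexicon:
                segments.append(word[i:j])
                i = j
                break
        else:
            return None  # uncovered residue: segmentation fails
    return segments

# Toy lexicon of morphologically meaningful fragments (invented entries).
LEX = {"gastr", "o", "enter", "itis", "append", "ectomy"}
print(subwords("Gastroenteritis", LEX))  # ['gastr', 'o', 'enter', 'itis']
print(subwords("Appendectomy", LEX))     # ['append', 'ectomy']
```

    Mapping each subword to language-independent identifiers then lets a query in one language retrieve documents in another, which is the mediation step the paper builds on.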

  6. ITMS: Individualized Teaching Material System: Adaptive Integration of Web Pages Distributed in Some Servers.

    ERIC Educational Resources Information Center

    Mitsuhara, Hiroyuki; Kurose, Yoshinobu; Ochi, Youji; Yano, Yoneo

    The authors developed a Web-based Adaptive Educational System (Web-based AES) named ITMS (Individualized Teaching Material System). ITMS adaptively integrates knowledge on the distributed Web pages and generates individualized teaching material that has various contents. ITMS also presumes the learners' knowledge levels from the states of their…

  7. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  8. Prediction of toxicity and comparison of alternatives using WebTEST (Web-services Toxicity Estimation Software Tool)(Bled Slovenia)

    EPA Science Inventory

    A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...

  9. Flush-mounting technique for composite beams

    NASA Technical Reports Server (NTRS)

    Harman, T. C.; Kay, B. F.

    1980-01-01

    Procedure permits mounting of heavy parts to surface of composite beams without appreciably weakening beam web. Web is split and held apart in region where attachment is to be made by lightweight precast foam filler. Bolt hole penetrates foam rather than web, and is secured by barrelnut in transverse bushing through web.

  10. A Two-Tiered Model for Analyzing Library Web Site Usage Statistics, Part 1: Web Server Logs.

    ERIC Educational Resources Information Center

    Cohen, Laura B.

    2003-01-01

    Proposes a two-tiered model for analyzing web site usage statistics for academic libraries: one tier for library administrators that analyzes measures indicating library use, and a second tier for web site managers that analyzes measures aiding in server maintenance and site design. Discusses the technology of web site usage statistics, and…

  11. Realtime Data to Enable Earth-Observing Sensor Web Capabilities

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.

    2015-12-01

Over the past decade NASA's Earth Science Technology Office (ESTO) has invested in new technologies for information systems to enhance the Earth-observing capabilities of satellites, aircraft, and ground-based in situ observations. One focus area has been to create a common infrastructure for coordinated measurements from multiple vantage points which could be commanded either manually or through autonomous means, such as from a numerical model. This paradigm became known as the sensor web, formally defined to be "a coherent set of heterogeneous, loosely-coupled, distributed observing nodes interconnected by a communications fabric that can collectively behave as a single dynamically adaptive and reconfigurable observing system". This would allow for adaptive targeting of rapidly evolving, transient, or variable meteorological features to improve our ability to monitor, understand, and predict their evolution. It would also enable measurements targeted at critical regions of the atmosphere that are highly sensitive to data analysis errors, thus offering the potential for significant improvements in the predictive skill of numerical weather forecasts. ESTO's investment strategy was twofold. Recognizing that implementation of an operational sensor web would not only involve technical cost and risk but also would require changes to the culture of how flight missions were designed and operated, ESTO funded the development of a mission-planning simulator that would quantitatively assess the added value of coordinated observations. The simulator was designed to provide the capability to perform low-cost engineering and design trade studies using synthetic data generated by observing system simulation experiments (OSSEs). 
The second part of the investment strategy was to invest in prototype applications that implemented key features of a sensor web, with the dual goals of developing a sensor web reference architecture as well as supporting useful science activities that would produce immediate benefit. We briefly discuss three of ESTO's sensor web projects that resulted from solicitations released in 2008 and 2011: the Earth System Sensor Web Simulator, the Earth Phenomena Observing System, and the Sensor Web 3G Namibia Flood Pilot.

  12. Clustergrammer, a web-based heatmap visualization and analysis tool for high-dimensional biological data

    PubMed Central

    Fernandez, Nicolas F.; Gundersen, Gregory W.; Rahman, Adeeb; Grimes, Mark L.; Rikova, Klarisa; Hornbeck, Peter; Ma’ayan, Avi

    2017-01-01

    Most tools developed to visualize hierarchically clustered heatmaps generate static images. Clustergrammer is a web-based visualization tool with interactive features such as zooming, panning, filtering, reordering, sharing, performing enrichment analysis, and providing dynamic gene annotations. Clustergrammer can be used to generate shareable interactive visualizations by uploading a data table to a web site, or by embedding Clustergrammer in Jupyter Notebooks. The Clustergrammer core libraries can also be used as a toolkit by developers to generate visualizations within their own applications. Clustergrammer is demonstrated using gene expression data from the cancer cell line encyclopedia (CCLE), original post-translational modification data collected from lung cancer cell lines by a mass spectrometry approach, and original cytometry by time of flight (CyTOF) single-cell proteomics data from blood. Clustergrammer enables producing interactive web-based visualizations for the analysis of diverse biological data. PMID:28994825

  13. An overview of new video coding tools under consideration for VP10: the successor to VP9

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debargha; Su, Hui; Bankoski, James; Converse, Alex; Han, Jingning; Liu, Zoe; Xu, Yaowu

    2015-09-01

    Google started an open-source project, entitled the WebM Project, in 2010 to develop royalty-free video codecs for the web. The present-generation codec developed in the WebM project, called VP9, was finalized in mid-2013 and is currently being served extensively by YouTube, resulting in billions of views per day. Even though adoption of VP9 outside Google is still in its infancy, the WebM project has already embarked on an ambitious effort to develop a next-edition codec, VP10, that achieves at least a generational bitrate reduction over the current-generation codec VP9. Although the project is still in its early stages, a set of new experimental coding tools has already been added to baseline VP9 to achieve modest coding gains over a large test set. This paper provides a technical overview of these coding tools.

  14. Implementation of interactive virtual simulation of physical systems

    NASA Astrophysics Data System (ADS)

    Sanchez, H.; Escobar, J. J.; Gonzalez, J. D.; Beltran, J.

    2014-03-01

    Considering the limited availability of laboratories for physics teaching and the difficulties this causes for school students in Santa Marta, Colombia, we have developed software to generate greater student interaction with physical phenomena and improve their understanding. The system is built on a Model-View-ViewModel (MVVM) architecture, which shares the benefits of MVC. The pattern consists of three parts. The Model is responsible for the business logic. The View is the part the user sees and is most familiar with; its role is to display data to the user and allow manipulation of the application's data. The ViewModel sits between the Model and the View (analogous to the Controller in the MVC pattern); it implements the behavior of the view in response to user actions and exposes the model's data in a form that is easy to bind to the view. .NET Framework 4.0 and Silverlight 4 and 5 are the main requirements for deploying the physical simulations, which are hosted in the web application and accessed through a web browser (Internet Explorer, Mozilla Firefox, or Chrome). The implementation of this innovative application in educational institutions has shown that students improved their contextualization of physical phenomena.
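The three-part MVVM split described above can be sketched in a few lines of code. The pendulum example and all class names below are illustrative, not taken from the authors' Silverlight implementation:

```python
import math

class PendulumModel:
    """Model: business logic only (here, a simple pendulum period)."""
    def __init__(self, length_m=1.0, g=9.81):
        self.length_m = length_m
        self.g = g

    def period_s(self):
        return 2 * math.pi * math.sqrt(self.length_m / self.g)

class PendulumViewModel:
    """ViewModel: exposes display-ready data and responds to user actions."""
    def __init__(self, model):
        self._model = model

    @property
    def period_text(self):
        return f"Period: {self._model.period_s():.2f} s"

    def set_length(self, length_m):   # called by the View on user input
        self._model.length_m = length_m

class PendulumView:
    """View: renders ViewModel data; knows nothing about the physics."""
    def __init__(self, viewmodel):
        self.vm = viewmodel

    def render(self):
        return self.vm.period_text

vm = PendulumViewModel(PendulumModel(length_m=1.0))
view = PendulumView(vm)
print(view.render())   # -> Period: 2.01 s
vm.set_length(2.0)     # simulate a user dragging a slider in the View
print(view.render())   # -> Period: 2.84 s
```

The point of the pattern is visible in the last four lines: the View never touches the Model directly, so either side can be replaced (for example, swapping the console View for a Silverlight one) without changing the other.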

  15. Patient and parent views on a Web 2.0 Diabetes Portal--the management tool, the generator, and the gatekeeper: qualitative study.

    PubMed

    Nordfeldt, Sam; Hanberger, Lena; Berterö, Carina

    2010-05-28

    The Internet has undergone rapid development, with significant impact on social life and on modes of communication. Modern management of type 1 diabetes requires that patients have access to continuous support and learning opportunities. Although Web 2.0 resources can provide this support, few pediatric clinics offer it as part of routine diabetes care. We aimed to explore patients' and parents' attitudes toward a local Web 2.0 portal tailored to young patients with type 1 diabetes and their parents, with social networking tools such as message boards and blogs, locally produced self-care and treatment information, and interactive pedagogic devices. Opportunities and obstacles to the implementation of Web 2.0 applications in clinical practice were sought. Participants were 16 mothers, 3 fathers, and 5 young patients (ages 11-18 years; median 14 years) who each wrote an essay on their experience using the portal, irrespective of frequency and/or their success in using it. Two main guiding questions were asked. A qualitative content analysis was conducted of the essays as a whole. Three main categories of portal users' attitudes were found; we named them "the management tool," "the generator," and "the gatekeeper." One category was related to the management tool functionality of the portal, and a wide range of concrete examples was found regarding useful facts and updates. Being enabled to search when necessary and find reliable information provided by local clinicians was regarded as a great advantage, facilitating a feeling of security and being in control. Finding answers to difficult-to-ask questions, questions portal users did not know they had before, and questions focusing on sensitive areas such as anxiety and fear, was also an important feature. A second category was related to the generator function in that visiting the portal could generate more information than expected, which could lead to increased use. 
Active message boards and chat rooms were found to have great value for enhancing mediation of third party peer-to-peer information. A certain level of active users from peer families and visible signs of their activity were considered necessary to attract returning users. A third category was related to the gatekeeper function of the password requirement, which created various access problems. This and other unsuccessful experiences caused users to drop the portal. A largely open portal was suggested to enhance use by those associated with the child with diabetes, such as school personnel, relatives, friends and others, and also by young users somewhat unwilling to self-identify with the disease. Web 2.0 services have great potential for supporting parents and patients with type 1 diabetes by enhancing their information retrieval and disease management. Well-developed services, such as this one, may generate continued use and should, therefore, be carefully maintained and updated by health care professionals who are alert and active on the site with new information and updates. Login procedures should be simple and minimized as much as possible. The education of clinical practitioners regarding the use of Web 2.0 resources needs more attention.

  16. TermGenie – a web-application for pattern-based ontology class generation

    DOE PAGES

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.; ...

    2014-01-01

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These requests can create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing them and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness, and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  17. TermGenie – a web-application for pattern-based ontology class generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These requests can create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing them and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness, and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  18. TermGenie - a web-application for pattern-based ontology class generation.

    PubMed

    Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J

    2014-01-01

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These requests can create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing them and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness, and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  19. enoLOGOS: a versatile web tool for energy normalized sequence logos

    PubMed Central

    Workman, Christopher T.; Yin, Yutong; Corcoran, David L.; Ideker, Trey; Stormo, Gary D.; Benos, Panayiotis V.

    2005-01-01

    enoLOGOS is a web-based tool that generates sequence logos from various input sources. Sequence logos have become a popular way to graphically represent DNA and amino acid sequence patterns from a set of aligned sequences. Each position of the alignment is represented by a column of stacked symbols with its total height reflecting the information content in this position. Currently, the available web servers are able to create logo images from a set of aligned sequences, but none of them generates weighted sequence logos directly from energy measurements or other sources. With the advent of high-throughput technologies for estimating the contact energy of different DNA sequences, tools that can create logos directly from binding affinity data are useful to researchers. enoLOGOS generates sequence logos from a variety of input data, including energy measurements, probability matrices, alignment matrices, count matrices and aligned sequences. Furthermore, enoLOGOS can represent the mutual information of different positions of the consensus sequence, a unique feature of this tool. Another web interface for our software, C2H2-enoLOGOS, generates logos for the DNA-binding preferences of the C2H2 zinc-finger transcription factor family members. enoLOGOS and C2H2-enoLOGOS are accessible over the web at . PMID:15980495
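The column heights that a standard (non-energy, alignment-based) sequence logo draws follow the usual information-content calculation: each column's total height is IC = log2(4) - H, split among symbols by frequency. A minimal sketch of that calculation, assuming plain DNA alignments and not taken from the enoLOGOS code itself:

```python
import math

def logo_heights(aligned_seqs):
    """Per-column symbol heights for a DNA sequence logo.

    Height of symbol b in column i is p_b * IC_i, where
    IC_i = log2(4) - H_i and H_i is the Shannon entropy of column i.
    """
    n = len(aligned_seqs)
    heights = []
    for col in zip(*aligned_seqs):
        counts = {b: col.count(b) for b in "ACGT"}
        probs = {b: c / n for b, c in counts.items() if c}
        entropy = -sum(p * math.log2(p) for p in probs.values())
        ic = math.log2(4) - entropy          # information content in bits
        heights.append({b: p * ic for b, p in probs.items()})
    return heights

# A perfectly conserved column carries the full 2 bits;
# an even two-way split carries only 1 bit, shared between symbols.
h = logo_heights(["ACGT", "ACGA", "ACTA", "ACTT"])
print(h[0])   # -> {'A': 2.0}
print(h[2])   # -> {'G': 0.5, 'T': 0.5}
```

Energy-normalized logos of the kind enoLOGOS adds would replace the count-derived probabilities with weights derived from binding-energy measurements; the stacking logic stays the same.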

  20. SiBIC: a web server for generating gene set networks based on biclusters obtained by maximal frequent itemset mining.

    PubMed

    Takahashi, Kei-ichiro; Takigawa, Ichigaku; Mamitsuka, Hiroshi

    2013-01-01

    Detecting biclusters from expression data is useful, since biclusters are sets of genes coexpressed under only a subset of the given experimental conditions. We present software called SiBIC, which, from a given expression dataset, first exhaustively enumerates biclusters, then merges them into relatively independent biclusters, and finally uses these to generate gene set networks, in which the gene set assigned to each node contains coexpressed genes. We evaluated each step of this procedure: 1) the biological and statistical significance of the generated biclusters, 2) the biological quality of the merged biclusters, and 3) the biological significance of the gene set networks. We emphasize that gene set networks, in which nodes are not genes but gene sets, can be more compact than usual gene networks, making them more comprehensible. SiBIC is available at http://utrecht.kuicr.kyoto-u.ac.jp:8080/miami/faces/index.jsp.
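The first step, enumerating biclusters as gene sets supported by condition subsets, is in essence maximal frequent itemset mining: condition subsets are the itemsets and the genes expressed under all of them are the supporting transactions. A toy sketch of that idea on a binary expression matrix (an illustration of the general technique, not SiBIC's actual algorithm; all gene and condition names are invented):

```python
from itertools import combinations

def biclusters(expr, min_genes=2, min_conds=2):
    """Toy bicluster enumeration on a binary gene x condition matrix.

    expr maps each gene to the set of conditions under which it is
    expressed. Each condition subset is an 'itemset'; its support is
    the set of genes expressed under every condition in the subset.
    """
    conds = sorted({c for cs in expr.values() for c in cs})
    found = []
    for k in range(min_conds, len(conds) + 1):
        for cset in combinations(conds, k):
            genes = {g for g, cs in expr.items() if set(cset) <= cs}
            if len(genes) >= min_genes:
                found.append((set(cset), genes))
    # Keep only maximal biclusters: drop any strictly contained in another.
    return [(c, g) for c, g in found
            if not any((c < c2 and g <= g2) or (c <= c2 and g < g2)
                       for c2, g2 in found if (c, g) != (c2, g2))]

expr = {                       # gene -> conditions where it is expressed
    "g1": {"c1", "c2", "c3"},
    "g2": {"c1", "c2"},
    "g3": {"c1", "c2", "c3"},
    "g4": {"c3"},
}
for cset, genes in biclusters(expr):
    print(sorted(cset), sorted(genes))
# -> ['c1', 'c2'] ['g1', 'g2', 'g3']
# -> ['c1', 'c2', 'c3'] ['g1', 'g3']
```

The exhaustive enumeration here is exponential in the number of conditions; a real implementation would use a proper maximal-frequent-itemset miner and then perform the merging and network-generation steps the abstract describes.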

  1. Web 2 Technologies for Net Native Language Learners: A "Social CALL"

    ERIC Educational Resources Information Center

    Karpati, Andrea

    2009-01-01

    In order to make optimal educational use of social spaces offered by thousands of international communities in the second generation web applications termed Web 2 or Social Web, ICT competences as well as social skills are needed for both teachers and learners. The paper outlines differences in competence structures of Net Natives (who came of age…

  2. The World Wide Web and the Television Generation.

    ERIC Educational Resources Information Center

    Maddux, Cleborne D.

    1996-01-01

    The hypermedia nature of the World Wide Web may represent a true paradigm shift in telecommunications, but barriers exist to the Web having similar impact on education. Some of today's college students compare the Web with "bad TV"--lengthy pauses, links that result in error messages, and animation and sound clips that are too brief.…

  3. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    ERIC Educational Resources Information Center

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  4. The Semantic Web and Educational Technology

    ERIC Educational Resources Information Center

    Maddux, Cleborne D., Ed.

    2008-01-01

    The "Semantic Web" is an idea proposed by Tim Berners-Lee, the inventor of the "World Wide Web." The topic has been generating a great deal of interest and enthusiasm, and there is a rapidly growing body of literature dealing with it. This article attempts to explain how the Semantic Web would work, and explores short-term and long-term…

  5. Biotool2Web: creating simple Web interfaces for bioinformatics applications.

    PubMed

    Shahid, Mohammad; Alam, Intikhab; Fuellen, Georg

    2006-01-01

    Currently there are many bioinformatics applications being developed, but there is no easy way to publish them on the World Wide Web. We have developed a Perl script, called Biotool2Web, which makes the task of creating web interfaces for simple ('home-made') bioinformatics applications quick and easy. Biotool2Web uses an XML document containing the parameters to run the tool on the Web, and generates the corresponding HTML and common gateway interface (CGI) files ready to be published on a web server. This tool is available for download at URL http://www.uni-muenster.de/Bioinformatics/services/biotool2web/ Georg Fuellen (fuellen@alum.mit.edu).
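The general pattern, a small XML parameter description driving generated HTML, can be sketched as follows. Biotool2Web itself is a Perl script, and the XML layout, tool name, and parameters here are invented for illustration rather than taken from its actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical tool description; Biotool2Web's real XML schema may differ.
SPEC = """
<tool name="revcomp">
  <param name="sequence" type="text" label="DNA sequence"/>
  <param name="strand" type="select" label="Strand">
    <option>forward</option>
    <option>reverse</option>
  </param>
</tool>
"""

def form_html(spec_xml):
    """Emit an HTML form for a tool from its XML parameter description."""
    tool = ET.fromstring(spec_xml)
    rows = [f'<form action="/cgi-bin/{tool.get("name")}.cgi" method="post">']
    for p in tool.findall("param"):
        rows.append(f'<label>{p.get("label")}</label>')
        if p.get("type") == "select":
            opts = "".join(f"<option>{o.text}</option>"
                           for o in p.findall("option"))
            rows.append(f'<select name="{p.get("name")}">{opts}</select>')
        else:
            rows.append(f'<input type="text" name="{p.get("name")}"/>')
    rows.append('<input type="submit"/></form>')
    return "\n".join(rows)

print(form_html(SPEC))
```

A companion function would emit the matching CGI script that reads these form fields and invokes the command-line tool, which is the other half of what Biotool2Web generates.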

  6. shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics.

    PubMed

    Khomtchouk, Bohdan B; Hennessy, James R; Wahlestedt, Claes

    2017-01-01

    Transcriptomics, metabolomics, metagenomics, and various other next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources as well as programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers to entry for creating highly customizable heatmaps. We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10^5-10^7 rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap. Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on Github: https://github.com/Bohdan-Khomtchouk/fastheatmap.

  7. Standards, Efficiency, and the Evolution of Web Design

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2010-01-01

    The author recently created a presentation using HTML5 based on a tutorial put together by Marcin Wichary. The example presentation is part proof-of-concept, part instructional piece, and it is part of a larger site on HTML5 and how one can use it to create rich Web-based applications. The more he delved into HTML5, the more he found that it was…

  8. Web Accessibility Policies at Land-Grant Universities

    ERIC Educational Resources Information Center

    Bradbard, David A.; Peters, Cara; Caneva, Yoana

    2010-01-01

    The Web has become an integral part of postsecondary education within the United States. There are specific laws that legally mandate postsecondary institutions to have Web sites that are accessible for students with disabilities (e.g., the Americans with Disabilities Act (ADA)). Web accessibility policies are a way for universities to provide a…

  9. The pepATTRACT web server for blind, large-scale peptide-protein docking.

    PubMed

    de Vries, Sjoerd J; Rey, Julien; Schindler, Christina E M; Zacharias, Martin; Tuffery, Pierre

    2017-07-03

    Peptide-protein interactions are ubiquitous in the cell and form an important part of the interactome. Computational docking methods can complement experimental characterization of these complexes, but current protocols are not applicable on the proteome scale. pepATTRACT is a novel docking protocol that is fully blind, i.e. it does not require any information about the binding site. In various stages of its development, pepATTRACT has participated in CAPRI, making successful predictions for five out of seven protein-peptide targets. Its performance is similar or better than state-of-the-art local docking protocols that do require binding site information. Here we present a novel web server that carries out the rigid-body stage of pepATTRACT. On the peptiDB benchmark, the web server generates a correct model in the top 50 in 34% of the cases. Compared to the full pepATTRACT protocol, this leads to some loss of performance, but the computation time is reduced from ∼18 h to ∼10 min. Combined with the fact that it is fully blind, this makes the web server well-suited for large-scale in silico protein-peptide docking experiments. The rigid-body pepATTRACT server is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/services/pepATTRACT. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence.

    PubMed

    Nagy, Paul G; Warnock, Max J; Daly, Mark; Toland, Christopher; Meenan, Christopher D; Mezrich, Reuben S

    2009-11-01

    Radiology departments today are faced with many challenges to improve operational efficiency, performance, and quality. Many organizations rely on antiquated, paper-based methods to review their historical performance and understand their operations. With increased workloads, geographically dispersed image acquisition and reading sites, and rapidly changing technologies, this approach is increasingly untenable. A Web-based dashboard was constructed to automate the extraction, processing, and display of indicators and thereby provide useful and current data for twice-monthly departmental operational meetings. The feasibility of extracting specific metrics from clinical information systems was evaluated as part of a longer-term effort to build a radiology business intelligence architecture. Operational data were extracted from clinical information systems and stored in a centralized data warehouse. Higher-level analytics were performed on the centralized data, a process that generated indicators in a dynamic Web-based graphical environment that proved valuable in discussion and root cause analysis. Results aggregated over a 24-month period since implementation suggest that this operational business intelligence reporting system has provided significant data for driving more effective management decisions to improve productivity, performance, and quality of service in the department.

  11. Graph and Network for Model Elicitation (GNOME Phase 2)

    DTIC Science & Technology

    2013-02-01

    3.3 GNOME UI Components for NOEM Web Client … Figure 17: Sampling in Web-client … The server-side service can run and generate data asynchronously, allowing a cluster of servers to run the sampling…

  12. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards.

    PubMed

    Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G

    2016-01-01

    The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model require new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open-source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful web service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions on usability and on the system's capability to meet requirements (average score 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models at the intersection of health care and clinical research.

  13. Exploring ecology through science terms: A computer-supported vocabulary supplement to the science curriculum in a two-way immersion program

    NASA Astrophysics Data System (ADS)

    Herrera, Francisco Javier, Jr.

    This study set out to examine how a web-based tool embedded with vocabulary strategies, as part of the science curriculum in a third grade two-way immersion classroom, would aid students' academic vocabulary development. Fourteen students (seven boys, seven girls; ten of which were English learners) participated in this study. Students utilized web pages as part of their science curriculum on the topic of ecology. The study documented students' use of the web pages as a data-gathering tool on the topic of ecology during science instruction. Students were video and audio taped as they explored the web pages. Results indicated that through the use of the intervention web pages students significantly improved their knowledge of academic English target words.

  14. Don't Be Afraid to Explore Web 2.0

    ERIC Educational Resources Information Center

    Thompson, John

    2008-01-01

    Web 2.0 is a hot topic. The term "Web 2.0" refers to the next generation of Internet applications that allow the average Internet user to collaborate and share information online. Web 2.0 sites allow anyone to contribute content and to participate with other users in editing and even combining or remixing existing content with other material to…

  15. Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices

    ERIC Educational Resources Information Center

    Schrimpsher, Dan

    2011-01-01

    As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…

  16. What Web 2.0 Means to Facilities Professionals

    ERIC Educational Resources Information Center

    Allen, Scott

    2008-01-01

    It's official--the Web is now social. Actually, it has always been social to a degree, but now it's "mostly" social. A lot of terms have been coined or adopted to describe various aspects of this phenomenon--social media, social networking, consumer-generated media (CGM) and Web 2.0. While it is hard to define "exactly" what Web 2.0 is, or when…

  17. WebGIS based community services architecture by griddization managements and crowdsourcing services

    NASA Astrophysics Data System (ADS)

    Wang, Haiyin; Wan, Jianhua; Zeng, Zhe; Zhou, Shengchuan

    2016-11-01

    With the fast economic development of cities, rapid urbanization, and a population surge in China, social community service mechanisms need to be rationalized and policy standards unified, which creates various types of conflicts and challenges for government community services. Based on WebGIS technology, the article provides a community service architecture built on griddization management and crowdsourcing services. The WebGIS service architecture includes two parts: the cloud part and the mobile part. The cloud part refers to community service centres, which can respond instantaneously to an emergency, visualize the scene of the emergency, and analyse the data from the emergency. The mobile part refers to the mobile terminal, which can call the centre, report events, collect data, and verify feedback. This WebGIS-based community service system, built for Huangdao District of Qingdao, was awarded as one of the "2015 national innovation of social governance" typical cases.

  18. Metadata tables to enable dynamic data modeling and web interface design: the SEER example.

    PubMed

    Weiner, Mark; Sherr, Micah; Cohen, Abigail

    2002-04-01

    A wealth of information addressing health status, outcomes, and resource utilization is compiled and made available by various government agencies. While exploration of the data is possible using existing tools, in general, would-be users must acquire CD-ROMs or download data from the web and load it into their own databases. Where web interfaces exist, they are highly structured, limiting the kinds of queries that can be executed. This work develops a web-based database interface engine whose content and structure are generated through interaction with a metadata table. The result is a dynamically generated web interface that can easily accommodate changes in the underlying data model by altering the metadata table, rather than requiring changes to the interface code. This paper discusses the background and implementation of the metadata table and web-based front end and provides examples of its use with the NCI's Surveillance, Epidemiology and End-Results (SEER) database.
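The metadata-driven design described above can be sketched in a few lines: a table of field descriptions drives the generated query form, so a change to the data model means editing the table rather than the interface code. The column names, labels, and HTML layout below are illustrative assumptions, not the SEER engine's actual schema.

```python
# Sketch of a metadata-driven form generator in the spirit of the SEER
# interface engine. The metadata rows are invented for illustration.

# Each row of the metadata table describes one queryable column.
METADATA = [
    {"column": "site", "label": "Cancer site", "widget": "select",
     "choices": ["Breast", "Lung", "Colon"]},
    {"column": "year_dx", "label": "Year of diagnosis", "widget": "number"},
    {"column": "age_dx", "label": "Age at diagnosis", "widget": "number"},
]

def render_form(metadata):
    """Build an HTML query form purely from the metadata table."""
    parts = ["<form method='get' action='/query'>"]
    for row in metadata:
        name, label = row["column"], row["label"]
        parts.append(f"<label>{label}</label>")
        if row["widget"] == "select":
            opts = "".join(f"<option>{c}</option>" for c in row["choices"])
            parts.append(f"<select name='{name}'>{opts}</select>")
        else:
            parts.append(f"<input type='number' name='{name}'>")
    parts.append("<input type='submit'></form>")
    return "\n".join(parts)

print(render_form(METADATA))
```

Adding a new queryable column then requires only a new row in `METADATA`.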

  19. Developing creativity and problem-solving skills of engineering students: a comparison of web- and pen-and-paper-based approaches

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew; Belski, Iouri; Hamilton, Margaret

    2017-11-01

    Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed about their study habits and reported they use electronic-based materials more than paper-based materials while studying, suggesting students may engage with web-based tools. Students then generated solutions to a problem task using either a paper-based template or an equivalent web interface. Students who used the web-based approach performed as well as students who used the paper-based approach, suggesting the technique can be successfully adopted and taught online. Web-based tools may therefore be adopted as supplementary material in a range of engineering courses as a way to increase students' options for enhancing problem-solving skills.

  20. Progress in passive solar energy systems. Volume 8. Part 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, J.; Andrejko, D.A.

    1983-01-01

    This book presents the papers given at a conference sponsored by the US DOE, the Solar Energy Research Institute, SolarVision, Inc., and the Southern California Solar Energy Society. The topics considered at the conference included sizing solar energy systems for agricultural applications, a farm scale ethanol production plant, the EEC wind energy R&D program, the passive solar performance assessment of an earth-sheltered house, the ARCO 1 MW photovoltaic power plant, the performance of a dendritic web photovoltaic module, second generation point focused concentrators, linear Fresnel lens concentrating photovoltaic collectors, photovoltaic conversion efficiency, amorphous silicon thin film solar cells, a photovoltaic system for a shopping center, photovoltaic power generation for the utility industry, spectral solar radiation, and the analysis of insolation data.

  1. Human Trafficking in the United States. Part II. Survey of U.S. Government Web Resources for Publications and Data

    ERIC Educational Resources Information Center

    Panigabutra-Roberts, Anchalee

    2012-01-01

    This second part of a two-part series is a survey of U.S. government web resources on human trafficking in the United States, particularly of the online publications and data included on agencies' websites. Overall, the goal is to provide an introduction, an overview, and a guide on this topic for library staff to use in their research and…

  2. Using a Web-based GIS to Teach Problem-based Science in High School and College

    NASA Astrophysics Data System (ADS)

    Metzger, E.; Lenkeit Meezan, K. A.; Schmidt, C.; Taketa, R.; Carter, J.; Iverson, R.

    2008-12-01

    Foothill College has partnered with San Jose State University to bring GIS web mapping technology to the high school and college classroom. The project consists of two parts. In the first part, Foothill and San Jose State University have teamed up to offer classes on building and maintaining Web based Geographic Information Systems (GIS). Web-based GIS such as Google Maps, MapQuest and Yahoo Maps have become ubiquitous, and the skills to build and maintain these systems are in high demand from many employers. In the second part of the project, high school students will be able to learn about Web GIS as a real world tool used by scientists. The students in the Foothill College/San Jose State class will build their Web GIS using scientific data related to the San Francisco/San Joaquin Delta region, with a focus on watersheds, biodiversity and earthquake hazards. This project includes high school level curriculum development that will tie in to No Child Left Behind and National Curriculum Standards in both Science and Geography, and provide workshops for both pre-and in- service teachers in the use of Web GIS-driven course material in the high school classroom. The project will bring the work of professional scientists into any high school classroom with an internet connection; while simultaneously providing workforce training in high demand technology based jobs.

  3. Evaluation of WebEase: An Epilepsy Self-Management Web Site

    ERIC Educational Resources Information Center

    DiIorio, Colleen; Escoffery, Cam; McCarty, Frances; Yeager, Katherine A.; Henry, Thomas R.; Koganti, Archana; Reisinger, Elizabeth L.; Wexler, Bethany

    2009-01-01

    People with epilepsy have various education needs and must adopt many self-management behaviors in order to control their condition. This study evaluates WebEase, an Internet-based, theory-driven, self-management program for adults with epilepsy. Thirty-five participants took part in a 6-week pilot implementation of WebEase. The main components of…

  4. Student Perceptions of Learning in a Web-Based Tutorial.

    ERIC Educational Resources Information Center

    Brescia, William; McAuley, Sean

    This case study used both quantitative and qualitative methods to investigate students' perceptions of learning using a Web-based tutorial. Students participated in a Web-based tutorial to learn basic HTML as part of a graduate-level Web design course. Four of five students agreed to participate in the survey and interviews. After completing the…

  5. PhyloBot: A Web Portal for Automated Phylogenetics, Ancestral Sequence Reconstruction, and Exploration of Mutational Trajectories.

    PubMed

    Hanson-Smith, Victor; Johnson, Alexander

    2016-07-01

    The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and "resurrect" (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server.

  6. PhyloBot: A Web Portal for Automated Phylogenetics, Ancestral Sequence Reconstruction, and Exploration of Mutational Trajectories

    PubMed Central

    Hanson-Smith, Victor; Johnson, Alexander

    2016-01-01

    The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and “resurrect” (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server. PMID:27472806

  7. Teacher Education in the Generative Virtual Classroom: A Web-Delivered Context for Developing Learning Theories.

    ERIC Educational Resources Information Center

    Schaverien, Lynette

    This paper describes a research-based, Web-delivered context, the Generative Virtual Classroom (GVC), in which student teachers can develop their ability to recognize, describe, analyze, and theorize learning, and it reports findings of three investigations into its use. The learning environment aims to exploit the possibilities of advanced…

  8. REMORA: a pilot in the ocean of BioMoby web-services.

    PubMed

    Carrere, Sébastien; Gouzy, Jérôme

    2006-04-01

    Emerging web-services technology allows interoperability between multiple distributed architectures. Here, we present REMORA, a web server implemented according to the BioMoby web-service specifications, providing life science researchers with an easy-to-use workflow generator and launcher, a repository of predefined workflows and a survey system. Contact: Jerome.Gouzy@toulouse.inra.fr. The REMORA web server is freely available at http://bioinfo.genopole-toulouse.prd.fr/remora; sources are available upon request from the authors.

  9. Modelling Size Structured Food Webs Using a Modified Niche Model with Two Predator Traits

    PubMed Central

    Klecka, Jan

    2014-01-01

    The structure of food webs is frequently described using phenomenological stochastic models. A prominent example, the niche model, was found to produce artificial food webs resembling real food webs according to a range of summary statistics. However, the size structure of food webs generated by the niche model has not yet been rigorously compared with that of real food webs. To fill this void, I use a body-mass-based version of the niche model and compare the prey-predator body mass allometry and predator-prey body mass ratios predicted by the model to empirical data. The results show that the model predicts weaker size structure than observed in many real food webs. I introduce a modified version of the niche model which makes it possible to control the strength of size-dependence of predator-prey links. In this model, optimal prey body mass depends allometrically on predator body mass and on a second trait, such as foraging mode. These empirically motivated extensions allow the size structure of real food webs to be represented realistically and can be used to generate artificial food webs that vary in several aspects of size structure in a controlled way. Hence, by explicitly including the role of species traits, this model provides new opportunities for simulating the consequences of size structure for food web dynamics and stability. PMID:25119999
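The core idea of the modified model, an optimal prey mass that depends allometrically on predator mass plus a second trait, can be sketched as follows. This is a minimal illustration with invented parameter values (predator:prey mass ratio, trait offsets, niche width), not the paper's exact parameterization.

```python
import math
import random

def generate_food_web(n_species=20, ratio=100.0, width=1.0, seed=1):
    """Toy body-mass niche model: predator i eats prey j when the prey's
    log10 mass falls in a window around the predator's optimum, which is
    shifted allometrically by the mass ratio and by a second trait."""
    rng = random.Random(seed)
    # log10 body masses spanning several orders of magnitude
    masses = sorted(rng.uniform(-3, 3) for _ in range(n_species))
    links = []
    for i, m_pred in enumerate(masses):
        # second trait (e.g. foraging mode) shifts the optimum up or down
        trait = rng.choice([-0.5, 0.0, 0.5])
        optimum = m_pred - math.log10(ratio) + trait
        for j, m_prey in enumerate(masses):
            if j != i and abs(m_prey - optimum) < width / 2:
                links.append((j, i))  # prey j eaten by predator i
    return masses, links

masses, links = generate_food_web()
# Under these parameters every predator is heavier than its prey.
size_diffs = [masses[pred] - masses[prey] for prey, pred in links]
```

Tightening `width` or fixing `trait` at zero strengthens the size structure of the generated webs, which is the kind of controlled variation the model is meant to enable.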

  10. A Query Integrator and Manager for the Query Web

    PubMed Central

    Brinkley, James F.; Detwiler, Landon T.

    2012-01-01

    We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
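The chaining condition QI relies on, namely that every saved query returns XML so that queries written in different languages can feed one another, can be illustrated with two toy "queries". Both functions below are hypothetical stand-ins for, say, a SPARQL endpoint and an XQuery service; the element names are invented.

```python
import xml.etree.ElementTree as ET

# Sketch of query chaining: the only interconnection requirement is that
# each query returns its results as XML.

def query_terms():
    """First 'query': a pretend ontology service returning terms as XML."""
    return "<terms><term>hippocampus</term><term>amygdala</term></terms>"

def query_images(terms_xml):
    """Second 'query': chains off the first by consuming its XML output
    and producing its own XML results."""
    terms = [t.text for t in ET.fromstring(terms_xml).iter("term")]
    root = ET.Element("annotations")
    for t in terms:
        ET.SubElement(root, "image", region=t)
    return ET.tostring(root, encoding="unicode")

chained = query_images(query_terms())
```

Because each stage only sees XML, the first query could be rewritten in a different language without the second noticing.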

  11. Worldwide telemedicine services based on distributed multimedia electronic patient records by using the second generation Web server hyperwave.

    PubMed

    Quade, G; Novotny, J; Burde, B; May, F; Beck, L E; Goldschmidt, A

    1999-01-01

    A distributed multimedia electronic patient record (EPR) is a central component of a medicine-telematics application that supports physicians working in rural areas of South America and offers medical services to scientists in Antarctica. A Hyperwave server is used to maintain the patient record. As opposed to common web servers--and as a second-generation web server--Hyperwave provides the capability of holding documents in a distributed web space without the problem of broken links. This enables physicians to browse through a patient's record using a standard browser even if the record is distributed over several servers. The patient record is based on the "Good European Health Record" (GEHR) architecture.

  12. Policy-Aware Content Reuse on the Web

    NASA Astrophysics Data System (ADS)

    Seneviratne, Oshani; Kagal, Lalana; Berners-Lee, Tim

    The Web allows users to share their work very effectively, leading to the rapid re-use and remixing of content on the Web, including text, images, and videos. Scientific research data, social networks, blogs, photo sharing sites, and other such applications, known collectively as the Social Web, hold large amounts of increasingly complex information. Such information from several Web pages can very easily be aggregated, mashed up, and presented in other Web pages. Content generation of this nature inevitably leads to many copyright and license violations, motivating research into effective methods to detect and prevent such violations.

  13. A novel adaptive Cuckoo search for optimal query plan generation.

    PubMed

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The day-by-day emergence of new web pages has driven the development of semantic web technology. The World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To reduce the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS), which generates optimal query plans for large RDF graphs, is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach yields significant reductions in query execution time, and the extent of the algorithm's efficiency is tested and documented.
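As a rough illustration of a cuckoo-search-style optimizer over query plans (here, join orders of predicates), the sketch below minimizes a toy cost function over permutations, with the perturbation size shrinking over time as a stand-in for the adaptive step. The cost model, the swap-based "Lévy flight", and the adaptation rule are simplified assumptions, not the paper's ACS algorithm.

```python
import random

def plan_cost(plan, sel):
    """Toy cost: accumulate intermediate result sizes under assumed
    per-predicate selectivities; cheaper plans filter early."""
    size, cost = 1.0, 0.0
    for p in plan:
        size *= sel[p]
        cost += size
    return cost

def cuckoo_search(sel, nests=8, iters=200, seed=7):
    rng = random.Random(seed)
    preds = list(range(len(sel)))
    pop = [rng.sample(preds, len(preds)) for _ in range(nests)]
    best = min(pop, key=lambda p: plan_cost(p, sel))
    step = float(len(sel))  # adaptive step: shrinks as the search converges
    for _ in range(iters):
        for i, nest in enumerate(pop):
            new = nest[:]
            for _ in range(max(1, int(step))):  # Lévy-like jump: k swaps
                a, b = rng.randrange(len(new)), rng.randrange(len(new))
                new[a], new[b] = new[b], new[a]
            if plan_cost(new, sel) < plan_cost(nest, sel):
                pop[i] = new
        # Abandon the worst nests (discovery probability) with fresh plans.
        pop.sort(key=lambda p: plan_cost(p, sel))
        pop[-2:] = [rng.sample(preds, len(preds)) for _ in range(2)]
        best = min([best] + pop, key=lambda p: plan_cost(p, sel))
        step *= 0.99  # adaptation: smaller perturbations over time
    return best

sel = [0.9, 0.1, 0.5, 0.01, 0.7, 0.3]
best = cuckoo_search(sel)
```

The best plan found pushes the most selective predicates early, which is exactly what a query optimizer wants from a join order.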

  14. USGS Water Data for Washington

    USGS Publications Warehouse


    2009-01-01

    The U.S. Geological Survey (USGS) has been investigating the water resources of Washington State since the latter part of the 19th century. During this time, demand for water has evolved from primarily domestic and stock needs to the current complex requirements for public-water supplies, irrigation, power generation, navigation, ecological needs, and numerous other uses. Water-resource data collected by the USGS in Washington have been, or soon will be, published by the USGS Washington Water Science Center (WAWSC) in numerous data and interpretive reports. Most of these reports are available online at the WAWSC web page http://wa.water.usgs.gov/pubs/

  15. MCM generator: a Java-based tool for generating medical metadata.

    PubMed

    Munoz, F; Hersh, W

    1998-01-01

    In a previous paper we introduced the need for a mechanism to facilitate the discovery of relevant Web medical documents. We maintained that the use of META tags, specifically ones that define the medical subject and resource type of a document, helps toward this goal. We have now developed a tool to facilitate the generation of these tags for the authors of medical documents. Written entirely in Java, this tool makes use of the SAPHIRE server and helps the author identify the Medical Subject Heading terms that most appropriately describe the subject of the document. Furthermore, it allows the author to generate metadata tags for the 15 elements that the Dublin Core considers core to the description of a document. This paper describes the use of this tool in the cataloguing of Web and non-Web medical documents, such as image, movie, and sound files.
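The tag-generation step can be sketched as below: a set of Dublin Core elements is rendered as META tags for a document's head. The element values, including the MeSH-style subject, are made-up examples rather than output of the actual MCM generator.

```python
# Sketch of Dublin-Core-style META tag generation; record contents are
# invented for illustration.

def dublin_core_meta(elements):
    """Render a dict of Dublin Core elements as HTML META tags suitable
    for pasting into a document's <head>."""
    tags = []
    for name, value in elements.items():
        tags.append(f'<meta name="DC.{name}" content="{value}">')
    return "\n".join(tags)

record = {
    "Title": "Management of Hypertension",
    "Subject": "Hypertension",  # e.g. a MeSH heading chosen by the author
    "Type": "Clinical Practice Guideline",
    "Creator": "Example Author",
    "Date": "1998",
}
print(dublin_core_meta(record))
```

A search engine that indexes `DC.Subject` and `DC.Type` can then retrieve documents by medical subject and resource type, which is the discovery mechanism the paper argues for.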

  16. Computational algorithm to evaluate product disassembly cost index

    NASA Astrophysics Data System (ADS)

    Zeid, Ibrahim; Gupta, Surendra M.

    2002-02-01

    Environmentally conscious manufacturing is an important paradigm in today's engineering practice, and disassembly is a crucial factor in implementing it. Disassembly allows the reuse and recycling of parts and products that have reached the end of their life cycle. Many questions must be answered before a disassembly decision can be reached, and the most important is economic: the cost of disassembling a product must always be weighed against the cost of scrapping it. This paper develops a computational tool that allows decision-makers to calculate the disassembly cost of a product and makes it simple to perform 'what if' scenarios fairly quickly. The tool is Web based and has two main parts: the front end is a Web page that runs on the client side in a Web browser, while the back end is a disassembly engine (servlet) holding the disassembly knowledge and costing algorithms that runs on the server side. The tool is thus based on the client/server model used pervasively throughout the World Wide Web. An example demonstrates the implementation and capabilities of the tool.
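A minimal sketch of such a disassemble-versus-scrap comparison, with invented operations, labor rate, and recovery values (the paper's actual costing algorithms are not reproduced here):

```python
# Toy disassembly-cost calculation in the spirit of the tool described
# above; all rates and values are assumptions for illustration.

LABOR_RATE = 0.50  # cost units per second of disassembly time

def disassembly_cost(operations):
    """Sum time-based labor cost plus any tooling cost per operation."""
    return sum(op["time_s"] * LABOR_RATE + op.get("tool_cost", 0.0)
               for op in operations)

def decide(operations, recovered_value, scrap_value):
    """'What if' comparison: disassemble only if the net recovered value
    beats what scrapping the whole product would bring."""
    net = recovered_value - disassembly_cost(operations)
    return "disassemble" if net > scrap_value else "scrap"

ops = [
    {"name": "remove cover screws", "time_s": 30},
    {"name": "unplug wiring harness", "time_s": 12},
    {"name": "extract motor", "time_s": 45, "tool_cost": 2.0},
]
# Under these assumed rates: (30 + 12 + 45) * 0.5 + 2.0 = 45.5 cost units.
```

Re-running `decide` with different recovered or scrap values is the kind of quick 'what if' scenario the web front end exposes.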

  17. Preparing Art Teachers to Teach in a New Digital Landscape

    ERIC Educational Resources Information Center

    Roland, Craig

    2010-01-01

    When the World Wide Web first went mainstream in the mid-1990s, people saw it primarily as a place to store and surf for information. The Web generated a lot of excitement among the education community in those early days. The past few years have witnessed the emergence of new ways to experience the World Wide Web. The term "Web 2.0" is frequently…

  18. When the New Application Smell Is Gone: Traditional Intranet Best Practices and Existing Web 2.0 Intranet Infrastructures

    ERIC Educational Resources Information Center

    Yoose, Becky

    2010-01-01

    With the growth of Web 2.0 library intranets in recent years, many libraries are leaving behind legacy, first-generation intranets. As Web 2.0 intranets multiply and mature, how will traditional intranet best practices--especially in the areas of planning, implementation, and evaluation--translate into an existing Web 2.0 intranet infrastructure?…

  19. Web Intelligence and Artificial Intelligence in Education

    ERIC Educational Resources Information Center

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  20. Physical Webbing: Collaborative Kinesthetic Three-Dimensional Mind Maps[R]

    ERIC Educational Resources Information Center

    Williams, Marian H.

    2012-01-01

    Mind Mapping has predominantly been used by individuals or collaboratively in groups as a paper-based or computer-generated learning strategy. In an effort to make Mind Mapping kinesthetic, collaborative, and three-dimensional, an innovative pedagogical strategy, termed Physical Webbing, was devised. In the Physical Web activity, groups…

  1. The Librarian's Internet Survival Guide: Strategies for the High-Tech Reference Desk.

    ERIC Educational Resources Information Center

    McDermott, Irene E.; Quint, Barbara, Ed.

    This guide discusses the use of the World Wide Web for library reference service. Part 1, "Ready Reference on the Web: Resources for Patrons," contains chapters on searching and meta-searching the Internet, using the Web to find people, news on the Internet, quality reference resources on the Web, Internet sites for kids, free full-text…

  2. Classroom Web Pages: A "How-To" Guide for Educators.

    ERIC Educational Resources Information Center

    Fehling, Eric E.

    This manual provides teachers, with very little or no technology experience, with a step-by-step guide for developing the necessary skills for creating a class Web Page. The first part of the manual is devoted to the thought processes preceding the actual creation of the Web Page. These include looking at other Web Pages, deciding what should be…

  3. It's Time to Use a Wiki as Part of Your Web Site

    ERIC Educational Resources Information Center

    Ribaric, Tim

    2007-01-01

    Without a doubt, the term "wiki" has leaked into almost every discussion concerning Web 2.0. The real question becomes: Is there a place for a wiki on every library Web site? The answer should be an emphatic "yes." People often praise the wiki because it offers simple page creation and provides instant gratification for amateur Web developers.…

  4. NemaPath: online exploration of KEGG-based metabolic pathways for nematodes

    PubMed Central

    Wylie, Todd; Martin, John; Abubucker, Sahar; Yin, Yong; Messina, David; Wang, Zhengyuan; McCarter, James P; Mitreva, Makedonka

    2008-01-01

    Background: Nematode.net is a web-accessible resource for investigating gene sequences from parasitic and free-living nematode genomes. Beyond the well-characterized model nematode C. elegans, over 500,000 expressed sequence tags (ESTs) and nearly 600,000 genome survey sequences (GSSs) have been generated from 36 nematode species as part of the Parasitic Nematode Genomics Program undertaken by the Genome Center at Washington University School of Medicine. However, these sequencing data are not present in most publicly available protein databases, which only include sequences in Swiss-Prot. Swiss-Prot, in turn, relies on GenBank/EMBL/DDBJ for predicted proteins from complete genomes or full-length proteins. Description: Here we present the NemaPath pathway server, a web-based pathway-level visualization tool for navigating putative metabolic pathways for over 30 nematode species, including 27 parasites. The NemaPath approach consists of two parts: 1) a backend tool to align and evaluate nematode genomic sequences (curated EST contigs) against the annotated Kyoto Encyclopedia of Genes and Genomes (KEGG) protein database; 2) a web viewing application that displays annotated KEGG pathway maps based on desired confidence levels of primary sequence similarity as defined by a user. NemaPath also provides cross-referenced access to nematode genome information provided by other tools available on Nematode.net, including: detailed NemaGene EST cluster information; putative translations; GBrowse EST cluster views; links from nematode data to external databases for corresponding synonymous C. elegans counterparts, subject matches in KEGG's gene database, and also KEGG Ontology (KO) identification. Conclusion: The NemaPath server hosts metabolic pathway mappings for 30 nematode species and is available on the World Wide Web at .
    The nematode source sequences used for the metabolic pathway mappings are available via FTP, as provided by the Genome Center at Washington University School of Medicine. PMID:18983679
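The user-chosen confidence level described above amounts to filtering alignments before mapping them onto pathway maps; a minimal sketch, with invented contigs, KO identifiers, and e-values (NemaPath's actual scoring is not reproduced here):

```python
# Sketch of a NemaPath-style confidence filter: keep only EST-to-KEGG
# alignments whose similarity clears a user-chosen e-value threshold.

alignments = [
    {"contig": "BM_00123", "ko": "K00844", "evalue": 1e-40},
    {"contig": "BM_00456", "ko": "K01810", "evalue": 1e-12},
    {"contig": "BM_00789", "ko": "K00850", "evalue": 5e-3},
]

def mapped_kos(alignments, max_evalue):
    """Return the KEGG Orthology IDs supported at this confidence level;
    these would be the enzymes highlighted on the pathway map."""
    return {a["ko"] for a in alignments if a["evalue"] <= max_evalue}

strict = mapped_kos(alignments, 1e-20)   # only the strongest match
lenient = mapped_kos(alignments, 1e-2)   # all three matches
```

A stricter threshold yields a sparser pathway map, which is the trade-off the web viewer lets the user explore interactively.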

  5. Biological data integration: wrapping data and tools.

    PubMed

    Lacroix, Zoé

    2002-06-01

    Nowadays, scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers perform these two tasks with, respectively, a retrieval component based on an intermediate object view mechanism called search views, which maps the source capabilities to attributes, and an eXtensible Markup Language (XML) engine. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces.
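The two wrapper tasks, retrieval from the source and reconstruction under the virtual XML structure, can be sketched as follows; the flat-file format and element names are assumptions for illustration, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Minimal wrapper sketch: (1) a retrieval component parses raw records
# from a source, (2) an XML engine rebuilds them under a virtual view.

def retrieve(flat_file_lines):
    """Retrieval component: parse tab-separated records from a flat file."""
    for line in flat_file_lines:
        gene_id, organism, sequence = line.rstrip("\n").split("\t")
        yield {"id": gene_id, "organism": organism, "seq": sequence}

def to_xml(records):
    """XML engine: emit the records under the wrapper's virtual structure,
    so callers see uniform XML regardless of the source format."""
    root = ET.Element("genes")
    for r in records:
        gene = ET.SubElement(root, "gene", id=r["id"])
        ET.SubElement(gene, "organism").text = r["organism"]
        ET.SubElement(gene, "sequence").text = r["seq"]
    return ET.tostring(root, encoding="unicode")

raw = ["G1\tE. coli\tATGGCT", "G2\tS. cerevisiae\tATGAAA"]
xml_view = to_xml(retrieve(raw))
```

A wrapper for a database or a web source would swap out only `retrieve`; the XML view presented to the integration layer stays the same.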

  6. Post-Web 2.0 Pedagogy: From Student-Generated Content to International Co-Production Enabled by Mobile Social Media

    ERIC Educational Resources Information Center

    Cochrane, Thomas; Antonczak, Laurent; Wagner, Daniel

    2013-01-01

    The advent of web 2.0 has enabled new forms of collaboration centred upon user-generated content, however, mobile social media is enabling a new wave of social collaboration. Mobile devices have disrupted and reinvented traditional media markets and distribution: iTunes, Google Play and Amazon now dominate music industry distribution channels,…

  7. Next Generation Online: Advancing Learning through Dynamic Design, Virtual and Web 2.0 Technologies, and Instructor "Attitude"

    ERIC Educational Resources Information Center

    O'Connor, Eileen

    2013-01-01

    With the advent of web 2.0 and virtual technologies and new understandings about learning within a global, networked environment, online course design has moved beyond the constraints of text readings, papers, and discussion boards. This next generation of online courses needs to dynamically and actively integrate the wide-ranging distribution of…

  8. Differences in Learning Preferences by Generational Cohort: Implications for Instructional Design in Corporate Web-Based Learning

    ERIC Educational Resources Information Center

    Kriegel, Jessica

    2013-01-01

    In today's global and high-tech economy, the primary contributing factor to sustainable competitive advantage is the strategic development of employees, an organization's only unique asset. However, with four generations actively present in the workforce and the proliferation of web-based learning as a key method for developing…

  9. Web Accessibility and Guidelines

    NASA Astrophysics Data System (ADS)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby the society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  10. Using the Web as a Higher Order Thinking Partner: Case Study of an Advanced Learner Creatively Synthesizing Knowledge on the Web

    ERIC Educational Resources Information Center

    DeSchryver, Michael

    2017-01-01

    Previous work provided foundations for the theory of web-mediated knowledge synthesis, a framework for using the web in more creative and generative ways. This article explores specific instances of the various elements of that theory in a single case from the initial study. That is, a thorough exploration of think-aloud, screen capture, and…

  11. Semantic Web technologies for the big data in life sciences.

    PubMed

    Wu, Hongyan; Yamaguchi, Atsuko

    2014-08-01

    The life sciences are entering an era of big data driven by breakthroughs in science and technology, and ever more big data-related projects and activities are being carried out worldwide. Life sciences data generated by new technologies continue to grow rapidly, not only in size but also in variety and complexity. For big data to have a major influence in the life sciences, comprehensive data analysis across multiple data sources, and even across disciplines, is indispensable. The increasing volume of data and its heterogeneous, complex variety are the two principal issues discussed in life science informatics. The ever-evolving next-generation Web, characterized as the Semantic Web, is an extension of the current Web that aims to provide information not only for humans but also for computers, so that large-scale data can be processed semantically. The paper presents a survey of big data in life sciences, big data-related projects, and Semantic Web technologies. It introduces the main Semantic Web technologies and their current state, and provides a detailed analysis of how they address the heterogeneous variety of life sciences big data. The paper helps readers understand the role of Semantic Web technologies in the big data era and how they offer a promising solution for big data in the life sciences.
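    The integration idea at the heart of the survey can be sketched without any particular toolkit: map heterogeneous sources into a common subject-predicate-object (triple) model and then query across them. The sketch below is illustrative only; the vocabulary terms and the gene and disease names are hypothetical stand-ins, not drawn from the paper.

```python
# Illustrative sketch: integrating two heterogeneous life-science sources by
# mapping both into a common (subject, predicate, object) triple model, the
# core idea behind Semantic Web data integration. All terms are hypothetical.

def to_triples_genes(records):
    """Map a gene-centric source (list of dicts) to triples."""
    triples = set()
    for r in records:
        gene = f"gene:{r['symbol']}"
        triples.add((gene, "rdf:type", "bio:Gene"))
        triples.add((gene, "bio:encodes", f"protein:{r['protein']}"))
    return triples

def to_triples_diseases(rows):
    """Map a disease-association source (CSV-like rows) to triples."""
    return {(f"gene:{symbol}", "bio:associatedWith", f"disease:{disease}")
            for symbol, disease in rows}

def query(triples, predicate, obj):
    """Tiny SPARQL-like lookup: all subjects s with (s, predicate, obj)."""
    return sorted(s for s, p, o in triples if p == predicate and o == obj)

# Two sources with different native structures merge into one graph.
graph = to_triples_genes([{"symbol": "TP53", "protein": "P04637"}]) \
        | to_triples_diseases([("TP53", "LiFraumeni"), ("BRCA1", "BreastCancer")])

print(query(graph, "bio:associatedWith", "disease:LiFraumeni"))  # ['gene:TP53']
```

    In a real setting the triple model would be RDF and the lookup a SPARQL query, but the uniform-graph principle is the same.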

  12. Web-services-based spatial decision support system to facilitate nuclear waste siting

    NASA Astrophysics Data System (ADS)

    Huang, L. Xinglai; Sheng, Grant

    2006-10-01

    The availability of spatial web services enables data sharing among managers, decision and policy makers, and other stakeholders in much simpler ways than before, and has consequently created entirely new opportunities in spatial decision making. Though generally designed for a particular problem domain, a web-services-based spatial decision support system (WSDSS) can provide a flexible problem-solving environment in which to explore the decision problem, understand and refine its definition, and generate and evaluate multiple decision alternatives. This paper presents a new framework for the development of such a system. The WSDSS is composed of distributed web services that either provide their own functions or serve different geospatial data, and that may reside on different computers and in different locations. It includes six key components: a database management system, a catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. As an example, the architecture of a WSDSS to facilitate nuclear waste siting is described, along with the theoretical, conceptual, and methodological challenges and issues associated with developing such systems.
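    A minimal sketch of the WSDSS idea, assuming a catalog of loosely coupled services that are composed to generate and rank siting alternatives by a weighted multi-criteria score. All names, criteria, and weights below are hypothetical; in a real WSDSS the catalog entries would be remote web services, not local functions.

```python
# Hypothetical sketch: a service catalog (stand-in for distributed web
# services) composed to generate and rank siting alternatives.

CATALOG = {}

def service(name):
    """Register a function in the catalog, standing in for a remote service."""
    def register(fn):
        CATALOG[name] = fn
        return fn
    return register

@service("generate_alternatives")
def generate_alternatives(sites):
    """Produce candidate siting alternatives (here, simply the input list)."""
    return list(sites)

@service("evaluate")
def evaluate(site, weights):
    """Weighted multi-criteria score; criteria assumed pre-normalized to [0, 1]."""
    return sum(weights[c] * site["criteria"][c] for c in weights)

sites = [
    {"name": "A", "criteria": {"geology": 0.9, "distance": 0.4}},
    {"name": "B", "criteria": {"geology": 0.6, "distance": 0.8}},
]
weights = {"geology": 0.7, "distance": 0.3}
ranked = sorted(CATALOG["generate_alternatives"](sites),
                key=lambda s: CATALOG["evaluate"](s, weights), reverse=True)
print([s["name"] for s in ranked])  # best-scoring site first: ['A', 'B']
```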

  13. A Web Application for Cotton Irrigation Management on The US Southern High Plains. Part I: Crop Yield Modeling and Profit Analysis

    USDA-ARS?s Scientific Manuscript database

    Irrigated cotton (Gossypium Hirsutum L.) production is a central part of west Texas agriculture that depends on the essentially non-renewable water resource of the Ogallala aquifer. Web-based decision support tools that estimate the profit effects of irrigation for cotton under varying lint price, p...

  14. Scientific Workflows and the Sensor Web for Virtual Environmental Observatories

    NASA Astrophysics Data System (ADS)

    Simonis, I.; Vahed, A.

    2008-12-01

    Virtual observatories have matured beyond their original domain and are becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community, where virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. Further, as those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory, which consists of a number of telescopes, each observing a specific part of the wave spectrum, together with a collection of astronomical instruments, the Sensor Web provides a many-eyed perspective on the current, past, as well as future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, or natural disasters on local, regional, and global scales. The Sensor Web concept has been well established with ongoing global research and deployment of Sensor Web middleware and standards and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS).
The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet through standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in exploration, discovery and, ultimately, transformation of raw data into publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, through sharing and integration, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction free scientists from dealing with underlying data, processing, or storage peculiarities. The vision is one of automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn would identify potentially related resources, schedule processing tasks, and assemble all parts into workflows that may satisfy the query.
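The workflow concept described above can be sketched as a chain of explicit, recordable stages from raw observations to a publishable statistic. This is an illustrative stand-in: the stage names and sample readings are assumptions, and the capture step uses locally supplied data rather than a real standardized Sensor Web interface.

```python
# Hypothetical sketch of a scientific workflow over Sensor Web observations:
# each stage is an explicit, repeatable step from raw data to a result.
from statistics import mean

def capture(sensor_readings):
    """Stand-in for fetching observations via a standardized interface."""
    return [{"sensor": s, "value": v} for s, v in sensor_readings]

def quality_filter(observations, lo, hi):
    """Discard physically implausible values (e.g., sentinel codes)."""
    return [o for o in observations if lo <= o["value"] <= hi]

def aggregate(observations):
    """Reduce the observation set to a reportable statistic."""
    return round(mean(o["value"] for o in observations), 2)

# Run the workflow: capture -> quality filter -> aggregate.
readings = [("t1", 21.4), ("t2", 19.8), ("t3", -999.0)]  # -999.0 = bad value
obs = quality_filter(capture(readings), lo=-60.0, hi=60.0)
print(aggregate(obs))  # 20.6
```

    Because each stage is a named function, the whole chain can be recorded, replayed, and shared, which is precisely what makes workflows attractive for earth observation platforms.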

  15. An overview of current and potential use of information and communication technologies for immunization promotion among adolescents.

    PubMed

    Amicizia, Daniela; Domnich, Alexander; Gasparini, Roberto; Bragazzi, Nicola Luigi; Lai, Piero Luigi; Panatto, Donatella

    2013-12-01

    Information and communication technologies (ICT), such as the Internet or mobile telephony, have become an important part of the life of today's adolescents and their main means of procuring information. The new generation of the Internet based on social-networking technologies, Web 2.0, is increasingly used for health purposes by both laypeople and health professionals. A broad spectrum of Web 2.0 applications provides several opportunities for healthcare workers, in that they can reach large numbers of teenagers in an individualized way and promote vaccine-related knowledge in an interactive and entertaining manner. These applications, namely social-networking and video-sharing websites, wikis and microblogs, should be monitored in order to identify current attitudes toward vaccination, to reply to vaccination critics and to establish a real-time dialog with users. Moreover, the ubiquity of mobile telephony makes it a valuable means of involving teenagers in immunization promotion, especially in developing countries.

  16. U.S. Spacesuit Knowledge Capture Status and Initiatives in Fiscal Year 2014

    NASA Technical Reports Server (NTRS)

    Chullen, Cinda; Oliva, Vladenka R.

    2015-01-01

    Since its 2008 inception, the NASA U.S. Spacesuit Knowledge Capture (KC) program has shared historical spacesuit information with engineers and other technical team members to expand their understanding of the spacesuit's evolution, known capability and limitations, and future desires and needs for its use. As part of the U.S. Spacesuit KC program, subject-matter experts have delivered presentations, held workshops, and participated in interviews to share valuable spacesuit lessons learned to ensure this vital information will survive for existing and future generations to use. These events have included spacesuit knowledge from the inception of NASA's first spacesuit to current spacesuit design. To ensure that this information is shared with the entire NASA community and other interested or invested entities, these KC events were digitally recorded and transcribed to be uploaded onto several applicable NASA Web sites. This paper discusses the various Web sites that the KC events are uploaded to and possible future sites that will channel this information.

  17. Installing computers in older adults' homes and teaching them to access a patient education web site: a systematic approach.

    PubMed

    Dauz, Emily; Moore, Jan; Smith, Carol E; Puno, Florence; Schaag, Helen

    2004-01-01

    This article describes the experiences of nurses who, as part of a large clinical trial, brought the Internet into older adults' homes by installing a computer, if needed, and connecting to a patient education Web site. Most of these patients had not previously used the Internet and were taught even basic computer skills when necessary. Because of increasing use of the Internet in patient education, assessment, and home monitoring, nurses in various roles currently connect with patients to monitor their progress, teach about medications, and answer questions about appointments and treatments. Thus, nurses find themselves playing the role of technology managers for patients with home-based Internet connections. This article provides step-by-step procedures for computer installation and training in the form of protocols, checklists, and patient user guides. By following these procedures, nurses can install computers, arrange Internet access, teach and connect to their patients, and prepare themselves to install future generations of technological devices.

  18. U.S. Spacesuit Knowledge Capture Accomplishments in Fiscal Year 2014

    NASA Technical Reports Server (NTRS)

    Chullen, Cinda; Oliva, Vladenka R.

    2015-01-01

    Since its 2008 inception, the NASA U.S. Spacesuit Knowledge Capture (KC) program has shared historical spacesuit information with engineers and other technical team members to expand their understanding of the spacesuit's evolution, known capability and limitations, and future desires and needs for its use. As part of the U.S. Spacesuit KC program, subject-matter experts have delivered presentations, held workshops, and participated in interviews to share valuable spacesuit lessons learned to ensure this vital information will survive for existing and future generations to use. These events have included spacesuit knowledge from the inception of NASA's first spacesuit to current spacesuit design. To ensure that this information is shared with the entire NASA community and other interested or invested entities, these KC events were digitally recorded and transcribed to be uploaded onto several applicable NASA Web sites. This paper discusses the various Web sites that the KC events are uploaded to and possible future sites that will channel this information.

  19. An overview of current and potential use of information and communication technologies for immunization promotion among adolescents

    PubMed Central

    Amicizia, Daniela; Domnich, Alexander; Gasparini, Roberto; Bragazzi, Nicola Luigi; Lai, Piero Luigi; Panatto, Donatella

    2013-01-01

    Information and communication technologies (ICT), such as the Internet or mobile telephony, have become an important part of the life of today’s adolescents and their main means of procuring information. The new generation of the Internet based on social-networking technologies, Web 2.0, is increasingly used for health purposes by both laypeople and health professionals. A broad spectrum of Web 2.0 applications provides several opportunities for healthcare workers, in that they can reach large numbers of teenagers in an individualized way and promote vaccine-related knowledge in an interactive and entertaining manner. These applications, namely social-networking and video-sharing websites, wikis and microblogs, should be monitored in order to identify current attitudes toward vaccination, to reply to vaccination critics and to establish a real-time dialog with users. Moreover, the ubiquity of mobile telephony makes it a valuable means of involving teenagers in immunization promotion, especially in developing countries. PMID:23954845

  20. A Web Server for MACCS Magnetometer Data

    NASA Technical Reports Server (NTRS)

    Engebretson, Mark J.

    1998-01-01

    NASA Grant NAG5-3719 was provided to Augsburg College to support the development of a web server for the Magnetometer Array for Cusp and Cleft Studies (MACCS), a two-dimensional array of fluxgate magnetometers located at cusp latitudes in Arctic Canada. MACCS was developed as part of the National Science Foundation's GEM (Geospace Environment Modeling) Program, which was designed in part to complement NASA's Global Geospace Science programs during the decade of the 1990s. This report describes the successful use of these grant funds to support a working web page that provides both daily plots and file access to any user accessing the worldwide web. The MACCS home page can be accessed at http://space.augsburg.edu/space/MaccsHome.html.

  1. Untangling Web 2.0: Charting Web 2.0 Tools, the NCSS Guidelines for Effective Use of Technology, and Bloom's Taxonomy

    ERIC Educational Resources Information Center

    Diacopoulos, Mark M.

    2015-01-01

    The potential for social studies to embrace instructional technology and Web 2.0 applications has become a growing trend in recent social studies research. As part of an ongoing process of collaborative enquiry between an instructional specialist and social studies teachers in a Professional Learning Community, a table of Web 2.0 applications was…

  2. Affordances of students' using the World Wide Web as a publishing medium in project-based learning environments

    NASA Astrophysics Data System (ADS)

    Bos, Nathan Daniel

    This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curriculums. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas, (2) hypermedia design can demand high levels of analysis and synthesis of science content, (3) students can learn to think about science content representation through engagement with challenging design tasks, and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance.
Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing content of Web documents, identifying their publishing source, and evaluating their organizational features; however, students struggled to identify scientific evidence, bias, or sophisticated use of media in Web pages. Shortcomings were shown to be partly due to deficiencies in the Web pages themselves and partly due to students' inexperience with the medium or lack of critical evaluation skills. Future directions of this idea are discussed, including discussion of how students' reviews have been integrated into a current digital library development project.

  3. Facilitating Collaboration, Knowledge Construction and Communication with Web-Enabled Databases.

    ERIC Educational Resources Information Center

    McNeil, Sara G.; Robin, Bernard R.

    This paper presents an overview of World Wide Web-enabled databases that dynamically generate Web materials and focuses on the use of this technology to support collaboration, knowledge construction, and communication. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and…

  4. Web-Based Teaching: The Beginning of the End for Universities?

    ERIC Educational Resources Information Center

    Wyatt, Ray

    This paper describes a World Wide Web-based, generic, inter-disciplinary subject called computer-aided policymaking. It has been offered at Melbourne University (Australia) from the beginning of 2001. It has generated some salutary lessons in marketing and pedagogy, but overall it is concluded that Web-based teaching has a rosy future.…

  5. Past, Present, and Future Trends in Teaching Clinical Skills through Web-Based Learning Environments

    ERIC Educational Resources Information Center

    Coe Regan, Jo Ann R.; Youn, Eric J.

    2008-01-01

    Distance education in social work has grown significantly due to the use of interactive television and computer networks. Given the recent developments in delivering distance education utilizing Web-based technology, this article presents a literature review focused on identifying generational trends in the development of Web-based learning…

  6. Higher Secondary Learners' Effectiveness towards Web Based Instruction (WBI) on Chemistry

    ERIC Educational Resources Information Center

    Sudha, A.; Amutha, S.

    2015-01-01

    Because web-based training is becoming a phenomenon in education today owing to its flexibility and convenience, it is vitally important to address those issues that adversely impact retention and success in this environment. To generate principles of effective asynchronous web-based materials specifically applicable for secondary level students based…

  7. A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web

    PubMed Central

    de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández

    2014-01-01

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678

  8. A ubiquitous sensor network platform for integrating smart devices into the semantic sensor web.

    PubMed

    de Vera, David Díaz Pardo; Izquierdo, Alvaro Sigüenza; Vercher, Jesús Bernat; Hernández Gómez, Luis Alfonso

    2014-06-18

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs.

  9. GIDEP Batching Tool

    NASA Technical Reports Server (NTRS)

    Fong, Danny; Odell, Dorice; Barry, Peter; Abrahamian, Tomik

    2008-01-01

    This software provides internal, automated search mechanics for GIDEP (Government-Industry Data Exchange Program) Alert data imported from the GIDEP government Web site. The batching tool allows a single parts list in tab-delimited text format to be imported into the local JPL GIDEP database. Delimiters are removed from every part number, and both the original part numbers (with delimiters) and the newly generated list (without delimiters) are compared: the two lists are run against the GIDEP imports, and any matches are output. This feature works only with Netscape 2.0 or greater, or Internet Explorer 4.0 or greater. The user selects the browse button to choose a text file to import; when the submit button is pressed, the script imports alerts from the text file into the local JPL GIDEP database. This batch tool provides complete in-house control over exported material and data for automated batch matching. It can match the parts list against the alert tables and yields results that aid further research and analysis, providing more control over GIDEP information for metrics and reporting than the government site offers. The software yields results quickly, and there is enough space to store years of data. The program relates to risk identification and management with regard to projects and GIDEP alert information encompassing flight parts for space exploration.
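    The core matching step the abstract describes (comparing part numbers both with and without delimiters) might look like the following sketch. The delimiter rule and the sample part numbers are assumptions for illustration, not taken from the JPL tool.

```python
# Illustrative sketch of delimiter-insensitive part-number matching,
# the mechanism described for the GIDEP batching tool. Names hypothetical.
import re

def strip_delimiters(part_number):
    """Remove delimiters such as '-', '/', '.', and spaces; normalize case."""
    return re.sub(r"[^A-Za-z0-9]", "", part_number).upper()

def batch_match(parts_list, alert_parts):
    """Return parts matching an alert in original or delimiter-free form."""
    alert_keys = ({strip_delimiters(a) for a in alert_parts}
                  | {a.upper() for a in alert_parts})
    return [p for p in parts_list
            if p.upper() in alert_keys or strip_delimiters(p) in alert_keys]

# 'SN-7400N' matches alert 'SN7400N' once its delimiter is stripped.
print(batch_match(["SN-7400N", "2N2222A"], ["SN7400N"]))  # ['SN-7400N']
```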

  10. Model Driven Development of Web Services and Dynamic Web Services Composition

    DTIC Science & Technology

    2005-01-01

    Only fragmentary front matter is available for this report: Section 2.4 covers Feature-Oriented Domain Analysis (FODA) and the need for its automation, Section 2.5 covers Aspect-Oriented Generative Domain Modeling (AOGDM), and the abbreviation list includes FDL (Feature Description Language), FSM (Finite State Machine), and GDM (Generative Domain Modeling).

  11. Research and Development of Web-Based Virtual Online Classroom

    ERIC Educational Resources Information Center

    Yang, Zongkai; Liu, Qingtang

    2007-01-01

    Building a web-based virtual learning environment depends on information technologies and concerns the methods and theories of technology-supported learning. A web-based virtual online classroom is designed and developed based on learning theories and streaming media technologies. It is composed of two parts: an instructional communicating environment…

  12. A "Bottom-Up" Approach to Food Web Construction

    ERIC Educational Resources Information Center

    Demetriou, Dorita; Korfiatis, Konstantinos; Constantinou, Constantinos

    2009-01-01

    The ability to comprehend trophic (nutritional) relationships and food web dynamics is an essential part of environmental literacy. However, students face severe difficulties in grasping the variety of causal patterns in food webs. We propose a curriculum for comprehending trophic relations in elementary school. The curriculum allows students to…

  13. Structural and Multilingual Approaches to Subject Access on the Web.

    ERIC Educational Resources Information Center

    Chan, Lois Mai; Lin, Xia; Zeng, Marcia

    This paper presents some of the efforts currently being made to develop mechanisms that can organize World Wide Web resources for efficient and effective retrieval, as well as programs that can accommodate multiple languages. Part 1 discusses structural approaches to organizing Web resources, including the use of hierarchical or…

  14. LifeWatchGreece Portal development: architecture, implementation and challenges for a biodiversity research e-infrastructure.

    PubMed

    Gougousis, Alexandros; Bailly, Nicolas

    2016-01-01

    Biodiversity data is characterized by its cross-disciplinary character, the extremely broad range of data types and structures, and the plethora of different data sources providing resources for the same piece of information in a heterogeneous way. Since the Web's inception two decades ago, there have been multiple initiatives to connect, aggregate, share, and publish biodiversity data, and to establish data and work flows in order to analyze them. The European program LifeWatch aims at establishing a distributed network of nodes implementing virtual research environments in Europe to facilitate the work of biodiversity researchers and managers. LifeWatchGreece is one of these nodes, where a portal was developed offering access to a suite of virtual laboratories and e-services. Despite its strict definition in information technology, in practice "portal" is a fairly broad term that embraces many web architectures. In the biodiversity domain, the term "portal" is usually used to indicate either a web site that provides access to a single or an aggregation of data repositories (like: http://indiabiodiversity.org/, http://www.mountainbiodiversity.org/, http://data.freshwaterbiodiversity.eu), a web site that gathers information about various online biodiversity tools (like http://test-eubon.ebd.csic.es/, http://marine.lifewatch.eu/), or a web site that simply gathers information and news about the biodiversity domain (like http://chm.moew.government.bg). LifeWatchGreece's portal takes the concept of a portal a step further. In strict IT terms, LifeWatchGreece's portal is partly a portal, partly a platform, and partly an aggregator. It includes a number of biodiversity-related web tools integrated into a centrally controlled software ecosystem, which includes subsystems for access control, traffic monitoring, user notifications, and web tool management.
These subsystems are shared by all the web tools that have been integrated into the portal and are thereby part of this ecosystem. Unlike in most other portals, these web tools are not external, completely independent web applications. A quite obvious (to the user) indication of this is the Single-Sign-On (SSO) functionality for all tools and the common user interface wrapper that most of these tools use. Another example of a less obvious functionality is the common user profile that is shared and can be utilized by all tools (e.g., the user's timezone).

  15. Trial Promoter: A Web-Based Tool for Boosting the Promotion of Clinical Research Through Social Media.

    PubMed

    Reuter, Katja; Ukpolo, Francis; Ward, Edward; Wilson, Melissa L; Angyan, Praveen

    2016-06-29

    Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. The objective was to develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Trial Promoter is developed in Ruby on Rails, HTML, Cascading Style Sheets (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online.
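    The two mechanisms the abstract reports, template-based message generation and the percent-correct calculation, can be sketched as follows. The template text and field names are illustrative stand-ins, not Trial Promoter's actual templates (the tool itself is written in Ruby on Rails).

```python
# Hedged sketch of template substitution and accuracy reporting as the
# abstract describes them; template and parameter names are hypothetical.

def generate_message(template, params):
    """Substitute parameters (text, URL, disease hashtag) into a template."""
    return template.format(**params)

def percent_correct(n_correct, n_total):
    """Share of correctly generated messages, as a percentage."""
    return round(100.0 * n_correct / n_total, 1)

msg = generate_message(
    "Interested in {condition} research? A trial is enrolling: {url} {hashtag}",
    {"condition": "asthma", "url": "https://example.org/trial", "hashtag": "#asthma"},
)
print(msg)
print(percent_correct(1563, 1600))  # 97.7, the accuracy reported above
```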

  16. Trial Promoter: A Web-Based Tool for Boosting the Promotion of Clinical Research Through Social Media

    PubMed Central

    Ukpolo, Francis; Ward, Edward; Wilson, Melissa L

    2016-01-01

    Background Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. Objective To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Methods Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. Results During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Conclusions Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online. PMID:27357424

  17. Edaphostat: interactive ecological analysis of soil organism occurrences and preferences from the Edaphobase data warehouse

    PubMed Central

    Scholz-Starke, Björn; Burkhardt, Ulrich; Lesch, Stephan; Rick, Sebastian; Russell, David; Roß-Nickoll, Martina; Ottermanns, Richard

    2017-01-01

    Abstract The Edaphostat web application allows interactive and dynamic analyses of soil organism data stored in the Edaphobase data warehouse. It is part of the Edaphobase web application and can be accessed with any modern browser. The tool combines data from different sources (publications, field studies and museum collections) and allows species preferences along various environmental gradients (e.g., C/N ratio and pH) and classification systems (habitat type and soil type) to be analyzed. Database URL: Edaphostat is part of the Edaphobase Web Application available at https://portal.edaphobase.org PMID:29220469

  18. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive use of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such an infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial datasets (mainly in NetCDF and PostGIS formats) as an integral part of a dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of the economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part: PHP applications of the SDI geoportal that implement interaction with the computational core back end and with WMS/WFS/WPS cartographical services, as well as an open API for browser-based client software. This back-end part provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part: a Web GIS client developed as a "single-page application" based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs) and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to that of popular desktop GIS applications such as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as the basis for the Web GIS client development.
    In accordance with general INSPIRE requirements for data visualization, the Web GIS provides standard functionality such as data overview, image navigation, scrolling, scaling and graphical overlay, and the display of map legends and corresponding metadata. The specialized Web GIS client contains three basic tiers:
    • a tier of NetCDF metadata in JSON format;
    • a middleware tier of JavaScript objects implementing methods to work with NetCDF metadata, the XML file of the selected calculation configuration (the XML task), and WMS/WFS/WPS cartographical services;
    • a graphical user interface tier of JavaScript objects realizing the general application business logic.
    The Web GIS supports launching computational processing services for tasks in the area of environmental monitoring and presents calculation results as WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape) and binary (NetCDF) formats. It has shown its effectiveness in solving real climate change research problems and disseminating investigation results in cartographical formats. The work is supported by the Russian Science Foundation grant No. 16-19-10257.
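
    The cartographical layers such a client requests follow the OGC WMS convention; a minimal sketch of building a GetMap request URL is shown below. The endpoint, layer name and bounding box are hypothetical, while the parameter names follow the standard WMS 1.1.1 GetMap request.

```python
from urllib.parse import urlencode

# Illustrative only: the endpoint and layer name are invented; the query
# parameters follow the OGC WMS 1.1.1 GetMap convention.
def wms_getmap_url(base_url, layer, bbox, width=800, height=600):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "air_temperature_mean",
                     (-180, -90, 180, 90))
print(url)
```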

  19. A web application for cotton irrigation management on the U.S. southern high plains. Part I: Crop yield modeling and profit analysis

    USDA-ARS?s Scientific Manuscript database

    Irrigated cotton (Gossypium Hirsutum L.) production is a central part of west Texas agriculture that depends on the essentially non-renewable water resource of the Ogallala aquifer. Web-based decision support tools that estimate the profit effects of irrigation for cotton under varying lint price, p...

  20. Wind speed affects prey-catching behaviour in an orb web spider.

    PubMed

    Turner, Joe; Vollrath, Fritz; Hesselberg, Thomas

    2011-12-01

    Wind has previously been shown to influence the location and orientation of spider web sites and also the geometry and material composition of constructed orb webs. We now show that wind also influences components of prey-catching behaviour within the web. A small wind tunnel was used to generate different wind speeds. Araneus diadematus ran more slowly towards entangled Drosophila melanogaster in windy conditions, which took less time to escape the web. This indicates a lower capture probability and a diminished overall predation efficiency for spiders at higher wind speeds. We conclude that spiders' behaviour of taking down their webs as wind speed increases may therefore not be a response only to possible web damage.

  1. Wind speed affects prey-catching behaviour in an orb web spider

    NASA Astrophysics Data System (ADS)

    Turner, Joe; Vollrath, Fritz; Hesselberg, Thomas

    2011-12-01

    Wind has previously been shown to influence the location and orientation of spider web sites and also the geometry and material composition of constructed orb webs. We now show that wind also influences components of prey-catching behaviour within the web. A small wind tunnel was used to generate different wind speeds. Araneus diadematus ran more slowly towards entangled Drosophila melanogaster in windy conditions, which took less time to escape the web. This indicates a lower capture probability and a diminished overall predation efficiency for spiders at higher wind speeds. We conclude that spiders' behaviour of taking down their webs as wind speed increases may therefore not be a response only to possible web damage.

  2. Helpers Program: A Pilot Test of Brief Tobacco Intervention Training in Three Corporations

    PubMed Central

    Muramoto, Myra L.; Wassum, Ken; Connolly, Tim; Matthews, Eva; Floden, Lysbeth

    2014-01-01

    Background Quitlines and worksite-sponsored cessation programs are effective and highly accessible, but limited by low utilization. Efforts to encourage use of cessation aids have focused almost exclusively on the smoker, overlooking the potential for friends, family, coworkers and others in a tobacco user’s social network to influence quitting and use of effective treatment. Methods Longitudinal, observational pilot feasibility study with six-week follow-up survey. Setting/Participants Employees of three national corporations, with a combined target audience of 102,100 employees. Intervention The Helpers Program offers Web-based brief intervention (BI) training to activate social networks of tobacco users to encourage quitting and use of effective treatment. Helpers was offered from 1/10/08 to 3/31/08, as a treatment engagement strategy, together with Free and Clear’s (F&C) telephone/Web-based cessation services. Main outcome measures web-site utilization, training completion, post-training changes in knowledge and self-efficacy with delivery of BIs, referrals to F&C, and use of BI training. Results There were 19,109 unique visitors to the Helpers Web-site. Of these, 4727 created user accounts; 1427 registered for Helpers Training; 766 completed training. There were 445 visits to the referral page and 201 e-mail or letter referrals generated. There were 67 requests for technical support. Of follow-up survey respondents (n=289), 78.9% reported offering a BI. Conclusions Offering the Helpers Program Web-site to a large, diverse audience as part of an employer-sponsored worksite health promotion program is both feasible and well accepted by employees. Website users will participate in training, encourage quitting, and refer smokers to quitline services. PMID:20176303

  3. Visualization for genomics: the Microbial Genome Viewer.

    PubMed

    Kerkhoven, Robert; van Enckevort, Frank H J; Boekhorst, Jos; Molenaar, Douwe; Siezen, Roland J

    2004-07-22

    A Web-based visualization tool, the Microbial Genome Viewer, is presented that allows the user to combine complex genomic data in a highly interactive way. This Web tool enables the interactive generation of chromosome wheels and linear genome maps from genome annotation data stored in a MySQL database. The generated images are in scalable vector graphics (SVG) format, which is suitable for creating high-quality scalable images and dynamic Web representations. Gene-related data such as transcriptome and time-course microarray experiments can be superimposed on the maps for visual inspection. The Microbial Genome Viewer 1.0 is freely available at http://www.cmbi.kun.nl/MGV
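
    The arc-based drawing that such chromosome wheels require can be sketched in a few lines of Python (the viewer itself generates SVG from a MySQL annotation database; the gene coordinates and styling below are invented):

```python
import math

# Draw genes as arcs on a circular chromosome wheel, emitted as SVG.
def arc_path(start_bp, end_bp, genome_len, r=100, cx=150, cy=150):
    """SVG path element for an arc spanning a gene's base-pair coordinates."""
    a0 = 2 * math.pi * start_bp / genome_len
    a1 = 2 * math.pi * end_bp / genome_len
    x0, y0 = cx + r * math.cos(a0), cy + r * math.sin(a0)
    x1, y1 = cx + r * math.cos(a1), cy + r * math.sin(a1)
    large = 1 if (a1 - a0) > math.pi else 0  # SVG large-arc flag
    return (f'<path d="M {x0:.1f} {y0:.1f} A {r} {r} 0 {large} 1 '
            f'{x1:.1f} {y1:.1f}" stroke="steelblue" stroke-width="8" fill="none"/>')

genes = [(0, 120_000), (150_000, 400_000), (420_000, 500_000)]
genome_len = 500_000
svg = "\n".join(
    ['<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300">']
    + [arc_path(s, e, genome_len) for s, e in genes]
    + ["</svg>"]
)
print(svg)
```

    Because SVG is text, layers such as transcriptome data can be superimposed simply by emitting additional elements into the same document.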

  4. AdaFF: Adaptive Failure-Handling Framework for Composite Web Services

    NASA Astrophysics Data System (ADS)

    Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong

    In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting requirement-matched failure handling without manual development, and contributes to flexible composite Web service design in that service architects need not concern themselves with failure handling or the varying requirements of users. For proof of concept, we implemented a prototype of AdaFF, which automatically generates a composite service instance in Web Services Business Process Execution Language (WS-BPEL) according to user requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.
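
    The selection step can be illustrated with a toy sketch: at instantiation time, a prepared failure-handling submodule is picked to match the user's requirement. The handler names and policies below are invented, not AdaFF's actual interface.

```python
# Toy requirement-matched failure handling: choose one of several prepared
# failure-handling submodules when the composite service is instantiated.
HANDLERS = {
    "retry": lambda call: ("retry up to 3 times", call),
    "compensate": lambda call: ("run compensation workflow", call),
    "ignore": lambda call: ("log and continue", call),
}

def instantiate(composite_call, user_requirement):
    """Combine the composite call with the requirement's failure policy."""
    policy, call = HANDLERS[user_requirement](composite_call)
    return {"service": call, "failure_policy": policy}

instance = instantiate("bookTrip", "retry")
print(instance)
```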

  5. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

    Google started the WebM Project in 2010 to develop open source, royalty- free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next edition codec AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.
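
    Combined inter-intra prediction can be illustrated, in deliberately simplified form, as a per-pixel blend of the two predictors. AV1's actual modes use position-dependent (e.g. wedge or smooth) weights and integer arithmetic; a single constant weight is used here purely for clarity.

```python
# Simplified combined inter-intra prediction: blend an inter predictor
# (from a reference frame) with an intra predictor (from neighboring pixels).
def blend_inter_intra(inter_block, intra_block, w_intra=0.5):
    return [
        [round(w_intra * i + (1 - w_intra) * p) for i, p in zip(ri, rp)]
        for ri, rp in zip(intra_block, inter_block)
    ]

inter = [[100, 100], [100, 100]]   # inter prediction samples
intra = [[120, 140], [160, 180]]   # intra prediction samples
print(blend_inter_intra(inter, intra))
```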

  6. An Automated Web Diary System for TeleHomeCare Patient Monitoring

    PubMed Central

    Ganzinger, Matthias; Demiris, George; Finkelstein, Stanley M.; Speedie, Stuart; Lundgren, Jan Marie

    2001-01-01

    The TeleHomeCare project monitors home care patients via the Internet. Each patient has a personalized homepage with an electronic diary for collecting the monitoring data with HTML forms. The web pages are generated dynamically using PHP. All data are stored in a MySQL database. Data are checked immediately by the system; if a value exceeds a predefined limit an alarm message is generated and sent automatically to the patient's case manager. Weekly graphical reports (PDF format) are also generated and sent by email to the same destination.
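
    The threshold check that triggers an alarm message can be sketched as follows. The limits and field names are hypothetical; the real system stores diary entries in MySQL via PHP and emails the case manager.

```python
# Hypothetical per-field limits for diary values.
LIMITS = {"weight_kg": (40, 120), "spo2_pct": (92, 100)}

def check_entry(entry):
    """Return alarm messages for any diary value outside its limits."""
    alarms = []
    for field, value in entry.items():
        lo, hi = LIMITS[field]
        if not (lo <= value <= hi):
            alarms.append(f"ALARM: {field}={value} outside [{lo}, {hi}]")
    return alarms

alarms = check_entry({"weight_kg": 125, "spo2_pct": 95})
print(alarms)
```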

  7. Best practices in Web-based courses: generational differences across undergraduate and graduate nursing students.

    PubMed

    Billings, Diane M; Skiba, Diane J; Connors, Helen R

    2005-01-01

    The demand for online courses is greatly increasing across all levels of the curriculum in higher education. With this change in teaching and learning strategies comes the need for quality control to determine best practices in online learning communities. This study examines the differences in student perceptions of the use of technology, educational practices, and outcomes between undergraduate and graduate students enrolled in Web-based courses. The multisite study uses the benchmarking process and the Flashlight Program Evaluating Educational Uses of the Web in Nursing survey instrument to study best practices and examine generational differences between the two groups of students. The outcomes of the study establish benchmarks for quality improvement in online learning. The results support the educational model for online learning and postulates about generational differences for future study.

  8. Web 2.0 Applications in China

    NASA Astrophysics Data System (ADS)

    Zhai, Dongsheng; Liu, Chen

    Since 2005, the term Web 2.0 has gradually become a hot topic on the Internet. Web 2.0 lets users create web content themselves, as distinct from webmasters or web coders. Web 2.0 has entered our work and our lives, and has even become an indispensable part of our web life. Its applications are already widespread in many fields on the Internet. So far, China has about 137 million netizens [1]; its Web 2.0 market is therefore so attractive that much venture capital has flowed into it, and many new Web 2.0 companies have appeared in China. However, the development of Web 2.0 in China is accompanied by some problems and obstacles. In this paper, we discuss Web 2.0 applications in China, together with their current problems and future development trends.

  9. Assassin bug uses aggressive mimicry to lure spider prey.

    PubMed

    Wignall, Anne E; Taylor, Phillip W

    2011-05-07

    Assassin bugs (Stenolemus bituberus) hunt web-building spiders by invading the web and plucking the silk to generate vibrations that lure the resident spider into striking range. To test whether vibrations generated by bugs aggressively mimic the vibrations generated by insect prey, we compared the responses of spiders to bugs with how they responded to prey, courting male spiders and leaves falling into the web. We also analysed the associated vibrations. Similar spider orientation and approach behaviours were observed in response to vibrations from bugs and prey, whereas different behaviours were observed in response to vibrations from male spiders and leaves. Peak frequency and duration of vibrations generated by bugs were similar to those generated by prey and courting males. Further, vibrations from bugs had a temporal structure and amplitude that were similar to vibrations generated by leg and body movements of prey and distinctly different to vibrations from courting males or leaves, or prey beating their wings. To be an effective predator, bugs do not need to mimic the full range of prey vibrations. Instead bugs are general mimics of a subset of prey vibrations that fall within the range of vibrations classified by spiders as 'prey'.

  10. Power Plants, Steam and Gas Turbines WebQuest

    ERIC Educational Resources Information Center

    Ulloa, Carlos; Rey, Guillermo D.; Sánchez, Ángel; Cancela, Ángeles

    2012-01-01

    A WebQuest is an Internet-based and inquiry-oriented learning activity. The aim of this work is to outline the creation of a WebQuest entitled "Power Generation Plants: Steam and Gas Turbines." This is one of the topics covered in the course "Thermodynamics and Heat Transfer," which is offered in the second year of Mechanical…

  11. 78 FR 6142 - Vogtle Electric Generating Plant, Units 3 and 4; Application and Amendment to Combined Licenses...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-29

    ... NRC-2008-0252. You may submit comments by any of the following methods: Federal Rulemaking Web site... publicly available, by any of the following methods: Federal Rulemaking Web site: Go to http://www... ``Begin Web-based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room...

  12. The New Generation of Citation Indexing in the Age of Digital Libraries

    ERIC Educational Resources Information Center

    Liu, Mengxiong; Cabrera, Peggy

    2008-01-01

    As the Web is becoming a powerful new medium in scientific publication and scholarly communication, citation indexing has found a new application in the digital environment. The authors reviewed the new developments in Web-based citation indexing and conducted a case study in three major citation search tools, "Web of Science", "Scopus" and…

  13. WIRM: An Open Source Toolkit for Building Biomedical Web Applications

    PubMed Central

    Jakobovits, Rex M.; Rosse, Cornelius; Brinkley, James F.

    2002-01-01

    This article describes an innovative software toolkit that allows the creation of web applications that facilitate the acquisition, integration, and dissemination of multimedia biomedical data over the web, thereby reducing the cost of knowledge sharing. There is a lack of high-level web application development tools suitable for use by researchers, clinicians, and educators who are not skilled programmers. Our Web Interfacing Repository Manager (WIRM) is a software toolkit that reduces the complexity of building custom biomedical web applications. WIRM’s visual modeling tools enable domain experts to describe the structure of their knowledge, from which WIRM automatically generates full-featured, customizable content management systems. PMID:12386108

  14. 18 CFR 284.13 - Reporting requirements for interstate pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... following information on its Internet web site, and provide the information in downloadable file formats, in... index of customers must also be posted on the pipeline's Internet web, in accordance with standards adopted in § 284.12 of this part, and made available from the Internet web site in a downloadable format...

  15. 18 CFR 284.13 - Reporting requirements for interstate pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... following information on its Internet web site, and provide the information in downloadable file formats, in... index of customers must also be posted on the pipeline's Internet web, in accordance with standards adopted in § 284.12 of this part, and made available from the Internet web site in a downloadable format...

  16. 18 CFR 284.13 - Reporting requirements for interstate pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... following information on its Internet web site, and provide the information in downloadable file formats, in... index of customers must also be posted on the pipeline's Internet web, in accordance with standards adopted in § 284.12 of this part, and made available from the Internet web site in a downloadable format...

  17. 18 CFR 284.13 - Reporting requirements for interstate pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... following information on its Internet web site, and provide the information in downloadable file formats, in... index of customers must also be posted on the pipeline's Internet web, in accordance with standards adopted in § 284.12 of this part, and made available from the Internet web site in a downloadable format...

  18. 18 CFR 284.13 - Reporting requirements for interstate pipelines.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... following information on its Internet web site, and provide the information in downloadable file formats, in... index of customers must also be posted on the pipeline's Internet web, in accordance with standards adopted in § 284.12 of this part, and made available from the Internet web site in a downloadable format...

  19. 32 CFR Appendix A to Part 806b - Definitions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... exemption for protecting the identity of confidential sources. Cookie: Data created by a Web server that is... (persistent cookie). It provides a way for the Web site to identify users and keep track of their preferences... or is sent to a Web site different from the one you are currently viewing. Defense Data Integrity...

  20. 32 CFR Appendix A to Part 806b - Definitions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... exemption for protecting the identity of confidential sources. Cookie: Data created by a Web server that is... (persistent cookie). It provides a way for the Web site to identify users and keep track of their preferences... or is sent to a Web site different from the one you are currently viewing. Defense Data Integrity...

  1. Exploring the Influence of Web-Based Portfolio Development on Learning To Teach Elementary Science.

    ERIC Educational Resources Information Center

    Avraamidou, Lucy; Zembal-Saul, Carla

    This study examined how Web-based portfolio development supported reflective thinking and learning within a Professional Development School (PDS). It investigated the evidence-based philosophies developed by prospective teachers as a central part of the Web-based portfolio task, noting how technology contributed to the portfolio task. Participants…

  2. WebChem Viewer: a tool for the easy dissemination of chemical and structural data sets

    PubMed Central

    2014-01-01

    Background Sharing sets of chemical data (e.g., chemical properties, docking scores, etc.) among collaborators with diverse skill sets is a common task in computer-aided drug design and medicinal chemistry. The ability to associate this data with images of the relevant molecular structures greatly facilitates scientific communication. There is a need for a simple, free, open-source program that can automatically export aggregated reports of entire chemical data sets to files viewable on any computer, regardless of the operating system and without requiring the installation of additional software. Results We here present a program called WebChem Viewer that automatically generates these types of highly portable reports. Furthermore, in designing WebChem Viewer we have also created a useful online web application for remotely generating molecular structures from SMILES strings. We encourage the direct use of this online application as well as its incorporation into other software packages. Conclusions With these features, WebChem Viewer enables interdisciplinary collaborations that require the sharing and visualization of small molecule structures and associated sets of heterogeneous chemical data. The program is released under the FreeBSD license and can be downloaded from http://nbcr.ucsd.edu/WebChemViewer. The associated web application (called “Smiley2png 1.0”) can be accessed through freely available web services provided by the National Biomedical Computation Resource at http://nbcr.ucsd.edu. PMID:24886360
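
    The report-generation step, exporting a data set as a self-contained HTML table viewable on any computer, might look like the minimal sketch below. The column names and records are invented, not WebChem Viewer's actual output format.

```python
import html

# Render a list of record dicts as a plain HTML table; html.escape guards
# against characters like '<' in SMILES strings breaking the markup.
def to_html_report(rows, columns):
    head = "".join(f"<th>{html.escape(c)}</th>" for c in columns)
    body = "".join(
        "<tr>" + "".join(f"<td>{html.escape(str(r[c]))}</td>" for c in columns) + "</tr>"
        for r in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

rows = [
    {"smiles": "CCO", "docking_score": -5.2},
    {"smiles": "c1ccccc1", "docking_score": -7.1},
]
report = to_html_report(rows, ["smiles", "docking_score"])
print(report)
```

    A real report would additionally embed the structure images rendered from each SMILES string, which is the role of the Smiley2png web service described above.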

  3. WebBio, a web-based management and analysis system for patient data of biological products in hospital.

    PubMed

    Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin

    2011-08-01

    We selected HTML, PHP and JavaScript as the programming languages to build "WebBio", a web-based system for patient data of biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and runs on an Apache server on a Linux machine. WebBio provides data management, search and data analysis functions for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). WebBio has two notable features: (1) pharmacists can rapidly find out which patients used contaminated products, improving medication safety; and (2) statistics charts for a specific product can be generated automatically, reducing pharmacists' workload. WebBio has successfully turned traditional paperwork into web-based data management.
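
    The contaminated-product lookup amounts to a simple query over administration records; a minimal sketch follows, with invented records and field names (WebBio itself implements this in PHP against MySQL).

```python
# Hypothetical administration records: which patient received which lot.
records = [
    {"patient": "P001", "product": "albumin", "lot": "A17"},
    {"patient": "P002", "product": "albumin", "lot": "B03"},
    {"patient": "P003", "product": "IVIG", "lot": "A17"},
]

def patients_with_lot(records, lot):
    """Patients who received any product from the given (contaminated) lot."""
    return sorted({r["patient"] for r in records if r["lot"] == lot})

affected = patients_with_lot(records, "A17")
print(affected)
```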

  4. Aspect level sentiment analysis using machine learning

    NASA Astrophysics Data System (ADS)

    Shubham, D.; Mithil, P.; Shobharani, Meesala; Sumathy, S.

    2017-11-01

    In the modern world, the growth of the web and smartphones has increased the use of online shopping. Overall feedback about a product is generated with the help of sentiment analysis using text processing. Opinion mining, or sentiment analysis, is used to collect and categorize product reviews. The proposed system uses aspect-level detection, in which features are extracted from the datasets. The system performs pre-processing operations such as tokenization, part-of-speech tagging and lemmatization on the data to find meaningful information, which is used to detect the polarity level and assign a rating to the product. The proposed model focuses on aspects to produce accurate results while avoiding spam reviews.
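
    The aspect-level idea can be illustrated with a toy lexicon-based sketch: score opinion words found near each aspect term. The lexicon, aspect list and review are invented; a real system uses POS tagging, lemmatization and learned models as described in the abstract.

```python
# Tiny hand-made opinion lexicon and aspect vocabulary (illustrative only).
LEXICON = {"great": 1, "good": 1, "poor": -1, "terrible": -1}
ASPECTS = {"battery", "screen", "price"}

def aspect_polarity(review, window=2):
    """Sum lexicon scores of words within `window` tokens of each aspect."""
    tokens = review.lower().split()
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            nearby = tokens[max(0, i - window): i + window + 1]
            scores[tok] = sum(LEXICON.get(w, 0) for w in nearby)
    return scores

scores = aspect_polarity("the battery is great but the screen is terrible")
print(scores)
```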

  5. Generation of open biomedical datasets through ontology-driven transformation and integration processes.

    PubMed

    Carmen Legaz-García, María Del; Miñarro-Giménez, José Antonio; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2016-06-03

    Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on mappings between the entities of the data schema and the ontological infrastructure that provides meaning to the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT applies the Linked Open Data principles in the generation of the datasets, allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.
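
    The mapping-driven transformation can be sketched, much simplified, as a table of column-to-property mappings applied to each source record to emit triples. The IRIs and mappings below are invented for illustration, not SWIT's actual configuration.

```python
# Hypothetical mapping from source-schema columns to ontology properties.
MAPPINGS = {
    "gene_symbol": "http://example.org/onto#hasSymbol",
    "organism": "http://example.org/onto#foundIn",
}

def to_triples(record, subject_base="http://example.org/data/"):
    """Apply the mappings to one record, yielding (subject, property, value)."""
    subject = subject_base + record["id"]
    return [
        (subject, prop, record[col])
        for col, prop in MAPPINGS.items()
        if col in record
    ]

triples = to_triples({"id": "g1", "gene_symbol": "TP53", "organism": "Homo sapiens"})
for t in triples:
    print(t)
```

    Because the mappings are data rather than code, the same table can be reused across datasets, which is the reuse property the abstract emphasizes.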

  6. Utilizing Web 2.0 Technologies for Library Web Tutorials: An Examination of Instruction on Community College Libraries' Websites Serving Large Student Bodies

    ERIC Educational Resources Information Center

    Blummer, Barbara; Kenton, Jeffrey M.

    2015-01-01

    This is the second part of a series on Web 2.0 tools available from community college libraries' Websites. The first article appeared in an earlier volume of this journal and it illustrated the wide variety of Web 2.0 tools on community college libraries' Websites serving large student bodies (Blummer and Kenton 2014). The research found many of…

  7. Relaxed Manufacturing Design Tolerance Concepts. Volume 2. Appendices

    DTIC Science & Technology

    1977-07-01

    are typical for such a part. A rough and finish pass is made to web thickness, a rough cut is made in the corners and then the stiffener walls...the programmed feed rate a little more on the second side due to having less material behind the web; however, a designer usually provides for...Typical Survey Data Record Recommended Relaxation on Drawing Tolerances for Webs and Stiffeners Web Dimensional Deviation Occurrences vs. Stiffener

  8. WebEAV: automatic metadata-driven generation of web interfaces to entity-attribute-value databases.

    PubMed

    Nadkarni, P M; Brandt, C M; Marenco, L

    2000-01-01

    The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples.
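
    The pivot that any EAV front end must perform, turning attribute-value rows into one conventional row per entity, can be sketched with sqlite3. The schema and data are invented for illustration; WebEAV targets a large institutional EAV database, not SQLite.

```python
import sqlite3

# A minimal EAV table: one row per (entity, attribute, value).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("pt1", "age", "54"),
    ("pt1", "diagnosis", "hypertension"),
    ("pt2", "age", "61"),
])

def pivot(con, attributes):
    """Pivot EAV rows into one dict per entity (a conventional-row view)."""
    rows = {}
    for entity, attribute, value in con.execute("SELECT * FROM eav"):
        rows.setdefault(entity, dict.fromkeys(attributes))[attribute] = value
    return rows

rows = pivot(con, ["age", "diagnosis"])
print(rows)
```

    Attributes absent for an entity come back as None, which is exactly the sparse-data case that makes EAV storage attractive and its front ends hard to generate by hand.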

  9. Developing a Metadata Infrastructure to facilitate data driven science gateway and to provide Inspire/GEMINI compliance for CLIPC

    NASA Astrophysics Data System (ADS)

    Mihajlovski, Andrej; Plieger, Maarten; Som de Cerff, Wim; Page, Christian

    2016-04-01

    The CLIPC project is developing a portal to provide a single point of access for scientific information on climate change. This is made possible through the Copernicus Earth Observation Programme for Europe, which will deliver a new generation of environmental measurements of climate quality. The data about the physical environment used to inform climate change policy and adaptation measures come from several categories: satellite measurements, terrestrial observing systems, model projections and simulations, and re-analyses (syntheses of all available observations constrained with numerical weather prediction systems). These data categories are managed by different communities; CLIPC will provide a single point of access for the whole range of data. The CLIPC portal will provide a number of indicators showing impacts on specific sectors, generated using a range of factors selected through structured expert consultation. It will also, as part of the transformation services, allow users to explore the consequences of using different combinations of driving factors which they consider to be of particular relevance to their work or life. The portal will provide information on the scientific quality and pitfalls of such transformations to prevent misleading usage of the results. The CLIPC project will develop an end-to-end processing chain (indicator tool kit), from comprehensive information on the climate state through to highly aggregated decision-relevant products. Indicators of climate change and climate change impact will be provided, and a tool kit to update and post-process the collection of indicators will be integrated into the portal. The CLIPC portal has a distributed architecture, making use of OGC services provided by, e.g., climate4impact.eu and CEDA. CLIPC has two themes: 1. Harmonized access to climate datasets derived from models, observations and re-analyses 2. 
A climate impact tool kit to evaluate, rank and aggregate indicators. Key is the availability of standardized metadata describing indicator data and services, which will enable standardization and interoperability between the different distributed services of CLIPC. To disseminate CLIPC indicator data, transformed data products for impact assessments, and climate change impact indicators, a standardized metadata infrastructure is provided. The challenge is that the compliance of existing metadata with INSPIRE ISO standards and GEMINI standards needs to be extended so that the web portal can be generated from the available metadata blueprint. The information provided in the headers of netCDF files, available through multiple catalogues, allows us to generate ISO-compliant metadata, which is in turn used to generate web-based interface content, as well as OGC-compliant web services such as WCS and WMS for the front end and WPS interactions for scientific users to combine and generate new datasets. The goal of the metadata infrastructure is to provide a blueprint for creating a data-driven science portal, generated from the underlying GIS data, web services and processing infrastructure. In the presentation we will present the results and lessons learned.

  10. Web 2.0 Technologies for Effective Knowledge Management in Organizations: A Qualitative Analysis

    ERIC Educational Resources Information Center

    Nath, Anupam Kumar

    2012-01-01

    A new generation of Internet-based collaborative tools, commonly known as Web 2.0, has increased in popularity, availability, and power in the last few years (Kane and Fichman, 2009). Web 2.0 is a set of Internet-based applications that harness network effects by facilitating collaborative and participative computing (O'Reilly, 2006).…

  11. Understanding Web Activity Patterns among Teachers, Students and Teacher Candidates

    ERIC Educational Resources Information Center

    Kimmons, Royce; Clark, B.; Lim, M.

    2017-01-01

    This study sought to understand generational and role differences in web usage of teachers, teacher candidates and K-12 students in a state in the USA (n = 2261). The researchers employed unique methods, which included using a custom-built persistent web browser to track user behaviours free of self-report, self-selection and perception bias.…

  12. The Adventures of Miranda in the Brave New World: Learning in a Web 2.0 Millennium

    ERIC Educational Resources Information Center

    Barnes, Cameron; Tynan, Belinda

    2007-01-01

    This paper looks at the implications of Web 2.0 technologies for university teaching and learning. The latest generation of undergraduates already live in a Web 2.0 world. They have new service expectations and are increasingly dissatisfied with teacher-centred pedagogies. To attract and retain these students, universities will need to rethink…

  13. Lexicon Sextant: Modeling a Mnemonic System for Customizable Browser Information Organization and Management

    ERIC Educational Resources Information Center

    Shen, Siu-Tsen

    2016-01-01

    This paper presents an ongoing study of the development of a customizable web browser information organization and management system, which the author has named Lexicon Sextant (LS). LS is a user friendly, graphical web based add-on to the latest generation of web browsers, such as Google Chrome, making it easier and more intuitive to store and…

  14. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
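
    The two per-segment statistics named in the abstract, nucleotide composition frequencies and GC skew, are simple to compute directly. A minimal sketch (the sequence fragment is made up for illustration):

    ```python
    from collections import Counter

    def composition(seq):
        """Nucleotide composition frequencies of a sequence fragment."""
        counts = Counter(seq.upper())
        total = len(seq)
        return {base: counts.get(base, 0) / total for base in "ACGT"}

    def gc_skew(seq):
        """GC skew = (G - C) / (G + C); sign changes along a chromosome
        are a classic visual cue for features such as replication origins."""
        counts = Counter(seq.upper())
        g, c = counts.get("G", 0), counts.get("C", 0)
        return (g - c) / (g + c) if (g + c) else 0.0

    fragment = "ATGGGCCGTA"
    print(composition(fragment))  # {'A': 0.2, 'C': 0.2, 'G': 0.4, 'T': 0.2}
    print(gc_skew(fragment))      # (4 - 2) / (4 + 2) = 0.333...
    ```

    In an interface like the one described, these functions would be applied to whatever fragment the user selects by zooming and panning.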

  15. Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2007-01-01

    This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability, both among integrated, intelligent terrestrial sensor webs and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.

  16. Information-Flow-Based Access Control for Web Browsers

    NASA Astrophysics Data System (ADS)

    Yoshihama, Sachiko; Tateishi, Takaaki; Tabuchi, Naoshi; Matsumoto, Tsutomu

    The emergence of Web 2.0 technologies such as Ajax and Mashup has revealed the weakness of the same-origin policy [1], the current de facto standard for the Web browser security model. We propose a new browser security model that allows fine-grained access control in client-side Web applications for secure mashups and user-generated content. The model is based on information-flow-based access control (IBAC), which accommodates the dynamic nature of client-side Web applications and accurately determines the privileges of scripts in the event-driven programming model.

  17. 12 CFR Appendix A to Part 40 - Model Privacy Form

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the term “Social Security number” in the first bullet. (2) Institutions must use five (5) of the...; a Web site; or use of a mail-in opt-out form. Institutions may include the words “toll-free” before... specific Web address that takes consumers directly to the opt-out page or a general Web address that...

  18. 12 CFR Appendix A to Part 716 - Model Privacy Form

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the term “Social Security number” in the first bullet. (2) Institutions must use five (5) of the...; a Web site; or use of a mail-in opt-out form. Institutions may include the words “toll-free” before... specific Web address that takes consumers directly to the opt-out page or a general Web address that...

  19. 12 CFR Appendix A to Part 573 - Model Privacy Form

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the term “Social Security number” in the first bullet. (2) Institutions must use five (5) of the...; a Web site; or use of a mail-in opt-out form. Institutions may include the words “toll-free” before... specific Web address that takes consumers directly to the opt-out page or a general Web address that...

  20. Faculty Recommendations for Web Tools: Implications for Course Management Systems

    ERIC Educational Resources Information Center

    Oliver, Kevin; Moore, John

    2008-01-01

    A gap analysis of web tools in Engineering was undertaken as one part of the Digital Library Network for Engineering and Technology (DLNET) grant funded by NSF (DUE-0085849). DLNET represents a Web portal and an online review process to archive quality knowledge objects in Engineering and Technology disciplines. The gap analysis coincided with the…

  1. Web Pages for Your Classroom: The Easy Way!

    ERIC Educational Resources Information Center

    McCorkle, Sandra K.

    This book provides the classroom teacher or librarian with templates and instructions for creating Web pages for use with middle school or high school students. The pages can then be used for doing research projects or other types of projects that familiarize students with the power, flexibility, and usefulness of the Web. Part I, Technology in…

  2. 40 CFR 63.4361 - How do I determine the emission capture system efficiency?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... part 51 to determine the mass fraction of TVH liquid input from each regulated material used in the web.... TVHi = Mass fraction of TVH in regulated material, i, that is applied in the web coating/printing or... the mass of liquid TVH in regulated materials applied in the web coating/printing or dyeing/finishing...

  3. Methodology for Localized and Accessible Image Formation and Elucidation

    ERIC Educational Resources Information Center

    Patil, Sandeep R.; Katiyar, Manish

    2009-01-01

    Accessibility is one of the key checkpoints in all software products, applications, and Web sites. Accessibility of digital images has always been a major challenge for the industry. Images form an integral part of certain types of documents and most Web 2.0-compliant Web sites. Individuals challenged with blindness and many dyslexics only make…

  4. 30 CFR 585.500 - How do I make payments under this part?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... credit card or automated clearing house payments through the Pay.gov Web site, and you must include one copy of the Pay.gov confirmation receipt page with your unsolicited request or signed lease instrument. You may access the Pay.gov Web site through links on the BOEM Offshore Web site at: http://www.boem...

  5. 30 CFR 285.500 - How do I make payments under this part?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... your lease, you must make credit card or automated clearing house payments through the Pay.gov Web site, and you must include one copy of the Pay.gov confirmation receipt page with your unsolicited request or signed lease instrument. You may access the Pay.gov Web site through links on the MMS Offshore Web...

  6. 30 CFR 585.500 - How do I make payments under this part?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... credit card or automated clearing house payments through the Pay.gov Web site, and you must include one copy of the Pay.gov confirmation receipt page with your unsolicited request or signed lease instrument. You may access the Pay.gov Web site through links on the BOEM Offshore Web site at: http://www.boem...

  7. 30 CFR 285.500 - How do I make payments under this part?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or automated clearing house payments through the Pay.gov Web site, and you must include one copy of the Pay.gov confirmation receipt page with your unsolicited request or signed lease instrument. You may access the Pay.gov Web site through links on the MMS Offshore Web site at: http://www.mms.gov...

  8. 49 CFR 190.11 - Availability of informal guidance and interpretive assistance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... established a Web site and a telephone line to OPS headquarters where information on and advice about compliance with the pipeline safety regulations specified in 49 CFR parts 190-199 is available. The Web site..., individuals may leave a recorded voicemail message or post a message on the OPS Web site. The telephone number...

  9. Finite Element Analysis for the Web Offset of Wind Turbine Blade

    NASA Astrophysics Data System (ADS)

    Zhou, Bo; Wang, Xin; Zheng, Changwei; Cao, Jinxiang; Zou, Pingguo

    2017-05-01

    The web is an important part of a wind turbine blade, improving its bending properties. Much of the blade manufacturing process is done by hand, so web offset is one of the most common quality defects in wind turbine blades. In this paper, a 3D parametric finite element model of a blade for a 2 MW turbine was established in ANSYS. Stress distributions for different web offset values were studied, covering three kinds of web offset. A systematic study of web offset was carried out by orthogonal experiment, and the factor most influencing the stress distribution was identified. The analysis results provide instructive guidance for the design and manufacture of wind turbine blades.

  10. Harvesting data from advanced technologies.

    DOT National Transportation Integrated Search

    2014-11-01

    Data streams are emerging everywhere, such as Web logs, Web page click streams, sensor data streams, and credit card transaction flows. Different from traditional data sets, data streams are sequentially generated and arrive one by one rather than b...

  11. Cytoscape tools for the web age: D3.js and Cytoscape.js exporters

    PubMed Central

    Ono, Keiichiro; Demchak, Barry; Ideker, Trey

    2014-01-01

    In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. The Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and the D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to formats readable by these popular JavaScript libraries. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as a basis for complex data visualization applications in bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity gives researchers more flexibility than static images, thereby greatly improving the quality of the insights they can gain. PMID:25520778

  12. Cytoscape tools for the web age: D3.js and Cytoscape.js exporters.

    PubMed

    Ono, Keiichiro; Demchak, Barry; Ideker, Trey

    2014-01-01

    In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. The Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and the D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to formats readable by these popular JavaScript libraries. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as a basis for complex data visualization applications in bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity gives researchers more flexibility than static images, thereby greatly improving the quality of the insights they can gain.
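
    The export target here, Cytoscape.js, expects networks as an "elements" list in which every node or edge wraps its fields in a `data` object (nodes carry `id`; edges carry `id`, `source`, `target`). A minimal sketch of emitting that shape from a plain edge list; the helper name and edge-id convention are illustrative, not the exporter's actual code:

    ```python
    import json

    def to_cytoscape_js(nodes, edges):
        """Convert a node list and (source, target) edge list into the
        'elements' structure that Cytoscape.js can load directly."""
        elements = [{"data": {"id": n}} for n in nodes]
        elements += [
            {"data": {"id": f"{s}-{t}", "source": s, "target": t}}
            for s, t in edges
        ]
        return {"elements": elements}

    network = to_cytoscape_js(["a", "b", "c"], [("a", "b"), ("b", "c")])
    print(json.dumps(network, indent=2))
    ```

    Serializing to this shape is what lets a desktop analysis in Cytoscape be handed off to a browser-based viewer without further conversion.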

  13. Demonstrating the use of web analytics and an online survey to understand user groups of a national network of river level data

    NASA Astrophysics Data System (ADS)

    Macleod, Christopher Kit; Braga, Joao; Arts, Koen; Ioris, Antonio; Han, Xiwu; Sripada, Yaji; van der Wal, Rene

    2016-04-01

    The number of local, national and international networks of online environmental sensors is rapidly increasing. Where environmental data are made available online for public consumption, there is a need to advance our understanding of the relationships between the supply of and the different demands for such information. Understanding how individuals and groups of users are using online information resources may provide valuable insights into their activities and decision making. As part of the 'dot.rural wikiRivers' project we investigated the potential of web analytics and an online survey to generate insights into the use of a national network of river level data from across Scotland. These sources of online information were collected alongside phone interviews with volunteers sampled from the online survey, and interviews with providers of online river level data, as part of a larger project that set out to help improve the communication of Scotland's online river data. Our web analytics analysis was based on over 100 online sensors which are maintained by the Scottish Environment Protection Agency (SEPA). Through use of Google Analytics data accessed via the R Ganalytics package, we assessed: whether the quality of data provided by the free Google Analytics service is good enough for research purposes; whether we could demonstrate which sensors were being used, when and where; how the nature and pattern of sensor data may affect web traffic; and whether we can identify and profile users based on information from traffic sources. Web analytics data consist of a series of quantitative metrics which capture and summarize various dimensions of the traffic to a certain web page or set of pages. Examples of commonly used metrics include the number of total visits to a site and the number of total page views. Our analyses of the traffic sources from 2009 to 2011 identified several different major user groups. 
To improve our understanding of how the use of this national network of river level data may provide insights into the interactions between individuals and their usage of hydrological information, we ran an online survey linked to the SEPA river level pages for one year, collecting over 2000 complete responses. The survey included questions on user activities and the importance of river level information for those activities, alongside questions on what additional information respondents used in their decision making (e.g. precipitation) and when and what river pages they visited. In this presentation we will present the results of our analysis of the web analytics and online survey, and the insights they provide into the user groups of this national network of river level data.
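
    The metrics named above (total visits and total page views) are straightforward aggregations over hit-level records. A toy sketch with hypothetical simplified log rows; real Google Analytics data arrives pre-aggregated through its API rather than as raw hits:

    ```python
    from collections import defaultdict

    # Hypothetical rows: (visitor_id, page, timestamp). Invented for
    # illustration; not the project's actual data.
    log = [
        ("v1", "/river/123", 1), ("v1", "/river/124", 2),
        ("v2", "/river/123", 3), ("v1", "/river/123", 4),
    ]

    total_pageviews = len(log)                                # every hit counts
    total_visitors = len({visitor for visitor, _, _ in log})  # unique visitors
    views_per_page = defaultdict(int)
    for _, page, _ in log:
        views_per_page[page] += 1

    print(total_pageviews, total_visitors, dict(views_per_page))
    # 4 2 {'/river/123': 3, '/river/124': 1}
    ```

    Per-page view counts like `views_per_page` are what let a study such as this one say which sensors' pages attract which traffic.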

  14. EnviroAtlas - New York, NY - One Meter Resolution Urban Land Cover Data (2008) Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas ). The New York, NY EnviroAtlas Meter-scale Urban Land Cover (MULC) Data were generated by the University of Vermont Spatial Analysis Laboratory (SAL) under the direction of Jarlath O'Neil-Dunne as part of the United States Forest Service Urban Tree Canopy (UTC) assessment program. Seven classes were mapped using LiDAR and high resolution orthophotography: Tree Canopy, Grass/Shrub, Bare Soil, Water, Buildings, Roads/Railroads, and Other Paved Surfaces. These data were subsequently merged to fit with the EPA classification. The SAL project covered the five boroughs within the NYC city limits. However the EPA study area encompassed that area plus a 1 kilometer buffer. Additional land cover for the buffer area was generated from United States Department of Agriculture (USDA) National Agricultural Imagery Program (NAIP) four band (red, green, blue, and near infrared) aerial photography at 1 m spatial resolution from July, 2011 and LiDAR from 2010. Six land cover classes were mapped: water, impervious surfaces, soil and barren land, trees, grass-herbaceous non-woody vegetation, and agriculture. An accuracy assessment of 600 completely random and 55 stratified random photo interpreted reference points yielded an overall User's fuzzy accuracy of 87 percent. The area mapped is the US Census Bureau's 2010 Urban Statistical Area for New Yor

  15. Methods Data Qualification Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Sam Alessi; Tami Grimmett; Leng Vang

    The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representation of VHTR components and systems. The methods data management approaches described herein will handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display on the NDMAS web portal. This interim report demonstrates the current development of NDMAS for Methods data and discusses the data and its qualification that are currently part of NDMAS.

  16. Scientific and Technical Development of the Next Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Burg, Richard

    2003-01-01

    The Next Generation Space Telescope (NGST) is part of the Origins program and is the key mission to discover the origins of galaxies in the Universe. It is essential that scientific requirements be translated into technical specifications at the beginning of the program and that there is technical participation by astronomers in the design and modeling of the observatory. During the active time period of this grant, the PI participated in the NGST program at GSFC by participating in the development of the Design Reference Mission, the development of the full end-to-end model of the observatory, the design trade-off based on the modeling, the Science Instrument Module definition and modeling, the study of proto-mission and test-bed development, and by participating in meetings including quarterly reviews and support of the NGST SWG. This work was documented in a series of NGST Monographs that are available on the NGST web site.

  17. Resource Management Scheme Based on Ubiquitous Data Analysis

    PubMed Central

    Lee, Heung Ki; Jung, Jaehee

    2014-01-01

    Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure decreases the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the fewest possible web transaction resources. In experiments, real web trace data were used to demonstrate the improved performance of the proposed scheme. PMID:25197692
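
    The core idea, predicting the next interval's request load from recent log history and sizing the pregenerated process pool to match, can be sketched as follows. The moving-average predictor and all parameter names here are assumptions for illustration, not the authors' actual log-mining model:

    ```python
    from collections import deque

    class AdaptivePoolSizer:
        """Toy sketch: predict the next interval's request count from a
        sliding window of observed counts, then size the worker-process
        pool so capacity matches predicted demand."""

        def __init__(self, window=5, requests_per_process=10, min_procs=2):
            self.history = deque(maxlen=window)
            self.requests_per_process = requests_per_process
            self.min_procs = min_procs

        def observe(self, request_count):
            """Record the request count for one completed interval."""
            self.history.append(request_count)

        def target_processes(self):
            """Processes to keep pregenerated for the next interval."""
            if not self.history:
                return self.min_procs
            predicted = sum(self.history) / len(self.history)
            needed = -(-int(predicted) // self.requests_per_process)  # ceiling division
            return max(self.min_procs, needed)

    sizer = AdaptivePoolSizer()
    for reqs in [40, 55, 62, 48, 50]:
        sizer.observe(reqs)
    print(sizer.target_processes())  # average 51 -> ceil(51/10) = 6 processes
    ```

    Oversizing the pool wastes the memory the abstract warns about; undersizing reintroduces the transaction delay, so the predictor's job is to track demand rather than pin a fixed count.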

  18. Food-Web Complexity in Guaymas Basin Hydrothermal Vents and Cold Seeps.

    PubMed

    Portail, Marie; Olu, Karine; Dubois, Stanislas F; Escobar-Briones, Elva; Gelinas, Yves; Menot, Lénaick; Sarrazin, Jozée

    In the Guaymas Basin, the presence of cold seeps and hydrothermal vents in close proximity, similar sedimentary settings and comparable depths offers a unique opportunity to assess and compare the functioning of these deep-sea chemosynthetic ecosystems. The food webs of five seep and four vent assemblages were studied using stable carbon and nitrogen isotope analyses. Although the two ecosystems shared similar potential basal sources, their food webs differed: seeps relied predominantly on methanotrophy and thiotrophy via the Calvin-Benson-Bassham (CBB) cycle and vents on petroleum-derived organic matter and thiotrophy via the CBB and reductive tricarboxylic acid (rTCA) cycles. In contrast to symbiotic species, the heterotrophic fauna exhibited high trophic flexibility among assemblages, suggesting weak trophic links to the metabolic diversity of chemosynthetic primary producers. At both ecosystems, food webs did not appear to be organised through predator-prey links but rather through weak trophic relationships among co-occurring species. Examples of trophic or spatial niche differentiation highlighted the importance of species-sorting processes within chemosynthetic ecosystems. Variability in food web structure, addressed through Bayesian metrics, revealed consistent trends across ecosystems. Food-web complexity significantly decreased with increasing methane concentrations, a common proxy for the intensity of seep and vent fluid fluxes. Although high fluid-fluxes have the potential to enhance primary productivity, they generate environmental constraints that may limit microbial diversity, colonisation of consumers and the structuring role of competitive interactions, leading to an overall reduction of food-web complexity and an increase in trophic redundancy. Heterogeneity provided by foundation species was identified as an additional structuring factor. 
According to their biological activities, foundation species may have the potential to partly release the competitive pressure within communities of low fluid-flux habitats. Finally, ecosystem functioning in vents and seeps was highly similar despite environmental differences (e.g. physico-chemistry, dominant basal sources), suggesting that ecological niches are not specifically linked to the nature of the fluids. This comparison of seep and vent functioning in the Guaymas Basin thus provides further support for the hypothesis of continuity among deep-sea chemosynthetic ecosystems.

  19. Food-Web Complexity in Guaymas Basin Hydrothermal Vents and Cold Seeps

    PubMed Central

    Olu, Karine; Dubois, Stanislas F.; Escobar-Briones, Elva; Gelinas, Yves; Menot, Lénaick; Sarrazin, Jozée

    2016-01-01

    In the Guaymas Basin, the presence of cold seeps and hydrothermal vents in close proximity, similar sedimentary settings and comparable depths offers a unique opportunity to assess and compare the functioning of these deep-sea chemosynthetic ecosystems. The food webs of five seep and four vent assemblages were studied using stable carbon and nitrogen isotope analyses. Although the two ecosystems shared similar potential basal sources, their food webs differed: seeps relied predominantly on methanotrophy and thiotrophy via the Calvin-Benson-Bassham (CBB) cycle and vents on petroleum-derived organic matter and thiotrophy via the CBB and reductive tricarboxylic acid (rTCA) cycles. In contrast to symbiotic species, the heterotrophic fauna exhibited high trophic flexibility among assemblages, suggesting weak trophic links to the metabolic diversity of chemosynthetic primary producers. At both ecosystems, food webs did not appear to be organised through predator-prey links but rather through weak trophic relationships among co-occurring species. Examples of trophic or spatial niche differentiation highlighted the importance of species-sorting processes within chemosynthetic ecosystems. Variability in food web structure, addressed through Bayesian metrics, revealed consistent trends across ecosystems. Food-web complexity significantly decreased with increasing methane concentrations, a common proxy for the intensity of seep and vent fluid fluxes. Although high fluid-fluxes have the potential to enhance primary productivity, they generate environmental constraints that may limit microbial diversity, colonisation of consumers and the structuring role of competitive interactions, leading to an overall reduction of food-web complexity and an increase in trophic redundancy. Heterogeneity provided by foundation species was identified as an additional structuring factor. 
According to their biological activities, foundation species may have the potential to partly release the competitive pressure within communities of low fluid-flux habitats. Finally, ecosystem functioning in vents and seeps was highly similar despite environmental differences (e.g. physico-chemistry, dominant basal sources), suggesting that ecological niches are not specifically linked to the nature of the fluids. This comparison of seep and vent functioning in the Guaymas Basin thus provides further support for the hypothesis of continuity among deep-sea chemosynthetic ecosystems. PMID:27683216

  20. FlyAtlas 2: a new version of the Drosophila melanogaster expression atlas with RNA-Seq, miRNA-Seq and sex-specific data

    PubMed Central

    Krause, Sue A; Pandit, Aniruddha; Davies, Shireen A

    2018-01-01

    Abstract FlyAtlas 2 (www.flyatlas2.org) is part successor, part complement to the FlyAtlas database and web application for studying the expression of the genes of Drosophila melanogaster in different tissues of adults and larvae. Although generated in the same lab with the same fly line raised on the same diet as FlyAtlas, the FlyAtlas2 resource employs a completely new set of expression data based on RNA-Seq, rather than microarray analysis, and so it allows the user to obtain information for the expression of different transcripts of a gene. Furthermore, the data for somatic tissues are now available for both male and female adult flies, allowing studies of sexual dimorphism. Gene coverage has been extended by the inclusion of microRNAs and many of the RNA genes included in Release 6 of the Drosophila reference genome. The web interface has been modified to accommodate the extra data, but at the same time has been adapted for viewing on small mobile devices. Users also have access to the RNA-Seq reads displayed alongside the annotated Drosophila genome in the (external) UCSC browser, and are able to link out to the previous FlyAtlas resource to compare the data obtained by RNA-Seq with that obtained using microarrays. PMID:29069479

  1. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running, both in parallel and serial, Particle-In-Cell (PIC) codes for plasma simulations with automatic post processing and generation of visual diagnostics.

  2. Semantic Annotation of Video Fragments as Learning Objects: A Case Study with "YouTube" Videos and the Gene Ontology

    ERIC Educational Resources Information Center

    Garcia-Barriocanal, Elena; Sicilia, Miguel-Angel; Sanchez-Alonso, Salvador; Lytras, Miltiadis

    2011-01-01

    Web 2.0 technologies can be considered a loosely defined set of Web application styles that foster a kind of media consumer who is more engaged, and usually active in creating and maintaining Internet content. Thus, Web 2.0 applications have resulted in increased user participation and massive user-generated (or user-published) open multimedia content,…

  3. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D.; Schruben, J.

    1982-01-01

    The thermal stress model was used to generate the design of a low-stress lid and shield configuration, which was fabricated and tested experimentally. In preliminary tests, the New Experimental Web Growth Facility performed as designed, producing web on the first run. These experiments suggested desirable design modifications to the melt level sensing system to further improve its performance, and these are being implemented.

  4. Education and Public Outreach at The Pavilion Lake Research Project: Fusion of Science and Education using Web 2.0

    NASA Astrophysics Data System (ADS)

    Cowie, B. R.; Lim, D. S.; Pendery, R.; Laval, B.; Slater, G. F.; Brady, A. L.; Dearing, W. L.; Downs, M.; Forrest, A.; Lees, D. S.; Lind, R. A.; Marinova, M.; Reid, D.; Seibert, M. A.; Shepard, R.; Williams, D.

    2009-12-01

    The Pavilion Lake Research Project (PLRP) is an international multi-disciplinary science and exploration effort to explain the origin and preservation potential of freshwater microbialites in Pavilion Lake, British Columbia, Canada. Using multiple exploration platforms including one-person DeepWorker submersibles, Autonomous Underwater Vehicles, and SCUBA divers, the PLRP acts as an analogue research site for conducting science in extreme environments, such as the Moon or Mars. In 2009, the PLRP integrated several Web 2.0 technologies to provide a pilot-scale Education and Public Outreach (EPO) program targeting the internet-savvy generation. The seamless integration of multiple technologies including Google Earth, WordPress, YouTube, Twitter and Facebook facilitated the rapid distribution of exciting and accessible science and exploration information over multiple channels. Field updates, science reports, and multimedia including videos, interactive maps, and immersive visualization were rapidly available through multiple social media channels, partly due to the ease of integration of these technologies. Additionally, the successful application of videoconferencing via a readily available technology (Skype) has greatly increased the capacity of our team to conduct real-time education and public outreach from remote locations. The improved communication afforded by Web 2.0 has increased the quality of EPO provided by the PLRP, and has enabled a higher level of interaction between the science team and the community at large. Feedback from these online interactions suggests that remote communication via Web 2.0 technologies was an effective tool for increasing public discourse and awareness of the science and exploration activity at Pavilion Lake.

  5. New Web Services for Broader Access to National Deep Submergence Facility Data Resources Through the Interdisciplinary Earth Data Alliance

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.

    2016-12-01

    The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image include time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API and allow users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface were deployed to provide consolidated access to basic information about each NDSF dive (e.g. vehicle name, dive ID, location, etc.), which is important for linking distributed data resources curated in different data systems.

  6. Optimized and parallelized implementation of the electronegativity equalization method and the atom-bond electronegativity equalization method.

    PubMed

    Vareková, R Svobodová; Koca, J

    2006-02-01

    The most common way to calculate charge distribution in a molecule is ab initio quantum mechanics (QM). Some faster alternatives to QM have also been developed, the so-called "equalization methods" EEM and ABEEM, which are based on DFT. We have implemented and optimized the EEM and ABEEM methods and created the EEM SOLVER and ABEEM SOLVER programs. It has been found that the most time-consuming part of equalization methods is the reduction of the matrix belonging to the equation system generated by the method. Therefore, for both methods this part was replaced by the parallel algorithm WIRS and implemented within the PVM environment. The parallelized versions of the programs EEM SOLVER and ABEEM SOLVER showed promising results, especially on a single computer with several processors (compact PVM). The implemented programs are available through the Web page http://ncbr.chemi.muni.cz/~n19n/eem_abeem.
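    The core of an equalization method can be illustrated as a small linear system. The sketch below uses invented atomic parameters (not values from EEM SOLVER) and a simplified EEM formulation: each atom's electronegativity is equalized to a common value, subject to a total-charge constraint; reducing the resulting matrix is exactly the step the abstract identifies as the parallelization bottleneck.

```python
import numpy as np

def eem_charges(chi, eta, dist, q_total=0.0, kappa=1.0):
    """Solve a simplified EEM system: for each atom i,
    chi_i + 2*eta_i*q_i + kappa*sum_{j!=i} q_j/r_ij = chi_bar,
    plus the constraint sum(q_i) = q_total.
    Returns the n charges followed by chi_bar."""
    n = len(chi)
    a = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        a[i, i] = 2.0 * eta[i]                  # self-interaction (hardness)
        for j in range(n):
            if i != j:
                a[i, j] = kappa / dist[i, j]    # Coulomb coupling term
        a[i, n] = -1.0                          # unknown equalized chi_bar
        b[i] = -chi[i]
    a[n, :n] = 1.0                              # charge-conservation row
    b[n] = q_total
    return np.linalg.solve(a, b)

# Toy 3-atom "molecule" with made-up parameters (illustration only).
chi = np.array([2.5, 3.4, 3.4])
eta = np.array([1.0, 1.5, 1.5])
dist = np.array([[0.0, 1.1, 1.1],
                 [1.1, 0.0, 1.8],
                 [1.1, 1.8, 0.0]])
sol = eem_charges(chi, eta, dist)
charges, chi_bar = sol[:3], sol[3]
```

    In a real implementation the matrix grows with the number of atoms, which is why the abstract's parallel reduction of this system pays off.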

  7. The Sandwich Generation Diner: Development of a Web-Based Health Intervention for Intergenerational Caregivers

    PubMed Central

    George, Nika; MacDougall, Megan

    2016-01-01

    Background Women are disproportionately likely to assist aging family members; approximately 53 million in the United States are involved with the health care of aging parents, in-laws, or other relatives. The busy schedules of “sandwich generation” women who care for older relatives require accessible and flexible health education, including Web-based approaches. Objective This paper describes the development and implementation of a Web-based health education intervention, The Sandwich Generation Diner, as a tool for intergenerational caregivers of older adults with physical and cognitive impairments. Methods We used Bartholomew’s Intervention Mapping (IM) process to develop our theory-based health education program. Bandura’s (1997) self-efficacy theory provided the overarching theoretical model. Results The Sandwich Generation Diner website features four modules that address specific health care concerns. Our research involves randomly assigning caregiver participants to one of two experimental conditions that are identical in the type of information provided, but vary significantly in the presentation. In addition to structured Web-based assessments, specific website usage data are recorded. Conclusions The Sandwich Generation Diner was developed to address some of the informational and self-efficacy needs of intergenerational female caregivers. The next step is to demonstrate that this intervention is: (1) attractive and effective with families assisting older adults, and (2) feasible to embed within routine home health services for older adults. PMID:27269632

  8. UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces

    NASA Technical Reports Server (NTRS)

    Shiffman, Smadar; Degani, Asaf; Heymann, Michael

    2004-01-01

    In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.

  9. 77 FR 37905 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-25

    ... Activities; Submission to OMB for Review and Approval; Comment Request; NESHAP for Paper and Other Web... electronic docket, go to www.regulations.gov . Title: NESHAP for Paper and Other Web Coating (Renewal). ICR... and Other Web Coating is subject to the General Provisions of the NESHAP at 40 CFR part 63, subpart A...

  10. The Social Validation of Institutional Indicators to Promote System-Wide Web Accessibility in Postsecondary Institutions

    ERIC Educational Resources Information Center

    Mariger, Heather Ann

    2011-01-01

    The Internet is an integral part of higher education today. Students, faculty, and staff must have access to the institutional web for essential activities. For persons with disabilities, the web is a double-edged sword. While an accessibly designed website can mitigate or remove barriers, an inaccessible one can make access impossible. If…

  11. A Collaborative Writing Project Using the Worldwide Web.

    ERIC Educational Resources Information Center

    Sylvester, Allen; Essex, Christopher

    A student in a distance education course, as part of a midterm project, set out to build a Web site that had written communication as its main focus. The Web site, "The Global Campfire," was modeled on the old Appalachian tradition of the "Story Tree," where a storyteller begins a story and allows group members to add to it.…

  12. Perspectives on Children's Navigation of the World Wide Web: Does the Type of Search Task Make a Difference?

    ERIC Educational Resources Information Center

    Bilal, Dania

    2002-01-01

    Reports findings of a three-part research project that examined the information seeking behavior and success of 22 seventh-grade science students in using the Web. Discusses problems encountered, including inadequate knowledge of how to use the search engine and poor level of research skills; and considers implications for Web training and system…

  13. Convenience and Community? An Exploratory Investigation into Learners' Experiences of Web Conferencing

    ERIC Educational Resources Information Center

    Cornelius, Sarah

    2013-01-01

    This paper presents the findings of an exploratory study into the experiences of a small group of learners who have made extensive use of web conferencing as part of their studies over the academic year 2009/10. The paper outlines the design of the programme and structure of web conferencing workshops. It draws on findings from a post-programme…

  14. Uncovering the Hidden Web, Part I: Finding What the Search Engines Don't. ERIC Digest.

    ERIC Educational Resources Information Center

    Mardis, Marcia

    Currently, the World Wide Web contains an estimated 7.4 million sites (OCLC, 2001). Yet even the most experienced searcher, using the most robust search engines, can access only about 16% of these pages (Dahn, 2001). The other 84% of the publicly available information on the Web is referred to as the "hidden," "invisible," or…

  15. Services for Graduate Students: A Review of Academic Library Web Sites

    ERIC Educational Resources Information Center

    Rempel, Hannah Gascho

    2010-01-01

    A library's Web site is well recognized as the gateway to the library for the vast majority of users. Choosing the most user-friendly Web architecture to reflect the many services libraries offer is a complex process, and librarians are still experimenting to find what works best for their users. As part of a redesign of the Oregon State…

  16. Open Data, Jupyter Notebooks and Geospatial Data Standards Combined - Opening up large volumes of marine and climate data to other communities

    NASA Astrophysics Data System (ADS)

    Clements, O.; Siemen, S.; Wagemann, J.

    2017-12-01

    The EU-funded EarthServer-2 project aims to offer on-demand access to large volumes of environmental data (Earth Observation, Marine, Climate and Planetary data) via the Web Coverage Service interface standard defined by the Open Geospatial Consortium. Providing access to data via OGC web services (e.g. WCS and WMS) has the potential to open up services to a wider audience, especially to users outside the respective communities. WCS 2.0 in particular, with its processing extension, the Web Coverage Processing Service (WCPS), is highly beneficial for making large volumes accessible to non-expert communities. Users do not have to deal with custom community data formats, such as GRIB for the meteorological community, but can directly access the data in a format they are more familiar with, such as NetCDF, JSON or CSV. Data requests can further be integrated directly into custom processing routines, and users are no longer required to download gigabytes of data. WCS supports trim (reduction of data extent) and slice (reduction of data dimension) operations on multi-dimensional data, providing users very flexible on-demand access to the data. WCPS allows the user to craft queries to run on the data using a text-based query language similar to SQL. These queries can be very powerful, e.g. condensing a three-dimensional data cube into its two-dimensional mean. However, the more complex the query, the more processing-intensive it is. As part of the EarthServer-2 project, we developed a Python library that helps users generate complex WCPS queries with Python, a programming language they are more familiar with. The interactive presentation aims to give practical examples of how users can benefit from two specific WCS services from the Marine and Climate communities. Use cases from the two communities will show different approaches to taking advantage of a Web Coverage (Processing) Service. The entire content is available as Jupyter Notebooks, which prove to be a highly beneficial tool for generating reproducible workflows for environmental data analysis.
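    To make the idea concrete, here is a minimal, hypothetical sketch of the kind of query-generation helper described above; the coverage name, axis labels and output format are assumptions for illustration, not the project library's actual API.

```python
# Hypothetical helper that builds a WCPS query string: slice a 3-D coverage
# at one point, trim the time axis, and condense the series to its mean.
def wcps_time_mean(coverage, lat, lon, t0, t1, fmt="application/json"):
    return (
        f'for c in ({coverage}) '
        f'return encode(avg('
        f'c[Lat({lat}), Long({lon}), ansi("{t0}":"{t1}")]'
        f'), "{fmt}")'
    )

query = wcps_time_mean("sea_surface_temperature", 48.5, -4.2,
                       "2015-01-01", "2015-12-31")
# The resulting string would typically be sent to a WCS endpoint as the
# 'query' parameter of a ProcessCoverages request.
```

    Generating the query in Python rather than by hand keeps the slice/trim bounds and the aggregation in one place, which is the usability gain the abstract describes.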

  17. Personality in cyberspace: personal Web sites as media for personality expressions and impressions.

    PubMed

    Marcus, Bernd; Machilek, Franz; Schütz, Astrid

    2006-06-01

    This research examined the personality of owners of personal Web sites based on self-reports, visitors' ratings, and the content of the Web sites. The authors compared a large sample of Web site owners with population-wide samples on the Big Five dimensions of personality. Controlling for demographic differences, the average Web site owner reported being slightly less extraverted and more open to experience. Compared with various other samples, Web site owners did not generally differ on narcissism, self-monitoring, or self-esteem, but gender differences on these traits were often smaller in Web site owners. Self-other agreement was highest with Openness to Experience, but valid judgments of all Big Five dimensions were derived from Web sites providing rich information. Visitors made use of quantifiable features of the Web site to infer personality, and the cues they utilized partly corresponded to self-reported traits. Copyright 2006 APA, all rights reserved.

  18. A Java tool for dynamic web-based 3D visualization of anatomy and overlapping gene or protein expression patterns.

    PubMed

    Gerth, Victor E; Vize, Peter D

    2005-04-01

    The Gene Expression Viewer is a web-launched three-dimensional visualization tool, tailored to compare surface reconstructions of multi-channel image volumes generated by confocal microscopy or micro-CT.

  19. Evaluating Web-Based Learning Systems

    ERIC Educational Resources Information Center

    Pergola, Teresa M.; Walters, L. Melissa

    2011-01-01

    Accounting educators continuously seek ways to effectively integrate instructional technology into accounting coursework as a means to facilitate active learning environments and address the technology-driven learning preferences of the current generation of students. Most accounting textbook publishers now provide interactive, web-based learning…

  20. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  1. 5 CFR 1640.6 - Methods of providing information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... part to participants by making it available on the TSP Web site. A participant can request paper copies of that information from the TSP by calling the ThriftLine, submitting a request through the TSP Web...

  2. 5 CFR 1640.6 - Methods of providing information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... part to participants by making it available on the TSP Web site. A participant can request paper copies of that information from the TSP by calling the ThriftLine, submitting a request through the TSP Web...

  3. 5 CFR 1640.6 - Methods of providing information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... part to participants by making it available on the TSP Web site. A participant can request paper copies of that information from the TSP by calling the ThriftLine, submitting a request through the TSP Web...

  4. Design, implementation, use, and preliminary evaluation of SEBASTIAN, a standards-based Web service for clinical decision support.

    PubMed

    Kawamoto, Kensaku; Lobach, David F

    2005-01-01

    Despite their demonstrated ability to improve care quality, clinical decision support systems are not widely used. In part, this limited use is due to the difficulty of sharing medical knowledge in a machine-executable format. To address this problem, we developed a decision support Web service known as SEBASTIAN. In SEBASTIAN, individual knowledge modules define the data requirements for assessing a patient, the conclusions that can be drawn using that data, and instructions on how to generate those conclusions. Using standards-based XML messages transmitted over HTTP, client decision support applications provide patient data to SEBASTIAN and receive patient-specific assessments and recommendations. SEBASTIAN has been used to implement four distinct decision support systems; an architectural overview is provided for one of these systems. Preliminary assessments indicate that SEBASTIAN fulfills all original design objectives, including the re-use of executable medical knowledge across diverse applications and care settings, the straightforward authoring of knowledge modules, and use of the framework to implement decision support applications with significant clinical utility.
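    As a rough illustration of this request/response pattern, the sketch below builds an XML request with patient data; the element names are invented for illustration and do not reflect SEBASTIAN's actual message schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical client-side step: serialize patient data as XML before
# POSTing it over HTTP to a decision-support service.
def build_request(patient_id, observations):
    root = ET.Element("dssRequest")
    ET.SubElement(root, "patientId").text = patient_id
    data = ET.SubElement(root, "patientData")
    for name, value in observations.items():
        obs = ET.SubElement(data, "observation", name=name)
        obs.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_payload = build_request("p-001", {"hba1c": 8.2, "ldl": 145})
# In a real deployment this payload would be sent over HTTP and the XML
# response (assessments and recommendations) parsed the same way.
```
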

  5. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for the parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled the creation of unprecedented applications on the web. The low performance of web browsers compared to native applications, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulations and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides the fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
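    pWeb itself compiles to HTML5 web workers; purely as a language-neutral illustration of the fork/join model mentioned above, a minimal Python analogy (not pWeb code) forks a computation over chunks of data and joins the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

def fork_join(data, worker, n_chunks=4):
    """Split data into chunks, run worker on each in parallel (fork),
    then combine the partial results (join)."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        partials = list(pool.map(worker, chunks))   # fork
    return sum(partials)                            # join

# Sum of squares of 0..99, computed over four parallel chunks.
total = fork_join(list(range(100)), lambda chunk: sum(x * x for x in chunk))
```
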

  6. 'Ingredients' of a supportive web of caring relationships at the end of life: findings from a community research project in Austria.

    PubMed

    Wegleitner, Klaus; Schuchter, Patrick; Prieth, Sonja

    2018-04-27

    In accordance with the pluralisation of life plans in late modernity, the societal organisation of care at the end of life is diverse. Although the public discourse in western societies is dominated by questions about optimising specialised palliative care services, public health approaches, which take into account the social determinants and inequalities in end-of-life care, have gained in importance over the last decade. Conceptual aspects, dimensions of impact and benefit for the dying and their communities are well discussed in the public health end-of-life care research literature. Our research focuses on the preconditions of a supportive caring web in order to understand how communities can build on their social capital to deal with existential uncertainty. As part of a large-scale community research project, we carried out focus groups and interviews with community members. Through dispositive analysis, we generated a set of care-web 'ingredients', which constitute and foster a caring community. These 'ingredients' need to be cultivated through an ongoing process of co-creation. This requires: (i) a focus on relationships and social systems; (ii) the creation of reflective spaces; (iii) the strengthening of social capital; and (iv) the addressing of inequalities in care. © 2018 The Authors. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for SHIL.

  7. [Web Babel Syndrome and false expectations during own multimedia oncological search].

    PubMed

    Di Cerbo, A; Pezzuto, F; Laurino, C; Palmieri, B

    2014-01-01

    Nowadays, the Internet has become the new gold standard for most web users when searching for medical updates and/or care. Although provided with some scientific background, most websites, blogs and videos (whose main source is YouTube) lack an institution that can guarantee their content and quality; this generates ambiguity among users, giving rise to a particular pathology called "Web Babel Syndrome".

  8. Cloud Computing Trace Characterization and Synthetic Workload Generation

    DTIC Science & Technology

    2013-03-01

    measurements [44]. Olio is primarily for learning Web 2.0 technologies, evaluating the three implementations (PHP, Java EE, and RubyOnRails (ROR...Add Event 17 Olio is well documented, but assumes prerequisite knowledge with setup and operation of apache web servers and MySQL databases. Olio...Faban supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18]. Perhaps

  9. Towards a Simple and Efficient Web Search Framework

    DTIC Science & Technology

    2014-11-01

    any useful information about the various aspects of a topic. For example, for the query "raspberry pi", it covers topics such as "what is raspberry pi...topics generated by the LDA topic model for query "raspberry pi". One simple explanation is that web texts are too noisy and unfocused for the LDA process...making a raspberry pi". However, the topics generated based on the 10 top-ranked documents do not make much sense to us in terms of their keywords

  10. Web access and dissemination of Andalusian coastal erosion rates: viewers and standard/filtered map services.

    NASA Astrophysics Data System (ADS)

    Álvarez Francoso, Jose; Prieto Campos, Antonio; Ojeda Zujar, Jose; Guisado-Pintado, Emilia; Pérez Alcántara, Juan Pedro

    2017-04-01

    Access to environmental information via web viewers using map services (OGC or proprietary) has become more common as new information sources (orthophotos, LiDAR, GPS) are highly detailed and thus generate great volumes of data that can hardly be disseminated in either analogue (paper maps) or digital (PDF) formats. Moreover, governments and public institutions are concerned about the need to facilitate access to research results and to improve communication about natural hazards to citizens and stakeholders. Ultimately, if adequately disseminated, this information is crucial in decision-making processes and risk management approaches, and could help to increase social awareness of environmental issues (particularly climate change impacts). To address this, two strategies for the wide dissemination and communication of beach erosion results for the 640 km of the Andalusian coast (South Spain) using web viewer technology are presented. Each is oriented to different end users and thus based on a different methodology. Erosion rates have been calculated at 50 m intervals for different periods (1956-1977-2001-2011) as part of a National Research Project on the spatialisation and web access of coastal vulnerability indicators for the Andalusian region. The first proposal generates WMS services (following OGC standards) made available by Geoserver, using a geoviewer client developed with Leaflet. This viewer is designed for the general public (citizens, politicians, etc.) and combines tools that give access to related documents (PDFs) and visualisation tools (Panoramio pictures, geo-localisation with GPS), displayed within a user-friendly interface. Furthermore, the WMS services (implemented on Geoserver) provide a detailed semiology (arrows and proportional symbols, using alongshore coastline buffers to represent data) which not only enhances access to erosion rates but also enables multi-scale data representation. The second proposal, intended for technicians and specialists in the field, includes a geoviewer with an innovative profile (visualisation of time ranges, application of different uncertainty levels to the data, etc.) to fulfil the needs of these users. For its development, a set of JavaScript libraries combined with OpenLayers (or Leaflet) is implemented to guarantee all the functionality of the basic geoviewer. In addition, the viewer has been improved by (i) the generation of services on request through the application of a filter in ECQL (Extended Common Query Language), using Geoserver's vendor parameter CQL_FILTER: these dynamic filters allow the final user to predefine the visualised variable, its spatial and temporal domain, a range of specific values and other attributes, thus multiplying the generation of real-time cartography; and (ii) the use of the layer's WFS service, through which the JavaScript application exploits the alphanumeric data to generate related statistics in real time (e.g. mean rates, length of eroded coast) and interactive graphs (via the Highcharts.js library) which help in the interpretation of beach erosion rates (trend and bar diagrams, among others). As a result, two web-based approaches for communicating scientific results to different audiences, each with a complete dataset of geo-information, services and functionalities, are implemented. The combination of standardised environmental data with tailor-made exploitation techniques (interactive maps and real-time statistics) ensures correct access to, and interpretation of, the information.
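    As a minimal sketch of such a filtered request (the endpoint, layer and attribute names are invented for illustration), a CQL_FILTER can be attached to an ordinary WFS GetFeature URL:

```python
from urllib.parse import urlencode

# Build a WFS GetFeature URL whose CQL_FILTER restricts erosion-rate
# features to one period and to segments eroding faster than a threshold
# (negative rates denote erosion in this hypothetical schema).
def erosion_wfs_url(base, layer, period, max_rate):
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": layer,
        "outputFormat": "application/json",
        "CQL_FILTER": f"period='{period}' AND rate_m_yr <= {max_rate}",
    }
    return base + "?" + urlencode(params)

url = erosion_wfs_url("https://example.org/geoserver/wfs",
                      "coastal:erosion_rates", "2001-2011", -0.5)
```

    Because the filter travels as a request parameter, the server only returns the matching features, which is what enables the real-time statistics and graphs described above.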

  11. The Web and Information Literacy: Scaffolding the use of Web Sources in a Project-Based Curriculum

    ERIC Educational Resources Information Center

    Walton, Marion; Archer, Arlene

    2004-01-01

    In this article we describe and discuss a three-year case study of a course in web literacy, part of the academic literacy curriculum for first-year engineering students at the University of Cape Town (UCT). Because they are seen as practical knowledge, not theoretical, information skills tend to be devalued at university and rendered invisible to…

  12. The Power and Peril of Web 3.0: It's More than Just Semantics

    ERIC Educational Resources Information Center

    Ohler, Jason

    2010-01-01

    The Information Age has been built, in part, on the belief that more information is always better. True to that sentiment, people have found ways to make a lot of information available to the masses--perhaps more than anyone ever imagined. The goal of the Semantic Web, often called Web 3.0, is for users to spend less time looking for information…

  13. Biological impacts of local vs. regional land use on a small tributary of the Seine River (France): insights from a food web approach based on stable isotopes.

    PubMed

    Hette-Tronquart, Nicolas; Oberdorff, Thierry; Tales, Evelyne; Zahm, Amandine; Belliard, Jérôme

    2017-03-23

As part of the landscape, streams are influenced by land use. Here, we contributed to the understanding of the biological impacts of land use on streams, investigating how landscape effects vary with spatial scale (local vs. regional). We adopted a food web approach integrating both biological structure and functioning, to focus on the overall effect of land use on the stream biocœnosis. We selected 17 sites of a small tributary of the Seine River (France) for their contrasting land use, and conducted a natural experiment by sampling three organic matter sources, three macroinvertebrate taxa, and most of the fish community. Using stable isotope analysis, we calculated three food web metrics evaluating two major dimensions of the trophic diversity displayed by the fish community: (i) the diversity of exploited resources and (ii) the trophic level richness. The idea was to examine whether (1) land-use effects varied according to spatial scale, (2) land use affected food webs through an effect on community structure and (3) land use affected food webs through an effect on available resources. Besides an increase in trophic diversity from upstream to downstream, our empirical data showed that food webs were influenced by land use in the riparian corridors (local scale). The effect was complex, and depended on the site's position along the upstream-downstream gradient. By contrast, land use in the catchment (regional scale) did not influence the stream biocœnosis. At the local scale, community structure was weakly influenced by land use, and thus played a minor role in explaining food web modifications. Our results suggested that the amount of available resources at the base of the food web was partly responsible for food web modifications. In addition, changes in biological functioning (i.e. feeding interactions) can also explain another part of the land-use effect. These results highlight the role played by the riparian corridors as a buffer zone, and argue that riparian corridors should be at the centre of water management attention.

  14. WebAlchemist: a Web transcoding system for mobile Web access in handheld devices

    NASA Astrophysics Data System (ADS)

    Whang, Yonghyun; Jung, Changwoo; Kim, Jihong; Chung, Sungkwon

    2001-11-01

In this paper, we describe the design and implementation of WebAlchemist, a prototype web transcoding system which automatically converts a given HTML page into a sequence of equivalent HTML pages that can be properly displayed on a handheld device. The WebAlchemist system is based on a set of HTML transcoding heuristics managed by the Transcoding Manager (TM) module. In order to tackle difficult-to-transcode pages, such as ones with large or complex table structures, we have developed several new transcoding heuristics that extract partial semantics from syntactic information such as table width, font size and cascading style sheets. Subjective evaluation results using popular HTML pages (such as the CNN home page) show that WebAlchemist generates readable, structure-preserving transcoded pages that can be properly displayed on handheld devices.
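WebAlchemist's actual heuristics exploit table structure, font size and style-sheet information; the page-sequencing step alone, packing HTML fragments into a series of pages small enough for a handheld display, can be sketched as a greedy partition. This is a simplified stand-in for illustration, not the paper's algorithm:

```python
def paginate_blocks(blocks, budget):
    """Greedily pack HTML fragments into pages no larger than `budget`
    characters, preserving document order. A single oversized block
    still gets a page of its own rather than being dropped."""
    pages, current, size = [], [], 0
    for block in blocks:
        # Flush the current page if adding this block would exceed the budget.
        if current and size + len(block) > budget:
            pages.append("".join(current))
            current, size = [], 0
        current.append(block)
        size += len(block)
    if current:
        pages.append("".join(current))
    return pages
```

A real transcoder would choose split points at semantically safe boundaries (between table rows, paragraphs, etc.) rather than on raw length alone.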

  15. Data integration in the era of omics: current and future challenges

    PubMed Central

    2014-01-01

Integrating heterogeneous and large omics data constitutes not only a conceptual challenge but also a practical hurdle in the daily analysis of omics data. With the rise of novel omics technologies and through large-scale consortia projects, biological systems are being investigated at an unprecedented scale, generating heterogeneous and often large data sets. These data sets encourage researchers to develop novel data integration methodologies. In this introduction we review the definition of data integration and characterize current efforts in the life sciences. We used a web survey to assess current research projects on data integration and to tap into the views, needs and challenges as currently perceived by parts of the research community. PMID:25032990

  16. Treatment dropout in web-based cognitive behavioral therapy for patients with eating disorders.

    PubMed

    Ter Huurne, Elke D; Postel, Marloes G; de Haan, Hein A; van der Palen, Job; DeJong, Cor A J

    2017-01-01

Treatment dropout is an important concern in eating disorder treatments as it has negative implications for patients' outcomes, clinicians' motivation, and research studies. Our main objective was to conduct an exploratory study on treatment dropout in a two-part web-based cognitive behavioral therapy with asynchronous therapeutic support. The analysis included 205 female patients with eating disorders. Reasons for dropout, treatment experiences, and predictors of dropout were analyzed. Overall treatment dropout was 37.6%, with 18.5% early dropout (before or during treatment part 1) and 19.0% late dropout (after part 1 or during part 2). Almost half of the participants identified personal circumstances as the reason for dropout. The other participants mostly reported reasons related to the online delivery or treatment protocol. Predictors of early dropout included reporting less vigor and smoking at baseline and a longer average duration per completed treatment module of part 1. Late dropout was predicted by reporting less vigor at baseline and uncertainty about recommending the treatment to others after completion of treatment part 1. Generally, the web-based treatment and online therapeutic support were evaluated positively, although dropouts rated the treatment as significantly less helpful and effective than completers did. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. The initial development of the WebMedQual scale: domain assessment of the construct of quality of health web sites.

    PubMed

    Provost, Mélanie; Koompalum, Dayin; Dong, Diane; Martin, Bradley C

    2006-01-01

To develop a comprehensive instrument assessing the quality of health-related web sites. Phase I consisted of a literature review to identify constructs thought to indicate web site quality and to identify items. During content analysis, duplicate items were eliminated and items that were not clear, meaningful, or measurable were reworded or removed. Some items were generated by the authors. In Phase II, a panel of six healthcare and MIS reviewers was convened to assess each item for its relevance and importance to the construct and to assess item clarity and measurement feasibility. Three hundred and eighty-four items were generated from 26 sources. The initial content analysis reduced the scale to 104 items. Four of the six expert reviewers responded; high concordance on the relevance, importance and measurement feasibility of each item was observed: three out of four or all raters agreed on 76-85% of items. Based on the panel ratings, 9 items were removed, 3 added, and 10 revised. The WebMedQual consists of 8 categories, 8 sub-categories, 95 items and 3 supplemental items to assess web site quality. The constructs are: content (19 items), authority of source (18 items), design (19 items), accessibility and availability (6 items), links (4 items), user support (9 items), confidentiality and privacy (17 items), and e-commerce (6 items). The WebMedQual represents a first step toward a comprehensive and standard quality assessment of health web sites. This scale will allow relatively easy assessment of quality with possible numeric scoring.

  18. Designing Crop Simulation Web Service with Service Oriented Architecture Principle

    NASA Astrophysics Data System (ADS)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.

    2015-12-01

Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be inputted to the model. This limits the use of crop models to crop modelers. We aim to make running crop models convenient for various users so that the utilization of crop models will be expanded, which will directly improve agricultural applications. As the first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yields based on planting date, rice variety and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and returns the expected yield. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support the composition and execution of crop simulations. This framework allows a third-party application to call and cascade each service as it needs for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA).
This agriculture web service platform demonstrates interoperability of weather data using SOS interface, convenient connections between weather data sources and weather generator, and connecting various services for running crop models for decision support.

  19. The Implementation of Cosine Similarity to Calculate Text Relevance between Two Documents

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Sembiring, C. A.; Budiman, M. A.

    2018-03-01

A rapidly increasing number of web pages or documents leads to topic-specific filtering in order to find web pages or documents efficiently. This is a preliminary research that uses cosine similarity to implement text relevance in order to find topic-specific documents. The research is divided into three parts. The first part is text preprocessing: the punctuation in a document is removed, the document is converted to lower case, stop word removal is applied, and the root word is extracted using the Porter stemming algorithm. The second part is keyword weighting. The keyword weights are used by the third part, the text relevance calculation, which yields a value between 0 and 1. The closer the value is to 1, the more related the two documents are, and vice versa.
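The three parts can be sketched in Python as follows. The stop-word list is abbreviated and Porter stemming is omitted for brevity; raw term frequencies stand in for the keyword weighting step.

```python
import math
import re

# Abbreviated stand-in for a full stop-word list.
STOP_WORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def preprocess(text):
    """Part 1: strip punctuation, lower-case, remove stop words.
    (Porter stemming would be applied here as a final step.)"""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def term_weights(tokens):
    """Part 2: weight keywords; raw term frequency for simplicity."""
    weights = {}
    for t in tokens:
        weights[t] = weights.get(t, 0) + 1
    return weights

def cosine_similarity(doc_a, doc_b):
    """Part 3: cosine of the angle between the two weight vectors,
    in [0, 1] since all weights are non-negative."""
    wa = term_weights(preprocess(doc_a))
    wb = term_weights(preprocess(doc_b))
    dot = sum(wa[t] * wb.get(t, 0) for t in wa)
    norm_a = math.sqrt(sum(v * v for v in wa.values()))
    norm_b = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Swapping the raw term frequencies for TF-IDF weights changes only `term_weights`; the cosine computation is unchanged.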

  20. Wikis, Blogs, & More, Oh My!

    ERIC Educational Resources Information Center

    Villano, Matt

    2008-01-01

    Everyone seems to have a different definition for "Web 2.0," but most people agree the phrase describes a second generation of web-based communities and hosted services that aim to facilitate creativity, collaboration, and sharing between users. Technically speaking, these new technologies include blogs, wikis, folksonomies…

  1. 78 FR 69710 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    ... methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID NRC-2008... . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public [[Page 69711

  2. Recipes for Semantic Web Dog Food — The ESWC and ISWC Metadata Projects

    NASA Astrophysics Data System (ADS)

    Möller, Knud; Heath, Tom; Handschuh, Siegfried; Domingue, John

Semantic Web conferences such as ESWC and ISWC offer prime opportunities to test and showcase semantic technologies. Conference metadata about people, papers and talks is diverse in nature and neither so small as to be uninteresting nor so big as to be unmanageable. Many metadata-related challenges that may arise in the Semantic Web at large are also present here. Metadata must be generated from sources which are often unstructured and hard to process, and may originate from many different players, so suitable workflows must be established. Moreover, the generated metadata must use appropriate formats and vocabularies, and be served in a way that is consistent with the principles of linked data. This paper reports on the metadata efforts from ESWC and ISWC, identifies specific issues and barriers encountered during the projects, and discusses how these were approached. Recommendations are made as to how these may be addressed in the future, and we discuss how these solutions may generalize to metadata production for the Semantic Web at large.

  3. 75 FR 35765 - Proposed Information Collection; Comment Request; BroadbandMatch Web Site Tool

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... DEPARTMENT OF COMMERCE National Telecommunications and Information Administration Proposed Information Collection; Comment Request; BroadbandMatch Web Site Tool AGENCY: National Telecommunications and Information Administration, Commerce. ACTION: Notice. SUMMARY: The Department of Commerce, as part of its...

  4. 31 CFR 256.11 - How do agencies request payments?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... forms or by using other approved methods as provided for on the Judgment Fund Web site at http://www.fms..., Volume I, Part 6, Chapter 3100. The TFM is also available on the Judgment Fund Web site. The submitting...

  5. One EPA Web: Purpose, Audiences, Top Tasks (Round 2 Sites, April 2012 – January 2013)

    EPA Pesticide Factsheets

    Examples of the top audiences and tasks identified for priority topics can help EICs identify their own audiences and tasks for new web areas, as an important part of the content transformation process.

  6. 31 CFR 542.309 - Licenses; general and specific.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... part or made available on OFAC's Web site: www.treasury.gov/ofac. (c) The term specific license means... or made available on OFAC's Web site: www.treasury.gov/ofac. Note to § 542.309: See § 501.801 of this...

  7. 31 CFR 589.305 - Licenses; general and specific.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... part or made available on OFAC's Web site: www.treasury.gov/ofac. (c) The term specific license means... or made available on OFAC's Web site: www.treasury.gov/ofac. Note to § 589.305: See § 501.801 of this...

  8. Phylowood: interactive web-based animations of biogeographic and phylogeographic histories.

    PubMed

    Landis, Michael J; Bedford, Trevor

    2014-01-01

    Phylowood is a web service that uses JavaScript to generate in-browser animations of biogeographic and phylogeographic histories from annotated phylogenetic input. The animations are interactive, allowing the user to adjust spatial and temporal resolution, and highlight phylogenetic lineages of interest. All documentation and source code for Phylowood is freely available at https://github.com/mlandis/phylowood, and a live web application is available at https://mlandis.github.io/phylowood.

  9. Analysis of plastic deformation in silicon web crystals

    NASA Technical Reports Server (NTRS)

    Spitznagel, J. A.; Seidensticker, R. G.; Lien, S. Y.; Mchugh, J. P.; Hopkins, R. H.

    1987-01-01

Numerical calculation of {111}-plane, <110>-direction slip activity in silicon web crystals generated by thermal stresses is in good agreement with etch pit patterns and X-ray topographic data. The data suggest that stress redistribution effects are small and that a model, similar to that proposed by Penning (1958) and Jordan (1981) but modified to account for dislocation annihilation and egress, can be used to describe plastic flow effects during silicon web growth.

  10. WebView Materialization

    DTIC Science & Technology

    2000-01-01

    horoscope page (for Scorpio). Although this particular combination might be unique or unpopular, if we decompose the page into four WebViews, one for metro...news, one for international news, one for the weather and one for the horoscope , then these WebViews can be accessed frequently enough to merit...query results, the cost of accessing them is about the same as the cost of generating them from scratch, using the virt policy. This will also be true

  11. Usability Evaluation of Public Web Mapping Sites

    NASA Astrophysics Data System (ADS)

    Wang, C.

    2014-04-01

Web mapping sites are interactive maps accessed via web pages. With the rapid development of the Internet and the Geographic Information System (GIS) field, public web mapping sites are no longer foreign to people. Nowadays, people use these sites for various reasons, as an increasing number of maps and related map services are freely available to end users. The growing user base has in turn led to more usability studies. Usability Engineering (UE), for instance, is an approach for analyzing and improving the usability of websites through examining and evaluating an interface. In this research, the UE method was employed to explore usability problems of four public web mapping sites, analyze the problems quantitatively and provide guidelines for future design based on the test results. Firstly, the development of usability studies is described, and several usability evaluation approaches such as Usability Engineering (UE), User-Centered Design (UCD) and Human-Computer Interaction (HCI) are briefly introduced. Then the method and procedure of the usability test are presented in detail. In this experiment, four public web mapping sites (Google Maps, Bing Maps, MapQuest, Yahoo Maps) were chosen as the test websites, and 42 people with different GIS skills (test users or experts), genders, ages and nationalities participated, completing several test tasks in different teams. The test comprised three parts: a pretest background information questionnaire, several test tasks for quantitative statistics and progress analysis, and a posttest questionnaire. The pretest and posttest questionnaires focused on gaining qualitative verbal explanations of the participants' actions, while the test tasks were designed to gather quantitative data on the errors and problems of the websites. The results, mainly from the test tasks, were then analyzed. The success rates of the different public web mapping sites were calculated, compared and displayed in diagrams, and the questionnaire answers were classified and organized. Moreover, based on the analysis, this paper expands the discussion to layout, map visualization, map tools, search logic, etc. Finally, the paper closes with some valuable guidelines and suggestions for the design of public web mapping sites; limitations of this research are stated at the end.

  12. ULSGEN (Uplink Summary Generator)

    NASA Technical Reports Server (NTRS)

    Wang, Y.-F.; Schrock, M.; Reeve, T.; Nguyen, K.; Smith, B.

    2014-01-01

Uplink is an important part of spacecraft operations. Ensuring the accuracy of uplink content is essential to mission success. Before commands are radiated to the spacecraft, the command and sequence must be reviewed and verified by various teams. In most cases, this process requires collecting the command data, reviewing the data during a command conference meeting, and providing physical signatures by designated members of various teams to signify approval of the data. If commands or sequences are disapproved for some reason, the whole process must be restarted. Recording data and decision history is important for traceability reasons. Given that many steps and people are involved in this process, an easily accessible software tool for managing the process is vital to reducing human error, which could result in uplinking incorrect data to the spacecraft. An uplink summary generator called ULSGEN was developed to assist this uplink content approval process. ULSGEN generates a web-based summary of uplink file content and provides an online review process. Spacecraft operations personnel view this summary as a final check before actual radiation of the uplink data.

  13. Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    PubMed Central

    Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643

  14. Sharing human-generated observations by integrating HMI and the Semantic Sensor Web.

    PubMed

    Sigüenza, Alvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current "Internet of Things" concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.

  15. SPECSweb Post-Tracking Classification Method

    DTIC Science & Technology

    2011-07-01

    The Specular-Cued Surveillance Web (SPECSweb) multistatic tracker effectively reduces false track rate through the use of two amplitude thresholds...Cueing – Clutter 1 Introduction A concept referred to as the “Specular-Cued Surveillance Web (SPECSweb)” is being pursued to mitigate the data...on 5-8 July 2011. Sponsored in part by Office of Naval Research and U.S. Army Research Laboratory. 14. ABSTRACT The Specular-Cued Surveillance Web

  16. 78 FR 14689 - Medicare Program; Extension of the Payment Adjustment for Low-volume Hospitals and the Medicare...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-07

    ... use of a Web-based mapping tool, such as MapQuest, as part of documenting that the hospital meets the... only through the Internet on the CMS Web site at http://www.cms.hhs.gov/AcuteInpatientPPS/01_overview...)'' hospitals with claims in the March 2012 update of the FY 2011 MedPAR file, is also available on the CMS Web...

  17. Start Your Search Engines. Part One: Taming Google--and Other Tips to Master Web Searches

    ERIC Educational Resources Information Center

    Adam, Anna; Mowers, Helen

    2008-01-01

    There are a lot of useful tools on the Web, all those social applications, and the like. Still most people go online for one thing--to perform a basic search. For most fact-finding missions, the Web is there. But--as media specialists well know--the sheer wealth of online information can hamper efforts to focus on a few reliable references.…

  18. MAGMA: analysis of two-channel microarrays made easy.

    PubMed

    Rehrauer, Hubert; Zoller, Stefan; Schlapbach, Ralph

    2007-07-01

    The web application MAGMA provides a simple and intuitive interface to identify differentially expressed genes from two-channel microarray data. While the underlying algorithms are not superior to those of similar web applications, MAGMA is particularly user friendly and can be used without prior training. The user interface guides the novice user through the most typical microarray analysis workflow consisting of data upload, annotation, normalization and statistical analysis. It automatically generates R-scripts that document MAGMA's entire data processing steps, thereby allowing the user to regenerate all results in his local R installation. The implementation of MAGMA follows the model-view-controller design pattern that strictly separates the R-based statistical data processing, the web-representation and the application logic. This modular design makes the application flexible and easily extendible by experts in one of the fields: statistical microarray analysis, web design or software development. State-of-the-art Java Server Faces technology was used to generate the web interface and to perform user input processing. MAGMA's object-oriented modular framework makes it easily extendible and applicable to other fields and demonstrates that modern Java technology is also suitable for rather small and concise academic projects. MAGMA is freely available at www.magma-fgcz.uzh.ch.

  19. Web Program for Development of GUIs for Cluster Computers

    NASA Technical Reports Server (NTRS)

    Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward

    2003-01-01

    WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.

  20. Design of Provider-Provisioned Website Protection Scheme against Malware Distribution

    NASA Astrophysics Data System (ADS)

    Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka

    Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.
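The signature-based filtering that the paper says WAFs perform can be illustrated with a minimal sketch. The patterns below are illustrative stand-ins, not real WAF signatures, and a production filter would operate on the full parsed request, not just the URL:

```python
import re

# Hypothetical signatures, each distilled from a previously observed exploit code.
SIGNATURES = [
    re.compile(r"(?i)union\s+select"),   # SQL injection probe
    re.compile(r"(?i)<script>"),         # reflected XSS payload
    re.compile(r"\.\./\.\./"),           # directory traversal
]

def filter_request(path_and_query):
    """Return True if the request matches a known-attack signature
    and should be blocked; False means the request passes through."""
    return any(sig.search(path_and_query) for sig in SIGNATURES)
```

The limitation the paper targets is visible here: a genuinely new exploit matches none of the stored patterns, which is why the proposed scheme instead watches for the malware downloads that follow a successful exploit.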

  1. Search, Read and Write: An Inquiry into Web Accessibility for People with Dyslexia.

    PubMed

    Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika

    2016-01-01

Universal design in the context of digitalisation has become an integrated part of international conventions and national legislation. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how this is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and for search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia.

  2. SEM (Symmetry Equivalent Molecules): a web-based GUI to generate and visualize the macromolecules

    PubMed Central

    Hussain, A. S. Z.; Kumar, Ch. Kiran; Rajesh, C. K.; Sheik, S. S.; Sekar, K.

    2003-01-01

    SEM, Symmetry Equivalent Molecules, is a web-based graphical user interface to generate and visualize the symmetry equivalent molecules (proteins and nucleic acids). In addition, the program allows the users to save the three-dimensional atomic coordinates of the symmetry equivalent molecules in the local machine. The widely recognized graphics program RasMol has been deployed to visualize the reference (input atomic coordinates) and the symmetry equivalent molecules. This program is written using CGI/Perl scripts and has been interfaced with all the three-dimensional structures (solved using X-ray crystallography) available in the Protein Data Bank. The program, SEM, can be accessed over the World Wide Web interface at http://dicsoft2.physics.iisc.ernet.in/sem/ or http://144.16.71.11/sem/. PMID:12824326
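
Generating a symmetry equivalent molecule amounts to applying a space-group operator x' = Rx + t to each atom's fractional coordinates. A minimal sketch, with an operator chosen purely for illustration (SEM itself derives the operators from the space group of the deposited PDB entry):

```python
# A crystallographic symmetry operator acts on fractional coordinates as
# x' = R*x + t. The operator below, (-x, y+1/2, -z), is the screw axis of
# space group P2(1), used here only as an example.
R = [(-1, 0, 0),
     ( 0, 1, 0),
     ( 0, 0, -1)]
t = (0.0, 0.5, 0.0)

def apply_symop(frac_xyz):
    """Apply rotation R and translation t to one fractional coordinate."""
    return tuple(
        sum(R[i][j] * frac_xyz[j] for j in range(3)) + t[i]
        for i in range(3)
    )

print(apply_symop((0.25, 0.25, 0.25)))  # (-0.25, 0.75, -0.25)
```

Applying every operator of the space group to every atom of the reference molecule yields the full set of symmetry equivalent molecules in the unit cell.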

  3. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  4. Using the Web to Encourage Student-generated Questions in Large-Format Introductory Biology Classes

    PubMed Central

    Olson, Joanne K.; Clough, Michael P.

    2007-01-01

    Students rarely ask questions related to course content in large-format introductory classes. The use of a Web-based forum devoted to student-generated questions was explored in a second-semester introductory biology course. Approximately 80% of the enrolled students asked at least one question about course content during each of three semesters during which this approach was implemented. About 95% of the students who posted questions reported reading the instructor's response to their questions. Although doing so did not contribute to their grade in the course, approximately 75% of the students reported reading questions posted by other students in the class. Approximately 60% of the students reported that the Web-based question-asking activity contributed to their learning of biology. PMID:17339393

  5. Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education.

    PubMed

    Boulos, Maged N Kamel; Maramba, Inocencio; Wheeler, Steve

    2006-08-15

    We have witnessed a rapid increase in the use of Web-based 'collaborationware' in recent years. These Web 2.0 applications, particularly wikis, blogs and podcasts, have been increasingly adopted by many online health-related professional and educational services. Because of their ease of use and rapidity of deployment, they offer the opportunity for powerful information sharing and ease of collaboration. Wikis are Web sites that can be edited by anyone who has access to them. The word 'blog' is a contraction of 'Web Log' - an online Web journal that can offer a resource rich multimedia environment. Podcasts are repositories of audio and video materials that can be "pushed" to subscribers, even without user intervention. These audio and video files can be downloaded to portable media players that can be taken anywhere, providing the potential for "anytime, anywhere" learning experiences (mobile learning). Wikis, blogs and podcasts are all relatively easy to use, which partly accounts for their proliferation. The fact that there are many free and Open Source versions of these tools may also be responsible for their explosive growth. Thus it would be relatively easy to implement any or all within a Health Professions' Educational Environment. Paradoxically, some of their disadvantages also relate to their openness and ease of use. With virtually anybody able to alter, edit or otherwise contribute to the collaborative Web pages, it can be problematic to gauge the reliability and accuracy of such resources. While arguably, the very process of collaboration leads to a Darwinian type 'survival of the fittest' content within a Web page, the veracity of these resources can be assured through careful monitoring, moderation, and operation of the collaborationware in a closed and secure digital environment. Empirical research is still needed to build our pedagogic evidence base about the different aspects of these tools in the context of medical/health education. 
If effectively deployed, wikis, blogs and podcasts could offer a way to enhance students', clinicians' and patients' learning experiences, and deepen levels of learners' engagement and collaboration within digital learning environments. Therefore, research should be conducted to determine the best ways to integrate these tools into existing e-Learning programmes for students, health professionals and patients, taking into account the different, but also overlapping, needs of these three audience classes and the opportunities of virtual collaboration between them. Of particular importance is research into novel integrative applications, to serve as the "glue" to bind the different forms of Web-based collaborationware synergistically in order to provide a coherent wholesome learning experience.

  6. A Framework for Integrating Oceanographic Data Repositories

    NASA Astrophysics Data System (ADS)

    Rozell, E.; Maffei, A. R.; Beaulieu, S. E.; Fox, P. A.

    2010-12-01

    Oceanographic research covers a broad range of science domains and requires a tremendous amount of cross-disciplinary collaboration. Advances in cyberinfrastructure are making it easier to share data across disciplines through the use of web services and community vocabularies. Best practices in the design of web services and vocabularies to support interoperability amongst science data repositories are only starting to emerge. Strategic design decisions in these areas are crucial to the creation of end-user data and application integration tools. We present S2S, a novel framework for deploying customizable user interfaces to support the search and analysis of data from multiple repositories. Our research methods follow the Semantic Web methodology and technology development process developed by Fox et al. This methodology stresses the importance of close scientist-technologist interactions when developing scientific use cases, keeping the project well scoped and ensuring the result meets a real scientific need. The S2S framework motivates the development of standardized web services with well-described parameters, as well as the integration of existing web services and applications in the search and analysis of data. S2S also encourages the use and development of community vocabularies and ontologies to support federated search and reduce the amount of domain expertise required in the data discovery process. S2S utilizes the Web Ontology Language (OWL) to describe the components of the framework, including web service parameters, and OpenSearch as a standard description for web services, particularly search services for oceanographic data repositories. We have created search services for an oceanographic metadata database, a large set of quality-controlled ocean profile measurements, and a biogeographic search service. 
S2S provides an application programming interface (API) that can be used to generate custom user interfaces, supporting data and application integration across these repositories and other web resources. Although initially targeted towards a general oceanographic audience, the S2S framework shows promise in many science domains, inspired in part by the broad disciplinary coverage of oceanography. This presentation will cover the challenges addressed by the S2S framework, the research methods used in its development, and the resulting architecture for the system. It will demonstrate how S2S is remarkably extensible, and can be generalized to many science domains. Given these characteristics, the framework can simplify the process of data discovery and analysis for the end user, and can help to shift the responsibility of search interface development away from data managers.
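
An OpenSearch description document of the kind S2S consumes might look like the following sketch; the service URL and parameter template are hypothetical, not an actual S2S endpoint:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
    xmlns:geo="http://a9.com/-/opensearch/extensions/geo/1.0/">
  <ShortName>Ocean Profiles</ShortName>
  <Description>Hypothetical search service for quality-controlled
    ocean profile measurements.</Description>
  <Url type="application/atom+xml"
       template="http://example.org/profiles/search?q={searchTerms}&amp;bbox={geo:box?}"/>
</OpenSearchDescription>
```

Because each parameter in the template is named and typed, a framework like S2S can describe it in OWL and generate a search widget for it without hand-written interface code.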

  7. 78 FR 68100 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... following methods: Federal Rulemaking Web site: Go to http://www.regulations.gov and search for Docket ID.../adams.html . To begin the search, select ``ADAMS Public Documents'' and then select ``Begin Web- based ADAMS Search.'' For problems with ADAMS, please contact the NRC's Public Document Room (PDR) reference...

  8. NetVenn - An integrated network analysis web platform for gene lists

    USDA-ARS?s Scientific Manuscript database

    Many lists containing biological identifiers such as gene lists have been generated in various genomics projects. Identifying the overlap among gene lists can enable us to understand the similarities and differences between the datasets. Here, we present an interactome network-based web application...

  9. Audiovisual Speech Web-Lab: an Internet teaching and research laboratory.

    PubMed

    Gordon, M S; Rosenblum, L D

    2001-05-01

    Internet resources now enable laboratories to make full-length experiments available on line. A handful of existing web sites offer users the ability to participate in experiments and generate usable data. We have integrated this technology into a web site that also provides full discussion of the theoretical and methodological aspects of the experiments using text and simple interactive demonstrations. The content of the web site (http://www.psych.ucr.edu/avspeech/lab) concerns audiovisual speech perception and its relation to face perception. The site is designed to be useful for users of multiple interests and levels of expertise.

  10. Sustainable Materials Management (SMM) Web Academy Webinar: Recycling Right: Tactics and Tools for Effective Residential Outreach (Part 2)

    EPA Pesticide Factsheets

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  11. Sustainable Materials Management (SMM) Web Academy Webinar: Recycling Right: Tactics and Tools for Effective Residential Outreach (Part 1)

    EPA Pesticide Factsheets

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  12. Comparing cosmic web classifiers using information theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.
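
One simple information-theoretic comparison of two classifiers is the mutual information between their web-type labellings of the same volume elements; the labellings below are toy data, and the paper's actual utility functions are tailored to the three application classes:

```python
from collections import Counter
from math import log2

def mutual_information(labels_a, labels_b):
    """Mutual information (in bits) between two labellings of the same cells."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    return sum(
        (c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in joint.items()
    )

# Toy labellings of six volume elements by two classifiers.
a = ["void", "void", "sheet", "filament", "cluster", "sheet"]
b = ["void", "void", "sheet", "filament", "cluster", "filament"]
print(round(mutual_information(a, b), 3))  # 1.585
```

A higher value means the two classifiers carry more shared structural information; the decision scheme then weighs such quantities against the needs of the target application.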

  13. A simplistic model for identifying prominent web users in directed multiplex social networks: a case study using Twitter networks

    NASA Astrophysics Data System (ADS)

    Loucif, Hemza; Boubetra, Abdelhak; Akrouf, Samir

    2016-10-01

    This paper aims to describe a new simplistic model dedicated to gauge the online influence of Twitter users based on a mixture of structural and interactional features. The model is an additive mathematical formulation which involves two main parts. The first part serves to measure the influence of the Twitter user on just his neighbourhood covering his followers. However, the second part evaluates the potential influence of the Twitter user beyond the circle of his followers. Particularly, it measures the likelihood that the tweets of the Twitter user will spread further within the social graph through the retweeting process. The model is tested on a data set involving four kinds of real-world egocentric networks. The empirical results reveal that an active ordinary user is more prominent than a non-active celebrity one. A simple comparison is conducted between the proposed model and two existing simplistic approaches. The results show that our model generates the most realistic influence scores due to its dealing with both explicit (structural and interactional) and implicit features.
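
The paper's exact formulation is not reproduced here, but a toy additive score in the same spirit, with one term for the follower neighbourhood and one for spread beyond it via retweets, could look like this (weights and inputs are invented for illustration):

```python
def influence_score(followers, engagement_rate, retweet_rate,
                    w_local=0.6, w_spread=0.4):
    """Toy additive influence score: a local term for the follower
    neighbourhood plus a spread term for diffusion beyond it through
    retweeting. Weights and functional form are illustrative, not the
    paper's model."""
    local = followers * engagement_rate
    spread = followers * retweet_rate
    return w_local * local + w_spread * spread

# An active ordinary user can outscore an inactive celebrity:
active_user = influence_score(2_000, 0.30, 0.20)
celebrity = influence_score(500_000, 0.001, 0.0001)
print(active_user > celebrity)  # True
```

The comparison mirrors the paper's empirical finding that interactional features can outweigh raw structural reach.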

  14. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

    The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. Also, it provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensor community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image overnight, without operator intervention, from an entire 90-GB tape of SAR raw data recorded at 32 MB/s. Also, the software allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a Queue management system. Users are able to have automatic generation of coregistered multi-frequency images as the software generates polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.

  15. repRNA: a web server for generating various feature vectors of RNA sequences.

    PubMed

    Liu, Bin; Liu, Fule; Fang, Longyun; Wang, Xiaolong; Chou, Kuo-Chen

    2016-02-01

    With the rapid growth of RNA sequences generated in the postgenomic age, it is highly desired to develop a flexible method that can generate various kinds of vectors to represent these sequences by focusing on their different features. This is because nearly all the existing machine-learning methods, such as SVM (support vector machine) and KNN (k-nearest neighbor), can only handle vectors but not sequences. To meet the increasing demands and speed up the genome analyses, we have developed a new web server, called "representations of RNA sequences" (repRNA). Compared with the existing methods, repRNA is much more comprehensive, flexible and powerful, as reflected by the following facts: (1) it can generate 11 different modes of feature vectors for users to choose according to their investigation purposes; (2) it allows users to select features from 22 built-in physicochemical properties or even properties defined by the users themselves; (3) the resultant feature vectors and the secondary structures of the corresponding RNA sequences can be visualized. The repRNA web server is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repRNA/.
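
As a concrete example of one feature-vector mode, the sketch below computes a normalised dinucleotide (k = 2) frequency vector; this is a generic k-mer representation, not code from repRNA:

```python
from itertools import product

def kmer_feature_vector(seq, k=2):
    """Normalised k-mer frequency vector over the RNA alphabet ACGU."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]

vec = kmer_feature_vector("ACGUACGU")
print(len(vec))  # 16 dinucleotide features, ready for SVM or KNN input
```

Fixed-length vectors like this are exactly what sequence-only classifiers require, which is the motivation the abstract gives for the server.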

  16. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    NASA Astrophysics Data System (ADS)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings, is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.

  17. Integrating hydrologic modeling web services with online data sharing to prepare, store, and execute models in hydrology

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Dash, P. K.; Gichamo, T.; Horsburgh, J. S.

    2017-12-01

    Web based apps, web services and online data and model sharing technology are becoming increasingly available to support research. This promises benefits in terms of collaboration, platform independence, transparency and reproducibility of modeling workflows and results. However, challenges still exist in real application of these capabilities and the programming skills researchers need to use them. In this research we combined hydrologic modeling web services with an online data and model sharing system to develop functionality to support reproducible hydrologic modeling work. We used HydroDS, a system that provides web services for input data preparation and execution of a snowmelt model, and HydroShare, a hydrologic information system that supports the sharing of hydrologic data, model and analysis tools. To make the web services easy to use, we developed a HydroShare app (based on the Tethys platform) to serve as a browser based user interface for HydroDS. In this integration, HydroDS receives web requests from the HydroShare app to process the data and execute the model. HydroShare supports storage and sharing of the results generated by HydroDS web services. The snowmelt modeling example served as a use case to test and evaluate this approach. We show that, after the integration, users can prepare model inputs or execute the model through the web user interface of the HydroShare app without writing program code. The model input/output files and metadata describing the model instance are stored and shared in HydroShare. These files include a Python script that is automatically generated by the HydroShare app to document and reproduce the model input preparation workflow. Once stored in HydroShare, inputs and results can be shared with other users, or published so that other users can directly discover, repeat or modify the modeling work. 
This approach provides a collaborative environment that integrates hydrologic web services with a data and model sharing system to enable model development and execution. The entire system comprised of the HydroShare app, HydroShare and HydroDS web services is open source and contributes to capability for web based modeling research.
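
The pattern of a browser app issuing parameterised requests to a processing service can be sketched as below; the endpoint and parameter names are hypothetical, not the real HydroDS API:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names; the real HydroDS API may
# differ. This only shows the request-building step that the HydroShare
# app hides behind its web user interface.
BASE_URL = "https://example.org/hydrods/api/subsetraster"

def build_request(xmin, ymin, xmax, ymax, input_raster, output_raster):
    """Assemble a GET request URL for a raster-subsetting service."""
    params = {
        "xmin": xmin, "ymin": ymin, "xmax": xmax, "ymax": ymax,
        "input_raster": input_raster,
        "output_raster": output_raster,
    }
    return BASE_URL + "?" + urlencode(params)

url = build_request(-111.8, 41.7, -111.5, 42.0, "dem.tif", "watershed_dem.tif")
print(url)
```

A script saved alongside the HydroShare resource could replay exactly such calls, which is how an automatically generated Python script can document and reproduce the input-preparation workflow.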

  18. WALK 2.0 - using Web 2.0 applications to promote health-related physical activity: a randomised controlled trial protocol.

    PubMed

    Kolt, Gregory S; Rosenkranz, Richard R; Savage, Trevor N; Maeder, Anthony J; Vandelanotte, Corneel; Duncan, Mitch J; Caperchione, Cristina M; Tague, Rhys; Hooker, Cindy; Mummery, W Kerry

    2013-05-03

    Physical inactivity is one of the leading modifiable causes of death and disease in Australia. National surveys indicate less than half of the Australian adult population are sufficiently active to obtain health benefits. The Internet is a potentially important medium for successfully communicating health messages to the general population and enabling individual behaviour change. Internet-based interventions have proven efficacy; however, intervention studies describing website usage objectively have reported a strong decline in usage, and high attrition rate, over the course of the interventions. Web 2.0 applications give users control over web content generated and present innovative possibilities to improve user engagement. There is, however, a need to assess the effectiveness of these applications in the general population. The Walk 2.0 project is a 3-arm randomised controlled trial investigating the effects of "next generation" web-based applications on engagement, retention, and subsequent physical activity behaviour change. 504 individuals will be recruited from two sites in Australia, randomly allocated to one of two web-based interventions (Web 1.0 or Web 2.0) or a control group, and provided with a pedometer to monitor physical activity. The Web 1.0 intervention will provide participants with access to an existing physical activity website with limited interactivity. The Web 2.0 intervention will provide access to a website featuring Web 2.0 content, including social networking, blogs, and virtual walking groups. Control participants will receive a logbook to record their steps. All groups will receive similar educational material on setting goals and increasing physical activity. The primary outcomes are objectively measured physical activity and website engagement and retention. Other outcomes measured include quality of life, psychosocial correlates, and anthropometric measurements. Outcomes will be measured at baseline, 3, 12 and 18 months. 
The findings of this study will provide increased understanding of the benefit of new web-based technologies and applications in engaging and retaining participants on web-based intervention sites, with the aim of improved health behaviour change outcomes. Australian New Zealand Clinical Trials Registry, ACTRN12611000157976.

  19. Web-TCGA: an online platform for integrated analysis of molecular cancer data sets.

    PubMed

    Deng, Mario; Brägelmann, Johannes; Schultze, Joachim L; Perner, Sven

    2016-02-06

    The Cancer Genome Atlas (TCGA) is a pool of molecular data sets publicly accessible and freely available to cancer researchers anywhere around the world. However, widespread use is limited, since advanced knowledge of statistics and statistical software is required. To improve accessibility, we created Web-TCGA, a web-based, freely accessible online tool, which can also be run in a private instance, for integrated analysis of molecular cancer data sets provided by TCGA. In contrast to already available tools, Web-TCGA utilizes different methods for analysis and visualization of TCGA data, allowing users to generate global molecular profiles across different cancer entities simultaneously. In addition to global molecular profiles, Web-TCGA offers highly detailed gene and tumor entity centric analysis by providing interactive tables and views. As a supplement to other already available tools, such as cBioPortal (Sci Signal 6:pl1, 2013, Cancer Discov 2:401-4, 2012), Web-TCGA offers an analysis service, which does not require any installation or configuration, for molecular data sets available at the TCGA. Individual processing requests (queries) are generated by the user for mutation, methylation, expression and copy number variation (CNV) analyses. The user can focus analyses on results from single genes and cancer entities or perform a global analysis (multiple cancer entities and genes simultaneously).

  20. Semantic e-Learning: Next Generation of e-Learning?

    NASA Astrophysics Data System (ADS)

    Konstantinos, Markellos; Penelope, Markellou; Giannis, Koutsonikos; Aglaia, Liopa-Tsakalidi

    Semantic e-learning aspires to be the next generation of e-learning, since the understanding of learning materials and knowledge semantics allows their advanced representation, manipulation, sharing, exchange and reuse, and ultimately promotes efficient online experiences for users. In this context, the paper first explores some fundamental Semantic Web technologies and then discusses current and potential applications of these technologies in the e-learning domain, namely, Semantic portals, Semantic search, personalization, recommendation systems, social software and Web 2.0 tools. Finally, it highlights future research directions and open issues of the field.

  1. Web 2.0, Library 2.0, and Librarian 2.0:Preparing for the 2.0 World

    NASA Astrophysics Data System (ADS)

    Abram, S.

    2007-10-01

    There is a global conversation going on right now about the next generation of the web. It's happening under the name of Web 2.0. It's the McLuhanesque hot web where true human interaction takes precedence over merely `cool' information delivery and e-mail. It's about putting information into the real context of our users' lives, research, work and play. Concurrently, a group of information professionals are having a conversation about the vision for what Library 2.0 will look like in this Web 2.0 ecosystem. Some are even going so far as to talk about Web 3.0! Web 2.0 is coming fast and it's BIG! What are the skills and competencies that Librarian 2.0 will need? Come and hear an overview of Web 2.0 and a draft vision for Library 2.0 and an opinion about what adaptations we'll need to make to thrive in this future scenario. Let's talk about the Librarian 2.0 in our users' future!

  2. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
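
The dependency-graph idea can be illustrated with a small forward-chaining sketch: treat each service as mapping input parameters to output parameters, and invoke any service whose inputs are already satisfied until the requested parameter is produced. Service names and the search strategy are simplified illustrations, not the paper's algorithm:

```python
# Each service maps a set of input parameters to a set of outputs.
# The names are invented for illustration.
SERVICES = {
    "getPatientId":  ({"citizen_id"}, {"patient_id"}),
    "getLabResults": ({"patient_id"}, {"lab_results"}),
    "getAllergies":  ({"patient_id"}, {"allergies"}),
}

def compose(provided, goal):
    """Forward-chain over the service dependency graph: invoke any
    service whose inputs are already available until the goal parameter
    is produced. Returns the invocation order, or None if unreachable."""
    known, plan = set(provided), []
    progress = True
    while progress:
        progress = False
        for name, (inputs, outputs) in SERVICES.items():
            if name not in plan and inputs <= known:
                plan.append(name)
                known |= outputs
                progress = True
                if goal in known:
                    return plan
    return None

print(compose({"citizen_id"}, "lab_results"))  # ['getPatientId', 'getLabResults']
```

Stopping as soon as the goal is reachable keeps services that contribute nothing to the query (here, `getAllergies`) out of the plan, in the spirit of the nonredundant compositions the abstract describes.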

  3. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  4. Evaluation of a cartoon-based knowledge dissemination intervention on scientific and ethical challenges raised by nutrigenomics/nutrigenetics research.

    PubMed

    Lafrenière, Darquise; Hurlimann, Thierry; Menuz, Vincent; Godard, Béatrice

    2014-10-01

    The push for knowledge translation on the part of health research funding agencies is significant in Canada, and many strategies have been adopted to promote the conversion of knowledge into action. In recent years, an increasing number of health researchers have been studying arts-based interventions to transform knowledge into action. This article reports on the results of an online questionnaire aimed at evaluating the effectiveness of a knowledge dissemination intervention (KDI) conveying findings from a study on the scientific and ethical challenges raised by nutrigenomics-nutrigenetics (NGx) research. The KDI was based on the use of four Web pages combining original, interactive cartoon-like illustrations accompanied by text to disseminate findings to Canadian Research Ethics Boards members, as well as to NGx researchers and researchers in ethics worldwide. Between May and October 2012, the links to the Web pages were sent in a personal email to target audience members, one thematic Web page at a time. On each thematic Web page, members of the target audience were invited to answer nine evaluation questions assessing the effectiveness of the KDI on four criteria, (i) acquisition of knowledge; (ii) change in initial understanding; (iii) generation of questions from the findings; and (iv) intent to change own practice. Response rate was low; results indicate that: (i) content of the four Web pages did not bring new knowledge to a majority of the respondents, (ii) initial understanding of the findings did not change for a majority of NGx researchers and a minority of ethics respondents, (iii) although the KDI did raise questions for respondents, it did not move them to change their practice. While target end-users may not feel that they actually learned from the KDI, it seems that the findings conveyed encouraged reflection and raised useful and valuable questions for them. 
Moreover, the evaluation of the KDI proved to be useful to gain knowledge about our target audiences' views since respondents' comments allowed us to improve our understanding of the disseminated knowledge as well as to modify (and hopefully improve) the content of the Web pages used for dissemination. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Development of the Land-use and Agricultural Management Practice web-Service (LAMPS) for generating crop rotations in space and time

    USDA-ARS?s Scientific Manuscript database

    Agroecosystem models and conservation planning tools require spatially and temporally explicit input data about agricultural management operations. The Land-use and Agricultural Management Practices web-Service (LAMPS) provides crop rotation and management information for user-specified areas within...

  6. Forest Inventory Mapmaker Users Guide

    Treesearch

    Patrick D. Miles

    2001-01-01

    The Forest Inventory Mapmaker Web application (http://www.ncrs.fs.fed.us/4801/fiadb/) provides users with the ability to easily generate tables and shaded maps. The goal of this manual is to present the basic concepts of the Web application to the user and to reinforce these concepts through the use of tutorials.

  7. Implementation and Deployment of the IMS Learning Design Specification

    ERIC Educational Resources Information Center

    Paquette, Gilbert; Marino, Olga; De la Teja, Ileana; Lundgren-Cayrol, Karin; Léonard, Michel; Contamines, Julien

    2005-01-01

    Knowledge management in organizations, the learning objects paradigm, the advent of a new web generation, and the "Semantic Web" are major current trends that reveal a potential for a renewed distance learning pedagogy. First and foremost is the use of educational modelling languages and instructional engineering methods to help decide…

  8. Web-Based Learning Design Tool

    ERIC Educational Resources Information Center

    Bruno, F. B.; Silva, T. L. K.; Silva, R. P.; Teixeira, F. G.

    2012-01-01

    Purpose: The purpose of this paper is to propose a web-based tool that enables the development and provision of learning designs and their reuse and re-contextualization as generative learning objects, aimed at developing educational materials. Design/methodology/approach: The use of learning objects can facilitate the process of production and…

  9. Next-Gen Search Engines

    ERIC Educational Resources Information Center

    Gupta, Amardeep

    2005-01-01

    Current search engines--even the constantly surprising Google--seem unable to leap the next big barrier in search: the trillions of bytes of dynamically generated data created by individual web sites around the world, or what some researchers call the "deep web." The challenge now is not information overload, but information overlook.…

  10. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    PubMed

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by limiting the number of candidate services per task, reduces the time to generate composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of the services.
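    The ranking step can be illustrated with a simple weighted QoS score. The attribute names, weights, and scoring scheme below are illustrative assumptions, not a reproduction of the UPWSR algorithm itself:

```python
# Illustrative sketch: rank candidate services by a weighted QoS score.
# Attribute names, weights, and normalization are assumptions for
# illustration; they do not reproduce the UPWSR algorithm.

def qos_score(service, weights):
    """Weighted sum of QoS attributes in [0, 1] (higher is better)."""
    # Response time is a "cost" attribute: lower is better, so invert it.
    return (weights["availability"] * service["availability"]
            + weights["reliability"] * service["reliability"]
            + weights["response_time"] * (1.0 - service["response_time"]))

def rank_services(candidates, weights):
    """Return candidates sorted best-first by QoS score."""
    return sorted(candidates, key=lambda s: qos_score(s, weights), reverse=True)

candidates = [
    {"name": "svcA", "availability": 0.99, "reliability": 0.95, "response_time": 0.30},
    {"name": "svcB", "availability": 0.90, "reliability": 0.99, "response_time": 0.10},
    {"name": "svcC", "availability": 0.80, "reliability": 0.70, "response_time": 0.50},
]
# User preferences expressed as weights that sum to 1.
weights = {"availability": 0.4, "reliability": 0.4, "response_time": 0.2}
ranking = [s["name"] for s in rank_services(candidates, weights)]
```

    Restricting `candidates` to the top-scoring few per task is what shrinks the space of composition plans that must be generated.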

  11. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm

    PubMed Central

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth in the number of web services makes quality of service (QoS) an essential parameter for discriminating among them. In this paper, a user preference based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by limiting the number of candidate services per task, reduces the time to generate composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework also allows the user to provide feedback about the composite service, which improves the reputation of the services. PMID:26504894

  12. NOAA Operational Tsunameter Support for Research

    NASA Astrophysics Data System (ADS)

    Bouchard, R.; Stroker, K.

    2008-12-01

    In March 2008, the National Oceanic and Atmospheric Administration's (NOAA) National Data Buoy Center (NDBC) completed the deployment of the last of the 39-station network of deep-sea tsunameters. As part of NOAA's effort to strengthen tsunami warning capabilities, NDBC expanded the network from 6 to 39 stations and upgraded all stations to the second generation Deep-ocean Assessment and Reporting of Tsunamis technology (DART II). Consisting of a bottom pressure recorder (BPR) and a surface buoy, the tsunameters deliver water-column heights, estimated from pressure measurements at the sea floor, to Tsunami Warning Centers in less than 3 minutes. This network provides coastal communities in the Pacific, Atlantic, Caribbean, and the Gulf of Mexico with faster and more accurate tsunami warnings. In addition, both the coarse resolution real-time data and the high resolution (15-second) recorded data provide invaluable contributions to research, such as the detection of the 2004 Sumatran tsunami in the Northeast Pacific (Gower and González, 2006) and the experimental tsunami forecast system (Bernard et al., 2007). NDBC normally recovers the BPRs every 24 months and sends the recovered high resolution data to NOAA's National Geophysical Data Center (NGDC) for archive and distribution. NGDC edits and processes this raw binary format to obtain research-quality data. NGDC provides access to retrospective BPR data from 1986 to the present. The DART database includes pressure and temperature data from the ocean floor, stored in a relational database, enabling data integration with the global tsunami and significant earthquake databases. All data are accessible via the Web as tables, reports, interactive maps, OGC Web Map Services (WMS), and Web Feature Services (WFS) to researchers around the world. References: Gower, J. and F. González, 2006. U.S. Warning System Detected the Sumatra Tsunami, Eos Trans. AGU, 87(10). Bernard, E. N., C. Meinig, and A. Hilton, 2007. 
Deep Ocean Tsunami Detection: Third Generation DART, Eos Trans. AGU, 88(52), Fall Meet. Suppl., Abstract S51C-03.

  13. Development of a laboratory niche Web site.

    PubMed

    Dimenstein, Izak B; Dimenstein, Simon I

    2013-10-01

    This technical note presents the development of a methodological laboratory niche Web site. The "Grossing Technology in Surgical Pathology" (www.grossing-technology.com) Web site is used as an example. Although common steps in creation of most Web sites are followed, there are particular requirements for structuring the template's menu on methodological laboratory Web sites. The "nested doll principle," in which one object is placed inside another, most adequately describes the methodological approach to laboratory Web site design. Fragmentation in presenting the Web site's material highlights the discrete parts of the laboratory procedure. An optimally minimal triad of components can be recommended for the creation of a laboratory niche Web site: a main set of media, a blog, and an ancillary component (host, contact, and links). The inclusion of a blog makes the Web site a dynamic forum for professional communication. By forming links and portals, cloud computing opens opportunities for connecting a niche Web site with other Web sites and professional organizations. As an additional source of information exchange, methodological laboratory niche Web sites are destined to parallel both traditional and new forms, such as books, journals, seminars, webinars, and internal educational materials. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Link Correlated Military Data for Better Decision Support

    DTIC Science & Technology

    2011-06-01

    automatically translated into URI based links, thus can greatly reduce manpower cost on software development. 3 Linked Data Technique Tim Berners-Lee ...Linked Data - while Linked Data is usually considered as part of Semantic Web, or “the Semantic Web done right” as described by Berners-Lee himself - has been...Required data of automatic link construction mechanism on more kinds of correlations. References [1] T. Berners-Lee, “The next Web of open, linked data

  15. ChemCalc: a building block for tomorrow's chemical infrastructure.

    PubMed

    Patiny, Luc; Borel, Alain

    2013-05-24

    Web services, as an aspect of cloud computing, are becoming an important part of the general IT infrastructure, and scientific computing is no exception to this trend. We propose a simple approach to developing chemical Web services, through which servers can expose the essential data manipulation functionality that students and researchers need for chemical calculations. These services return their results as JSON (JavaScript Object Notation) objects, which facilitates their use in Web applications. The ChemCalc project (http://www.chemcalc.org) demonstrates this approach: we present three Web services related to mass spectrometry, namely isotopic distribution simulation, peptide fragmentation simulation, and molecular formula determination. We also developed a complete Web application based on these three Web services, taking advantage of modern HTML5 and JavaScript libraries (ChemDoodle and jQuery).
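    The client side of such a JSON service is straightforward to sketch. The endpoint path, parameter name, and response fields below are hypothetical illustrations of the style described, not the documented ChemCalc API schema:

```python
import json
from urllib.parse import urlencode

# Hypothetical request: a molecular-formula query passed as a URL parameter.
# The path "/service" and parameter "mf" are placeholder assumptions.
query = urlencode({"mf": "C6H12O6"})
request_url = "http://www.chemcalc.org/service?" + query

# A canned response standing in for what such a service might return;
# the field names are invented for illustration.
response_text = '{"mf": "C6H12O6", "em": 180.0634}'
result = json.loads(response_text)
monoisotopic_mass = result["em"]
```

    Because the payload is plain JSON, a browser-side application can consume the same response directly with `JSON.parse`, which is exactly what makes this style convenient for Web applications.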

  16. Moving toward a universally accessible web: Web accessibility and education.

    PubMed

    Kurt, Serhat

    2017-12-08

    The World Wide Web is an extremely powerful source of information, inspiration, ideas, and opportunities. As such, it has become an integral part of daily life for a great majority of people. Yet, for a significant number of others, the internet offers only limited value due to the existence of barriers which make accessing the Web difficult, if not impossible. This article illustrates some of the reasons that achieving equality of access to the online world of education is so critical, explores the current status of Web accessibility, discusses evaluative tools and methods that can help identify accessibility issues in educational websites, and provides practical recommendations and guidelines for resolving some of the obstacles that currently hinder the achievability of the goal of universal Web access.

  17. Cleanups In My Community (CIMC) - RCRA and Base Realignment and Closure (BRAC) Federal Facilities, National Layer

    EPA Pesticide Factsheets

    This data layer provides access to Resource Conservation and Recovery Act (RCRA) Base Realignment and Closure (BRAC) sites as part of the CIMC web service. The Resource Conservation and Recovery Act, among other things, helps ensure that wastes are managed in an environmentally sound manner so as to protect human health and the environment from the potential hazards of waste disposal. In particular, RCRA tightly regulates all hazardous waste from cradle to grave. In general, all generators, transporters, treaters, storers, and disposers of hazardous waste are required to provide information about their activities to state environmental agencies. These agencies, in turn, pass on the information to regional and national EPA offices. Accidents or other activities at facilities that treat, store, or dispose of hazardous wastes have sometimes led to the release of hazardous waste or hazardous constituents into soil, ground water, surface water, or air. When that happens, the RCRA Corrective Action program is one program that may be used to accomplish the necessary cleanup. This data layer shows those RCRA sites that are located at BRAC Federal Facilities. Additional RCRA sites and other BRAC sites (those that are not RCRA sites) are included in other data layers as part of this web service. Note: RCRA facilities which are not undergoing corrective action are not considered "Cleanups" in Cleanups in My Community. The complete set of RCRA facilities can be accessed via

  18. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    NASA Astrophysics Data System (ADS)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three-dimensional geological model is an important source of information in various fields such as environmental assessment, urban planning, resource development, waste management, and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer handles the mapping of horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis, and image processing of the geological model. Online access to GRASS functions is enabled by PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model; these images are delivered via the OGC WMS (Web Map Service) and WPS standards. Horizontal cross sections are overlaid on the topographic map, and a vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram from various angles by mouse operation. WebGL is utilized for 3D visualization; it is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and simulation models. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
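    The cross-section images described are fetched with ordinary OGC WMS GetMap requests. A minimal sketch of building such a request follows; the host, layer name, and bounding box are placeholder assumptions, while the parameter names are the standard WMS 1.3.0 ones:

```python
from urllib.parse import urlencode

# Sketch of an OGC WMS 1.3.0 GetMap request, the kind a browser client
# would issue to fetch a horizontal cross section as a map image.
# Host, layer name, and bounding box are placeholders.
def getmap_url(base, layer, bbox, width=512, height=512):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # min/max lat-lon corners
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = getmap_url("http://example.org/cgi-bin/mapserv",
                 "geology_cross_section_z100",
                 (34.0, 135.0, 35.0, 136.0))
```

    Because the request is a plain URL, the same image can be overlaid on the topographic base map by any standards-compliant web mapping client.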

  19. An advanced web query interface for biological databases

    PubMed Central

    Latendresse, Mario; Karp, Peter D.

    2010-01-01

    Although most web-based biological databases (DBs) offer some type of web-based form to allow users to author DB queries, these query forms are quite restricted in the complexity of DB queries that they can formulate. They can typically query only one DB, and can query only a single type of object at a time (e.g. genes) with no possible interaction between the objects—that is, in SQL parlance, no joins are allowed between DB objects. Writing precise queries against biological DBs is usually left to a programmer skillful enough in complex DB query languages like SQL. We present a web interface for building precise queries for biological DBs that can construct much more precise queries than most web-based query forms, yet that is user friendly enough to be used by biologists. It supports queries containing multiple conditions, and connecting multiple object types without using the join concept, which is unintuitive to biologists. This interactive web interface is called the Structured Advanced Query Page (SAQP). Users interactively build up a wide range of query constructs. Interactive documentation within the SAQP describes the schema of the queried DBs. The SAQP is based on BioVelo, a query language based on list comprehension. The SAQP is part of the Pathway Tools software and is available as part of several bioinformatics web sites powered by Pathway Tools, including the BioCyc.org site that contains more than 500 Pathway/Genome DBs. PMID:20624715
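    Since BioVelo is based on list comprehension, the flavor of a multi-object-type query without an explicit join can be conveyed in plain Python. The toy gene and pathway records below are invented for illustration, and BioVelo's actual syntax differs:

```python
# Python analogue of a list-comprehension query that connects two object
# types (genes and pathways) without an SQL-style join. The records are
# invented for illustration.
genes = [
    {"id": "g1", "name": "trpA", "pathway": "p1"},
    {"id": "g2", "name": "lacZ", "pathway": "p2"},
    {"id": "g3", "name": "trpB", "pathway": "p1"},
]
pathways = [
    {"id": "p1", "name": "tryptophan biosynthesis"},
    {"id": "p2", "name": "lactose degradation"},
]

# "Names of genes in the tryptophan biosynthesis pathway" as a single
# comprehension: the linkage between object types is expressed as an
# ordinary condition, not as a join.
hits = [g["name"]
        for g in genes
        for p in pathways
        if g["pathway"] == p["id"] and p["name"] == "tryptophan biosynthesis"]
```

    The SAQP's contribution is letting biologists assemble conditions like these through an interactive form rather than writing the query language directly.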

  20. Introducing Explorer of Taxon Concepts with a case study on spider measurement matrix building.

    PubMed

    Cui, Hong; Xu, Dongfang; Chong, Steven S; Ramirez, Martin; Rodenhausen, Thomas; Macklin, James A; Ludäscher, Bertram; Morris, Robert A; Soto, Eduardo M; Koch, Nicolás Mongiardino

    2016-11-17

    Taxonomic descriptions are traditionally composed in natural language and published in a format that cannot be directly used by computers. The Exploring Taxon Concepts (ETC) project has been developing a set of web-based software tools that convert morphological descriptions published in telegraphic style to character data that can be reused and repurposed. This paper introduces the first semi-automated pipeline, to our knowledge, that converts morphological descriptions into taxon-character matrices to support systematics and evolutionary biology research. We then demonstrate and evaluate the use of the ETC Input Creation - Text Capture - Matrix Generation pipeline to generate body part measurement matrices from a set of 188 spider morphological descriptions and report the findings. From the given set of spider taxonomic publications, two versions of input (original and normalized) were generated and used by the ETC Text Capture and ETC Matrix Generation tools. The tools produced two corresponding spider body part measurement matrices, and the matrix from the normalized input was found to be much more similar to a gold standard matrix hand-curated by the scientist co-authors. The lower performance on the original input was attributed to special conventions used in the original descriptions (e.g., the omission of measurement units). The results show that simple normalization of the description text greatly increased the quality of the machine-generated matrix and reduced edit effort. The machine-generated matrix also helped identify issues in the gold standard matrix. ETC Text Capture and ETC Matrix Generation are low-barrier and effective tools for extracting measurement values from spider taxonomic descriptions and are more effective when the descriptions are self-contained.
Special conventions that make the description text less self-contained challenge automated extraction of data from biodiversity descriptions and hinder the automated reuse of the published knowledge. The tools will be updated to support new requirements revealed in this case study.
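    The core extraction problem, and why omitted units hurt it, can be sketched with a much-simplified pattern matcher. The pattern and part names below are illustrative assumptions; the ETC tools use far richer parsing:

```python
import re

# Simplified sketch of pulling body-part measurements out of telegraphic
# description text into one matrix row per taxon. Pattern and part names
# are invented for illustration; ETC's actual parsing is far richer.
PATTERN = re.compile(
    r"(?P<part>carapace|abdomen|total length)\s+"
    r"(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>mm)?"
)

def extract_measurements(text):
    row = {}
    for m in PATTERN.finditer(text.lower()):
        # Normalized text states the unit explicitly; the original text
        # often omitted it, which is the kind of convention that degraded
        # the original-input run.
        unit = m.group("unit") or "unknown"
        row[m.group("part")] = (float(m.group("value")), unit)
    return row

row = extract_measurements("Total length 5.2 mm; carapace 2.1 mm, abdomen 3.1.")
```

    Note how the final measurement loses its unit: the description relied on the reader carrying "mm" forward, which a machine extractor cannot safely do.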

  1. Genomic Enzymology: Web Tools for Leveraging Protein Family Sequence-Function Space and Genome Context to Discover Novel Functions.

    PubMed

    Gerlt, John A

    2017-08-22

    The exponentially increasing number of protein and nucleic acid sequences provides opportunities to discover novel enzymes, metabolic pathways, and metabolites/natural products, thereby adding to our knowledge of biochemistry and biology. The challenge has evolved from generating sequence information to mining the databases to integrating and leveraging the available information, i.e., the availability of "genomic enzymology" web tools. Web tools that allow identification of biosynthetic gene clusters are widely used by the natural products/synthetic biology community, thereby facilitating the discovery of novel natural products and the enzymes responsible for their biosynthesis. However, many novel enzymes with interesting mechanisms participate in uncharacterized small-molecule metabolic pathways; their discovery and functional characterization also can be accomplished by leveraging information in protein and nucleic acid databases. This Perspective focuses on two genomic enzymology web tools that assist the discovery of novel metabolic pathways: (1) Enzyme Function Initiative-Enzyme Similarity Tool (EFI-EST) for generating sequence similarity networks to visualize and analyze sequence-function space in protein families and (2) Enzyme Function Initiative-Genome Neighborhood Tool (EFI-GNT) for generating genome neighborhood networks to visualize and analyze the genome context in microbial and fungal genomes. Both tools have been adapted to other applications to facilitate target selection for enzyme discovery and functional characterization. As the natural products community has demonstrated, the enzymology community needs to embrace the essential role of web tools that allow the protein and genome sequence databases to be leveraged for novel insights into enzymological problems.
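    The idea behind a sequence similarity network is simple to sketch: sequences are nodes, and an edge connects any pair whose similarity exceeds a threshold. The naive identity measure and toy sequences below are illustrative only; EFI-EST builds its networks from all-by-all BLAST E-values:

```python
from itertools import combinations

# Toy sketch of a sequence similarity network (SSN): nodes are sequences,
# edges connect pairs whose pairwise similarity exceeds a threshold.
# Real SSN tools such as EFI-EST use BLAST E-values, not this naive
# equal-length identity measure.
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def build_ssn(seqs, threshold):
    """Return the network's edge set as a list of sequence-ID pairs."""
    return [(i, j) for (i, a), (j, b) in combinations(seqs.items(), 2)
            if identity(a, b) >= threshold]

seqs = {"s1": "MKTAYIA", "s2": "MKTAYLA", "s3": "MPQWKCD"}
edges = build_ssn(seqs, threshold=0.8)
```

    At an appropriate threshold the network separates into clusters of isofunctional sequences, which is what makes SSNs useful for visualizing sequence-function space.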

  2. Genomic Enzymology: Web Tools for Leveraging Protein Family Sequence–Function Space and Genome Context to Discover Novel Functions

    PubMed Central

    2017-01-01

    The exponentially increasing number of protein and nucleic acid sequences provides opportunities to discover novel enzymes, metabolic pathways, and metabolites/natural products, thereby adding to our knowledge of biochemistry and biology. The challenge has evolved from generating sequence information to mining the databases to integrating and leveraging the available information, i.e., the availability of “genomic enzymology” web tools. Web tools that allow identification of biosynthetic gene clusters are widely used by the natural products/synthetic biology community, thereby facilitating the discovery of novel natural products and the enzymes responsible for their biosynthesis. However, many novel enzymes with interesting mechanisms participate in uncharacterized small-molecule metabolic pathways; their discovery and functional characterization also can be accomplished by leveraging information in protein and nucleic acid databases. This Perspective focuses on two genomic enzymology web tools that assist the discovery of novel metabolic pathways: (1) Enzyme Function Initiative-Enzyme Similarity Tool (EFI-EST) for generating sequence similarity networks to visualize and analyze sequence–function space in protein families and (2) Enzyme Function Initiative-Genome Neighborhood Tool (EFI-GNT) for generating genome neighborhood networks to visualize and analyze the genome context in microbial and fungal genomes. Both tools have been adapted to other applications to facilitate target selection for enzyme discovery and functional characterization. As the natural products community has demonstrated, the enzymology community needs to embrace the essential role of web tools that allow the protein and genome sequence databases to be leveraged for novel insights into enzymological problems. PMID:28826221

  3. Large-area sheet task advanced dendritic web growth development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Hopkins, R. H.; Meier, D. L.; Schruben, J.

    1982-01-01

    Thermal models were developed that accurately predict the thermally generated stresses in the web crystal which, if too high, cause the crystal to degenerate. The application of the modeling results to the design of low-stress experimental growth configurations will allow the growth of wider web crystals at higher growth velocities. A new experimental web growth machine was constructed. This facility includes all the features necessary for carrying out growth experiments under steady thermal conditions. Programmed growth initiation was developed to give reproducible crystal starts. Width control permits the growth of long ribbons at constant width. Melt level is controlled to 0.1 mm or better. Thus, the capability exists to grow long web crystals of constant width and thickness with little operator intervention, and web growth experiments can now be performed with growth variables controlled to a degree not previously possible.

  4. Bioinformatics data distribution and integration via Web Services and XML.

    PubMed

    Li, Xiao; Zhang, Yizheng

    2003-11-01

    It is widely recognized that the exchange, distribution, and integration of biological data are keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not yet been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
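    The round trip at the heart of XML-based exchange can be sketched in a few lines: one site serializes a record to XML, another parses it back into structured data. The element names below are invented for illustration; a real exchange would follow an agreed schema (DTD/XSD):

```python
import xml.etree.ElementTree as ET

# Minimal sketch of XML-based data exchange: a gene record serialized by
# one site can be parsed back by another. Element names are invented;
# real exchanges would follow an agreed schema.
record = ET.Element("gene")
ET.SubElement(record, "symbol").text = "TP53"
ET.SubElement(record, "organism").text = "Homo sapiens"
document = ET.tostring(record, encoding="unicode")

# The receiving side parses the document back into structured data.
parsed = ET.fromstring(document)
symbol = parsed.findtext("symbol")
```

    Because both sides agree only on the document structure, not on programming language or platform, this is what makes XML (and the Web services built on it) suitable for integrating heterogeneous biological databases.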

  5. Standard biological parts knowledgebase.

    PubMed

    Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M; Gennari, John H

    2011-02-24

    We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate "promoter" parts that are known to be both negatively and positively regulated. This method provides new web-based data access to perform searches for parts that are not currently possible.
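    The logic of the sample SPARQL query, finding promoter parts that are both positively and negatively regulated, can be mimicked over a plain Python triple store. The predicate and term names below are invented stand-ins for the actual SBOL-semantic vocabulary:

```python
# Pure-Python sketch of the triple-pattern query SBPkb answers with SPARQL:
# find parts that are promoters AND both positively and negatively
# regulated. Predicate/term names are invented stand-ins for the actual
# SBOL-semantic vocabulary.
triples = {
    ("partA", "hasRole", "promoter"),
    ("partA", "regulation", "positive"),
    ("partA", "regulation", "negative"),
    ("partB", "hasRole", "promoter"),
    ("partB", "regulation", "positive"),
    ("partC", "hasRole", "terminator"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the store."""
    return {o for s, p, o in triples if s == subject and p == predicate}

promoters = {s for s, p, o in triples if p == "hasRole" and o == "promoter"}
dual_regulated = sorted(s for s in promoters
                        if {"positive", "negative"} <= objects(s, "regulation"))
```

    In SPARQL the same intersection is expressed as multiple triple patterns sharing one subject variable; the RDF representation is what makes such cross-property searches possible at all.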

  6. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  7. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  8. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  9. 40 CFR 52.254 - Organic solvent usage.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Air Quality Control Regions (the “Regions”), as described in 40 CFR part 81, dated July 1, 1979... contrivances designed for processing continuous web, strip, or wire that emit organic materials in the course... articles, machines, equipment, or other contrivances designed for processing a continuous web, strip, or...

  10. Teaching Hypertext and Hypermedia through the Web.

    ERIC Educational Resources Information Center

    de Bra, Paul M. E.

    This paper describes a World Wide Web-based introductory course titled "Hypermedia Structures and Systems," offered as an optional part of the curriculum in computing science at the Eindhoven University of Technology (Netherlands). The technical environment for the current (1996) edition of the course is presented, which features…

  11. A Web-based Examination System Based on PHP+MySQL.

    PubMed

    Wen, Ji; Zhang, Yang; Yan, Yong; Xia, Shunren

    2005-01-01

    The design and implementation of a web-based examination system built with PHP and MySQL are presented in this paper. Its three primary parts, for students, teachers, and administrators, are introduced and analyzed in detail. Initial application has demonstrated the system's feasibility and reasonability.

  12. Watershed Central: Dynamic Collaboration for Improving Watershed Management (Philadelphia)

    EPA Science Inventory

    The Watershed Central web and wiki pages will be presented and demonstrated real-time as part of the overview of Web 2.0 collaboration tools for watershed management. The presentation portion will discuss how EPA worked with watershed practitioners and within the Agency to deter...

  13. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back- end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. 
Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.
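The request mechanism described above, an XML message carried inside an HTTP GET query string, can be sketched as follows. This is a minimal illustration only: the server path, element names, and dataset identifiers are invented and do not reflect the actual LAS V7 request schema.

```python
from urllib.parse import urlencode

def make_las_url(server, dataset, variable, region):
    """Build an LAS-style product request: an XML message embedded in an
    HTTP GET query string. Element and attribute names are illustrative,
    not the real LAS V7 schema."""
    xml = (
        '<lasRequest>'
        f'<link match="/lasdata/datasets/{dataset}/{variable}"/>'
        f'<region><range type="x" low="{region["w"]}" high="{region["e"]}"/>'
        f'<range type="y" low="{region["s"]}" high="{region["n"]}"/></region>'
        '</lasRequest>'
    )
    return server + '?' + urlencode({'xml': xml})

# Hypothetical server and dataset names, for illustration only.
url = make_las_url('http://example.org/las/ProductServer.do',
                   'sst_monthly', 'sst',
                   {'w': -180, 'e': 180, 's': -90, 'n': 90})
```

Because the whole request is a single percent-encoded query parameter, any client library that can build a URL can drive such a service, which is what makes the radically different LAS user interfaces possible.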

  14. SequenceCEROSENE: a computational method and web server to visualize spatial residue neighborhoods at the sequence level.

    PubMed

    Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk

    2016-01-01

To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to visualize these properties, but with the increasing number of available structures in databases or structure models produced by automated modeling frameworks, this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures are presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents three-dimensional residue locations in the structure. This color encoding thus provides a one-dimensional representation, from which spatial interactions, proximity and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question.
Color-encoded sequences generated by SequenceCEROSENE can aid in quickly perceiving the general characteristics of a structure of interest (or entire sets of complexes), thus supporting the researcher in the initial phase of structure-based studies. In this respect, the web server can be a valuable tool, as users are allowed to process multiple structures, quickly switch between results, and interact with generated visualizations in an intuitive manner. The SequenceCEROSENE web server is available at https://biosciences.hs-mittweida.de/seqcerosene.
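The core idea, deriving a residue color from three-dimensional position so that spatially close residues look alike, can be illustrated with a deliberately simplified scheme; the server's actual spatial-neighborhood embedding is more sophisticated than this axis-normalization sketch.

```python
def spatial_colors(coords):
    """Toy color coding: normalize each (x, y, z) axis to 0..255 and read the
    result as RGB, so residues close in space receive similar colors.
    A stand-in for the paper's neighborhood embedding, not its algorithm."""
    mins = [min(c[i] for c in coords) for i in range(3)]
    maxs = [max(c[i] for c in coords) for i in range(3)]
    spans = [(maxs[i] - mins[i]) or 1.0 for i in range(3)]  # avoid div by zero
    return [tuple(round(255 * (c[i] - mins[i]) / spans[i]) for i in range(3))
            for c in coords]

# Three invented C-alpha coordinates; the middle one lies halfway between
# the extremes, so it maps to mid-gray.
colors = spatial_colors([(0.0, 0.0, 0.0), (1.0, 2.0, 4.0), (2.0, 4.0, 8.0)])
```

Printing the colored one-letter sequence with these RGB values would give the kind of one-dimensional spatial summary the abstract describes.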

  15. An Ontology-Based Approach to Incorporate User-Generated Geo-Content Into Sdi

    NASA Astrophysics Data System (ADS)

    Deng, D.-P.; Lemmens, R.

    2011-08-01

The Web is changing the way people share and communicate information because of the emergence of various Web technologies, which enable people to contribute information on the Web. User-Generated Geo-Content (UGGC) is a potential resource of geographic information. Due to the different production methods, UGGC often cannot fit into formal geographic information models; there is a semantic gap between UGGC and formal geographic information. To integrate UGGC into geographic information, this study applies an ontology-based process to bridge this semantic gap. This ontology-based process includes five steps: Collection, Extraction, Formalization, Mapping, and Deployment. In addition, this study implements this process on Twitter messages relevant to the Japan earthquake disaster. By using this process, we extract disaster relief information from Twitter messages and develop a knowledge base for GeoSPARQL queries on disaster relief information.
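The Extraction and Formalization steps of such a pipeline might look like the following toy sketch, which scans tweets for relief terms and emits subject-predicate-object triples ready for loading into a GeoSPARQL-queryable store. The vocabulary and the `ex:` predicates are invented placeholders, not the study's actual ontology.

```python
# Hypothetical relief vocabulary mapped to made-up ontology predicates.
RELIEF_TERMS = {'shelter': 'ex:offersShelter',
                'water': 'ex:offersWater',
                'blankets': 'ex:offersSupplies'}

def tweets_to_triples(tweets):
    """Extract relief terms from (id, text) tweets and formalize them as
    RDF-style triples -- a toy version of the Extraction/Formalization steps."""
    triples = []
    for tid, text in tweets:
        for term, predicate in RELIEF_TERMS.items():
            if term in text.lower():
                triples.append((f'ex:tweet/{tid}', predicate, f'"{term}"'))
    return triples

triples = tweets_to_triples([(1, 'Shelter and water available at the school'),
                             (2, 'Roads blocked near the coast')])
```

The resulting triples, once geocoded in the Mapping step, are exactly the kind of statements a GeoSPARQL endpoint can answer spatial queries over.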

  16. Evaluation of End-User Satisfaction Among Employees Participating in a Web-based Health Risk Assessment With Tailored Feedback

    PubMed Central

    Colkesen, Ersen B; Niessen, Maurice AJ; Kraaijenhagen, Roderik A; Essink-Bot, Marie-Louise; Peek, Niels

    2012-01-01

Background: Web technology is increasingly being used to provide individuals with health risk assessments (HRAs) with tailored feedback. End-user satisfaction is an important determinant of the potential impact of HRAs, as this influences program attrition and adherence to behavioral advice. Objective: The aim of this study was to evaluate end-user satisfaction with a web-based HRA with tailored feedback applied in worksite settings, using mixed (quantitative and qualitative) methods. Methods: Employees of seven companies in the Netherlands participated in a commercial, web-based HRA with tailored feedback. The HRA consisted of four components: 1) a health and lifestyle assessment questionnaire, 2) a biometric evaluation, 3) a laboratory evaluation, and 4) tailored feedback consisting of a personal health risk profile and lifestyle behavior advice communicated through a web portal. HRA respondents received an evaluation questionnaire after six weeks. Satisfaction with different parts of the HRA was measured on 5-point Likert scales. A free-text field provided the opportunity to make additional comments. Results: In total, 2289 employees participated in the HRA program, of which 637 (27.8%) completed the evaluation questionnaire. Quantitative analysis showed that 85.6% of the respondents evaluated the overall HRA positively. The free-text field was filled in by 29.7% of the respondents (189 out of 637), who made 315 separate remarks. Qualitative evaluation of these data showed that these respondents made critical remarks. Respondents felt restricted by the answer categories of the health and lifestyle assessment questionnaire, which resulted in the feeling that the corresponding feedback could be inadequate. Some respondents perceived the personal risk profile as unnecessarily alarming or suggested providing more explanations, reference values, and a justification of the behavioral advice given.
Respondents also requested the opportunity to discuss the feedback with a health professional. Conclusions: Most people were satisfied with the web-based HRA with tailored feedback. Sources of dissatisfaction were limited opportunities for providing additional health information outside of the predefined health and lifestyle assessment questionnaire and insufficient transparency on the generation of the feedback. Information regarding the aim and content of the HRA should be clear and accurate to prevent unrealistic expectations among end-users. Involving trusted health professionals in the implementation of web-based HRAs may enhance the use of and confidence in the HRA. PMID:23111097

  17. Oceans 2.0: a Data Management Infrastructure as a Platform

    NASA Astrophysics Data System (ADS)

    Pirenne, B.; Guillemot, E.

    2012-04-01

The Data Management and Archiving System (DMAS), serving the needs of a number of undersea observing networks such as VENUS and NEPTUNE Canada, was conceived from the beginning as a Service-Oriented Infrastructure. Its core functional elements (data acquisition, transport, archiving, retrieval and processing) can interact with the outside world using Web Services. Those Web Services can be exploited by a variety of higher-level applications. Over the years, DMAS has developed Oceans 2.0: an environment where these techniques are implemented. The environment thereby becomes a platform in that it allows for easy addition of new and advanced features that build upon the tools at the core of the system. The applications that have been developed include:
- data search and retrieval, including options such as data product generation and data decimation or averaging;
- dynamic infrastructure description (search of all observatory metadata) and visualization;
- data visualization, including dynamic scalar data plots and integrated fast video segment search and viewing.
Building upon these basic applications are new concepts, coming from the Web 2.0 world, that DMAS has added. They allow people equipped only with a web browser to collaborate and contribute their findings or work results to the wider community.
Examples include:
- addition of metadata tags to any part of the infrastructure or to any data item (annotations);
- the ability to edit, execute, share and distribute Matlab code on-line from a simple web browser, with specific calls within the code to access data;
- the ability to interactively and graphically build pipeline processing jobs that can be executed on the cloud;
- web-based, interactive instrument control tools that allow users to truly share the use of the instruments and communicate with each other;
- and, last but not least, a public tool in the form of a game that crowd-sources the inventory of the underwater video archive content, thereby adding tremendous amounts of metadata.
Beyond those tools, which represent the functionality presently available to users, a number of the Web Services dedicated to data access are being exposed for anyone to use. This allows not only for ad hoc data access by individuals who need non-interactive access, but will also foster the development of new applications in a variety of areas.
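The data decimation or averaging option mentioned above amounts to block-averaging a time series before download; a minimal sketch (not DMAS code, and the function name is invented):

```python
def decimate_mean(samples, factor):
    """Average every `factor` consecutive samples -- the kind of
    decimation/averaging option offered at data-download time."""
    return [sum(samples[i:i + factor]) / len(samples[i:i + factor])
            for i in range(0, len(samples), factor)]

# Five readings reduced to three: each output value averages up to two inputs.
reduced = decimate_mean([1.0, 3.0, 2.0, 4.0, 10.0], 2)
```

Offering this server-side keeps multi-year, high-rate sensor series practical to transfer over a plain web connection.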

  18. Evaluation of end-user satisfaction among employees participating in a web-based health risk assessment with tailored feedback.

    PubMed

    Vosbergen, Sandra; Laan, Eva K; Colkesen, Ersen B; Niessen, Maurice A J; Kraaijenhagen, Roderik A; Essink-Bot, Marie-Louise; Peek, Niels

    2012-10-30

Web technology is increasingly being used to provide individuals with health risk assessments (HRAs) with tailored feedback. End-user satisfaction is an important determinant of the potential impact of HRAs, as this influences program attrition and adherence to behavioral advice. The aim of this study was to evaluate end-user satisfaction with a web-based HRA with tailored feedback applied in worksite settings, using mixed (quantitative and qualitative) methods. Employees of seven companies in the Netherlands participated in a commercial, web-based HRA with tailored feedback. The HRA consisted of four components: 1) a health and lifestyle assessment questionnaire, 2) a biometric evaluation, 3) a laboratory evaluation, and 4) tailored feedback consisting of a personal health risk profile and lifestyle behavior advice communicated through a web portal. HRA respondents received an evaluation questionnaire after six weeks. Satisfaction with different parts of the HRA was measured on 5-point Likert scales. A free-text field provided the opportunity to make additional comments. In total, 2289 employees participated in the HRA program, of which 637 (27.8%) completed the evaluation questionnaire. Quantitative analysis showed that 85.6% of the respondents evaluated the overall HRA positively. The free-text field was filled in by 29.7% of the respondents (189 out of 637), who made 315 separate remarks. Qualitative evaluation of these data showed that these respondents made critical remarks. Respondents felt restricted by the answer categories of the health and lifestyle assessment questionnaire, which resulted in the feeling that the corresponding feedback could be inadequate. Some respondents perceived the personal risk profile as unnecessarily alarming or suggested providing more explanations, reference values, and a justification of the behavioral advice given. Respondents also requested the opportunity to discuss the feedback with a health professional.
Most people were satisfied with the web-based HRA with tailored feedback. Sources of dissatisfaction were limited opportunities for providing additional health information outside of the predefined health and lifestyle assessment questionnaire and insufficient transparency on the generation of the feedback. Information regarding the aim and content of the HRA should be clear and accurate to prevent unrealistic expectations among end-users. Involving trusted health professionals in the implementation of web-based HRAs may enhance the use of and confidence in the HRA.

  19. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    NASA Astrophysics Data System (ADS)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

Landslides exhibit themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth surface. Making landslide databases available online via the WWW (World Wide Web) promotes the spreading and reaching out of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets along with the landslide susceptibility map were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front-end application, and PostgreSQL with the PostGIS extension serves as the back-end application for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings an insight into the understanding of the landslides and the resulting damage closer to the affected people and user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
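Publishing the susceptibility layer as an OGC WFS means any compliant client can fetch features with a standard GetFeature request. A sketch of composing such a request URL; the service endpoint and layer name here are made up for illustration:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base, typename, bbox):
    """Compose a standard OGC WFS 1.1.0 GetFeature request as a KVP GET URL.
    `base` and `typename` are placeholders, not the paper's actual service."""
    params = {
        'service': 'WFS',
        'version': '1.1.0',
        'request': 'GetFeature',
        'typename': typename,
        'bbox': ','.join(str(v) for v in bbox),  # minx, miny, maxx, maxy
    }
    return base + '?' + urlencode(params)

url = wfs_getfeature_url('http://example.org/cgi-bin/mapserv',
                         'landslide_susceptibility',
                         (78.0, 30.0, 78.5, 30.5))
```

The same KVP pattern works against UMN MapServer's WFS support, which is what makes the dataset consumable by both open source and proprietary GIS clients.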

  20. Teacher education in the generative virtual classroom: developing learning theories through a web-delivered, technology-and-science education context

    NASA Astrophysics Data System (ADS)

    Schaverien, Lynette

    2003-12-01

    This paper reports the use of a research-based, web-delivered, technology-and-science education context (the Generative Virtual Classroom) in which student-teachers can develop their ability to recognize, describe, analyse and theorize learning. Addressing well-recognized concerns about narrowly conceived, anachronistic and ineffective technology-and-science education, this e-learning environment aims to use advanced technologies for learning, to bring about larger scale improvement in classroom practice than has so far been effected by direct intervention through teacher education. Student-teachers' short, intensive engagement with the Generative Virtual Classroom during their practice teaching is examined. Findings affirm the worth of this research-based e-learning system for teacher education and the power of a biologically based, generative theory to make sense of the learning that occurred.

  1. TOPSAN: a dynamic web database for structural genomics.

    PubMed

    Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John

    2011-01-01

    The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.

  2. Rewired: Understanding the iGeneration and the Way They Learn

    ERIC Educational Resources Information Center

    Rosen, Larry D.

    2010-01-01

    The iGeneration is radically different from any previous generation of students and a variety of existing technologies can be used to engage and excite them in the learning process. The iGeneration is a creative, multimedia generation. They think of the world as a canvas to paint with words, sights, sounds, video, music, web pages, and anything…

  3. Ocean plankton. Determinants of community structure in the global plankton interactome.

    PubMed

    Lima-Mendez, Gipsi; Faust, Karoline; Henry, Nicolas; Decelle, Johan; Colin, Sébastien; Carcillo, Fabrizio; Chaffron, Samuel; Ignacio-Espinosa, J Cesar; Roux, Simon; Vincent, Flora; Bittner, Lucie; Darzi, Youssef; Wang, Jun; Audic, Stéphane; Berline, Léo; Bontempi, Gianluca; Cabello, Ana M; Coppola, Laurent; Cornejo-Castillo, Francisco M; d'Ovidio, Francesco; De Meester, Luc; Ferrera, Isabel; Garet-Delmas, Marie-José; Guidi, Lionel; Lara, Elena; Pesant, Stéphane; Royo-Llonch, Marta; Salazar, Guillem; Sánchez, Pablo; Sebastian, Marta; Souffreau, Caroline; Dimier, Céline; Picheral, Marc; Searson, Sarah; Kandels-Lewis, Stefanie; Gorsky, Gabriel; Not, Fabrice; Ogata, Hiroyuki; Speich, Sabrina; Stemmann, Lars; Weissenbach, Jean; Wincker, Patrick; Acinas, Silvia G; Sunagawa, Shinichi; Bork, Peer; Sullivan, Matthew B; Karsenti, Eric; Bowler, Chris; de Vargas, Colomban; Raes, Jeroen

    2015-05-22

    Species interaction networks are shaped by abiotic and biotic factors. Here, as part of the Tara Oceans project, we studied the photic zone interactome using environmental factors and organismal abundance profiles and found that environmental factors are incomplete predictors of community structure. We found associations across plankton functional types and phylogenetic groups to be nonrandomly distributed on the network and driven by both local and global patterns. We identified interactions among grazers, primary producers, viruses, and (mainly parasitic) symbionts and validated network-generated hypotheses using microscopy to confirm symbiotic relationships. We have thus provided a resource to support further research on ocean food webs and integrating biological components into ocean models. Copyright © 2015, American Association for the Advancement of Science.

  4. A teledentistry system for the second opinion.

    PubMed

    Gambino, Orazio; Lima, Fausto; Pirrone, Roberto; Ardizzone, Edoardo; Campisi, Giuseppina; di Fede, Olga

    2014-01-01

In this paper we present a Teledentistry system aimed at the Second Opinion task. It makes use of a particular camera, called an intra-oral camera (also known as a dental camera), to capture photographs and real-time video of the inner part of the mouth. The pictures acquired by the Operator with such a device are sent to the Oral Medicine Expert (OME) by means of a standard File Transfer Protocol (FTP) service, and the real-time video is channeled into a video stream thanks to the VideoLAN client/server (VLC) application. The system is composed of HTML5 web pages generated by PHP and allows the Second Opinion to be performed both when the Operator and the OME are logged in and when one of them is offline.

  5. Using component technologies for web based wavelet enhanced mammographic image visualization.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2000-01-01

The poor contrast detectability of mammography can be dealt with by domain-specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that, at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access, the tool was redesigned by exploring component technologies, enabling the integration of stand-alone domain-specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 Part 10-compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real-time wavelet-based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real-time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
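Wavelet-based denoising of the kind added to the tool can be illustrated with a one-level Haar transform: split the signal into pairwise averages and details, suppress small detail coefficients, and reconstruct. This is a minimal sketch of the idea; the tool's actual fast wavelet transform and enhancement operators are more elaborate.

```python
def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage on an even-length signal:
    small detail coefficients (noise) are zeroed, large ones (edges) kept."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [d if abs(d) > threshold else 0.0 for d in det]  # hard threshold
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]  # inverse Haar step
    return out

# Small within-pair fluctuations are removed, the large step is preserved.
denoised = haar_denoise([10.0, 10.5, 50.0, 50.5], 0.5)
```

Contrast enhancement works the same way in reverse: instead of zeroing small coefficients, selected detail bands are amplified before reconstruction.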

  6. 31 CFR 537.501 - General and specific licensing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Licensing actions taken pursuant to part 501 of this chapter with respect to the prohibitions contained in this part are considered actions taken pursuant to this part. General licenses and statements of licensing policy relating to this part also may be available through the Burma sanctions page on OFAC's Web...

  7. 31 CFR 542.501 - General and specific licensing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Licensing actions taken pursuant to part 501 of this chapter with respect to the prohibitions contained in this part are considered actions taken pursuant to this part. General licenses and statements of licensing policy relating to this part also may be available through the Syria sanctions page on OFAC's Web...

  8. Automatic Hidden-Web Table Interpretation by Sibling Page Comparison

    NASA Astrophysics Data System (ADS)

    Tao, Cui; Embley, David W.

The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
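The core intuition of sibling-page comparison can be sketched very simply: cell text that is identical across two sibling tables is treated as a category label, while text that varies is treated as a data value. (The actual system generates and adjusts structure patterns; this toy, with invented cell data, ignores all of that.)

```python
def classify_cells(table_a, table_b):
    """Compare aligned cells of two sibling tables: identical text is a
    nonvarying component (label), differing text a varying component (value)."""
    labels, values = [], []
    for cell_a, cell_b in zip(table_a, table_b):
        (labels if cell_a == cell_b else values).append(cell_a)
    return labels, values

# Two hypothetical car-advertisement sibling pages, flattened to cell lists.
labels, values = classify_cells(
    ['Make', 'Honda', 'Year', '2004'],
    ['Make', 'Toyota', 'Year', '2001'])
```

Connecting each varying cell to the nearest nonvarying cell then yields label-value pairs, the starting point for the paper's structure patterns.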

  9. ICO amplicon NGS data analysis: a Web tool for variant detection in common high-risk hereditary cancer genes analyzed by amplicon GS Junior next-generation sequencing.

    PubMed

    Lopez-Doriga, Adriana; Feliubadaló, Lídia; Menéndez, Mireia; Lopez-Doriga, Sergio; Morón-Duran, Francisco D; del Valle, Jesús; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Campos, Olga; Gómez, Carolina; Pineda, Marta; González, Sara; Moreno, Victor; Capellá, Gabriel; Lázaro, Conxi

    2014-03-01

    Next-generation sequencing (NGS) has revolutionized genomic research and is set to have a major impact on genetic diagnostics thanks to the advent of benchtop sequencers and flexible kits for targeted libraries. Among the main hurdles in NGS are the difficulty of performing bioinformatic analysis of the huge volume of data generated and the high number of false positive calls that could be obtained, depending on the NGS technology and the analysis pipeline. Here, we present the development of a free and user-friendly Web data analysis tool that detects and filters sequence variants, provides coverage information, and allows the user to customize some basic parameters. The tool has been developed to provide accurate genetic analysis of targeted sequencing of common high-risk hereditary cancer genes using amplicon libraries run in a GS Junior System. The Web resource is linked to our own mutation database, to assist in the clinical classification of identified variants. We believe that this tool will greatly facilitate the use of the NGS approach in routine laboratories.
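A variant-filtering step of the sort such a pipeline performs might look like the following sketch; the thresholds, field names, and records are invented for illustration and are not those of the ICO tool.

```python
def filter_variants(variants, min_depth=20, min_freq=0.2):
    """Drop variant calls with low read depth or low alternate-allele
    frequency -- a generic false-positive filter, parameters hypothetical."""
    return [v for v in variants
            if v['depth'] >= min_depth and v['alt_freq'] >= min_freq]

calls = [{'pos': 101, 'depth': 150, 'alt_freq': 0.48},
         {'pos': 202, 'depth': 8,   'alt_freq': 0.50},   # low coverage
         {'pos': 303, 'depth': 90,  'alt_freq': 0.03}]   # likely noise
kept = filter_variants(calls)
```

Exposing such thresholds as user-customizable parameters, as the abstract describes, lets a routine laboratory trade sensitivity against the false-positive rate.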

  10. Accidental Discovery of Information on the User-Defined Social Web: A Mixed-Method Study

    ERIC Educational Resources Information Center

    Lu, Chi-Jung

    2012-01-01

    Frequently interacting with other people or working in an information-rich environment can foster the "accidental discovery of information" (ADI) (Erdelez, 2000; McCay-Peet & Toms, 2010). With the increasing adoption of social web technologies, online user-participation communities and user-generated content have provided users the…

  11. Developing a Multigenerational Creativity Website for Gifted and Talented Learners.

    ERIC Educational Resources Information Center

    Montgomery, Diane; Overton, Robert; Bull, Kay S.; Kimball, Sarah; Griffin, John

    This paper discusses techniques and resources to use to stimulate creativity through a web site for several "generations" of gifted and talented learners. To organize a web site to stimulate creativity, two categories of development issues must be considered: intrinsic person variables, and process variables such as thinking skills,…

  12. Web-Based Case Conferencing for Preservice Teacher Education: Electronic Discourse from the Field.

    ERIC Educational Resources Information Center

    Bonk, Curtis Jay; Malikowski, Steve; Angeli, Charoula; East, Judy

    1998-01-01

    The purpose of this study was to foster preservice teacher learning of educational psychology by creating a Web-based learning community using actual case situations experienced during field observations. Participants (146 undergraduate students) were assigned to two electronic-conferencing groups where they generated teaching vignettes related to…

  13. Towards Web Service-Based Educational Systems

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.

    2005-01-01

    The need for designing the next generation of web service-based educational systems with the ability of integrating components from different tools and platforms is now recognised as the major challenge in advanced learning technologies. In this paper, we discuss this issue and we present the conceptual design of such environment, referred to as…

  14. Sustainable Materials Management (SMM) Web Academy Webinar: Advancing Sustainable Materials Management: Facts and Figures 2013 - Assessing Trends in Materials Generation, Recycling and Disposal in the United States

    EPA Pesticide Factsheets

    This is a webinar page for the Sustainable Management of Materials (SMM) Web Academy webinar titled Let’s WRAP (Wrap Recycling Action Program): Best Practices to Boost Plastic Film Recycling in Your Community

  15. A Web-Based Learning Tool Improves Student Performance in Statistics: A Randomized Masked Trial

    ERIC Educational Resources Information Center

    Gonzalez, Jose A.; Jover, Lluis; Cobo, Erik; Munoz, Pilar

    2010-01-01

    Background: e-status is a web-based tool able to generate different statistical exercises and to provide immediate feedback to students' answers. Although the use of Information and Communication Technologies (ICTs) is becoming widespread in undergraduate education, there are few experimental studies evaluating its effects on learning. Method: All…

  16. An Open-Source and Java-Technologies Approach to Web Applications

    DTIC Science & Technology

    2003-09-01

    program for any purpose (Freedom 0). • The freedom to study how the program works, and adapt it to individual needs (Freedom 1). Access to the source...manage information for many purposes. Today a key technology that allows developers to make Web applications is server-side programming to generate a

  17. 30 CFR 210.54 - Must I submit this royalty report electronically?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... with which either party may contract. (2) Web-based reporting—Reporters/payors may enter report data directly or upload files using the MMS electronic web form located at http://www.mrmreports.net. The... generated from a reporter's system application. (c) Refer to our electronic reporting guidelines in the MMS...

  18. 76 FR 52357 - Exelon Generation Company, LLC; PSEG Nuclear, LLC; Peach Bottom Atomic Power Station, Unit 3...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-22

    ... Amendment to Facility Operating License, Proposed No Significant Hazards Consideration Determination, and Opportunity for a Hearing and Order Imposing Procedures for Document Access to Sensitive Unclassified Non... on the NRC Web site and on the Federal rulemaking Web site, http://www.regulations.gov . Because your...

  19. Towards Next Generation Activity-Based Learning Systems

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.; Karampiperis, Pythagoras

    2006-01-01

    The need for e-learning systems that support a diverse set of pedagogical requirements has been identified as an important issue in web-based education. Until now, significant research and development effort has been devoted to aiming towards web-based educational systems tailored to specific pedagogical approaches. The most advanced of them are…

  20. Interactive Web-Based Pointillist Visualization of Hydrogenic Orbitals Using Jmol

    ERIC Educational Resources Information Center

    Tully, Shane P.; Stitt, Thomas M.; Caldwell, Robert D.; Hardock, Brian J.; Hanson, Robert M.; Maslak, Przemyslaw

    2013-01-01

A Monte Carlo method is used to generate interactive pointillist displays of electron density in hydrogenic orbitals. The Web applet incorporating Jmol viewer allows for clear and accurate presentation of three-dimensional shapes and sizes of orbitals up to "n" = 5, where "n" is the principal quantum number. The obtained radial…
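Monte Carlo sampling of an orbital's electron density can be illustrated for the hydrogen 1s state, whose density falls off as e^(-2r) in atomic units. The applet's exact sampling scheme is not specified in the abstract, so this is a generic rejection sampler over a bounding cube:

```python
import math
import random

def sample_1s(n, box=6.0, seed=42):
    """Rejection-sample n points from the hydrogen 1s density |psi|^2 ~ e^(-2r)
    (atomic units) inside a cube of half-width `box` Bohr radii."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y, z = (rng.uniform(-box, box) for _ in range(3))
        r = math.sqrt(x * x + y * y + z * z)
        if rng.random() < math.exp(-2.0 * r):  # accept with probability e^(-2r)
            pts.append((x, y, z))
    return pts

pts = sample_1s(500)
```

Plotting each accepted point as a dot yields the pointillist cloud: dense near the nucleus, sparse far away, with the mean radius near the analytic value of 1.5 Bohr radii for 1s.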

  1. Just-in-Time Web Searches for Trainers & Adult Educators.

    ERIC Educational Resources Information Center

    Kirk, James J.

    Trainers and adult educators often need to quickly locate quality information on the World Wide Web (WWW) and need assistance in searching for such information. A "search engine" is an application used to query existing information on the WWW. The three types of search engines are computer-generated indexes, directories, and meta search…

  2. Heat barrier for use in a nuclear reactor facility

    DOEpatents

    Keegan, Charles P.

    1988-01-01

A thermal barrier for use in a nuclear reactor facility is disclosed herein. Generally, the thermal barrier comprises a flexible, heat-resistant web mounted over the annular space between the reactor vessel and the guard vessel in order to prevent convection currents generated in the nitrogen atmosphere in this space from entering the relatively cooler atmosphere of the reactor cavity which surrounds these vessels. Preferably, the flexible web includes a blanket of heat-insulating material formed from fibers of a refractory material, such as alumina and silica, sandwiched between a heat-resistant, metallic cloth made from stainless steel wire. In use, the web is mounted between the upper edges of the guard vessel and the flange of a sealing ring which surrounds the reactor vessel with sufficient slack to avoid being pulled taut as a result of differential thermal expansion between the two vessels. The flexible web replaces the rigid and relatively complicated structures employed in the prior art for insulating the reactor cavity from the convection currents generated between the reactor vessel and the guard vessel.

  3. UniPrime2: a web service providing easier Universal Primer design.

    PubMed

    Boutros, Robin; Stokes, Nicola; Bekaert, Michaël; Teeling, Emma C

    2009-07-01

    The UniPrime2 web server is a publicly available online resource which automatically designs large sets of universal primers when given a gene reference ID or Fasta sequence input by a user. UniPrime2 works by automatically retrieving and aligning homologous sequences from GenBank, identifying regions of conservation within the alignment, and generating suitable primers that can be used to amplify variable genomic regions. In essence, UniPrime2 is a suite of publicly available software packages (Blastn, T-Coffee, GramAlign, Primer3) which reduces the laborious process of primer design by integrating these programs into a single software pipeline. Hence, UniPrime2 differs from previous primer design web services in that all steps are automated, linked, saved and phylogenetically delimited, only requiring a single user-defined gene reference ID or input sequence. We provide an overview of the web service and wet-laboratory validation of the primers generated. The system is freely accessible at: http://uniprime.batlab.eu. UniPrime2 is licenced under a Creative Commons Attribution Noncommercial-Share Alike 3.0 Licence.

  4. 42 CFR 423.128 - Dissemination of Part D plan information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... coverage determination and redetermination processes via an Internet Web site; and (iii) A system that... determination by contacting the plan sponsor's toll free customer service line or by accessing the plan sponsor's internet Web site. (8) Quality assurance policies and procedures. A description of the quality...

  5. 42 CFR 423.128 - Dissemination of Part D plan information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... coverage determination and redetermination processes via an Internet Web site; and (iii) A system that... determination by contacting the plan sponsor's toll free customer service line or by accessing the plan sponsor's internet Web site. (8) Quality assurance policies and procedures. A description of the quality...

  6. 42 CFR 423.128 - Dissemination of Part D plan information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... coverage determination and redetermination processes via an Internet Web site; and (iii) A system that... determination by contacting the plan sponsor's toll free customer service line or by accessing the plan sponsor's internet Web site. (8) Quality assurance policies and procedures. A description of the quality...

  7. Perceptions of Web Site Design Characteristics: A Malaysian/Australian Comparison.

    ERIC Educational Resources Information Center

    Fink, Dieter; Laupase, Ricky

    2000-01-01

    Compares the perceptions of Malaysians and Australians for four Web site design characteristics--atmospherics, news stories, signs, and products and services--as part of the integrated Internet marketing model. Hypothesizes that the predominant culture is not generalized to another culture, discusses validity and reliability, and suggests further…

  8. A Network of Automatic Control Web-Based Laboratories

    ERIC Educational Resources Information Center

    Vargas, Hector; Sanchez Moreno, J.; Jara, Carlos A.; Candelas, F. A.; Torres, Fernando; Dormido, Sebastian

    2011-01-01

    This article presents an innovative project in the context of remote experimentation applied to control engineering education. Specifically, the authors describe their experience regarding the analysis, design, development, and exploitation of web-based technologies within the scope of automatic control. This work is part of an inter-university…

  9. 7 CFR 3202.8 - Violations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... certification of a biobased product constitutes a violation of this part. (4) USDA BioPreferred Program Web site... remove the product information from the USDA BioPreferred Program Web site and actively communicate the..., resume use of the certification mark. USDA will also restore the product information to the USDA Bio...

  10. 7 CFR 3202.8 - Violations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... certification of a biobased product constitutes a violation of this part. (4) USDA BioPreferred Program Web site... remove the product information from the USDA BioPreferred Program Web site and actively communicate the..., resume use of the certification mark. USDA will also restore the product information to the USDA Bio...

  11. 7 CFR 3202.8 - Violations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... certification of a biobased product constitutes a violation of this part. (4) USDA BioPreferred Program Web site... remove the product information from the USDA BioPreferred Program Web site and actively communicate the..., resume use of the certification mark. USDA will also restore the product information to the USDA Bio...

  12. 60. The World-Wide Inaccessible Web, Part 1: Browsing

    ERIC Educational Resources Information Center

    Baggaley, Jon; Batpurev, Batchuluun

    2007-01-01

    Two studies are reported, comparing the browser loading times of webpages created using common Web development techniques. The loading speeds were estimated in 12 Asian countries by members of the "PANdora" network, funded by the International Development Research Centre (IDRC) to conduct collaborative research in the development of…

  13. 45 CFR 159.100 - Basis and scope.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Basis and scope. 159.100 Section 159.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH CARE REFORM INSURANCE WEB PORTAL § 159.100 Basis and scope. This part establishes provisions governing a Web...

  14. Web Based Parallel Programming Workshop for Undergraduate Education.

    ERIC Educational Resources Information Center

    Marcus, Robert L.; Robertson, Douglass

    Central State University (Ohio), under a contract with Nichols Research Corporation, has developed a World Wide Web-based workshop on high performance computing entitled "IBM SP2 Parallel Programming Workshop." The research is part of the DoD (Department of Defense) High Performance Computing Modernization Program. The research…

  15. 31 CFR 256.2 - Where can I find more information about, and forms for, Judgment Fund payments?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Financial Manual (TFM), Volume I, Part 6, Chapter 3100. The TFM is available on the Judgment Fund Web site at http://www.fms.treas.gov/judgefund. Contact information for the Judgment Fund Branch is also available on the Web site. ...

  16. 31 CFR 256.2 - Where can I find more information about, and forms for, Judgment Fund payments?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Financial Manual (TFM), Volume I, Part 6, Chapter 3100. The TFM is available on the Judgment Fund Web site at http://www.fms.treas.gov/judgefund. Contact information for the Judgment Fund Branch is also available on the Web site. ...

  17. 31 CFR 256.2 - Where can I find more information about, and forms for, Judgment Fund payments?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Financial Manual (TFM), Volume I, Part 6, Chapter 3100. The TFM is available on the Judgment Fund Web site at http://www.fms.treas.gov/judgefund. Contact information for the Judgment Fund Branch is also available on the Web site. ...

  18. 31 CFR 256.2 - Where can I find more information about, and forms for, Judgment Fund payments?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Financial Manual (TFM), Volume I, Part 6, Chapter 3100. The TFM is available on the Judgment Fund Web site at http://www.fiscal.treasury.gov/judgefund. Contact information for the Judgment Fund Branch is also available on the Web site. ...

  19. Knowledge management impact of information technology Web 2.0/3.0. The case study of agent software technology usability in knowledge management system

    NASA Astrophysics Data System (ADS)

    Sołtysik-Piorunkiewicz, Anna

    2015-02-01

    How can we measure the impact of Web 2.0/3.0 Internet technology on knowledge management? How can we use Web 2.0/3.0 technologies for generating, evaluating, sharing and organizing knowledge in a knowledge-based organization? How can we evaluate them from a user-centered perspective? This article aims to provide a method for evaluating the usability of web technologies to support knowledge management in knowledge-based organizations across the stages of the knowledge management cycle (generating knowledge, evaluating knowledge, sharing knowledge, etc.), using modern Internet technologies based on the example of agent technologies. The method focuses on five areas of evaluation: GUI, functional structure, the way content is published, the organizational aspect, and the technological aspect. It is based on proposed indicators for assessing each of these areas, taking into account the individual characteristics being scored. Each feature identified in the evaluation is first scored point-wise; this score is then verified and refined by means of indicators appropriate to the given feature. The article proposes indicators to measure the impact of Web 2.0/3.0 technologies on knowledge management and verifies them on an example of agent technology usability in a knowledge management system.

  20. On transform coding tools under development for VP10

    NASA Astrophysics Data System (ADS)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools has already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.

  1. miRanalyzer: a microRNA detection and analysis tool for next-generation sequencing experiments.

    PubMed

    Hackenberg, Michael; Sturm, Martin; Langenberger, David; Falcón-Pérez, Juan Manuel; Aransay, Ana M

    2009-07-01

    Next-generation sequencing now allows the sequencing of small RNA molecules and the estimation of their expression levels. Consequently, there will be high demand for bioinformatics tools to cope with the several gigabytes of sequence data generated in each single deep-sequencing experiment. In this context, we developed miRanalyzer, a web server tool for the analysis of deep-sequencing experiments for small RNAs. The web server tool requires a simple input file containing a list of unique reads and their copy numbers (expression levels). Using these data, miRanalyzer (i) detects all known microRNA sequences annotated in miRBase, (ii) finds all perfect matches against other libraries of transcribed sequences and (iii) predicts new microRNAs. The prediction of new microRNAs is an especially important point as there are many species with very few known microRNAs. Therefore, we implemented a highly accurate machine learning algorithm for the prediction of new microRNAs that reaches AUC values of 97.9% and recall values of up to 75% on unseen data. The web tool summarizes all the described steps in a single output page, which provides a comprehensive overview of the analysis, adding links to more detailed output pages for each analysis module. miRanalyzer is available at http://web.bioinformatics.cicbiogune.es/microRNA/.
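    The input format described, unique reads with copy numbers, can be produced from raw reads with a simple collapsing step. The sketch below is an illustrative preprocessing helper, not part of miRanalyzer itself; consult the server's documentation for the exact file format it expects.

```python
from collections import Counter

def collapse_reads(reads):
    """Collapse raw small-RNA reads into (unique read, copy number) pairs,
    most abundant first -- the kind of read/count list a deep-sequencing
    analysis server takes as input."""
    counts = Counter(reads)
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))

raw = [
    "TGAGGTAGTAGGTTGTATAGTT",  # a let-7-like read sequenced three times
    "TGAGGTAGTAGGTTGTATAGTT",
    "TGAGGTAGTAGGTTGTATAGTT",
    "ACTGGACTTGGAGTCAGAAGG",
]
for seq, n in collapse_reads(raw):
    print(seq, n)
```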

  2. PseKNC: a flexible web server for generating pseudo K-tuple nucleotide composition.

    PubMed

    Chen, Wei; Lei, Tian-Yu; Jin, Dian-Chuan; Lin, Hao; Chou, Kuo-Chen

    2014-07-01

    The pseudo oligonucleotide composition, or pseudo K-tuple nucleotide composition (PseKNC), can be used to represent a DNA or RNA sequence with a discrete model or vector yet still keep considerable sequence order information, particularly the global or long-range sequence order information, via the physicochemical properties of its constituent oligonucleotides. Therefore, the PseKNC approach may hold very high potential for enhancing the power in dealing with many problems in computational genomics and genome sequence analysis. However, dealing with different DNA or RNA problems may need different kinds of PseKNC. Here, we present a flexible and user-friendly web server for PseKNC (at http://lin.uestc.edu.cn/pseknc/default.aspx) by which users can easily generate many different modes of PseKNC according to their need by selecting various parameters and physicochemical properties. Furthermore, for the convenience of the vast majority of experimental scientists, a step-by-step guide is provided on how to use the current web server to generate their desired PseKNC without the need to follow the complicated mathematical equations, which are presented in this article just for the integrity of PseKNC formulation and its development. It is anticipated that the PseKNC web server will become a very useful tool in computational genomics and genome sequence analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
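    As a rough illustration of the idea behind PseKNC, the sketch below (not the server's actual code) builds a vector of 4^k k-tuple frequencies extended by lambda sequence-order correlation factors computed from a single physicochemical property. The values in `PROP` are made-up numbers for illustration; the real server draws on normalized property tables from the literature and supports many more modes and parameters.

```python
from itertools import product

# Toy physicochemical values per dinucleotide (illustrative numbers only).
PROP = {a + b: (i % 7 - 3) / 3.0
        for i, (a, b) in enumerate(product("ACGT", repeat=2))}

def pseknc(seq, k=2, lam=3, w=0.1):
    """Simplified pseudo K-tuple nucleotide composition: 4^k k-mer
    frequencies plus `lam` sequence-order correlation factors."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = sum(counts.values())
    freqs = [counts[m] / total for m in kmers]
    # correlation factor theta_j: mean squared property difference
    # between k-tuples that are j positions apart
    thetas = []
    for j in range(1, lam + 1):
        pairs = [(seq[i:i + k], seq[i + j:i + j + k])
                 for i in range(len(seq) - k - j + 1)]
        thetas.append(sum((PROP[a] - PROP[b]) ** 2 for a, b in pairs) / len(pairs))
    denom = 1.0 + w * sum(thetas)
    return [f / denom for f in freqs] + [w * t / denom for t in thetas]

vec = pseknc("ACGTACGTGGCCAATT")
assert len(vec) == 4 ** 2 + 3  # 16 frequency components + 3 pseudo components
```

    By construction the components sum to 1, mirroring the normalization used in pseudo-composition vectors.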

  3. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies. PMID:22024447

  4. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation.

    PubMed

    Wilkinson, Mark D; Vandervalk, Benjamin; McCarthy, Luke

    2011-10-24

    The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.

  5. Modelling of Tethered Space-Web Structures

    NASA Astrophysics Data System (ADS)

    McKenzie, D. J.; Cartnell, M. P.

    Large structures in space are an essential milestone in the path of many projects, from solar power collectors to space stations. In space, as on Earth, these large projects may be split up into more manageable sections, dividing the task into multiple replicable parts. Specially constructed spider robots could assemble these structures piece by piece over a membrane or space-web, giving a method for building a structure while on orbit. The modelling and applications of these space-webs are discussed, along with the derivation of the equations of motion of the structure. The presentation of some preliminary results from the solution of these equations will show that space-webs can take a variety of different forms, and give some guidelines for configuring the space-web system.

  6. The Génolevures database.

    PubMed

    Martin, Tiphaine; Sherman, David J; Durrens, Pascal

    2011-01-01

    The Génolevures online database (URL: http://www.genolevures.org) stores and provides the data and results obtained by the Génolevures Consortium through several campaigns of genome annotation of the yeasts in the Saccharomycotina subphylum (hemiascomycetes). This database is dedicated to large-scale comparison of these genomes, storing not only the different chromosomal elements detected in the sequences, but also the logical relations between them. The database is divided into a public part, accessible to anyone through the Internet, and a private part where the Consortium members make genome annotations with our Magus annotation system; this system is used to annotate several related genomes in parallel. The public database is widely consulted and offers structured data, organized using a REST web site architecture that allows for automated requests. The implementation of the database, as well as its associated tools and methods, is evolving to cope with the influx of genome sequences produced by Next Generation Sequencing (NGS). Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  7. Web 2.0 and Pharmacy Education

    PubMed Central

    Fox, Brent I.

    2009-01-01

    New types of social Internet applications (often referred to as Web 2.0) are becoming increasingly popular within higher education environments. Although developed primarily for entertainment and social communication within the general population, applications such as blogs, social video sites, and virtual worlds are being adopted by higher education institutions. These newer applications differ from standard Web sites in that they involve the users in creating and distributing information, hence effectively changing how the Web is used for knowledge generation and dispersion. Although Web 2.0 applications offer exciting new ways to teach, they should not be the core of instructional planning, but rather selected only after learning objectives and instructional strategies have been identified. This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present. PMID:19960079

  8. Web Navigation Sequences Automation in Modern Websites

    NASA Astrophysics Data System (ADS)

    Montoto, Paula; Pan, Alberto; Raposo, Juan; Bellas, Fernando; López, Javier

    Most of today's web sources are designed to be used by humans, but they do not provide suitable interfaces for software programs. That is why a growing interest has arisen in so-called web automation applications, which are widely used for different purposes such as B2B integration, automated testing of web applications, or technology and business watch. Previous proposals assume models for generating and reproducing navigation sequences that are not able to correctly deal with new websites using technologies such as AJAX: on one hand, existing systems only allow recording simple navigation actions and, on the other hand, they are unable to detect the end of the effects caused by a user action. In this paper, we propose a set of new techniques to record and execute web navigation sequences able to deal with all the complexity existing in AJAX-based web sites. We also present an exhaustive evaluation of the proposed techniques that shows very promising results.
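    The second limitation mentioned above, detecting when an AJAX action's effects have finished, can be approximated by waiting for the page to stop changing. The sketch below is an illustrative heuristic, not the authors' algorithm: it polls a caller-supplied `snapshot` function until the result has been stable for a quiet period. With Selenium, for example, `snapshot` could be `lambda: driver.page_source` (an assumption about the caller's setup; the driver integration is deliberately left out).

```python
import time

def wait_until_stable(snapshot, quiet=0.5, timeout=10.0, poll=0.1):
    """Return True once consecutive snapshots have stopped changing for
    `quiet` seconds, or False if `timeout` elapses first. `snapshot` is
    any zero-argument callable returning a comparable view of the page
    (e.g. a serialized DOM)."""
    deadline = time.monotonic() + timeout
    last = snapshot()
    last_change = time.monotonic()
    while time.monotonic() < deadline:
        time.sleep(poll)
        cur = snapshot()
        if cur != last:
            last, last_change = cur, time.monotonic()  # still mutating
        elif time.monotonic() - last_change >= quiet:
            return True  # quiet long enough: effects have settled
    return False
```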

  9. Web 2.0 and pharmacy education.

    PubMed

    Cain, Jeff; Fox, Brent I

    2009-11-12

    New types of social Internet applications (often referred to as Web 2.0) are becoming increasingly popular within higher education environments. Although developed primarily for entertainment and social communication within the general population, applications such as blogs, social video sites, and virtual worlds are being adopted by higher education institutions. These newer applications differ from standard Web sites in that they involve the users in creating and distributing information, hence effectively changing how the Web is used for knowledge generation and dispersion. Although Web 2.0 applications offer exciting new ways to teach, they should not be the core of instructional planning, but rather selected only after learning objectives and instructional strategies have been identified. This paper provides an overview of prominent Web 2.0 applications, explains how they are being used within education environments, and elaborates on some of the potential opportunities and challenges that these applications present.

  10. An overview of emerging technologies in contemporary decision support system development

    NASA Astrophysics Data System (ADS)

    Nursal, Ahmad Taufik; Omar, Mohd Faizal; Nawi, Mohd Nasrun Mohd

    2014-12-01

    The rapid development of Web technology has opened a new approach to Decision Support System (DSS) development. For instance, social media is one of the Web 2.0 digital platforms that allow the creation and exchange of user-generated content through an interactive interface, high user control and mass participation. Characteristics of Web 2.0 such as remote access, platform independence, context richness and ease of use fulfill the concept and purpose of DSS. This paper outlines some of the elementary concepts of Web 2.0 and social media technology which can potentially be integrated within DSS to enhance the decision-making process. Our initial investigation indicates that few studies have attempted to embed Web 2.0 into DSS. Thus, this paper highlights the importance of Web 2.0 technology for improving DSS development and usability.

  11. Enhancing promotional strategies within social marketing programs: use of Web 2.0 social media.

    PubMed

    Thackeray, Rosemary; Neiger, Brad L; Hanson, Carl L; McKenzie, James F

    2008-10-01

    The second generation of Internet-based applications (i.e., Web 2.0), in which users control communication, holds promise to significantly enhance promotional efforts within social marketing campaigns. Web 2.0 applications can directly engage consumers in the creative process by both producing and distributing information through collaborative writing, content sharing, social networking, social bookmarking, and syndication. Web 2.0 can also enhance the power of viral marketing by increasing the speed at which consumers share experiences and opinions with progressively larger audiences. Because of the novelty and potential effectiveness of Web 2.0, social marketers may be enticed to prematurely incorporate related applications into promotional plans. However, as strategic issues such as priority audience preferences, selection of appropriate applications, tracking and evaluation, and related costs are carefully considered, Web 2.0 will expand to allow health promotion practitioners more direct access to consumers with less dependency on traditional communication channels.

  12. A Method for Transforming Existing Web Service Descriptions into an Enhanced Semantic Web Service Framework

    NASA Astrophysics Data System (ADS)

    Du, Xiaofeng; Song, William; Munro, Malcolm

    Web Services, as a new distributed system technology, have been widely adopted by industry in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, the lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated, step by step, into CbSSDF+-based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.

  13. 31 CFR 589.501 - General and specific licensing procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... chapter. Licensing actions taken pursuant to part 501 of this chapter with respect to the prohibitions contained in this part are considered actions taken pursuant to this part. General licenses and statements... page on OFAC's Web site: www.treasury.gov/ofac. ...

  14. Open Marketplace for Simulation Software on the Basis of a Web Platform

    NASA Astrophysics Data System (ADS)

    Kryukov, A. P.; Demichev, A. P.

    2016-02-01

    The focus in the development of a new generation of middleware is shifting from global grid systems to building convenient and efficient web platforms for remote access to individual computing resources. A further line of their development, suggested in this work, involves not only a quantitative increase in their number and an expansion of the scientific, engineering, and manufacturing areas in which they are used, but also improved technology for remote deployment of application software on the resources interacting with the web platforms. Currently, services for providers of application software in the context of science-oriented web platforms are not sufficiently developed. The new application software market web platforms proposed in this work should have all the features of existing web platforms for submitting jobs to remote resources, plus specific web services for market-based interaction between providers and consumers of application packages. The suggested approach will be validated on the example of simulation applications in the field of nonlinear optics.

  15. Strategies to address participant misrepresentation for eligibility in Web-based research.

    PubMed

    Kramer, Jessica; Rubin, Amy; Coster, Wendy; Helmuth, Eric; Hermos, John; Rosenbloom, David; Moed, Rich; Dooley, Meghan; Kao, Ying-Chia; Liljenquist, Kendra; Brief, Deborah; Enggasser, Justin; Keane, Terence; Roy, Monica; Lachowicz, Mark

    2014-03-01

    Emerging methodological research suggests that the World Wide Web ("Web") is an appropriate venue for survey data collection, and a promising area for delivering behavioral intervention. However, the use of the Web for research raises concerns regarding sample validity, particularly when the Web is used for recruitment and enrollment. The purpose of this paper is to describe the challenges experienced in two different Web-based studies in which participant misrepresentation threatened sample validity: a survey study and an online intervention study. The lessons learned from these experiences generated three types of strategies researchers can use to reduce the likelihood of participant misrepresentation for eligibility in Web-based research. Examples of procedural/design strategies, technical/software strategies and data analytic strategies are provided along with the methodological strengths and limitations of specific strategies. The discussion includes a series of considerations to guide researchers in the selection of strategies that may be most appropriate given the aims, resources and target population of their studies. Copyright © 2014 John Wiley & Sons, Ltd.
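    A minimal sketch of the technical and data-analytic screening the paper describes might flag entries that share an IP address or that finish implausibly fast. The field names (`id`, `ip`, `duration_s`) and the 60-second threshold below are illustrative assumptions, not the authors' protocol; flagged entries would go to manual review rather than automatic exclusion.

```python
from collections import Counter

def flag_suspect_entries(entries, min_seconds=60):
    """Flag survey entries for manual review: repeated IP addresses and
    implausibly fast completion times. Returns {entry id: [reasons]}."""
    ip_counts = Counter(e["ip"] for e in entries)
    flags = {}
    for e in entries:
        reasons = []
        if ip_counts[e["ip"]] > 1:
            reasons.append("duplicate-ip")
        if e["duration_s"] < min_seconds:
            reasons.append("too-fast")
        if reasons:
            flags[e["id"]] = reasons
    return flags

entries = [
    {"id": 1, "ip": "203.0.113.7", "duration_s": 300},
    {"id": 2, "ip": "203.0.113.7", "duration_s": 45},   # same IP, very fast
    {"id": 3, "ip": "198.51.100.2", "duration_s": 400},
]
flags = flag_suspect_entries(entries)
```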

  16. Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Blais, P. D.; Davis, J. R., Jr.

    1977-01-01

    Thirty-five (35) furnace runs were carried out during this quarter, of which 25 produced a total of 120 web crystals. The two main thermal models for the dendritic growth process were completed and are being used to assist the design of the thermal geometry of the web growth apparatus. The first model, a finite element representation of the susceptor and crucible, was refined to give greater precision and resolution in the critical central region of the melt. The second thermal model, which describes the dissipation of the latent heat to generate thickness-velocity data, was completed. Dendritic web samples were fabricated into solar cells using a standard configuration and a standard process for a N(+) -P-P(+) configuration. The detailed engineering design was completed for a new dendritic web growth facility of greater width capability than previous facilities.

  17. The wireless Web and patient care.

    PubMed

    Bergeron, B P

    2001-01-01

    Wireless computing, when integrated with the Web, is poised to revolutionize the practice and teaching of medicine. As vendors introduce wireless Web technologies in the medical community that have been used successfully in the business and consumer markets, clinicians can expect profound increases in the amount of patient data, as well as the ease with which those data are acquired, analyzed, and disseminated. The enabling technologies involved in this transformation to the wireless Web range from the new generation of wireless PDAs, eBooks, and wireless data acquisition peripherals to new wireless network protocols. The rate-limiting step in the application of this technology in medicine is not technology per se but rather how quickly clinicians and their patients come to accept and appreciate the benefits and limitations of the application of wireless Web technology.

  18. Reviews

    NASA Astrophysics Data System (ADS)

    2007-09-01

    WE RECOMMEND: Energy Foresight (valuable and original GCSE curriculum support on DVD); Developing Scientific Literacy: Using News Media in the Classroom (this book helpfully evaluates science stories in today's media); Radioactivity Explained and Electricity Explained (interactive software ideal for classroom use); TEP Generator (wind-up generator specially designed for schools); SEP Energymeter (a joule meter with more uses than its appearance suggests); Into the Cool: Energy Flow, Thermodynamics and Life (this book explores the physics behind biology); CmapTools (handy software for mapping knowledge and resources); LogIT Black Box (this hub contains multiple sensors for endless experimental fun). WEB WATCH: Water; Web 2.0

  19. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose a method for extracting the informative DOM node from a Web page as preprocessing for Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser; its learning set consists of hundreds of Web pages together with annotations of the informative DOM nodes of those Web pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method trained on the same learning set. In our experiments, we evaluate the methods obtained by combining each informative DOM node extraction method (both the proposed method and the existing methods) with the existing noise elimination methods: Heur, which removes advertisements and link-lists by heuristics, and CE, which removes DOM nodes that also appear in other Web pages of the same Web site as the target page. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.
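
    As a rough illustration of the kind of layout-based scoring the abstract describes, the sketch below ranks DOM nodes by rendered area, text density, and link density. The node fields, weights, and example page are all invented for the example; the real LM method learns from an annotated training set rather than using fixed weights.

```python
# Hypothetical layout-based scoring, loosely modeled on the idea in the
# abstract (the real LM method is learned from hundreds of annotated pages;
# this toy version uses hand-set weights).
from dataclasses import dataclass

@dataclass
class DomNode:
    tag: str
    x: int
    y: int
    width: int          # rendered layout box, in pixels
    height: int
    text_len: int       # characters of visible text
    link_text_len: int  # characters of text inside <a> tags

def informativeness(node: DomNode, page_w: int, page_h: int) -> float:
    """Score a node: large, text-dense blocks with few links rank high."""
    area = node.width * node.height
    if area == 0 or node.text_len == 0:
        return 0.0
    area_ratio = area / (page_w * page_h)            # fraction of page covered
    text_density = node.text_len / area              # text per pixel
    link_ratio = node.link_text_len / node.text_len  # link-lists score low
    return area_ratio * text_density * (1.0 - link_ratio)

def extract_informative(nodes, page_w=1024, page_h=2000):
    return max(nodes, key=lambda n: informativeness(n, page_w, page_h))

nodes = [
    DomNode("div#nav",       0,   0, 1024,   80, text_len=200,  link_text_len=190),
    DomNode("div#content",   0, 100,  760, 1500, text_len=4000, link_text_len=200),
    DomNode("div#ads",     770, 100,  250, 1500, text_len=300,  link_text_len=280),
]
print(extract_informative(nodes).tag)  # the main content block wins
```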

  20. WebVR: an interactive web browser for virtual environments

    NASA Astrophysics Data System (ADS)

    Barsoum, Emad; Kuester, Falko

    2005-03-01

    The pervasive nature of web-based content has led to the development of applications and user interfaces that port between a broad range of operating systems and databases, while providing intuitive access to static and time-varying information. However, the integration of this vast resource into virtual environments has remained elusive. In this paper we present an implementation of a 3D Web Browser (WebVR) that enables the user to search the internet for arbitrary information and to seamlessly augment this information into virtual environments. WebVR provides access to the standard data input and query mechanisms offered by conventional web browsers, with the difference that it generates active texture-skins of the web contents that can be mapped onto arbitrary surfaces within the environment. Once mapped, the corresponding texture functions as a fully integrated web-browser that will respond to traditional events such as the selection of links or text input. As a result, any surface within the environment can be turned into a web-enabled resource that provides access to user-definable data. In order to leverage the continuous advancement of browser technology and to support both static and streamed content, WebVR uses ActiveX controls to extract the desired texture skin from industry-strength browsers, providing a unique mechanism for data fusion and extensibility.

  1. GoWeb: a semantic search engine for the life science web.

    PubMed

    Dietze, Heiko; Schroeder, Michael

    2009-10-01

    Current search engines are keyword-based. Semantic technologies promise a next generation of semantic search engines, which will be able to answer questions. Current approaches either apply natural language processing to unstructured text or they assume the existence of structured statements over which they can reason. Here, we introduce a third approach, GoWeb, which combines classical keyword-based Web search with text-mining and ontologies to navigate large result sets and facilitate question answering. We evaluate GoWeb on three benchmarks of questions on genes and functions, on symptoms and diseases, and on proteins and diseases. The first benchmark is based on the BioCreAtivE 1 Task 2 and links 457 gene names with 1352 functions. GoWeb finds 58% of the functional GeneOntology annotations. The second benchmark is based on 26 case reports and links symptoms with diseases. GoWeb achieves a 77% success rate, improving on an existing approach by nearly 20%. The third benchmark is based on 28 questions in the TREC genomics challenge and links proteins to diseases. GoWeb achieves a success rate of 79%. GoWeb's combination of classical Web search with text-mining and ontologies is a first step towards answering questions in the biomedical domain. GoWeb is online at: http://www.gopubmed.org/goweb.
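
    The combination GoWeb describes, classical keyword search whose hits are then grouped by ontology concepts, can be caricatured in a few lines. The corpus, the two-level mini ontology, and the string-matching "text mining" below are illustrative stand-ins, not GoWeb's actual pipeline.

```python
# Toy sketch: keyword hits are post-processed with a tiny hand-made ontology
# so that a result set can be navigated by concept. All data are invented.
from collections import defaultdict

ONTOLOGY_TERMS = {            # term -> parent concept (a 2-level mini ontology)
    "apoptosis": "biological process",
    "kinase activity": "molecular function",
    "dna repair": "biological process",
}

def search(query: str, corpus: list[str]) -> list[str]:
    """Plain keyword search: return snippets containing the query."""
    q = query.lower()
    return [doc for doc in corpus if q in doc.lower()]

def group_by_concept(snippets: list[str]) -> dict[str, list[str]]:
    """'Text-mine' snippets for ontology terms and group hits by concept."""
    groups = defaultdict(list)
    for snip in snippets:
        low = snip.lower()
        for term, parent in ONTOLOGY_TERMS.items():
            if term in low:
                groups[parent].append(snip)
    return dict(groups)

corpus = [
    "TP53 induces apoptosis after DNA damage",
    "BRCA1 participates in DNA repair pathways",
    "CDK1 shows kinase activity during mitosis",
]
hits = search("dna", corpus)
print(group_by_concept(hits))  # both hits fall under "biological process"
```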

  2. NMSim web server: integrated approach for normal mode-based geometric simulations of biologically relevant conformational transitions in proteins.

    PubMed

    Krüger, Dennis M; Ahmed, Aqeel; Gohlke, Holger

    2012-07-01

    The NMSim web server implements a three-step approach for multiscale modeling of protein conformational changes. First, the protein structure is coarse-grained using the FIRST software. Second, a rigid cluster normal-mode analysis provides low-frequency normal modes. Third, these modes are used to extend the recently introduced idea of constrained geometric simulations by biasing backbone motions of the protein, whereas side chain motions are biased toward favorable rotamer states (NMSim). The generated structures are iteratively corrected regarding steric clashes and stereochemical constraint violations. The approach allows performing three simulation types: unbiased exploration of conformational space; pathway generation by a targeted simulation; and radius of gyration-guided simulation. On a data set of proteins with experimentally observed conformational changes, the NMSim approach has been shown to be a computationally efficient alternative to molecular dynamics simulations for conformational sampling of proteins. The generated conformations and pathways of conformational transitions can serve as input to docking approaches or more sophisticated sampling techniques. The web server output is a trajectory of generated conformations, Jmol representations of the coarse-graining and of a subset of the trajectory, and data plots of structural analyses. The NMSim web server, accessible at http://www.nmsim.de, is free and open to all users with no login requirement.

  3. Who We Are: Today's Students Speak Out

    ERIC Educational Resources Information Center

    Blandford, Ayoka

    2012-01-01

    Today's students have been nicknamed the "Digital Generation," "Millennials," "Net Generation" and "Generation Next." They are frequently identified by their technological prowess and seem to work well with multiple stimuli (for example, designing a web site while listening to iTunes and responding to texts). While many research studies have been…

  4. Dynamic Courseware Generation on the WWW.

    ERIC Educational Resources Information Center

    Vassileva, Julita; Deters, Ralph

    1998-01-01

    The Dynamic Courseware Generator (DCG), which runs on a Web server, was developed for the authoring of adaptive computer-assisted learning courses. It generates an individual course according to the learner's goals and previous knowledge, and dynamically adapts the course according to the learner's success in knowledge acquisition. The tool may be…

  5. iRefWeb: interactive analysis of consolidated protein interaction data and their supporting evidence

    PubMed Central

    Turner, Brian; Razick, Sabry; Turinsky, Andrei L.; Vlasblom, James; Crowdy, Edgard K.; Cho, Emerson; Morrison, Kyle; Wodak, Shoshana J.

    2010-01-01

    We present iRefWeb, a web interface to protein interaction data consolidated from 10 public databases: BIND, BioGRID, CORUM, DIP, IntAct, HPRD, MINT, MPact, MPPI and OPHID. iRefWeb enables users to examine aggregated interactions for a protein of interest, and presents various statistical summaries of the data across databases, such as the number of organism-specific interactions, proteins and cited publications. Through links to source databases and supporting evidence, researchers may gauge the reliability of an interaction using simple criteria, such as the detection methods, the scale of the study (high- or low-throughput) or the number of cited publications. Furthermore, iRefWeb compares the information extracted from the same publication by different databases, and offers means to follow up on possible inconsistencies. We provide an overview of the consolidated protein–protein interaction landscape and show how it can be automatically cropped to aid the generation of meaningful organism-specific interactomes. iRefWeb can be accessed at: http://wodaklab.org/iRefWeb. Database URL: http://wodaklab.org/iRefWeb/ PMID:20940177
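
    A minimal sketch of the consolidation step described above, with invented records: interactions from several source databases are merged on a direction-independent protein pair, and the distinct supporting publications per pair give the kind of simple reliability cue the abstract mentions.

```python
# Illustrative consolidation of interaction records from multiple source
# databases; record contents are invented for the example.
from collections import defaultdict

records = [  # (source_db, protein_a, protein_b, pmid)
    ("BioGRID", "TP53", "MDM2", 11111),
    ("IntAct",  "MDM2", "TP53", 22222),
    ("MINT",    "TP53", "MDM2", 11111),   # same paper, different database
    ("DIP",     "BRCA1", "BARD1", 33333),
]

def consolidate(records):
    merged = defaultdict(lambda: {"sources": set(), "pmids": set()})
    for db, a, b, pmid in records:
        pair = tuple(sorted((a, b)))      # direction-independent key
        merged[pair]["sources"].add(db)
        merged[pair]["pmids"].add(pmid)
    return dict(merged)

interactions = consolidate(records)
for pair, ev in sorted(interactions.items()):
    print(pair, "databases:", len(ev["sources"]), "publications:", len(ev["pmids"]))
```

    Counting distinct PMIDs rather than raw records matters here: three databases may all cite the same single publication, which is weaker evidence than two independent studies.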

  6. Food Web Response to Habitat Restoration in Various Coastal Wetland Ecosystems

    NASA Astrophysics Data System (ADS)

    James, W. R.; Nelson, J. A.

    2017-12-01

    Coastal wetland habitats provide important ecosystem services, including supporting coastal food webs. These habitats are being lost rapidly. To combat the effects of these losses, millions of dollars have been invested to restore these habitats. However, the relationship between restoring habitat and restoring ecosystem functioning is poorly understood. Analyzing energy flow through food web comparisons between restored and natural habitats can give insights into ecosystem functioning. Using published stable isotope values from organisms in restored and natural habitats, we assessed the food web response to habitat restoration in salt marsh, mangrove, sea grass, and algal bed ecosystems. We ran Bayesian mixing models to quantify resource use by consumers and generated habitat-specific niche hypervolumes for each ecosystem to assess food web differences between restored and natural habitats. Salt marsh, mangrove, and sea grass ecosystems displayed functional differences between restored and natural habitats. Salt marsh and mangrove food webs varied in the amount of each resource used, while the sea grass food web displayed more variation between individual organisms. The algal bed food web showed little variation between restored and natural habitats.

  7. Use of ebRIM-based CSW with sensor observation services for registry and discovery of remote-sensing observations

    NASA Astrophysics Data System (ADS)

    Chen, Nengcheng; Di, Liping; Yu, Genong; Gong, Jianya; Wei, Yaxing

    2009-02-01

    Recent advances in Sensor Web geospatial data capture, such as high-resolution satellite imagery and Web-ready data processing and modeling technologies, have led to the generation of large numbers of datasets from real-time or near real-time observations and measurements. Finding which sensor or data complies with criteria such as specific times, locations, and scales has become a bottleneck for Sensor Web-based applications, especially remote-sensing observations. In this paper, an architecture integrating the Sensor Observation Service (SOS) with the Open Geospatial Consortium (OGC) Catalogue Service-Web profile (CSW) is put forward. The architecture consists of a distributed geospatial sensor observation service, a geospatial catalogue service based on the ebXML Registry Information Model (ebRIM), SOS search and registry middleware, and a geospatial sensor portal. The SOS search and registry middleware finds potential SOSs, generates data granule information, and inserts the records into the CSW. The contents and sequence of the services, the available observations, and the metadata of the observations registry are described. A prototype system is designed and implemented using the service middleware technology and a standard interface and protocol. The feasibility and the response time of registry and retrieval of observations are evaluated using a realistic Earth Observing-1 (EO-1) SOS scenario. Extracting information from an SOS requires the same execution time as record generation for the CSW. The average data retrieval response time in SOS+CSW mode is 17.6% of that of the SOS-alone mode. The proposed architecture offers greater advantages for SOS search and observation data retrieval than existing Sensor Web-enabled systems.

  8. The Creative task Creator: a tool for the generation of customized, Web-based creativity tasks.

    PubMed

    Pretz, Jean E; Link, John A

    2008-11-01

    This article presents a Web-based tool for the creation of divergent-thinking and open-ended creativity tasks. A Java program generates HTML forms with PHP scripting that run an Alternate Uses Task and/or open-ended response items. Researchers may specify their own instructions, objects, and time limits, or use default settings. Participants can also be prompted to select their best responses to the Alternate Uses Task (Silvia et al., 2008). Minimal programming knowledge is required. The program runs on any server, and responses are recorded in a standard MySQL database. Responses can be scored using the consensual assessment technique (Amabile, 1996) or Torrance's (1998) traditional scoring method. Adoption of this Web-based tool should facilitate creativity research across cultures and access to eminent creators. The Creative Task Creator may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
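
    To illustrate the general idea of generating a task form from a researcher-supplied specification: the actual tool is a Java program emitting HTML with PHP scripting, so the sketch below is a language-neutral re-sketch in Python, and the field names and target script are hypothetical.

```python
# Hypothetical form generator in the spirit of the Creative Task Creator:
# a researcher-supplied object, instructions, and time limit are turned into
# an HTML form for an Alternate Uses Task.
from html import escape

def alternate_uses_form(obj: str, instructions: str, time_limit_s: int) -> str:
    return f"""<form method="post" action="submit.php">
  <p>{escape(instructions)}</p>
  <p>Object: <b>{escape(obj)}</b> (time limit: {time_limit_s} s)</p>
  <textarea name="uses" rows="10" cols="60"></textarea>
  <input type="submit" value="Done">
</form>"""

page = alternate_uses_form("brick", "List as many unusual uses as you can.", 180)
print(page)
```

    Escaping the researcher-supplied strings keeps arbitrary instructions (quotes, angle brackets) from breaking the generated markup.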

  9. High temperature (900-1300 C) mechanical behaviour of dendritic web grown silicon ribbons - Strain rate and temperature dependence of the yield stress

    NASA Technical Reports Server (NTRS)

    Mathews, V. K.; Gross, T. S.

    1987-01-01

    The mechanical behavior of dendritic web Si ribbons close to the melting point was studied experimentally. The goal of the study was to generate data for modeling the generation of stresses and dislocation structures during growth of dendritic web Si ribbons, thereby permitting modifications to the production process, i.e., the temperature profile, to lower production costs for the photovoltaic ribbons. A laser was used to cut specimens in the direction of growth of sample ribbons, which were then subjected to tensile tests at temperatures up to 1300 C in an Ar atmosphere. The tensile strengths of the samples increased when the temperature rose above 1200 C, a phenomenon that was attributed to the diffusion of oxygen atoms to the quasi-dislocation sites. The migration to the potential dislocation sites effectively locked the dislocations.

  10. CycloPs: generating virtual libraries of cyclized and constrained peptides including nonnatural amino acids.

    PubMed

    Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J

    2011-04-25

    We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural commercially available amino acids. The software is written in the cross-platform Python programming language, and features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface are designed to enable the rapid, convenient generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie.
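
    The enumerate-then-filter idea can be sketched as follows. The residue alphabet and the toy synthesizability rule are invented for illustration, and the sketch stops at sequence strings rather than the SMILES/SDF structures CycloPs actually emits.

```python
# Minimal virtual-library enumeration in the spirit of CycloPs: candidate
# peptide sequences are generated combinatorially over an alphabet mixing
# natural and nonnatural residues, then filtered by a toy rule. The alphabet
# and filter are illustrative only.
from itertools import product

NATURAL = ["A", "G", "S", "K"]
NONNATURAL = ["Orn", "Nle"]          # ornithine, norleucine (illustrative picks)
ALPHABET = NATURAL + NONNATURAL

def synthesizable(seq) -> bool:
    """Toy filter: allow at most one nonnatural residue per peptide."""
    return sum(r in NONNATURAL for r in seq) <= 1

def library(length: int):
    return ["-".join(seq) for seq in product(ALPHABET, repeat=length)
            if synthesizable(seq)]

lib = library(2)
print(len(lib))   # 32 of the 36 dipeptides survive the filter
```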

  11. Social Networking Sites as Communication, Interaction, and Learning Environments: Perceptions and Preferences of Distance Education Students

    ERIC Educational Resources Information Center

    Bozkurt, Aras; Karadeniz, Abdulkadir; Kocdar, Serpil

    2017-01-01

    The advent of Web 2.0 technologies transformed online networks into interactive spaces in which user-generated content has become the core material. With the possibilities that emerged from Web 2.0, social networking sites became very popular. The capability of social networking sites promises opportunities for communication and interaction,…

  12. Development of Web-Based Learning Application for Generation Z

    ERIC Educational Resources Information Center

    Hariadi, Bambang; Dewiyani Sunarto, M. J.; Sudarmaningtyas, Pantjawati

    2016-01-01

    This study aimed to develop a web-based learning application as a form of learning revolution. The form of learning revolution includes the provision of unlimited teaching materials, real time class organization, and is not limited by time or place. The implementation of this application is in the form of hybrid learning by using Google Apps for…

  13. Web-Based Learning as a Tool of Knowledge Continuity

    ERIC Educational Resources Information Center

    Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita; Rambely, Azmin Sham

    2013-01-01

    The outbreak of information in a borderless world has prompted lecturers to move forward together with the technological innovation and erudition of knowledge in performing his/her responsibility to educate the young generations to be able to stand above the crowd at the global scene. Teaching and Learning through web-based learning platform is a…

  14. Creating and Maintaining Data-Driven Course Web Sites.

    ERIC Educational Resources Information Center

    Heines, Jesse M.

    This paper deals with techniques for reducing the amount of work that needs to be redone each semester when one prepares an existing course Web site for a new class. The key concept is algorithmic generation of common page elements while still allowing full control over page content via WYSIWYG tools like Microsoft FrontPage and Macromedia…

  15. Changes in College Students' Perceptions of Use of Web-Based Resources for Academic Tasks with Wikipedia Projects: A Preliminary Exploration

    ERIC Educational Resources Information Center

    Traphagan, Tomoko; Traphagan, John; Dickens, Linda Neavel; Resta, Paul

    2014-01-01

    Motivated by the need to facilitate Net Generation students' information literacy (IL), or more specifically, to promote student understanding of legitimate, effective use of Web-based resources, this exploratory study investigated how analyzing, writing, posting, and monitoring Wikipedia entries might help students develop critical…

  16. W.E.B. Du Bois and the Women of Hull-House, 1895-1899.

    ERIC Educational Resources Information Center

    Deegan, Mary Jo

    1988-01-01

    Uses correspondence generated by the writing of "The Philadelphia Negro" to describe the collaborative relationship between W.E.B. DuBois and women sociologists. Suggests that this historical bond between Black men and White women in their search for a more egalitarian future has the potential to inform efforts toward greater equity now…

  17. Facilitating Participation: From the EML Web Site to the Learning Network for Learning Design

    ERIC Educational Resources Information Center

    Hummel, Hans G. K.; Tattersall, Colin; Burgos, Daniel; Brouns, Francis; Kurvers, Hub; Koper, Rob

    2005-01-01

    This article investigates conditions for increasing active participation in on-line communities. As a case study, we use three generations of facilities designed to promote learning in the area of Educational Modelling Languages. Following a description of early experience with a conventional web site and with a community site offering facilities…

  18. Developing a Web 2.0-Based System with User-Authored Content for Community Use and Teacher Education

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Sharp, Amy; Bulu, Sanser; Benz, Mike; Stough, Laura M.

    2010-01-01

    We report on an investigation into the design, development, implementation, and evaluation of an informational and instructional Website in order to generate guidelines for instructional designers of read/write Web environments. We describe the process of design and development research, the problem addressed, the theory-based solution, and the…

  19. Comparative Analysis of Homepage Website Visibility and Academic Rankings for UK Universities

    ERIC Educational Resources Information Center

    Weideman, Melius

    2013-01-01

    Introduction: The pressure on universities worldwide has increased to transform themselves from isolated educational institutions to profit-generating businesses. The visibility of a Web page plays a role in attracting potential clients, as more and more young users are depending on the Web for their everyday information needs. One of the purposes…

  20. Drying of fiber webs

    DOEpatents

    Warren, David W.

    1997-01-01

    A process and an apparatus for high-intensity drying of fiber webs or sheets, such as newsprint, printing and writing papers, packaging paper, and paperboard or linerboard, as they are formed on a paper machine. The invention uses direct contact between the wet fiber web or sheet and various molten heat transfer fluids, such as liquified eutectic metal alloys, to impart heat at high rates over prolonged durations, in order to achieve ambient boiling of moisture contained within the web. The molten fluid contact process causes steam vapor to emanate from the web surface, without dilution by ambient air; and it is differentiated from the evaporative drying techniques of the prior industrial art, which depend on the use of steam-heated cylinders to supply heat to the paper web surface, and ambient air to carry away moisture evaporated from the web surface. Contact between the wet fiber web and the molten fluid can be accomplished either by submersing the web within a molten bath or by coating the surface of the web with the molten media. Because of the high interfacial surface tension between the molten media and the cellulose fiber comprising the paper web, the molten media does not appreciably stick to the paper after it is dried. Steam generated from the paper web is collected and condensed without dilution by ambient air to allow heat recovery at significantly higher temperature levels than attainable in evaporative dryers.

  1. 42 CFR 423.128 - Dissemination of Part D plan information.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... plan sponsor's toll free customer service line or by accessing the plan sponsor's internet Web site. (8... redetermination processes via an Internet Web site; and (iii) A system that transmits codes to network pharmacies...— (1) A toll-free customer call center that— (i) Is open during usual business hours. (ii) Provides...

  2. 12 CFR Appendix A to Part 332 - Model Privacy Form

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information that the institution collects and shares. All institutions must use the term “Social Security... the applicable opt-out methods described: Telephone, such as by a toll-free number; a Web site; or use... appropriate. An institution that allows consumers to opt out online must provide either a specific Web address...

  3. 12 CFR Appendix A to Part 216 - Model Privacy Form

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... information that the institution collects and shares. All institutions must use the term “Social Security... appropriate. An institution that allows consumers to opt out online must provide either a specific Web address that takes consumers directly to the opt-out page or a general Web address that provides a clear and...

  4. The Virtual Campus: Trends for Higher Education and Training.

    ERIC Educational Resources Information Center

    Verdejo, Felisa, Ed.; Davies, Gordon, Ed.

    This volume presents 27 papers given at a conference on the virtual campus. Papers are grouped into five parts: (1) keynote presentations, (2) global approaches, (3) evaluation studies, (4) collaborative learning and group activities, and (5) web tools and web applications. The papers are: "New Wine and Old Bottles? Tele-learning, Telematics,…

  5. 40 CFR 63.4282 - What parts of my plant does this subpart cover?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... are used in fabric and other textiles web coating and printing operations. The regulated materials for the web coating and printing subcategory are the coating, printing, thinning and cleaning materials... materials to a substrate on the coating or printing line to prepare it for coating or printing material...

  6. Transforming School Communities: Creating Dialogue Using Web 2.0 Tools

    ERIC Educational Resources Information Center

    Soule, Helen

    2008-01-01

    Web 2.0 tools should be an important part of every district's communication strategy, creating environments for collaboration in ways never possible before. Most of them are free, inexpensive, easy to use, and require little set up. When combined with basic communication principles and careful planning, they can expand a district's reach, increase…

  7. Assembling Webs of Support: Child Domestic Workers in India

    ERIC Educational Resources Information Center

    Wasiuzzaman, Shaziah; Wells, Karen

    2010-01-01

    This paper uses ethnographic and qualitative interview data with Muslim child domestic workers, their families and employers to investigate the social ties between young workers and their employers. Our analysis shows that working-class families use children's domestic work with middle-class families as part of a web of resources to protect them…

  8. From Static to Dynamic: Choosing and Implementing a Web-Based CMS

    ERIC Educational Resources Information Center

    Kneale, Ruth

    2008-01-01

    Working as systems librarian for the Advanced Technology Solar Telescope (ATST), a project for the National Solar Observatory (NSO) based in Tucson, Arizona, a large part of the author's responsibilities involve running the web site. She began looking into content management systems (CMSs), specifically ones for website control. A CMS is generally…

  9. From Zero to Web 2.0: Part 3

    ERIC Educational Resources Information Center

    Woodard, Amber

    2010-01-01

    When the Vise Library at Cumberland University set down the path to their digital makeover, they were optimistic that they could completely revolutionize their library's web presence. They tried to keep their goals manageable, but it turns out they did not "quite" achieve 100% success. In this article, the author describes how they have…

  10. Using the Cognitive Apprenticeship Web-Based Argumentation System to Improve Argumentation Instruction

    ERIC Educational Resources Information Center

    Tsai, Chun-Yen; Jack, Brady Michael; Huang, Tai-Chu; Yang, Jin-Tan

    2012-01-01

    This study investigated how the instruction of argumentation skills could be promoted by using an online argumentation system. This system entitled "Cognitive Apprenticeship Web-based Argumentation" (CAWA) system was based on cognitive apprenticeship model. One hundred eighty-nine fifth grade students took part in this study. A quasi-experimental…

  11. A Role-Playing Virtual World for Web-Based Application Courses

    ERIC Educational Resources Information Center

    Depradine, Colin

    2007-01-01

    With the rapid development of the information communication and technology (ICT) infrastructure in the Caribbean, there is an increasing demand for skilled software developers to meet the ICT needs of the region. Consequently, the web-based applications course offered at the University of the West Indies, has been redeveloped. One major part of…

  12. Wandering: A Web-Based Platform for the Creation of Location-Based Interactive Learning Objects

    ERIC Educational Resources Information Center

    Barak, Miri; Ziv, Shani

    2013-01-01

    Wandering is an innovative web-based platform that was designed to facilitate outdoor, authentic, and interactive learning via the creation of location-based interactive learning objects (LILOs). Wandering was integrated as part of a novel environmental education program among middle school students. This paper describes the Wandering platform's…

  13. Selection and Cataloging of Adult Pornography Web Sites for Academic Libraries

    ERIC Educational Resources Information Center

    Dilevko, Juris; Gottlieb, Lisa

    2004-01-01

    Pornography has become part of mainstream culture. As such, it has become a subject of academic research, and this, in turn, has implications for university libraries. Focusing on adult Internet pornography, this study suggests that academic libraries should provide access to adult pornographic Web sites by including them in their online catalogs.

  14. Ontology Research and Development. Part 2 - A Review of Ontology Mapping and Evolving.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Reviews ontology research and development, specifically ontology mapping and evolving. Highlights include an overview of ontology mapping projects; maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired; and ontology's role and the future of the World Wide Web, or Semantic Web. (Contains 55…

  15. Characteristics and Communication--Effectiveness of "Fortune 500" Company Corporate Homepages

    ERIC Educational Resources Information Center

    Truell, Allen D.; Zhao, Jensen J.; Alexander, Melody W.; Whitesel, Joel A.

    2005-01-01

    The Internet and its component parts, email and the World Wide Web (Web), have had a tremendous impact on how companies communicate with their various audiences. Thus, the twofold purpose of this study was (a) to determine the characteristics of "Fortune 500" company homepage components and (b) to determine the communication effectiveness of…

  16. Community College Faculty and Web-Based Classes

    ERIC Educational Resources Information Center

    Smith, Vernon C.; Rhoades, Gary

    2006-01-01

    Web-based, e-learning classes, or online classes that use a proprietary course management system such as Blackboard, are an increasingly prominent part of higher education, particularly in community colleges. In fact, more than three-quarters of community colleges now offer the same course in face-to-face and online modes. And community colleges…

  17. Exploring the Influence of Web-Based Portfolio Development on Learning to Teach Elementary Science

    ERIC Educational Resources Information Center

    Avraamidou, Lucy; Zembal-Saul, Carla

    2006-01-01

    This qualitative case study examined web-based portfolio development in the service of supporting reflective thinking and learning within the innovative context of Professional Development Schools. Specifically, this study investigated the nature of the evidence-based philosophies developed by prospective teachers as the central part of the…

  18. Extensible 3D (X3D) Earth Technical Requirements Workshop Summary Report

    DTIC Science & Technology

    2007-08-01

world in detail already, but rarely interconnect to one another • Most interesting part of "virtual reality" (VR) is reality, which means physics... • Two Web-Enabled Modeling and Simulation (WebSim) symposia have demonstrated that large partnerships can work • Server-side 3D graphics • Our

  19. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    PubMed

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
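The generate-and-grade pattern in this record translates readily to other stacks. Below is a minimal Python sketch of the idea, assuming a made-up question set (mean and standard deviation) and a seeding scheme keyed to the student ID; none of these details come from Stockburger's ASP implementation.

```python
import random
import statistics

def make_assignment(student_id: str, n: int = 10):
    """Generate an individualized data set, seeded by student ID so it is reproducible."""
    rng = random.Random(student_id)  # same student always gets the same numbers
    data = [rng.randint(10, 99) for _ in range(n)]
    key = {"mean": statistics.mean(data), "sd": statistics.stdev(data)}
    return data, key

def grade(answers: dict, key: dict, tol: float = 0.01) -> float:
    """Return the fraction of answers within a small tolerance of the answer key."""
    correct = sum(
        1 for q, v in key.items() if abs(answers.get(q, float("inf")) - v) <= tol
    )
    return correct / len(key)

data, key = make_assignment("student42")
score = grade({"mean": key["mean"], "sd": key["sd"]}, key)
```

Seeding the generator with the student ID means the server can regenerate the same assignment at grading time instead of storing it, which is one plausible reading of how such a system keeps per-student assignments cheap.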

  20. A web-based tool for groundwater mapping and drought analysis

    NASA Astrophysics Data System (ADS)

    Christensen, S.; Burns, M.; Jones, N.; Strassberg, G.

    2012-12-01

    In 2011-2012, the state of Texas saw the worst one-year drought on record. Fluctuations in gravity measured by GRACE satellites indicate that as much as 100 cubic kilometers of water was lost during this period. Much of this came from reservoirs and shallow soil moisture, but a significant amount came from aquifers. In response to this crisis, a Texas Drought Technology Steering Committee (TDTSC) consisting of academics and water managers was formed to develop new tools and strategies to assist the state in monitoring, predicting, and responding to drought events. In this presentation, we describe one of the tools that was developed as part of this effort. When analyzing the impact of drought on groundwater levels, it is fairly common to examine time series data at selected monitoring wells. However, accurately assessing impacts and trends requires both spatial and temporal analysis involving the development of detailed water level maps at various scales. Creating such maps in a flexible and rapid fashion is critical for effective drought analysis, but can be challenging due to the massive amounts of data involved and the processing required to generate such maps. Furthermore, wells are typically not sampled at the same points in time, and so developing a water table map for a particular date requires both spatial and temporal interpolation of water elevations. To address this challenge, a Cloud-based water level mapping system was developed for the state of Texas. The system is based on the Texas Water Development Board (TWDB) groundwater database, but can be adapted to use other databases as well. The system involves a set of ArcGIS workflows running on a server with a web-based front end and a Google Earth plug-in. A temporal interpolation geoprocessing tool was developed to estimate the piezometric heads for all wells in a given region at a specific date using a regression analysis. 
This interpolation tool is coupled with other geoprocessing tools to filter data and interpolate point elevations spatially to produce water level, drawdown, and depth-to-groundwater maps. The web interface allows users to generate these maps at locations and times of interest. A sequence of maps can be generated over a period of time and animated to visualize how water levels are changing. The time series regression analysis can also be used to make short-term predictions of future water levels.
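The temporal-interpolation step described above can be illustrated with an ordinary least-squares fit of head against time, evaluated at the target date. This is a minimal sketch assuming a linear trend model; the actual regression form used by the Texas tool is not specified in the abstract.

```python
from datetime import date

EPOCH = date(2000, 1, 1)  # arbitrary reference for day numbering

def fit_trend(samples):
    """Least-squares line through (measurement date, head) pairs.

    samples: list of (datetime.date, head_in_meters) tuples.
    Returns (slope, intercept) with time expressed as days since EPOCH.
    """
    xs = [(d - EPOCH).days for d, _ in samples]
    ys = [h for _, h in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def head_at(samples, when):
    """Estimate the piezometric head at an unsampled date from the fitted trend."""
    slope, intercept = fit_trend(samples)
    return slope * (when - EPOCH).days + intercept
```

With heads estimated at a common date for every well in a region, a spatial interpolation step can then produce a water level surface for that date.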

  1. A document centric metadata registration tool constructing earth environmental data infrastructure

    NASA Astrophysics Data System (ADS)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task of the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at the following web site: http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the geographic scope covered or the creation date. The metadata standards including these common attributes are published by the geographic information technical committee (TC211) of ISO (the International Organization for Standardization) as the specifications ISO 19115:2003 and 19139:2007. Accordingly, DIAS metadata is developed based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata is useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize the extra tasks and time spent creating metadata. The other is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and that there is no extra cost for data providers to generate metadata. The tool is developed as a Web application, so it demands no additional software from data providers beyond a web browser. 
The interface of the tool presents the section titles of the documents; by filling out the content of each section, the documents for the data sets are automatically published in PDF and HTML formats. Furthermore, a metadata XML file compliant with ISO 19115 and ISO 19139 is created at the same time. The generated metadata are managed in the metadata database of the DIAS project, and will be used in various ISO 19139-compliant metadata management tools, such as GeoNetwork.

  2. Web-based technical assistance and training to promote community tobacco control policy change.

    PubMed

    Young, Walter F; Montgomery, Debbie; Nycum, Colleen; Burns-Martin, Lavon; Buller, David B

    2006-01-01

In 1998 the tobacco industry was released from legal claims through a settlement that provided monetary relief for states. A significant expansion of tobacco control activity in many states created a need to develop local capacity. Technical assistance and training for new and experienced staff became a significant challenge for tobacco control leadership. In Colorado, this challenge was addressed in part through the development of a technical assistance and training Web site designed for local tobacco control staff and coalition members. Researchers, technical Web site development specialists, state health agency staff, and state tobacco control coalition staff collaborated to develop, promote, and test the efficacy of this Web site. The work group embodied a range of skills including tobacco control, Web site technical development, marketing, training, and project management. Persistent marketing, updating of Web site content, and institutionalizing the site as a principal source of information and training were key to its use by community coalition members.

  3. Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education

    PubMed Central

    Boulos, Maged N Kamel; Maramba, Inocencio; Wheeler, Steve

    2006-01-01

Background We have witnessed a rapid increase in the use of Web-based 'collaborationware' in recent years. These Web 2.0 applications, particularly wikis, blogs and podcasts, have been increasingly adopted by many online health-related professional and educational services. Because of their ease of use and rapidity of deployment, they offer the opportunity for powerful information sharing and ease of collaboration. Wikis are Web sites that can be edited by anyone who has access to them. The word 'blog' is a contraction of 'Web Log' – an online Web journal that can offer a resource-rich multimedia environment. Podcasts are repositories of audio and video materials that can be "pushed" to subscribers, even without user intervention. These audio and video files can be downloaded to portable media players that can be taken anywhere, providing the potential for "anytime, anywhere" learning experiences (mobile learning). Discussion Wikis, blogs and podcasts are all relatively easy to use, which partly accounts for their proliferation. The fact that there are many free and Open Source versions of these tools may also be responsible for their explosive growth. Thus it would be relatively easy to implement any or all within a Health Professions' Educational Environment. Paradoxically, some of their disadvantages also relate to their openness and ease of use. With virtually anybody able to alter, edit or otherwise contribute to the collaborative Web pages, it can be problematic to gauge the reliability and accuracy of such resources. While arguably, the very process of collaboration leads to a Darwinian-type 'survival of the fittest' content within a Web page, the veracity of these resources can be assured through careful monitoring, moderation, and operation of the collaborationware in a closed and secure digital environment. 
Empirical research is still needed to build our pedagogic evidence base about the different aspects of these tools in the context of medical/health education. Summary and conclusion If effectively deployed, wikis, blogs and podcasts could offer a way to enhance students', clinicians' and patients' learning experiences, and deepen levels of learners' engagement and collaboration within digital learning environments. Therefore, research should be conducted to determine the best ways to integrate these tools into existing e-Learning programmes for students, health professionals and patients, taking into account the different, but also overlapping, needs of these three audience classes and the opportunities of virtual collaboration between them. Of particular importance is research into novel integrative applications, to serve as the "glue" to bind the different forms of Web-based collaborationware synergistically in order to provide a coherent wholesome learning experience. PMID:16911779

  4. Molecular structure input on the web.

    PubMed

    Ertl, Peter

    2010-02-02

A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecular structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus a web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example, the popular JME Molecule Editor, is described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.

  5. Enrichment and Ranking of the YouTube Tag Space and Integration with the Linked Data Cloud

    NASA Astrophysics Data System (ADS)

    Choudhury, Smitashree; Breslin, John G.; Passant, Alexandre

The increase of personal digital cameras with video functionality and video-enabled camera phones has increased the amount of user-generated videos on the Web. People are spending more and more time viewing online videos as a major source of entertainment and "infotainment". Social websites allow users to assign shared free-form tags to user-generated multimedia resources, thus generating annotations for objects with a minimum amount of effort. Tagging allows communities to organise their multimedia items into browseable sets, but these tags may be poorly chosen and related tags may be omitted. Current techniques to retrieve, integrate and present this media to users are deficient and need improvement. In this paper, we describe a framework for semantic enrichment, ranking and integration of web video tags using Semantic Web technologies. Semantic enrichment of folksonomies can bridge the gap between the uncontrolled and flat structures typically found in user-generated content and structures provided by the Semantic Web. The enhancement of tag spaces with semantics has been accomplished through two major tasks: (1) a tag space expansion and ranking step; and (2) through concept matching and integration with the Linked Data cloud. We have explored social, temporal and spatial contexts to enrich and extend the existing tag space. The resulting semantic tag space is modelled via a local graph based on co-occurrence distances for ranking. A ranked tag list is mapped and integrated with the Linked Data cloud through the DBpedia resource repository. Multi-dimensional context filtering for tag expansion means that tag ranking is much easier and provides less ambiguous tag-to-concept matching.

  6. Standard Biological Parts Knowledgebase

    PubMed Central

    Galdzicki, Michal; Rodriguez, Cesar; Chandran, Deepak; Sauro, Herbert M.; Gennari, John H.

    2011-01-01

We have created the Knowledgebase of Standard Biological Parts (SBPkb) as a publicly accessible Semantic Web resource for synthetic biology (sbolstandard.org). The SBPkb allows researchers to query and retrieve standard biological parts for research and use in synthetic biology. Its initial version includes all of the information about parts stored in the Registry of Standard Biological Parts (partsregistry.org). SBPkb transforms this information so that it is computable, using our semantic framework for synthetic biology parts. This framework, known as SBOL-semantic, was built as part of the Synthetic Biology Open Language (SBOL), a project of the Synthetic Biology Data Exchange Group. SBOL-semantic represents commonly used synthetic biology entities, and its purpose is to improve the distribution and exchange of descriptions of biological parts. In this paper, we describe the data, our methods for transformation to SBPkb, and finally, we demonstrate the value of our knowledgebase with a set of sample queries. We use RDF technology and SPARQL queries to retrieve candidate “promoter” parts that are known to be both negatively and positively regulated. This method provides new web-based data access, enabling searches for parts that were not previously possible. PMID:21390321
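The "promoter" query described above can be mimicked over a toy in-memory triple set. The predicate and class names below are invented placeholders, not the real SBOL-semantic vocabulary, and plain Python stands in for a SPARQL engine; the point is only the conjunctive triple-pattern matching that such a query performs.

```python
# Toy RDF-style triples: (subject, predicate, object). Names are
# illustrative placeholders, not actual SBOL-semantic terms.
TRIPLES = {
    ("BBa_R0051", "rdf:type", "sbol:Promoter"),
    ("BBa_R0051", "ex:regulation", "ex:PositivelyRegulated"),
    ("BBa_R0051", "ex:regulation", "ex:NegativelyRegulated"),
    ("BBa_R0040", "rdf:type", "sbol:Promoter"),
    ("BBa_R0040", "ex:regulation", "ex:NegativelyRegulated"),
}

def promoters_with_both_regulation(triples):
    """Mimic a SPARQL query selecting promoters that are both positively
    and negatively regulated (a conjunction of triple patterns)."""
    promoters = {s for s, p, o in triples
                 if p == "rdf:type" and o == "sbol:Promoter"}
    return sorted(
        s for s in promoters
        if (s, "ex:regulation", "ex:PositivelyRegulated") in triples
        and (s, "ex:regulation", "ex:NegativelyRegulated") in triples
    )
```

In the real SBPkb the same conjunction would be expressed as SPARQL basic graph patterns against the RDF store rather than Python set membership.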

  7. Group for High Resolution Sea Surface Temperature (GHRSST) analysis fields inter-comparisons—Part 2: Near real time web-based level 4 SST Quality Monitor (L4-SQUAM)

    NASA Astrophysics Data System (ADS)

    Dash, Prasanjit; Ignatov, Alexander; Martin, Matthew; Donlon, Craig; Brasnett, Bruce; Reynolds, Richard W.; Banzon, Viva; Beggs, Helen; Cayula, Jean-Francois; Chao, Yi; Grumbine, Robert; Maturi, Eileen; Harris, Andy; Mittaz, Jonathan; Sapper, John; Chin, Toshio M.; Vazquez-Cuervo, Jorge; Armstrong, Edward M.; Gentemann, Chelle; Cummings, James; Piollé, Jean-François; Autret, Emmanuelle; Roberts-Jones, Jonah; Ishizaki, Shiro; Høyer, Jacob L.; Poulter, Dave

    2012-11-01

    There are a growing number of level 4 (L4; gap-free gridded) sea surface temperature (SST) products generated by blending SST data from various sources which are available for use in a wide variety of operational and scientific applications. In most cases, each product has been developed for a specific user community with specific requirements guiding the design of the product. Consequently differences between products are implicit. In addition, anomalous atmospheric conditions, satellite operations and production anomalies may occur which can introduce additional differences. This paper describes a new web-based system called the L4 SST Quality Monitor (L4-SQUAM) developed to monitor the quality of L4 SST products. L4-SQUAM intercompares thirteen L4 products with 1-day latency in an operational environment serving the needs of both L4 SST product users and producers. Relative differences between products are computed and visualized using maps, histograms, time series plots and Hovmöller diagrams, for all combinations of products. In addition, products are compared to quality controlled in situ SST data (available from the in situ SST Quality Monitor, iQUAM, companion system) in a consistent manner. A full history of products statistics is retained in L4-SQUAM for time series analysis. L4-SQUAM complements the two other Group for High Resolution SST (GHRSST) tools, the GHRSST Multi Product Ensemble (GMPE) and the High Resolution Diagnostic Data Set (HRDDS) systems, documented in part 1 of this paper and elsewhere, respectively. Our results reveal significant differences between SST products in coastal and open ocean areas. Differences of >2 °C are often observed at high latitudes partly due to different treatment of the sea-ice transition zone. Thus when an ice flag is available, the intercomparisons are performed in two ways: including and excluding ice-flagged grid points. 
Such differences are significant and call for a community effort to understand their root cause and ensure consistency between SST products. Future work focuses on including the remaining daily L4 SST products, accommodating newer L4 SSTs that resolve diurnal variability, and evaluating retrospectively regenerated L4 SSTs to support satellite data reprocessing efforts aimed at generating improved SST Climate Data Records.

  8. Evaluation of a metal shear web selectively reinforced with filamentary composites for space shuttle application. Phase 1 summary report: Shear web design development

    NASA Technical Reports Server (NTRS)

    Laakso, J. H.; Zimmerman, D. K.

    1972-01-01

    An advanced composite shear web design concept was developed for the Space Shuttle orbiter main engine thrust beam structure. Various web concepts were synthesized by a computer-aided adaptive random search procedure. A practical concept is identified having a titanium-clad + or - 45 deg boron/epoxy web plate with vertical boron/epoxy reinforced aluminum stiffeners. The boron-epoxy laminate contributes to the strength and stiffness efficiency of the basic web section. The titanium-cladding functions to protect the polymeric laminate parts from damaging environments and is chem-milled to provide reinforcement in selected areas. Detailed design drawings are presented for both boron/epoxy reinforced and all-metal shear webs. The weight saving offered is 24% relative to all-metal construction at an attractive cost per pound of weight saved, based on the detailed designs. Small scale element tests substantiate the boron/epoxy reinforced design details in critical areas. The results show that the titanium-cladding reliably reinforces the web laminate in critical edge load transfer and stiffener fastener hole areas.

  9. Hydrology and grazing jointly control a large-river food web.

    PubMed

    Strayer, David L; Pace, Michael L; Caraco, Nina F; Cole, Jonathan J; Findlay, Stuart E G

    2008-01-01

    Inputs of fresh water and grazing both can control aquatic food webs, but little is known about the relative strengths of and interactions between these controls. We use long-term data on the food web of the freshwater Hudson River estuary to investigate the importance of, and interactions between, inputs of fresh water and grazing by the invasive zebra mussel (Dreissena polymorpha). Both freshwater inputs and zebra mussel grazing have strong, pervasive effects on the Hudson River food web. High flow tended to reduce population size in most parts of the food web. High grazing also reduced populations in the planktonic food web, but increased populations in the littoral food web, probably as a result of increases in water clarity. The influences of flow and zebra mussel grazing were roughly equal (i.e., within a factor of 2) for many variables over the period of our study. Zebra mussel grazing made phytoplankton less sensitive to freshwater inputs, but water clarity and the littoral food web more sensitive to freshwater inputs, showing that interactions between these two controlling factors can be strong and varied.

  10. Cleanups In My Community (CIMC) - Base Realignment and Closure (BRAC) Superfund Sites, National Layer

    EPA Pesticide Factsheets

This data layer provides access to Base Realignment and Closure (BRAC) Superfund Sites as part of the CIMC web service. EPA works with DoD to facilitate the reuse and redevelopment of BRAC federal properties. When the BRAC program began in the early 1990s, EPA worked with DoD and the states to identify uncontaminated areas, and these parcels were immediately made available for reuse. Since then EPA has worked with DoD to clean up the contaminated portions of bases. These are usually parcels that were training ranges, landfills, maintenance facilities and other past waste-disposal areas. Superfund is a program administered by the EPA to locate, investigate, and clean up the worst hazardous waste sites throughout the United States. EPA administers the Superfund program in cooperation with individual states and tribal governments. These sites include abandoned warehouses, manufacturing facilities, processing plants, and landfills, the key word here being abandoned. This data layer shows Superfund Sites that are located at BRAC Federal Facilities. Additional Superfund sites and other BRAC sites (those that are not Superfund sites) are included in other data layers as part of this web service. BRAC Superfund Sites shown in this web service are derived from the epa.gov website and include links to the relevant web pages within the attribute table. Data about BRAC Superfund Sites are located on their own EPA web pages, and CIMC links to those pages. The CIMC web service

  11. Food webs for parasitologists: a review.

    PubMed

    Sukhdeo, Michael V K

    2010-04-01

    This review examines the historical origins of food web theory and explores the reasons why parasites have traditionally been left out of food web studies. Current paradigms may still be an impediment because, despite several attempts, it remains virtually impossible to retrofit parasites into food web theory in any satisfactory manner. It seems clear that parasitologists must return to first principles to solve how best to incorporate parasites into ecological food webs, and a first step in changing paradigms will be to include parasites in the classic ecological patterns that inform food web theory. The limitations of current food web models are discussed with respect to their logistic exclusion of parasites, and the traditional matrix approach in food web studies is critically examined. The well-known energetic perspective on ecosystem organization is presented as a viable alternative to the matrix approach because it provides an intellectually powerful theoretical paradigm for generating testable hypotheses on true food web structure. This review proposes that to make significant contributions to the food web debate, parasitologists must work from the standpoint of natural history to elucidate patterns of biomass, species abundance, and interaction strengths in real food webs, and these will provide the basis for more realistic models that incorporate parasite dynamics into the overall functional dynamics of the whole web. A general conclusion is that only by quantifying the effects of parasites in terms of energy flows (or biomass) will we be able to correctly place parasites into food webs.

  12. 3DNOW: Image-Based 3d Reconstruction and Modeling via Web

    NASA Astrophysics Data System (ADS)

    Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.

    2018-05-01

    This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.

  13. A WebGL Tool for Visualizing the Topology of the Sun's Coronal Magnetic Field

    NASA Astrophysics Data System (ADS)

    Duffy, A.; Cheung, C.; DeRosa, M. L.

    2012-12-01

We present a web-based, topology-viewing tool that allows users to visualize the geometry and topology of the Sun's 3D coronal magnetic field in an interactive manner. The tool is implemented using open-source, mature, modern web technologies including WebGL, jQuery, HTML 5, and CSS 3, which are compatible with nearly all modern web browsers. As opposed to the traditional method of visualization, which involves the downloading and setup of various software packages (proprietary and otherwise), the tool presents a clean interface that allows the user to easily load and manipulate the model, while also offering great power to choose which topological features are displayed. The tool accepts data encoded in the JSON open format, which has libraries available for nearly every major programming language, making it simple to generate the data.
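Because the viewer accepts JSON, the data-generation side can be written in a few lines in any language. The sketch below serializes 3D field-line polylines under an invented schema; the abstract does not specify the tool's actual key names or units, so everything here is an assumption for illustration.

```python
import json
import math

def field_lines_to_json(lines):
    """Serialize a list of (is_open, points) field lines into a JSON document
    a WebGL viewer could load. The schema (keys, units) is hypothetical."""
    doc = {
        "units": "solar_radii",
        "lines": [
            {"open": is_open, "points": [list(p) for p in pts]}
            for is_open, pts in lines
        ],
    }
    return json.dumps(doc)

# A single closed field line traced along a semicircle above the surface.
arc = [(math.cos(t), 0.0, math.sin(t))
       for t in (i * math.pi / 8 for i in range(9))]
payload = field_lines_to_json([(False, arc)])
```

Since JSON libraries exist for essentially every language used in solar-physics pipelines (IDL, Python, Fortran wrappers), producing viewer-ready data is a thin serialization layer like this one.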

  14. Exploring technology impacts of Healthcare 2.0 initiatives.

    PubMed

    Randeree, Ebrahim

    2009-04-01

    As Internet access proliferates and technology becomes more accessible, the number of people online has been increasing. Web 2.0 and the social computing phenomena (such as Facebook, Friendster, Flickr, YouTube, Blogger, and MySpace) are creating a new reality on the Web: Users are changing from consumers of Web-available information and resources to generators of information and content. Moving beyond telehealth and Web sites, the push toward Personal Health Records has emerged as a new option for patients to take control of their medical data and to become active participants in the push toward widespread digitized healthcare. There is minimal research on the impact of Web 2.0 in healthcare. This paper reviews the changing patient-physician relationship in the Healthcare 2.0 environment, explores the technological challenges, and highlights areas for research.

  15. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. 
The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  16. The BioHub Knowledge Base: Ontology and Repository for Sustainable Biosourcing.

    PubMed

    Read, Warren J; Demetriou, George; Nenadic, Goran; Ruddock, Noel; Stevens, Robert; Winter, Jerry

    2016-06-01

    The motivation for the BioHub project is to create an Integrated Knowledge Management System (IKMS) that will enable chemists to source ingredients from bio-renewables, rather than from non-sustainable sources such as fossil oil and its derivatives. The BioHubKB is the data repository of the IKMS; it employs Semantic Web technologies, especially OWL, to host data about chemical transformations, bio-renewable feedstocks, co-product streams and their chemical components. Access to this knowledge base is provided to other modules within the IKMS through a set of RESTful web services, driven by SPARQL queries to a Sesame back-end. The BioHubKB re-uses several bio-ontologies and bespoke extensions, primarily for chemical feedstocks and products, to form its knowledge organisation schema. Parts of plants form feedstocks, while various processes generate co-product streams that contain certain chemicals. Both chemicals and transformations are associated with certain qualities, which the BioHubKB also attempts to capture. Of immediate commercial and industrial importance is to estimate the cost of particular sets of chemical transformations (leading to candidate surfactants) performed in sequence, and these costs too are captured. Data are sourced from companies' internal knowledge and document stores, and from the publicly available literature. Both text analytics and manual curation play their part in populating the ontology. We describe the prototype IKMS, the BioHubKB and the services that it supports for the IKMS. The BioHubKB can be found via http://biohub.cs.manchester.ac.uk/ontology/biohub-kb.owl .

  17. The effects of neurotoxins on web-geometry and web-building behaviour in Araneus diadematus Cl.

    PubMed

    Hesselberg, Thomas; Vollrath, Fritz

    2004-09-15

    The process of orb weaving and the resultant orb web constitute a good example of a complex behavioural pattern that is still governed by a relatively simple set of rules. We used the orb spider Araneus diadematus as a model organism to study the effect of the three neurotoxins (scopolamine, amphetamine, and caffeine) on the spider's behaviour. Scopolamine was given at two concentrations, with the lower one showing no effects but the higher one reducing web-building frequency; there also appeared to be a weak effect on web geometry. Amphetamine and caffeine, on the other hand, both resulted in significant changes in both building frequency and web geometry, compared to the controls. Amphetamine webs retained their size but showed an increase in spiral spacing and radius irregularity, as well as a decrease in building efficiency. Caffeine led to a general decrease in size and a slight increase in spiral spacing, as well as radius irregularity. Furthermore, caffeine caused webs to be rounder. Our observations suggest that these neurotoxins disturb different parts of the web-building programme presumably by affecting different actions in the spider's CNS.

  18. A Cas9-based toolkit to program gene expression in Saccharomyces cerevisiae

    DOE PAGES

    Reider Apel, Amanda; d'Espaux, Leo; Wehrs, Maren; ...

    2016-11-28

Despite the extensive use of Saccharomyces cerevisiae as a platform for synthetic biology, strain engineering remains slow and laborious. Here, we employ CRISPR/Cas9 technology to build a cloning-free toolkit that addresses commonly encountered obstacles in metabolic engineering, including chromosomal integration locus and promoter selection, as well as protein localization and solubility. The toolkit includes 23 Cas9-sgRNA plasmids, 37 promoters of various strengths and temporal expression profiles, and 10 protein-localization, degradation and solubility tags. We facilitated the use of these parts via a web-based tool that automates the generation of DNA fragments for integration. Our system builds upon existing gene editing methods in the thoroughness with which the parts are standardized and characterized, the types and number of parts available and the ease with which our methodology can be used to perform genetic edits in yeast. We demonstrated the applicability of this toolkit by optimizing the expression of a challenging but industrially important enzyme, taxadiene synthase (TXS). This approach enabled us to diagnose an issue with TXS solubility, the resolution of which yielded a 25-fold improvement in taxadiene production.
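A core step behind any Cas9-sgRNA design tool is locating protospacers adjacent to an NGG PAM. The sketch below illustrates that step only; it is a minimal forward-strand scan, not the algorithm of the toolkit's actual web tool, which the abstract does not describe.

```python
def find_cas9_targets(seq: str):
    """Return (position, 20-nt protospacer, PAM) tuples for every NGG
    PAM on the forward strand of `seq`.  A minimal illustration of the
    guide-selection step; real design tools also score off-target risk,
    GC content, and scan the reverse strand."""
    seq = seq.upper()
    hits = []
    # A PAM starting at index i needs a full 20-nt protospacer before it.
    for i in range(20, len(seq) - 2):
        if seq[i + 1 : i + 3] == "GG":  # NGG PAM occupies seq[i:i+3]
            hits.append((i - 20, seq[i - 20 : i], seq[i : i + 3]))
    return hits

hits = find_cas9_targets("ACGTACGTACGTACGTACGT" + "TGG")
```

For the 23-nt input above, the single hit is the 20-nt protospacer followed by the TGG PAM.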

  19. A Cas9-based toolkit to program gene expression in Saccharomyces cerevisiae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reider Apel, Amanda; d'Espaux, Leo; Wehrs, Maren

Despite the extensive use of Saccharomyces cerevisiae as a platform for synthetic biology, strain engineering remains slow and laborious. Here, we employ CRISPR/Cas9 technology to build a cloning-free toolkit that addresses commonly encountered obstacles in metabolic engineering, including chromosomal integration locus and promoter selection, as well as protein localization and solubility. The toolkit includes 23 Cas9-sgRNA plasmids, 37 promoters of various strengths and temporal expression profiles, and 10 protein-localization, degradation and solubility tags. We facilitated the use of these parts via a web-based tool that automates the generation of DNA fragments for integration. Our system builds upon existing gene editing methods in the thoroughness with which the parts are standardized and characterized, the types and number of parts available and the ease with which our methodology can be used to perform genetic edits in yeast. We demonstrated the applicability of this toolkit by optimizing the expression of a challenging but industrially important enzyme, taxadiene synthase (TXS). This approach enabled us to diagnose an issue with TXS solubility, the resolution of which yielded a 25-fold improvement in taxadiene production.

  20. Development of wide area environment accelerator operation and diagnostics method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Akito; Furukawa, Kazuro

    2015-08-01

Remote operation and diagnostic systems for particle accelerators have been developed for beam operation and maintenance in various situations. Even though fully remote experiments are not necessary, remote diagnosis and maintenance of the accelerator is required. For remote-operation operator interfaces (OPIs), the use of standard protocols such as the hypertext transfer protocol (HTTP) is advantageous, because no system-dependent protocols are needed between the remote client and the on-site server. Here, we have developed a client system based on WebSocket, a protocol standardized by the Internet Engineering Task Force for Web-based systems, as a next-generation Web-based OPI using the Experimental Physics and Industrial Control System (EPICS) Channel Access protocol. As a result of this implementation, WebSocket-based client systems have become available for remote operation. In practical application, remote operation of an accelerator over a wide area network (WAN) faces a number of challenges; for example, the accelerator is both an experimental device and a radiation generator, so any error in remote control could result in an immediate breakdown. We therefore propose an operator intervention system for remote accelerator diagnostics and support that can bridge the differences between the local control room and remote locations. The remote-operation Web-based OPIs we developed also address the associated security issues.
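Such a Web-based OPI typically gateways Channel Access reads and writes through serialized messages over the WebSocket. A minimal sketch of that message layer is shown below; the JSON schema (a `type`/`pv` envelope) is an assumption for illustration, as the paper's actual wire format is not given in the abstract.

```python
import json

def make_caget_message(pv_name: str) -> str:
    """Serialize a read request for one EPICS process variable.
    The JSON schema here is hypothetical, chosen only to illustrate
    framing Channel Access operations over a WebSocket."""
    return json.dumps({"type": "caget", "pv": pv_name})

def handle_message(raw: str) -> str:
    """Server-side dispatch sketch for an incoming OPI message."""
    msg = json.loads(raw)
    if msg.get("type") == "caget":
        # A real gateway would perform the Channel Access read here;
        # we return a placeholder value to show the round trip.
        return json.dumps({"pv": msg["pv"], "value": None, "ok": True})
    return json.dumps({"ok": False, "error": "unknown message type"})

reply = json.loads(handle_message(make_caget_message("SR:BEAM:CURRENT")))
```

Keeping the envelope protocol-agnostic like this is what lets the same server answer clients anywhere on the WAN without a system-dependent client protocol.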

Top